
Research Article
Differential Neural Networks for Identification and Filtering in Nonlinear Dynamic Games

Emmanuel García and Daishi Alfredo Murano

Monterrey Institute of Technology and Higher Education, State of Mexico Campus (ITESM-CEM), Guadalupe Lake Road, Kilometer 3.5, Margarita Maza de Juarez, 52926 Atizapan de Zaragoza, MEX, Mexico

Correspondence should be addressed to Daishi Alfredo Murano; daishi@itesm.mx

Received 9 January 2014; Accepted 12 May 2014; Published 11 June 2014

Academic Editor: Gerard Olivar

Copyright © 2014 E. García and D. A. Murano. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper deals with the problem of identifying and filtering a class of continuous-time nonlinear dynamic games (nonlinear differential games) subject to additive and undesired deterministic perturbations. Moreover, the mathematical model of this class is completely unknown with the exception of the control actions of each player, and even though the deterministic noises are known, their power (or their effect) is not. Therefore, two differential neural networks are designed in order to obtain a feedback (perfect state) information pattern for the mentioned class of games. In this way, the stability conditions for two state identification errors and for a filtering error are established, the upper bounds of these errors are obtained, and two new learning laws for each neural network are suggested. Finally, an illustrating example shows the applicability of this approach.

1. Introduction

1.1. Preliminaries and Motivation. Nowadays, an investigation field that has been developed widely is the design of controllers for a certain group of systems involved in a conflicting interaction, that is to say, when the objective of each implicated system differs and when the known information about this interaction may be distinct for every system (e.g., [1]).

More formally, this special class of system groups can be represented by means of the dynamic noncooperative game theory, where the decision making process (or interaction) is called a game and each involved system in it is called a player [2].

From the viewpoint of control theory, a dynamic game is controlled by obtaining its equilibrium solution, and in order to do so, a (mathematical) model of this dynamic game is needed. So, several publications about dynamic games (and particularly about continuous-time dynamic games) are based on the complete knowledge of the model that describes its dynamics (see, e.g., [3, 4]). Nevertheless, having a model (or even a partial model) of a continuous-time dynamic game is not always possible.

On the other hand, the equilibrium solution of a dynamic game is also based on the information structure that every player has or, in other words, on the available information that each player can use in the control strategy. For example, one can obtain an open-loop Nash equilibrium solution by using the maximum principle technique or a feedback Nash equilibrium solution by utilizing the dynamic programming method (see [2]).

According to the above, the aim of an identification process in terms of a dynamic game should be the modeling of such game and the obtaining of its information structure, that is, the guarantee that the control strategy of every player gives an equilibrium solution for a game despite its dynamic uncertainties. In this way, the works of [5, 6] obtain feedback control strategies for differential games modeled through norm-bounded uncertainties; the studies of [7–9] achieve equilibrium solutions for several classes of differential games under a multimodel approach; the analyses of [10, 11] present adaptive algorithms for determining equilibrium solutions without the complete knowledge of the game dynamics; and the work of [12] proposes the obtaining of a suboptimal equilibrium solution where a differential game is approximated by a fuzzy model.

Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2014, Article ID 306761, 13 pages, http://dx.doi.org/10.1155/2014/306761


However, an identification process could be deficient if there exist undesired perturbations in the game dynamics, that is to say, if the obtained information structure of the game is corrupted by (deterministic) noises. Thereby, some deterministic filtering works have been published in order to solve this issue. For example, [13] describes the concept of adaptive noise canceling for the estimation of signals corrupted by additive perturbations; [14] introduces a finite horizon robust $H_\infty$ filtering method that provides a guaranteed bound for the estimation error in the presence of both parameter uncertainty and a known input signal; [15] presents the filtering of the states of a time-varying uncertain linear system with deterministic disturbances of limited power; and [16] derives an optimal filtering formula for linear time-varying discrete systems with unknown inputs.

Therefore, according to these preliminary works, the motivation of this paper is to solve the problem of identifying and filtering a class of nonlinear differential games with additive deterministic perturbations, where the mathematical model of this class and the effect (or power) of the noises are completely unknown.

1.2. Main Contribution. Since the introduction of continuous-time recurrent neural networks (see [17]), the self-named differential neural networks have proved to be an excellent tool in the identification, state estimation, and control of several systems and of the appointed continuous-time dynamic games.

For example, in [18] differential neural networks are used for the identification of dynamical systems; references [19–21] design differential neural network observers for adaptive state estimation; the works of [22, 23] propose neural network controllers for several applications; and [24] shows a compendium of differential neural networks for identification, state estimation, and control of nonlinear systems. Also, [25] treats the state estimation problem for affine nonlinear differential games using a differential neural network observer, and in [26] a nearly optimal Nash equilibrium for classes of deterministic and stochastic nonlinear differential games is obtained using differential neural networks.

Moreover, speaking of recurrent neural networks and deterministic filtering, the work of [27] develops a recurrent neural network for robust optimal filter design in an $H_\infty$ approach, and in [28] algorithms are presented in order to obtain adaptive filtering in nonlinear dynamic systems approximated by neural networks.

Nevertheless, the idea of using differential neural networks for identification and filtering of a class of continuous-time nonlinear dynamic games is a new approach that, as far as the authors know, has not been treated before.

Hence, the main contribution of this paper is the proof that it is possible to identify and to filter the states of a certain class of nonlinear differential games through the designing of two differential neural networks. Moreover, this filtered identification process generates a feedback (perfect state) information pattern for the mentioned class of games.

More specifically, although the structure of this class is known, its mathematical model is not; that is to say, the only available information of the nonlinear differential game is the control actions of each player. So, by using only this available information, a first differential neural network will be designed for the identification of the nonlinear dynamic game with the undesired perturbations, and similarly, the second differential neural network will identify the effect of these additive noises in the dynamics of the nonlinear differential game, which means that the perturbations are known but their power is not.

According to the above, one of these two differential neural networks will do the complete identification process of the class of nonlinear differential games, and noticeably, the filtering (or the canceling) of the undesired perturbations will be held by subtracting the state estimates of the two differential neural networks.

Finally, it is important to emphasize that these two differential neural networks have the structure of multilayer perceptrons (see [23, 24, 29]), and by means of Lyapunov's second method of stability, the learning laws for their synaptic weights are derived.

2. Class of Nonlinear Differential Games

Consider the following continuous-time nonlinear dynamic game given by

\dot{x}_t = f(x_t) + \sum_{i=1}^{N} g^i(x_t)\, u^i_t + H\xi_t,   (1)

where $t \in [0, \infty)$; the index $i = 1, 2, 3, \ldots, N$ denotes the number of players; $x_t \in \mathbb{R}^n$ is the state vector of the game; $u^i_t \in U^i_{\mathrm{adm}} \subseteq \mathbb{R}^{s_i}$ denotes the admissible control action vector of each player; the mappings $f: \mathbb{R}^n \to \mathbb{R}^n$ and $g^i: \mathbb{R}^n \to U^i_{\mathrm{adm}}$ are unknown nonlinear functions; $\xi_t \in \mathbb{R}^n$ denotes a known deterministic perturbation vector; and $H$ is an unknown constant matrix of adequate dimensions.

Similarly, consider now the following set of cost functions (or performance indexes) associated with each player and given by

J^i_t = \int_0^{\infty} \theta^i(x_t, u^i_t)\, dt,   (2)

where $\theta^i: \mathbb{R}^n \times U^i_{\mathrm{adm}} \to \mathbb{R}$ is well-defined for the $i$ player.

Moreover, the information structure of each player (denoted by $\eta^i_t$) has a standard feedback (perfect state) pattern, that is to say,

\eta^i_t = x_t,   (3)

and a permissible control strategy (or control policy) of the $i$ player is defined by the set of functions $\rho^i(x_t)$ satisfying

\rho^i: \mathbb{R}^n \to U^i_{\mathrm{adm}}.   (4)

Nevertheless, the class of nonlinear differential games (1)–(4) is not completely described if the following assumptions are not fulfilled.


Assumption 1. The admissible control actions $u^i_t$ are measurable and bounded for all time $t \in [0, \infty)$; that is,

\|u^i_t\| \le \bar{u}^i < \infty,   (5)

where $\bar{u}^i \in \mathbb{R}$ are known constants.

Assumption 2. In order to guarantee the global existence and uniqueness of the solution of (1), the unknown nonlinear functions $f(\cdot)$ and $g^i(\cdot)$ satisfy the Lipschitz condition; that is, there exist constants $0 < \psi, \psi^i \in \mathbb{R}$ such that the equations

\|f(\chi_1) - f(\chi_2)\| \le \psi \|\chi_1 - \chi_2\|,
\|g^i(\chi_1) - g^i(\chi_2)\| \le \psi^i \|\chi_1 - \chi_2\|   (6)

are fulfilled for all $\chi_1, \chi_2 \in X \subset \mathbb{R}^n$. Moreover, $f(\cdot)$ and $g^i(\cdot)$ satisfy the following linear growth condition:

\|f(\chi)\| \le f_1 + f_2\|\chi\|,
\|g^i(\chi)\| \le g^i_1 + g^i_2\|\chi\|,   (7)

where $f_1, f_2, g^i_1, g^i_2 \in \mathbb{R}$ are known constants.

Assumption 3. Under the permissible control strategies $\rho^i(x_t)$, the (feedback) class of nonlinear differential games given by (1)–(4) is quadratically stable; that is, there exists a (maybe unknown) Lyapunov function $0 \le E_t: \mathbb{R}^n \to \mathbb{R}$ such that the inequalities

\left[\frac{\partial E_t}{\partial x_t}\right]\left[f(x_t) + \sum_{i=1}^{N} g^i(x_t)\, u^i_t + H\xi_t\right] \le -\lambda_1\|x_t\|^2,
\left\|\frac{\partial E_t}{\partial x_t}\right\| \le \lambda_2\|x_t\|   (8)

are satisfied for any known constants $0 < \lambda_1, \lambda_2 \in \mathbb{R}$.

Assumption 4. The deterministic perturbation $\xi_t$ is measurable and bounded for all time $t \in [0, \infty)$; that is,

\|\xi_t\| \le \bar{\xi} < \infty,   (9)

where $\bar{\xi} \in \mathbb{R}$ is a known constant.

3. Problem Statement

Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Then, if one makes the following change of variables,

\dot{y}_t = f(x_t) + \sum_{i=1}^{N} g^i(x_t)\, u^i_t + H\xi_t,   (10)

\dot{z}_t = H\xi_t,   (11)

it is clear that the expected or uncorrupted vector state of the class of nonlinear differential games (1)–(4) can be defined as

x_t = y_t - z_t.   (12)

Thereby, and in view of the fact that $f(\cdot)$, $g^i(\cdot)$, and $H$ are unknown, the tackled problem in this paper is to obtain a feedback (perfect state) information pattern $\eta^i_t = \hat{x}_t$, given that $\hat{x}_t \in \mathbb{R}^n$ satisfies the following filtering (or noise canceling) equation

\hat{x}_t = \hat{y}_t - \hat{z}_t,   (13)

and where $\hat{y}_t, \hat{z}_t \in \mathbb{R}^n$ are the state estimates of differential equations (10) and (11), respectively.
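
As a numerical illustration of the change of variables (10)–(12), the following Python sketch integrates (10) and (11) with a forward-Euler scheme and recovers the uncorrupted state as $y_t - z_t$; the functions f, g1, g2, the matrix H, and the control signals in the usage lines are placeholders chosen only for this sketch (in the identification problem they are, of course, unknown).

import numpy as np

def simulate_change_of_variables(f, g_list, u_list, H, xi, x0, dt=1e-3, T=25.0):
    """Forward-Euler integration of (10) and (11); returns y_t, z_t and x_t = y_t - z_t, eq. (12)."""
    steps = int(T / dt)
    y = np.zeros((steps + 1, x0.size))
    z = np.zeros((steps + 1, x0.size))
    y[0] = x0                        # with y_0 = x_0, y_t coincides with the perturbed game state of (1)
    for k in range(steps):
        t = k * dt
        drift = f(y[k]) + sum(g(y[k]) @ u(t) for g, u in zip(g_list, u_list))
        y[k + 1] = y[k] + dt * (drift + H @ xi(t))   # eq. (10)
        z[k + 1] = z[k] + dt * (H @ xi(t))           # eq. (11)
    return y, z, y - z                               # eq. (12)

# Placeholder two-player data (illustrative only, not the paper's example):
f  = lambda x: np.array([x[1], np.sin(x[0])])
g1 = lambda x: np.eye(2)
g2 = lambda x: 0.5 * np.eye(2)
u1 = lambda t: np.array([np.sin(0.5 * t), 1.0])
u2 = lambda t: np.array([1.0, np.sin(0.5 * t)])
H  = 0.5 * np.eye(2)
xi = lambda t: np.array([np.sin(10.0 * t), np.sin(10.0 * t)])
y, z, x = simulate_change_of_variables(f, [g1, g2], [u1, u2], H, xi, x0=np.zeros(2))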

4. Differential Neural Networks Design

In order to solve the problem described above, consider a first differential neural network given by the following equation:

\dot{\hat{y}}_t = W_{1t}\,\sigma(V_{1t}\hat{y}_t) + \sum_{i=1}^{N} W^i_{2t}\,\varphi^i(V^i_{2t}\hat{y}_t)\, u^i_t,   (14)

where $t \in [0, \infty)$; $i = 1, 2, 3, \ldots, N$ denotes the number of players; $\hat{y}_t \in \mathbb{R}^n$ is the vector state of the neural network; the matrices $W_{1t} \in \mathbb{R}^{n \times k}$, $V_{1t} \in \mathbb{R}^{k \times n}$, $W^i_{2t} \in \mathbb{R}^{n \times r_i}$, and $V^i_{2t} \in \mathbb{R}^{r_i \times n}$ are synaptic weights of the neural network; and $\sigma: \mathbb{R}^k \to \mathbb{R}^k$ and $\varphi^i: \mathbb{R}^{r_i} \to \mathbb{R}^{r_i \times s_i}$ are activation functions of the neural network.

Remark 5. For simplicity (and from now on), $\sigma = \sigma(V_{1t}\hat{y}_t)$ and $\varphi^i = \varphi^i(V^i_{2t}\hat{y}_t)$.
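
A minimal Python sketch of the right-hand side of the identifier (14) may help to fix the dimensions; sigma is a vector-valued activation and phi returns the matrix-valued activation $\varphi^i$, both supplied by the caller (the concrete sigmoids are given in (15)-(16) below), and the dimensions and placeholder activations in the usage lines are illustrative only.

import numpy as np

def dnn1_rhs(y_hat, W1, V1, W2_list, V2_list, u_list, sigma, phi):
    """Right-hand side of the first differential neural network (14):
    d(y_hat)/dt = W1 sigma(V1 y_hat) + sum_i W2_i phi_i(V2_i y_hat) u_i."""
    out = W1 @ sigma(V1 @ y_hat)
    for W2, V2, u in zip(W2_list, V2_list, u_list):
        out = out + W2 @ phi(V2 @ y_hat) @ u
    return out

# Illustrative call with n = 2, k = 4, r_i = s_i = 2 and placeholder activations:
sigma = np.tanh                              # stand-in for the sigmoid of (15)
phi   = lambda w: np.diag(np.tanh(w))        # diagonal matrix-valued activation, cf. (16)
rng = np.random.default_rng(0)
dy = dnn1_rhs(rng.standard_normal(2),
              rng.standard_normal((2, 4)), rng.standard_normal((4, 2)),
              [rng.standard_normal((2, 2))] * 2, [rng.standard_normal((2, 2))] * 2,
              [np.ones(2)] * 2, sigma, phi)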

According to [29], the differential neural network (14) is classified as multilayer perceptrons, and its structure was initially taken from [23, 24]. Also, this differential neural network only uses sigmoid activation functions; that is, $\sigma$ and $\varphi^i$ have a diagonal structure with elements

\sigma_p(w) = \left.\frac{a_p}{1 + e^{-b_p w}} - c_p\right|_{w = V_{1t}\hat{y}_t}, \quad p = 1, 2, \ldots, k,   (15)

\varphi^i_{qq}(w^i) = \left.\frac{a_q}{1 + e^{-b_q w^i}} - c_q\right|_{w^i = V^i_{2t}\hat{y}_t}, \quad q = 1, 2, \ldots, \min(r_i, s_i),   (16)

where $0 < a_p, b_p, c_p \in \mathbb{R}$ and $0 < a_q, b_q, c_q \in \mathbb{R}$ are known constants that manipulate the geometry of the sigmoid function.
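
A sketch of the parametrized sigmoid of (15)-(16) in Python; the default constants a = 4.5, b = 0.5, c = 3 anticipate those later used in the example (90), and phi_diag builds the diagonal matrix-valued activation.

import numpy as np

def sigmoid(w, a=4.5, b=0.5, c=3.0):
    """Elementwise sigmoid a/(1 + exp(-b*w)) - c of (15)-(16)."""
    return a / (1.0 + np.exp(-b * np.asarray(w, dtype=float))) - c

def phi_diag(w, a=4.5, b=0.5, c=3.0):
    """Diagonal matrix-valued activation with elements (16)."""
    return np.diag(sigmoid(w, a, b, c))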

Thus, in view of the fact that $\sigma$ and $\varphi^i$ are sigmoid activation functions, they are bounded and they satisfy the following equations:

[\tilde{\sigma} - A[y_t - \hat{y}_t]]^T \Lambda_{\sigma} [\tilde{\sigma} - A[y_t - \hat{y}_t]] \le [y_t - \hat{y}_t]^T \bar{\Lambda}_{\sigma} [y_t - \hat{y}_t],
[\tilde{\varphi}^i u^i_t]^T \Lambda^i_{\varphi} [\tilde{\varphi}^i u^i_t] \le [\bar{u}^i]^2 [y_t - \hat{y}_t]^T \bar{\Lambda}^i_{\varphi} [y_t - \hat{y}_t],   (17)

\tilde{\sigma}' = D_{\sigma}[V_{1t} - V_{10}]\hat{y}_t + \nu_{\sigma},
\tilde{\varphi}^{i\prime} = \langle D^i_{\varphi}, u^i_t\rangle [V^i_{2t} - V^i_{20}]\hat{y}_t + \nu^i_{\varphi} u^i_t,   (18)


[\nu_{\sigma}]^T \Lambda_{\sigma'} [\nu_{\sigma}] \le l_{\sigma}\, [[V_{1t} - V_{10}]\hat{y}_t]^T \Lambda_{\nu\sigma} [[V_{1t} - V_{10}]\hat{y}_t],
[\nu^i_{\varphi} u^i_t]^T \Lambda^i_{\varphi'} [\nu^i_{\varphi} u^i_t] \le l^i_{\varphi}[\bar{u}^i]^2\, [[V^i_{2t} - V^i_{20}]\hat{y}_t]^T \Lambda^i_{\nu\varphi} [[V^i_{2t} - V^i_{20}]\hat{y}_t],   (19)

where $A$, $\Lambda_{\sigma}$, $\Lambda_{\sigma'}$, $\Lambda_{\nu\sigma}$, $\bar{\Lambda}_{\sigma}$, $\Lambda^i_{\varphi}$, $\Lambda^i_{\varphi'}$, $\Lambda^i_{\nu\varphi}$, $\bar{\Lambda}^i_{\varphi}$, $l_{\sigma}$, and $l^i_{\varphi}$ are known constants of adequate dimensions and

\tilde{\sigma} = \sigma(V_{10}y_t) - \sigma(V_{10}\hat{y}_t),
\tilde{\varphi}^i = \varphi^i(V^i_{20}y_t) - \varphi^i(V^i_{20}\hat{y}_t),
\tilde{\sigma}' = \sigma(V_{1t}\hat{y}_t) - \sigma(V_{10}\hat{y}_t),
\tilde{\varphi}^{i\prime} = \varphi^i(V^i_{2t}\hat{y}_t) - \varphi^i(V^i_{20}\hat{y}_t),
D_{\sigma} = \frac{\partial}{\partial\psi}\sigma(\psi) \in \mathbb{R}^{k \times k},
D^i_{\varphi} = \frac{\partial}{\partial\psi}\varphi^i(\psi) \in \mathbb{R}^{r_i}.   (20)

Nevertheless, there are some design conditions that this first differential neural network needs to satisfy.

Assumption 6. According to Assumption 2, the approximation or residual error of (14), corresponding to the unidentified dynamics of (10) and given by

F_t = \dot{y}_t - \left[W_{10}\,\sigma(V_{10}y_t) + \sum_{i=1}^{N} W^i_{20}\,\varphi^i(V^i_{20}y_t)\, u^i_t\right],   (21)

where $W_{10}$, $V_{10}$, $W^i_{20}$, and $V^i_{20}$ are initial synaptic weights (when $t = 0$), is bounded and satisfies the following inequality:

[F_t]^T \Lambda_F [F_t] \le F_1 + F_2 [y_t]^T \Lambda_F [y_t],   (22)

where $0 < \Lambda_F \in \mathbb{R}^{n \times n}$ is a known constant and $0 < F_1, F_2 \in \mathbb{R}$.

Remark 7. The constants $F_1$ and $F_2$ in (22) are not known a priori because of the fact that they depend on the performance of the differential neural network (14); that is to say, the residual error (21) will depend on the number of neurons used in (14) and on its parameters, and therefore $F_1$ and $F_2$ will depend on this too (see [24]).

Assumption 8. There exist values of $\Lambda_{\sigma}$, $\Lambda_{\sigma'}$, $\Lambda^i_{\varphi}$, $\Lambda^i_{\varphi'}$, $\bar{\Lambda}_{\sigma}$, $\bar{\Lambda}^i_{\varphi}$, $\Lambda_F$, $l_{\sigma}$, $l^i_{\varphi}$, $A$, and $Q$ such that they provide a $0 < P = [P]^T \in \mathbb{R}^{n \times n}$ solution to the algebraic Riccati equation as follows:

P\bar{A} + [\bar{A}]^T P + PRP + \bar{Q} = 0,   (23)

where

\bar{A} = W_{10}A,   (24)

R = [W_{10}]\left[[\Lambda_{\sigma}]^{-1} + [\Lambda_{\sigma'}]^{-1}\right][W_{10}]^T + [\Lambda_F]^{-1} + \sum_{i=1}^{N} [W^i_{20}]\left[[\Lambda^i_{\varphi}]^{-1} + [\Lambda^i_{\varphi'}]^{-1}\right][W^i_{20}]^T,   (25)

\bar{Q} = 2Q + \bar{\Lambda}_{\sigma} + \sum_{i=1}^{N} [\bar{u}^i]^2 \bar{\Lambda}^i_{\varphi},   (26)

where $A$ is such that $\bar{A} \in \mathbb{R}^{n \times n}$ is Hurwitz and $Q$ is known.
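
Assumption 8 can be checked numerically. The sketch below assembles $R$ and $\bar{Q}$ according to (25)-(26) and looks for a symmetric $P$ satisfying the Riccati equation (23) with a generic root finder; it returns one symmetric solution, and its positive definiteness still has to be verified separately (for instance with numpy.linalg.eigvalsh). All matrix arguments are placeholders to be supplied by the designer.

import numpy as np
from scipy.optimize import fsolve

def assemble_R_Qbar(W10, W20_list, L_sig, L_sigp, L_phi_list, L_phip_list,
                    L_F, Q, L_sig_bar, L_phi_bar_list, u_bar_list):
    """R and Q-bar of (25) and (26)."""
    R = (W10 @ (np.linalg.inv(L_sig) + np.linalg.inv(L_sigp)) @ W10.T
         + np.linalg.inv(L_F))
    for W20, Lp, Lpp in zip(W20_list, L_phi_list, L_phip_list):
        R = R + W20 @ (np.linalg.inv(Lp) + np.linalg.inv(Lpp)) @ W20.T
    Q_bar = 2.0 * Q + L_sig_bar
    for u_bar, Lb in zip(u_bar_list, L_phi_bar_list):
        Q_bar = Q_bar + u_bar ** 2 * Lb
    return R, Q_bar

def solve_riccati_23(A_bar, R, Q_bar):
    """One symmetric solution P of P A_bar + A_bar^T P + P R P + Q_bar = 0, eq. (23)."""
    n = A_bar.shape[0]
    def residual(p):
        P = p.reshape(n, n)
        P = 0.5 * (P + P.T)
        return (P @ A_bar + A_bar.T @ P + P @ R @ P + Q_bar).ravel()
    P = fsolve(residual, np.eye(n).ravel()).reshape(n, n)
    return 0.5 * (P + P.T)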

On the other hand, consider now a second differential neural network given by the following equation:

\dot{\hat{z}}_t = W_{3t}\,\alpha(\hat{z}_t) + W_{4t}\,\gamma(\hat{z}_t)\,\xi_t,   (27)

where $t \in [0, \infty)$; $\hat{z}_t \in \mathbb{R}^n$ is the vector state; the matrices $W_{3t} \in \mathbb{R}^{n \times \delta}$ and $W_{4t} \in \mathbb{R}^{n \times \mu}$ are synaptic weights; and $\alpha: \mathbb{R}^n \to \mathbb{R}^{\delta}$ and $\gamma: \mathbb{R}^n \to \mathbb{R}^{\mu \times n}$ are sigmoid activation functions.
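
In code, the right-hand side of (27) is even simpler than that of (14); a minimal sketch, where alpha returns a $\delta$-vector and gamma returns a $\mu \times n$ matrix as in the definitions above.

import numpy as np

def dnn2_rhs(z_hat, W3, W4, xi_t, alpha, gamma):
    """Right-hand side of the second differential neural network (27):
    d(z_hat)/dt = W3 alpha(z_hat) + W4 gamma(z_hat) xi_t."""
    return W3 @ alpha(z_hat) + W4 @ gamma(z_hat) @ xi_t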

Then, similar to $\sigma$ and $\varphi^i$, the activation functions $\alpha(\cdot)$ and $\gamma(\cdot)$ satisfy the following inequalities:

[\tilde{\alpha} - A[z_t - \hat{z}_t]]^T \Lambda_{\tilde{\alpha}} [\tilde{\alpha} - A[z_t - \hat{z}_t]] \le [z_t - \hat{z}_t]^T \bar{\Lambda}_{\alpha} [z_t - \hat{z}_t],
[\tilde{\gamma}\,\xi_t]^T \Lambda_{\tilde{\gamma}} [\tilde{\gamma}\,\xi_t] \le [\bar{\xi}]^2 [z_t - \hat{z}_t]^T \bar{\Lambda}_{\gamma} [z_t - \hat{z}_t],   (28)

where $A$, $\Lambda_{\tilde{\alpha}}$, $\bar{\Lambda}_{\alpha}$, $\Lambda_{\tilde{\gamma}}$, and $\bar{\Lambda}_{\gamma}$ are known constants of adequate dimensions and

\tilde{\alpha} = \alpha(z_t) - \alpha(\hat{z}_t),
\tilde{\gamma} = \gamma(z_t) - \gamma(\hat{z}_t).   (29)

Thereby, it is easy to confirm that if $\alpha(\cdot) = \sigma$, if $\gamma(\cdot) = \varphi^i$, if $\xi_t = u^i_t$, and if $V_{1t} = V^i_{2t} = I$, where $i = 1$, then the differential neural networks (14) and (27) coincide. Hence, two new assumptions (corresponding to Assumptions 6 and 8) must be satisfied for this second differential neural network.

Assumption 9. According (again) to Assumption 2, the approximation or residual error of (27), corresponding to the unidentified dynamics of (11) and given by

G_t = \dot{z}_t - \left[W_{30}\,\alpha(z_t) + W_{40}\,\gamma(z_t)\,\xi_t\right],   (30)

where $W_{30}$ and $W_{40}$ are initial synaptic weights (when $t = 0$), is bounded and satisfies the inequality

[G_t]^T \Lambda_G [G_t] \le G_1 + G_2 [z_t]^T \Lambda_G [z_t],   (31)

where $0 < \Lambda_G \in \mathbb{R}^{n \times n}$ is a known constant and $0 < G_1, G_2 \in \mathbb{R}$.


Remark 10. Similar to Remark 7, the constants $G_1$ and $G_2$ in (31) are not known a priori because they depend on the performance of the differential neural network (27).

Assumption 11. If there exist values of $\Lambda_{\tilde{\alpha}}$, $\Lambda_{\tilde{\gamma}}$, $\Lambda_G$, $W_{30}$, and $W_{40}$ such that

[W_{40}][\Lambda_{\tilde{\gamma}}]^{-1}[W_{40}]^T = \sum_{i=1}^{N} [W^i_{20}]\left[[\Lambda^i_{\varphi}]^{-1} + [\Lambda^i_{\varphi'}]^{-1}\right][W^i_{20}]^T,
W_{30} = W_{10},
\Lambda_{\tilde{\alpha}} = \Lambda_{\sigma} + \Lambda_{\sigma'},
\Lambda_G = \Lambda_F,   (32)

and values of $\bar{\Lambda}_{\alpha}$ and $\bar{\Lambda}_{\gamma}$ such that

Q = \bar{\Lambda}_{\alpha} + [\bar{\xi}]^2 \bar{\Lambda}_{\gamma} - \bar{\Lambda}_{\sigma} - \sum_{i=1}^{N} [\bar{u}^i]^2 \bar{\Lambda}^i_{\varphi},   (33)

then they provide a $0 < P = [P]^T \in \mathbb{R}^{n \times n}$ solution to the algebraic Riccati equation (23), where

\bar{A} = W_{30}A,   (34)

R = [W_{30}][\Lambda_{\tilde{\alpha}}]^{-1}[W_{30}]^T + [\Lambda_G]^{-1} + [W_{40}][\Lambda_{\tilde{\gamma}}]^{-1}[W_{40}]^T,   (35)

\bar{Q} = Q + \bar{\Lambda}_{\alpha} + [\bar{\xi}]^2 \bar{\Lambda}_{\gamma}.   (36)

Remark 12. Notice that the equalities (32) and (33) were chosen in order to guarantee the same solution of the algebraic Riccati equation (23) in Assumptions 8 and 11. Also notice that the first term of the right-hand side of (26) will always be greater than the first term of the right-hand side of (36), that is, $2Q > Q$.

5. Main Result on Identification and Filtering

According to the above, the main result on identification and filtering for the class of nonlinear differential games (1)–(4) deals with both the development of an adaptive learning law for the synaptic weights of the differential neural networks (14) and (27) and the inference of a maximum value of identification error for the dynamics (10) and (11).

Moreover, the establishment of a maximum value of filtering error between the uncorrupted states and the identified ones is obtained; namely, an error is defined by

e_t = \hat{x}_t - x_t,   (37)

where $\hat{x}_t \in \mathbb{R}^n$ is given by the noise canceling equation (13) and $x_t \in \mathbb{R}^n$ is given by the expected or uncorrupted vector state (12).

More formally, the main obtained result is described in the following three theorems.

Theorem 13. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural network (14) be such that Assumptions 6 and 8 are satisfied. If the synaptic weights of (14) are adjusted with the following learning law:

\dot{W}_{1t} = -2K_{W1}P\varepsilon_t[\sigma]^T,
\dot{V}_{1t} = -K_{V1}\left[2[D_{\sigma}]^T[W_{10}]^T P\varepsilon_t + l_{\sigma}[\Lambda_{\nu\sigma}]^T\tilde{V}_{1t}\hat{y}_t\right][\hat{y}_t]^T,
\dot{W}^i_{2t} = -2K^i_{W2}P\varepsilon_t[u^i_t]^T[\varphi^i]^T,
\dot{V}^i_{2t} = -K^i_{V2}\left[2\langle D^i_{\varphi}, u^i_t\rangle[W^i_{20}]^T P\varepsilon_t + l^i_{\varphi}[\bar{u}^i]^2[\Lambda^i_{\nu\varphi}]^T\tilde{V}^i_{2t}\hat{y}_t\right][\hat{y}_t]^T,   (38)

where

\varepsilon_t = y_t - \hat{y}_t   (39)

denotes the identification error, $\tilde{V}_{1t} = V_{1t} - V_{10}$, $\tilde{V}^i_{2t} = V^i_{2t} - V^i_{20}$, and $K_{W1}$, $K_{V1}$, $K^i_{W2}$, and $K^i_{V2}$ are known symmetric and positive-definite constant matrices, then it is possible to obtain the next maximum value of identification error in average sense as follows:

\limsup_{t \to \infty}\left(\frac{1}{t}\int_0^t [\varepsilon_{\tau}]^T 2Q[\varepsilon_{\tau}]\, d\tau\right) \le F_1.   (40)
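
A sketch of one explicit-Euler step of the learning law (38) for the pair $(W_{1t}, V_{1t})$ follows; the player-indexed updates of $W^i_{2t}$ and $V^i_{2t}$ follow the same pattern with $\varphi^i$, $u^i_t$, $\langle D^i_{\varphi}, u^i_t\rangle$, and the gains $K^i_{W2}$, $K^i_{V2}$. Here D_sigma is the diagonal Jacobian of the sigmoid from (20), and the step size dt is a discretization choice of this sketch, not part of the continuous-time law.

import numpy as np

def sigmoid(w, a=4.5, b=0.5, c=3.0):
    return a / (1.0 + np.exp(-b * w)) - c

def sigmoid_jacobian(w, a=4.5, b=0.5):
    s = 1.0 / (1.0 + np.exp(-b * w))
    return np.diag(a * b * s * (1.0 - s))        # D_sigma of (20), a k x k diagonal matrix

def euler_step_law_38(W1, V1, W10, V10, P, eps, y_hat,
                      K_W1, K_V1, l_sigma, Lam_nu_sigma, dt):
    """One Euler step of the W1/V1 part of the learning law (38); eps = y_t - y_hat_t, eq. (39)."""
    sig = sigmoid(V1 @ y_hat)
    D_sigma = sigmoid_jacobian(V1 @ y_hat)
    dW1 = -2.0 * K_W1 @ P @ np.outer(eps, sig)
    dV1 = -K_V1 @ np.outer(2.0 * D_sigma.T @ W10.T @ P @ eps
                           + l_sigma * Lam_nu_sigma.T @ (V1 - V10) @ y_hat,
                           y_hat)
    return W1 + dt * dW1, V1 + dt * dV1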

Proof. Taking into account the residual error (21), the differential equation (10) can be expressed as

\dot{y}_t = W_{10}\,\sigma(V_{10}y_t) + \sum_{i=1}^{N} W^i_{20}\,\varphi^i(V^i_{20}y_t)\, u^i_t + F_t.   (41)

Then, by substituting (14) and (41) into the derivative of (39) with respect to $t$, and by adding and subtracting the terms $[W_{10}\tilde{\sigma}]$, $[W_{10}\sigma(V_{10}\hat{y}_t)]$, $[W^i_{20}\tilde{\varphi}^i u^i_t]$, and $[W^i_{20}\varphi^i(V^i_{20}\hat{y}_t)u^i_t]$, it is easy to confirm that

\dot{\varepsilon}_t = \tilde{W}_{1t}\sigma + W_{10}\tilde{\sigma} + W_{10}\tilde{\sigma}' - F_t + \sum_{i=1}^{N}\tilde{W}^i_{2t}\varphi^i u^i_t + \sum_{i=1}^{N} W^i_{20}\tilde{\varphi}^i u^i_t + \sum_{i=1}^{N} W^i_{20}\tilde{\varphi}^{i\prime} u^i_t,   (42)

where $\tilde{W}_{1t} = W_{1t} - W_{10}$ and $\tilde{W}^i_{2t} = W^i_{2t} - W^i_{20}$. Now, let the Lyapunov (energetic) candidate function

L_t = E_t + [\varepsilon_t]^T P[\varepsilon_t] + \frac{1}{2}\mathrm{tr}\left([\tilde{W}_{1t}]^T[K_{W1}]^{-1}[\tilde{W}_{1t}]\right) + \frac{1}{2}\mathrm{tr}\left([\tilde{V}_{1t}]^T[K_{V1}]^{-1}[\tilde{V}_{1t}]\right) + \frac{1}{2}\sum_{i=1}^{N}\mathrm{tr}\left([\tilde{W}^i_{2t}]^T[K^i_{W2}]^{-1}[\tilde{W}^i_{2t}]\right) + \frac{1}{2}\sum_{i=1}^{N}\mathrm{tr}\left([\tilde{V}^i_{2t}]^T[K^i_{V2}]^{-1}[\tilde{V}^i_{2t}]\right)   (43)


be such that the inequalities (8) are fulfilled (see Assumption 3), and let

\dot{L}_t \le -\lambda_1\|y_t\|^2 + 2[\varepsilon_t]^T P[\dot{\varepsilon}_t] + \mathrm{tr}\left([\tilde{W}_{1t}]^T[K_{W1}]^{-1}[\dot{W}_{1t}]\right) + \mathrm{tr}\left([\tilde{V}_{1t}]^T[K_{V1}]^{-1}[\dot{V}_{1t}]\right) + \sum_{i=1}^{N}\mathrm{tr}\left([\tilde{W}^i_{2t}]^T[K^i_{W2}]^{-1}[\dot{W}^i_{2t}]\right) + \sum_{i=1}^{N}\mathrm{tr}\left([\tilde{V}^i_{2t}]^T[K^i_{V2}]^{-1}[\dot{V}^i_{2t}]\right)   (44)

be the derivative of (43) with respect to $t$. Then, by substituting (42) into the second term of the right-hand side of (44), and by adding and subtracting the term $[2[\varepsilon_t]^T P\bar{A}\varepsilon_t]$, one may get

2[\varepsilon_t]^T P[\dot{\varepsilon}_t] = 2[\varepsilon_t]^T PW_{10}[\tilde{\sigma} - A\varepsilon_t] + 2[\varepsilon_t]^T PW_{10}\tilde{\sigma}' + \sum_{i=1}^{N} 2[\varepsilon_t]^T PW^i_{20}\tilde{\varphi}^i u^i_t + \sum_{i=1}^{N} 2[\varepsilon_t]^T PW^i_{20}\tilde{\varphi}^{i\prime} u^i_t - 2[\varepsilon_t]^T PF_t + 2[\varepsilon_t]^T P\tilde{W}_{1t}\sigma + \sum_{i=1}^{N} 2[\varepsilon_t]^T P\tilde{W}^i_{2t}\varphi^i u^i_t + [\varepsilon_t]^T\left[P\bar{A} + [\bar{A}]^T P\right][\varepsilon_t].   (45)

Next, by analyzing the first five terms of the right-hand side of (45) with the following inequality,

[X]^T Y + [Y]^T X \le [X]^T[\Lambda]^{-1}[X] + [Y]^T\Lambda[Y],   (46)

which is valid for any pair of matrices $X, Y \in \mathbb{R}^{\Omega \times \Gamma}$ and for any constant matrix $0 < \Lambda \in \mathbb{R}^{\Omega \times \Omega}$, where $\Omega$ and $\Gamma$ are positive integers (see [24]), the following is obtained.

(i) Using (46) and (17) in the first term of the right-hand side of (45), then

2[\varepsilon_t]^T PW_{10}[\tilde{\sigma} - A\varepsilon_t] \le [\varepsilon_t]^T\left[P[W_{10}][\Lambda_{\sigma}]^{-1}[W_{10}]^T P\right][\varepsilon_t] + [\varepsilon_t]^T\bar{\Lambda}_{\sigma}[\varepsilon_t].   (47)

(ii) Substituting (18) into the second term of the right-hand side of (45) and using (46) and (19), then

2[\varepsilon_t]^T PW_{10}\tilde{\sigma}' \le 2[\varepsilon_t]^T PW_{10}D_{\sigma}\tilde{V}_{1t}\hat{y}_t + [\varepsilon_t]^T\left[P[W_{10}][\Lambda_{\sigma'}]^{-1}[W_{10}]^T P\right][\varepsilon_t] + l_{\sigma}[\tilde{V}_{1t}\hat{y}_t]^T\Lambda_{\nu\sigma}[\tilde{V}_{1t}\hat{y}_t].   (48)

(iii) Using (46) and (17) in the third term of the right-hand side of (45), then

\sum_{i=1}^{N} 2[\varepsilon_t]^T PW^i_{20}\tilde{\varphi}^i u^i_t \le \sum_{i=1}^{N}[\varepsilon_t]^T\left[P[W^i_{20}][\Lambda^i_{\varphi}]^{-1}[W^i_{20}]^T P\right][\varepsilon_t] + \sum_{i=1}^{N}[\bar{u}^i]^2[\varepsilon_t]^T\bar{\Lambda}^i_{\varphi}[\varepsilon_t].   (49)

(iv) Substituting (18) into the fourth term of the right-hand side of (45) and using (46) and (19), then

\sum_{i=1}^{N} 2[\varepsilon_t]^T PW^i_{20}\tilde{\varphi}^{i\prime} u^i_t \le \sum_{i=1}^{N} 2[\varepsilon_t]^T PW^i_{20}\langle D^i_{\varphi}, u^i_t\rangle\tilde{V}^i_{2t}\hat{y}_t + \sum_{i=1}^{N}[\varepsilon_t]^T\left[P[W^i_{20}][\Lambda^i_{\varphi'}]^{-1}[W^i_{20}]^T P\right][\varepsilon_t] + \sum_{i=1}^{N} l^i_{\varphi}[\bar{u}^i]^2[\tilde{V}^i_{2t}\hat{y}_t]^T\Lambda^i_{\nu\varphi}[\tilde{V}^i_{2t}\hat{y}_t].   (50)

(v) Using (46) and (22) in the fifth term of the right-hand side of (45), then

-2[\varepsilon_t]^T PF_t \le [\varepsilon_t]^T\left[P[\Lambda_F]^{-1}P\right][\varepsilon_t] + F_1 + F_2[y_t]^T\Lambda_F[y_t].   (51)
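
The matrix inequality (46) used in items (i)–(v) above can also be checked numerically: the quantity $X^T\Lambda^{-1}X + Y^T\Lambda Y - X^TY - Y^TX$ equals $(\Lambda^{-1/2}X - \Lambda^{1/2}Y)^T(\Lambda^{-1/2}X - \Lambda^{1/2}Y)$ and is therefore positive semidefinite. A quick sanity check with random data, for illustration only:

import numpy as np

rng = np.random.default_rng(1)
Omega, Gamma = 5, 3
X = rng.standard_normal((Omega, Gamma))
Y = rng.standard_normal((Omega, Gamma))
B = rng.standard_normal((Omega, Omega))
Lam = B @ B.T + Omega * np.eye(Omega)          # an arbitrary positive-definite Lambda
D = X.T @ np.linalg.inv(Lam) @ X + Y.T @ Lam @ Y - X.T @ Y - Y.T @ X
print(np.linalg.eigvalsh(0.5 * (D + D.T)).min() >= -1e-9)   # True: (46) holds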

So, by substituting (47)–(51) into the right-hand side of (45), and by adding and subtracting the term $[[\varepsilon_t]^T 2Q[\varepsilon_t]]$, inequality (44) can be expressed as

\dot{L}_t \le [\varepsilon_t]^T\left[P\bar{A} + [\bar{A}]^T P + PRP + \bar{Q}\right][\varepsilon_t] - [\varepsilon_t]^T 2Q[\varepsilon_t] + F_1 + F_2\|\Lambda_F\|\,\|y_t\|^2 - \lambda_1\|y_t\|^2 + \mathrm{tr}(\Upsilon_{W1}) + \mathrm{tr}(\Upsilon_{V1}) + \sum_{i=1}^{N}\mathrm{tr}(\Upsilon^i_{W2}) + \sum_{i=1}^{N}\mathrm{tr}(\Upsilon^i_{V2}),   (52)


where the algebraic Riccati equation in the first term of the right-hand side of (52) is described in (23) and in (24)–(26), and

\Upsilon_{W1} = [\tilde{W}_{1t}]^T[K_{W1}]^{-1}[\dot{W}_{1t}] + 2\sigma[\varepsilon_t]^T P\tilde{W}_{1t},
\Upsilon_{V1} = [\tilde{V}_{1t}]^T[K_{V1}]^{-1}[\dot{V}_{1t}] + 2\hat{y}_t[\varepsilon_t]^T PW_{10}D_{\sigma}\tilde{V}_{1t} + l_{\sigma}\hat{y}_t[\hat{y}_t]^T[\tilde{V}_{1t}]^T\Lambda_{\nu\sigma}[\tilde{V}_{1t}],
\Upsilon^i_{W2} = [\tilde{W}^i_{2t}]^T[K^i_{W2}]^{-1}[\dot{W}^i_{2t}] + 2\varphi^i u^i_t[\varepsilon_t]^T P\tilde{W}^i_{2t},
\Upsilon^i_{V2} = [\tilde{V}^i_{2t}]^T[K^i_{V2}]^{-1}[\dot{V}^i_{2t}] + 2\hat{y}_t[\varepsilon_t]^T PW^i_{20}\langle D^i_{\varphi}, u^i_t\rangle\tilde{V}^i_{2t} + l^i_{\varphi}[\bar{u}^i]^2\hat{y}_t[\hat{y}_t]^T[\tilde{V}^i_{2t}]^T\Lambda^i_{\nu\varphi}[\tilde{V}^i_{2t}].   (53)

Thereby, by equating (53) to zero, that is,

\Upsilon_{W1} = \Upsilon_{V1} = \Upsilon^i_{W2} = \Upsilon^i_{V2} = 0,   (54)

and by respectively solving for $\dot{W}_{1t}$, $\dot{V}_{1t}$, $\dot{W}^i_{2t}$, and $\dot{V}^i_{2t}$, the learning law given by (38) is obtained. Now, by choosing

\|\Lambda_F\| \le \frac{\lambda_1}{F_2}   (55)

and by solving the algebraic Riccati equation (23), one may get

\dot{L}_t \le -[\varepsilon_t]^T 2Q[\varepsilon_t] + F_1.   (56)

Thus, by integrating both sides of (56) on the time interval $[0, t]$, the following is obtained:

\int_0^t\left([\varepsilon_{\tau}]^T 2Q[\varepsilon_{\tau}]\right)d\tau \le L_0 - L_t + [F_1]t \le L_0 + [F_1]t,   (57)

and by dividing (57) by $t$, it is easy to verify that

\frac{1}{t}\int_0^t\left([\varepsilon_{\tau}]^T 2Q[\varepsilon_{\tau}]\right)d\tau \le \frac{L_0}{t} + F_1.   (58)

Finally, by calculating the upper limit as $t \to \infty$, the maximum value of identification error in average sense is the one described in (40). This means that the identification error is bounded between zero and (40), and therefore this means that $\varepsilon_t$ is stable in the sense of Lyapunov.
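
In a simulation, the averaged quantity on the left-hand side of (58) (and hence of (40)) can be evaluated directly from a sampled identification-error trajectory; a minimal sketch, assuming a uniform time grid of step dt and one error sample per row.

import numpy as np

def averaged_quadratic_error(eps_traj, Q, dt):
    """Running value of (1/t) * int_0^t eps^T (2Q) eps dtau, cf. (58)."""
    integrand = np.einsum('ki,ij,kj->k', eps_traj, 2.0 * Q, eps_traj)
    cumulative = np.concatenate(([0.0],
                                 np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
    t = dt * np.arange(len(eps_traj))
    return cumulative[1:] / t[1:]                # the value at t = 0 is skipped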

Theorem 14. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural network (27) be such that Assumptions 9 and 11 are satisfied. If the synaptic weights of (27) are adjusted with the following learning law:

\dot{W}_{3t} = -2K_{W3}P\epsilon_t[\alpha(\hat{z}_t)]^T,
\dot{W}_{4t} = -2K_{W4}P\epsilon_t[\xi_t]^T[\gamma(\hat{z}_t)]^T,   (59)

where

\epsilon_t = z_t - \hat{z}_t   (60)

denotes the identification error and $K_{W3}$ and $K_{W4}$ are known symmetric and positive-definite constant matrices, then it is possible to obtain the next maximum value of identification error in average sense as follows:

\limsup_{t \to \infty}\left(\frac{1}{t}\int_0^t [\epsilon_{\tau}]^T Q[\epsilon_{\tau}]\, d\tau\right) \le G_1.   (61)
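
Because the second network carries no inner-layer weights $V$, the learning law (59) reduces to two outer-product updates; a sketch of one explicit-Euler step, with alpha and gamma the activation maps of (27) and dt a discretization choice of this sketch.

import numpy as np

def euler_step_law_59(W3, W4, P, eps2, z_hat, xi_t, K_W3, K_W4, alpha, gamma, dt):
    """One Euler step of the learning law (59); eps2 = z_t - z_hat_t, eq. (60)."""
    dW3 = -2.0 * K_W3 @ P @ np.outer(eps2, alpha(z_hat))
    dW4 = -2.0 * K_W4 @ P @ np.outer(eps2, xi_t) @ gamma(z_hat).T
    return W3 + dt * dW3, W4 + dt * dW4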

Proof. Taking into account the residual error (30), the differential equation (11) can be expressed as

\dot{z}_t = W_{30}\,\alpha(z_t) + W_{40}\,\gamma(z_t)\,\xi_t + G_t.   (62)

Then, by substituting (27) and (62) into the derivative of (60) with respect to $t$, and by adding and subtracting the terms $[W_{30}\alpha(\hat{z}_t)]$ and $[W_{40}\gamma(\hat{z}_t)\xi_t]$, it is easy to confirm that

\dot{\epsilon}_t = \tilde{W}_{3t}\alpha(\hat{z}_t) + W_{30}\tilde{\alpha} + \tilde{W}_{4t}\gamma(\hat{z}_t)\xi_t + W_{40}\tilde{\gamma}\xi_t - G_t,   (63)

where $\tilde{W}_{3t} = W_{3t} - W_{30}$ and $\tilde{W}_{4t} = W_{4t} - W_{40}$. Now, let the Lyapunov (energetic) candidate function

M_t = E_t + [\epsilon_t]^T P[\epsilon_t] + \frac{1}{2}\mathrm{tr}\left([\tilde{W}_{3t}]^T[K_{W3}]^{-1}[\tilde{W}_{3t}]\right) + \frac{1}{2}\mathrm{tr}\left([\tilde{W}_{4t}]^T[K_{W4}]^{-1}[\tilde{W}_{4t}]\right)   (64)

be such that the inequalities (8) are fulfilled (see Assumption 3), and let

\dot{M}_t \le -\lambda_1\|z_t\|^2 + 2[\epsilon_t]^T P[\dot{\epsilon}_t] + \mathrm{tr}\left([\tilde{W}_{3t}]^T[K_{W3}]^{-1}[\dot{W}_{3t}]\right) + \mathrm{tr}\left([\tilde{W}_{4t}]^T[K_{W4}]^{-1}[\dot{W}_{4t}]\right)   (65)

be the derivative of (64) with respect to $t$. Then, by substituting (63) into the second term of the right-hand side of (65), and by adding and subtracting the term $[2[\epsilon_t]^T P\bar{A}\epsilon_t]$, one may get

2[\epsilon_t]^T P[\dot{\epsilon}_t] = 2[\epsilon_t]^T PW_{30}[\tilde{\alpha} - A\epsilon_t] + 2[\epsilon_t]^T PW_{40}\tilde{\gamma}\xi_t - 2[\epsilon_t]^T PG_t + 2[\epsilon_t]^T P\tilde{W}_{3t}\alpha(\hat{z}_t) + 2[\epsilon_t]^T P\tilde{W}_{4t}\gamma(\hat{z}_t)\xi_t + [\epsilon_t]^T\left[P\bar{A} + [\bar{A}]^T P\right][\epsilon_t].   (66)


Next, by analyzing the first three terms of the right-hand side of (66) with the inequality (46), the following is obtained.

(i) Using (46) and (28) in the first term of the right-hand side of (66), then

2[\epsilon_t]^T PW_{30}[\tilde{\alpha} - A\epsilon_t] \le [\epsilon_t]^T\left[P[W_{30}][\Lambda_{\tilde{\alpha}}]^{-1}[W_{30}]^T P\right][\epsilon_t] + [\epsilon_t]^T\bar{\Lambda}_{\alpha}[\epsilon_t].   (67)

(ii) Using (46) and (28) in the second term of the right-hand side of (66), then

2[\epsilon_t]^T PW_{40}\tilde{\gamma}\xi_t \le [\epsilon_t]^T\left[P[W_{40}][\Lambda_{\tilde{\gamma}}]^{-1}[W_{40}]^T P\right][\epsilon_t] + [\bar{\xi}]^2[\epsilon_t]^T\bar{\Lambda}_{\gamma}[\epsilon_t].   (68)

(iii) Using (46) and (31) in the third term of the right-hand side of (66), then

-2[\epsilon_t]^T PG_t \le [\epsilon_t]^T\left[P[\Lambda_G]^{-1}P\right][\epsilon_t] + G_1 + G_2[z_t]^T\Lambda_G[z_t].   (69)

So, by substituting (67)–(69) into the right-hand side of (66), and by adding and subtracting the term $[[\epsilon_t]^T Q[\epsilon_t]]$, inequality (65) can be expressed as

\dot{M}_t \le [\epsilon_t]^T\left[P\bar{A} + [\bar{A}]^T P + PRP + \bar{Q}\right][\epsilon_t] - [\epsilon_t]^T Q[\epsilon_t] + G_1 + G_2\|\Lambda_G\|\,\|z_t\|^2 - \lambda_1\|z_t\|^2 + \mathrm{tr}(\Psi_{W3}) + \mathrm{tr}(\Psi_{W4}),   (70)

where the algebraic Riccati equation in the first term of the right-hand side of (70) is described in (23) and in (34)–(36), and

\Psi_{W3} = [\tilde{W}_{3t}]^T[K_{W3}]^{-1}[\dot{W}_{3t}] + 2\alpha(\hat{z}_t)[\epsilon_t]^T P\tilde{W}_{3t},   (71)

\Psi_{W4} = [\tilde{W}_{4t}]^T[K_{W4}]^{-1}[\dot{W}_{4t}] + 2\gamma(\hat{z}_t)\xi_t[\epsilon_t]^T P\tilde{W}_{4t}.   (72)

Thereby, by equating (71) and (72) to zero ($\Psi_{W3} = \Psi_{W4} = 0$), and by respectively solving for $\dot{W}_{3t}$ and $\dot{W}_{4t}$, the learning law given by (59) is obtained. Now, by choosing

\|\Lambda_G\| \le \frac{\lambda_1}{G_2}   (73)

and by solving the algebraic Riccati equation (23), one may get

\dot{M}_t \le -[\epsilon_t]^T Q[\epsilon_t] + G_1.   (74)

Thus, by integrating both sides of (74) on the time interval $[0, t]$, the following is obtained:

\int_0^t\left([\epsilon_{\tau}]^T Q[\epsilon_{\tau}]\right)d\tau \le M_0 - M_t + [G_1]t \le M_0 + [G_1]t,   (75)

and by dividing (75) by $t$, it is easy to verify that

\frac{1}{t}\int_0^t\left([\epsilon_{\tau}]^T Q[\epsilon_{\tau}]\right)d\tau \le \frac{M_0}{t} + G_1.   (76)

Finally, by calculating the upper limit as $t \to \infty$, the maximum value of identification error in average sense is the one described in (61).

Theorem 15. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural networks (14) and (27) be such that Assumptions 6 and 8 and Assumptions 9 and 11 are satisfied. If the synaptic weights of (14) and (27) are respectively adjusted with the learning laws (38) and (59), then it is possible to obtain the following maximum value of filtering error in average sense:

\limsup_{t \to \infty}\left(\frac{1}{t}\int_0^t [e_{\tau}]^T Q[e_{\tau}]\, d\tau\right) \le F_1 - G_1 \quad \text{if } F_1 > G_1,   (77)

\limsup_{t \to \infty}\left(\frac{1}{t}\int_0^t [e_{\tau}]^T Q[e_{\tau}]\, d\tau\right) = 0 \quad \text{if } F_1 \le G_1,   (78)

where $e_t$ is given by (37).

Proof. Consider the following Lyapunov (energetic) candidate function:

\Phi_t = [e_t]^T P[e_t],   (79)

where $P$ is the solution of the algebraic Riccati equation (23) with $\bar{A}$, $R$, and $\bar{Q}$ defined by (24)–(26) or (34)–(36). Then, by calculating the derivative of (79) with respect to $t$, and by considering equations (37), (13), and (12), it can be verified that

\dot{\Phi}_t = 2[e_t]^T P[\dot{\varepsilon}_t - \dot{\epsilon}_t],   (80)

and taking into account (42), (63), and the proofs of Theorems 13 and 14, one may get

\dot{\Phi}_t \le -[e_t]^T 2Q[e_t] + F_1 - \left[-[e_t]^T Q[e_t] + G_1\right] \le -[e_t]^T[2Q - Q][e_t] + F_1 - G_1 \le -[e_t]^T Q[e_t] + F_1 - G_1.   (81)


Thus, by integrating both sides of (81) on the time interval $[0, t]$, the following is obtained:

\int_0^t\left([e_{\tau}]^T Q[e_{\tau}]\right)d\tau \le \Phi_0 - \Phi_t + [F_1 - G_1]t \le \Phi_0 + [F_1 - G_1]t,   (82)

and by dividing (82) by $t$, it is easy to confirm that

\frac{1}{t}\int_0^t\left([e_{\tau}]^T Q[e_{\tau}]\right)d\tau \le \frac{\Phi_0}{t} + F_1 - G_1.   (83)

By calculating the upper limit as $t \to \infty$, equation (83) can be expressed as

\limsup_{t \to \infty}\left(\frac{1}{t}\int_0^t\left([e_{\tau}]^T Q[e_{\tau}]\right)d\tau\right) \le F_1 - G_1,   (84)

and finally, it is clear that

(i) if $F_1 > G_1$, the maximum value of filtering error in average sense is the one described in (77);
(ii) if $F_1 \le G_1$, the maximum value of filtering error in average sense is the one described in (78), given that the left-hand side of the inequality (84) cannot be negative.

Hence, the theorem is proved.

6. Illustrating Example

Example 16. Consider a 2-player nonlinear differential game given by

\dot{x}_t = f(x_t) + \sum_{i=1}^{2} g^i(x_t)\, u^i_t + H\xi_t,   (85)

subject to the change of variables (10) and (11) ($i = 2$), where the control actions of each player are

u^1_t = \begin{bmatrix} 0.5\,\mathrm{sawtooth}(0.5t) \\ \mathcal{H}(t) \end{bmatrix}, \quad u^2_t = \begin{bmatrix} \mathcal{H}(t) \\ 0.5\,\mathrm{sawtooth}(0.5t) \end{bmatrix}.   (86)

The undesired deterministic perturbations are

\xi_t = \begin{bmatrix} \sin(10t) \\ \sin(10t) \end{bmatrix},   (87)

and $\mathcal{H}(t)$ denotes the Heaviside step function or unit step signal. Then, under Assumptions 1–4, 6, 8, 9, and 11, it is possible to obtain a filtered feedback (perfect state) information pattern $\eta^i_t = \hat{x}_t$.
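
The exciting signals (86)-(87) can be generated with standard routines; the sketch below assumes the sawtooth is the usual unit-amplitude, $2\pi$-periodic one, as provided by scipy.signal.sawtooth, and uses numpy.heaviside for the unit step.

import numpy as np
from scipy.signal import sawtooth

def u1(t):
    """Control action of player 1, eq. (86)."""
    return np.array([0.5 * sawtooth(0.5 * t), np.heaviside(t, 1.0)])

def u2(t):
    """Control action of player 2, eq. (86)."""
    return np.array([np.heaviside(t, 1.0), 0.5 * sawtooth(0.5 * t)])

def xi(t):
    """Deterministic perturbation, eq. (87)."""
    return np.array([np.sin(10.0 * t), np.sin(10.0 * t)])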

Remark 17. As told before, the functions $f(\cdot)$ and $g^i(\cdot)$, as well as the matrix $H$, are unknown; but in order to make an example of (85) and its simulation, the values of these "unknown" parameters will be shown in the Appendix of this paper.

Thereby, consider now a first differential neural network given by

\dot{\hat{y}}_t = W_{1t}\,\sigma(V_{1t}\hat{y}_t) + \sum_{i=1}^{2} W^i_{2t}\,\varphi^i(V^i_{2t}\hat{y}_t)\, u^i_t,   (88)

where

W_{10} = \begin{bmatrix} 0.5 & 0.1 & 0 & 2 \\ 0.2 & 0.5 & 0.6 & 0 \end{bmatrix}, \quad V_{10} = \begin{bmatrix} 1 & 0.3 & 0.5 & 0.2 \\ 0.5 & 1.3 & 1 & 0.6 \end{bmatrix}^T,
W^1_{20} = \begin{bmatrix} 0.5 & 0.1 \\ 0 & 0.2 \end{bmatrix}, \quad W^2_{20} = \begin{bmatrix} 1.3 & 0.8 \\ 0.8 & 1.3 \end{bmatrix},
V^1_{20} = \begin{bmatrix} 1 & 0.1 \\ 0.1 & 2 \end{bmatrix}, \quad V^2_{20} = \begin{bmatrix} 0.5 & 0.7 \\ 0.9 & 0.1 \end{bmatrix},   (89)

and the activation functions are

\sigma_p(w) = \left.\frac{4.5}{1 + e^{-0.5w}} - 3\right|_{w = V_{1t}\hat{y}_t}, \quad p = 1, 2, 3, 4,

\varphi^1_{pq}(w^1) = \left.\frac{4.5}{1 + e^{-0.5w^1}} - 3\right|_{w^1 = V^1_{2t}\hat{y}_t}, \quad p = q = 1, 2,

\varphi^2_{pq}(w^2) = \left.\frac{4.5}{1 + e^{-0.5w^2}} - 3\right|_{w^2 = V^2_{2t}\hat{y}_t}, \quad p = q = 1, 2.   (90)

By proposing the values described in Assumption 8 as

\bar{\Lambda}_{\sigma} = \bar{\Lambda}^1_{\varphi} = \bar{\Lambda}^2_{\varphi} = \begin{bmatrix} 1.3 & 0 \\ 0 & 1.3 \end{bmatrix}, \quad \Lambda^1_{\nu\varphi} = \Lambda^2_{\nu\varphi} = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.8 \end{bmatrix},

\Lambda^1_{\varphi} = \Lambda^2_{\varphi} = \Lambda^1_{\varphi'} = \Lambda^2_{\varphi'} = \begin{bmatrix} 1.3 & 0 \\ 0 & 1.3 \end{bmatrix}^{-1}, \quad \Lambda_F = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}^{-1},

\Lambda_{\sigma} = \Lambda_{\sigma'} = \mathrm{diag}(1.3, 1.3, 1.3, 1.3)^{-1}, \quad \Lambda_{\nu\sigma} = \mathrm{diag}(0.8, 0.8, 0.8, 0.8),

A = \begin{bmatrix} -1.3963 & -2.2632 \\ 0.0473 & -6.1606 \\ 0.426 & -7.4451 \\ -6.1533 & 0.8738 \end{bmatrix}, \quad Q = \begin{bmatrix} 1.0 & 0 \\ 0 & 1.0 \end{bmatrix},

l_{\sigma} = l^1_{\varphi} = l^2_{\varphi} = 1.3, \quad \bar{u}^1 = \bar{u}^2 = 5,   (91)


Figure 1: Comparison between $y_t$ and $\hat{y}_t$ for the 2-player nonlinear differential game (85).

and by choosing $K_{W1} = 100$ and $K_{V1} = K^1_{W2} = K^2_{W2} = K^1_{V2} = K^2_{V2} = 1$, the solution of the algebraic Riccati equation (23) results in

P = \begin{bmatrix} 2.0524 & -0.6374 \\ -0.6374 & 3.1842 \end{bmatrix}.   (92)

Then, by applying the learning law (38) described in Theorem 13, the value of the identification error in average sense (40) on a time period of 25 seconds is

\limsup_{t \to 25}\left(\frac{1}{t}\int_0^t [\varepsilon_{\tau}]^T 2Q[\varepsilon_{\tau}]\, d\tau\right) = 0.03145.   (93)

Remark 18. Even though 0.03145 is the value of $F_1$ on the time interval $[0, 25]$, it is not the global minimum value that $F_1$ can take, since $t$ does not tend to infinity. So, in order to find this global minimum value, it would be required to simulate an arbitrarily large time.

On the other hand, let a second differential neural network given by

\dot{\hat{z}}_t = W_{3t}\,\alpha(\hat{z}_t) + W_{4t}\,\gamma(\hat{z}_t)\,\xi_t   (94)

be such that the equalities (32) and (33) are fulfilled, that is,

W_{30} = \begin{bmatrix} 0.5 & 0.1 & 0 & 2 \\ 0.2 & 0.5 & 0.6 & 0 \end{bmatrix}, \quad W_{40} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},

\bar{\Lambda}_{\alpha} = \begin{bmatrix} 17.3 & 0 \\ 0 & 17.3 \end{bmatrix}, \quad \bar{\Lambda}_{\gamma} = \begin{bmatrix} 2.0 & 0 \\ 0 & 2.0 \end{bmatrix},

\Lambda_{\tilde{\alpha}} = \mathrm{diag}(2.6, 2.6, 2.6, 2.6)^{-1}, \quad \Lambda_{\tilde{\gamma}} = \begin{bmatrix} 6.734 & 5.46 \\ 5.46 & 6.162 \end{bmatrix}^{-1},

\Lambda_G = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}^{-1}, \quad \bar{\xi} = 5,   (95)

and the activation functions are

\alpha_p(w) = \left.\frac{4.5}{1 + e^{-0.5w}} - 3\right|_{w = \hat{z}_t}, \quad p = 1, 2, 3, 4,

\gamma_{pq}(w) = \left.\frac{4.5}{1 + e^{-0.5w}} - 3\right|_{w = \hat{z}_t}, \quad p = q = 1, 2.   (96)

By choosing $K_{W3} = 100000$ and $K_{W4} = 1$, the solution of the algebraic Riccati equation (23) results in the matrix (92), and by applying the learning law (59) described in Theorem 14, the value of the identification error in average sense (61) on a time period of 25 seconds is

\limsup_{t \to 25}\left(\frac{1}{t}\int_0^t [\epsilon_{\tau}]^T Q[\epsilon_{\tau}]\, d\tau\right) = 0.00006645.   (97)

Remark 19. As in Remark 18, in order to find the global minimum value of $G_1$, it would be required to simulate an arbitrarily large time.

Finally, taking into account the identification errors (93) and (97), the value of the filtering error in average sense (77)-(78) on a time period of 25 seconds is

\limsup_{t \to 25}\left(\frac{1}{t}\int_0^t [e_{\tau}]^T Q[e_{\tau}]\, d\tau\right) = 0.03139.   (98)

The simulation of this example was made using the MATLAB and Simulink platforms, and its results are shown in Figures 1–4.

As is seen in Figure 1, the differential neural network (88) can perform the identification of the continuous-time nonlinear dynamic game (85), where $y_{1t}$ and $y_{2t}$ (or, similarly, $x_{1t}$ and $x_{2t}$) denote the state variables with undesired perturbations and $\hat{y}_{1t}$ and $\hat{y}_{2t}$ indicate their state estimates.

On the other hand, according to Figure 2, the differential neural network (94) identifies the dynamics of the additive deterministic noises or, in other words, the dynamics of $\dot{z}_t = H\xi_t$. Thus, $z_{1t}$ and $z_{2t}$ represent the state variables of the above differential equation, and $\hat{z}_{1t}$ and $\hat{z}_{2t}$ are their state estimates.

In this way, Figure 3 shows the performance of the filtering process (13) in the nonlinear differential game (85); that is to say, it shows the comparison between the expected or uncorrupted state variables $x_{1t}$ and $x_{2t}$ and the filtered state estimates $\hat{x}_{1t}$ and $\hat{x}_{2t}$.


Figure 2: Comparison between $z_t$ and $\hat{z}_t$ for the 2-player nonlinear differential game (85).

Figure 3: Comparison between $x_t$ and $\hat{x}_t$ for the 2-player nonlinear differential game (85).

Figure 4: Comparison between $x_t$ and $\hat{x}_t$ for the 2-player nonlinear differential game (85).

Finally, Figure 4 also exhibits the described filtering process, but comparing the real state variables $x_{1t}$ and $x_{2t}$ with the filtered state estimates $\hat{x}_{1t}$ and $\hat{x}_{2t}$.

7. Results Analysis and Discussion

Although the differential neural networks (14) and (27) can perform the identification of the differential equations (10) and (11), it is important to remember that their performance depends on the number of neurons used and on the proposition of all the constant values (or free parameters) that were described in Assumptions 8 and 11. Therefore, the values of $F_1$ and $G_1$ (at a fixed time) can change according to this fact.

In other words, due to the fact that the differential neural networks are an approximation of a dynamic system (or game), there always will be a residual error that will depend on this approximation.

Thus, in the particular case shown in Example 16, the design of (88) was made using only eight neurons: four for the layer of perceptrons without any relation to the players and two for the layer of perceptrons of each player. Similarly, for the design of (94) (which is subject to the equalities (32)-(33)), six neurons were used.

On the other hand, it is important to mention that (14) and (27) will operate properly only if Assumptions 1–4, 6, 8, 9, and 11 are satisfied; that is to say, there is no guarantee that these differential neural networks perform good identification and filtering processes if, for example, the class of nonlinear differential games (1)–(4) has a stochastic nature.

Finally although this paper presents a new approach foridentification and filtering of nonlinear dynamic games itshould be emphasized that there exist other techniques thatmight solve the problem treated here for example the citedpublications in Section 1


8. Conclusions and Future Work

According to the results of this paper, the differential neural networks (14) and (27) solve the problem of identifying and filtering the class of nonlinear differential games (1)–(4).

However, there is no guarantee that (14) and (27) identify and filter the states of (1)–(4) if Assumptions 1–4, 6, 8, 9, and 11 are not met.

Thereby, the proposed learning laws (38) and (59) yield the maximum values of the identification error in average sense, (40) and (61), and, consequently, the maximum value of the filtering error, (77)-(78).

Nevertheless, these errors depend on both the number of neurons used in the differential neural networks (14) and (27) and the proposed values applied in their design conditions.

On the other hand, according to the simulation results of the illustrating example, the effectiveness of (14) and (27) is shown and the applicability of Theorems 13, 14, and 15 is verified.

Finally, speaking of future work in this research field, one can analyze and discuss the use of (14) and (27) for the obtaining of equilibrium solutions in the class of nonlinear differential games (1)–(4), that is to say, the use of differential neural networks for controlling nonlinear differential games.

Appendix

The values of the "unknown" nonlinear functions $f(\cdot)$, $g^1(\cdot)$, and $g^2(\cdot)$ and the "unknown" constant matrix $H$ used in (85) of Example 16 are shown as follows:

$$f(x_t)=\begin{bmatrix} x_{2,t} \\ \sin(x_{1,t})\end{bmatrix},$$

$$g^1(x_t)=\begin{bmatrix} -x_{1,t}\cos(x_{1,t}) & 0 \\ 0 & x_{2,t}\sin(x_{1,t})-x_{1,t}\cos(x_{2,t})\end{bmatrix},$$

$$g^2(x_t)=\begin{bmatrix} \sin(x_{1,t}) & 0 \\ 0 & -\left[x_{1,t}+x_{2,t}\right]\end{bmatrix},$$

$$H=\begin{bmatrix} 0.5 & 0 \\ 0 & 0.5\end{bmatrix}. \quad (\mathrm{A.1})$$
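As a complement to (A.1), the following minimal Python sketch shows how these "unknown" quantities could be coded when reproducing the simulation; the function names are illustrative and do not come from the authors' MATLAB/Simulink implementation.

```python
import numpy as np

def f(x):
    # f(x_t) = [x_2; sin(x_1)], cf. (A.1)
    return np.array([x[1], np.sin(x[0])])

def g1(x):
    # g^1(x_t), input matrix of player 1
    return np.array([[-x[0] * np.cos(x[0]), 0.0],
                     [0.0, x[1] * np.sin(x[0]) - x[0] * np.cos(x[1])]])

def g2(x):
    # g^2(x_t), input matrix of player 2
    return np.array([[np.sin(x[0]), 0.0],
                     [0.0, -(x[0] + x[1])]])

H = 0.5 * np.eye(2)  # constant perturbation matrix H
```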

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] I. Alvarez, A. Poznyak, and A. Malo, "Urban traffic control problem: a game theory approach," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 2168–2172, Cancun, Mexico, December 2008.

[2] T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, Academic Press, Great Britain, UK, 2nd edition, 1995.

[3] J. M. C. Clark and R. B. Vinter, "A differential dynamic games approach to flow control," in Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1228–1231, Maui, Hawaii, USA, December 2003.

[4] S. H. Tamaddoni, S. Taheri, and M. Ahmadian, "Optimal VSC design based on Nash strategy for differential 2-player games," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '09), pp. 2415–2420, San Antonio, Tex, USA, October 2009.

[5] F. Amato, M. Mattei, and A. Pironti, "Robust strategies for Nash linear quadratic games under uncertain dynamics," in Proceedings of the 37th IEEE Conference on Decision and Control (CDC '98), pp. 1869–1870, Tampa, Fla, USA, December 1998.

[6] X. Nian, S. Yang, and N. Chen, "Guaranteed cost strategies of uncertain LQ closed-loop differential games with multiple players," in Proceedings of the American Control Conference, pp. 1724–1729, Minneapolis, Minn, USA, June 2006.

[7] M. Jimenez-Lizarraga and A. Poznyak, "Equilibrium in LQ differential games with multiple scenarios," in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 4081–4086, New Orleans, La, USA, December 2007.

[8] M. Jimenez-Lizarraga, A. Poznyak, and M. A. Alcorta, "Leader-follower strategies for a multi-plant differential game," in Proceedings of the American Control Conference (ACC '08), pp. 3839–3844, Seattle, Wash, USA, June 2008.

[9] A. S. Poznyak and C. J. Gallegos, "Multimodel prey-predator LQ differential games," in Proceedings of the American Control Conference, pp. 5369–5374, Denver, Colo, USA, June 2003.

[10] Y. Li and L. Guo, "Convergence of adaptive linear stochastic differential games: nonzero-sum case," in Proceedings of the 10th World Congress on Intelligent Control and Automation (WCICA '12), pp. 3543–3548, Beijing, China, July 2012.

[11] D. Vrabie and F. Lewis, "Adaptive dynamic programming algorithm for finding online the equilibrium solution of the two-player zero-sum differential game," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '10), pp. 1–8, Barcelona, Spain, July 2010.

[12] B.-S. Chen, C.-S. Tseng, and H.-J. Uang, "Fuzzy differential games for nonlinear stochastic systems: suboptimal approach," IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 222–233, 2002.

[13] B. Widrow, J. R. Glover Jr., J. M. McCool, et al., "Adaptive noise cancelling: principles and applications," Proceedings of the IEEE, vol. 63, no. 12, pp. 1692–1716, 1975.

[14] C. E. de Souza, U. Shaked, and M. Fu, "Robust $H_\infty$ filtering for continuous time-varying uncertain systems with deterministic input signals," IEEE Transactions on Signal Processing, vol. 43, no. 3, pp. 709–719, 1995.

[15] A. Osorio-Cordero, A. S. Poznyak, and M. Taksar, "Robust deterministic filtering for linear uncertain time-varying systems," in Proceedings of the American Control Conference, pp. 2526–2530, Albuquerque, NM, USA, June 1997.

[16] M. Hou and R. J. Patton, "Optimal filtering for systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 43, no. 3, pp. 445–449, 1998.

[17] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.

[18] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou, and P. A. Ioannou, "High-order neural network structures for identification of dynamical systems," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 422–431, 1995.

[19] F. Abdollahi, H. A. Talebi, and R. V. Patel, "State estimation for flexible-joint manipulators using stable neural networks," in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 25–29, Kobe, Japan, 2003.

[20] A. Alessandri, C. Cervellera, and M. Sanguineti, "Design of asymptotic estimators: an approach based on neural networks and nonlinear programming," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 86–96, 2007.

[21] Y. H. Kim, F. L. Lewis, and C. T. Abdallah, "Nonlinear observer design using dynamic recurrent neural networks," in Proceedings of the 35th IEEE Conference on Decision and Control, pp. 949–954, Kobe, Japan, December 1996.

[22] F. L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 703–715, 1995.

[23] G. A. Rovithakis and M. A. Christodoulou, "Adaptive control of unknown plants using dynamical neural networks," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 3, pp. 400–412, 1994.

[24] A. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking, World Scientific, Singapore, 2001.

[25] E. Garcia and D. A. Murano, "State estimation for a class of nonlinear differential games using differential neural networks," in Proceedings of the American Control Conference (ACC '11), pp. 2486–2491, San Francisco, Calif, USA, July 2011.

[26] D. A. Murano, Nash equilibrium for uncertain nonlinear differential games using differential neural networks: deterministic and stochastic cases [Ph.D. thesis], CINVESTAV-IPN, Distrito Federal, Mexico, 2004.

[27] D. Jiang and J. Wang, "A recurrent neural network for online design of robust optimal filters," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 6, pp. 921–926, 2000.

[28] A. G. Parlos, S. K. Menon, and A. F. Atiya, "An algorithmic approach to adaptive state filtering using recurrent neural networks," IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1411–1432, 2001.

[29] S. Haykin, Neural Networks: A Comprehensive Foundation, Pearson, Chennai, India, 1999.



However, an identification process could be deficient if there exist undesired perturbations in the game dynamics, that is to say, if the obtained information structure of the game is corrupted by (deterministic) noises. Thereby, some deterministic filtering works have been published in order to solve this issue. For example, [13] describes the concept of adaptive noise canceling for the estimation of signals corrupted by additive perturbations, [14] introduces a finite horizon robust $H_\infty$ filtering method that provides a guaranteed bound for the estimation error in the presence of both parameter uncertainty and a known input signal, [15] presents the filtering of the states of a time-varying uncertain linear system with deterministic disturbances of limited power, and [16] derives an optimal filtering formula for linear time-varying discrete systems with unknown inputs.

Therefore, according to these preliminary works, the motivation of this paper is to solve the problem of identifying and filtering a class of nonlinear differential games with additive deterministic perturbations, where the mathematical model of this class and the effect (or power) of the noises are completely unknown.

1.2. Main Contribution. Since the introduction of continuous-time recurrent neural networks (see [17]), the self-named differential neural networks have proved to be an excellent tool in the identification, state estimation, and control of several systems and of the appointed continuous-time dynamic games.

For example, in [18] differential neural networks are used for the identification of dynamical systems, references [19–21] design differential neural network observers for adaptive state estimation, the works of [22, 23] propose neural network controllers for several applications, and [24] shows a compendium of differential neural networks for identification, state estimation, and control of nonlinear systems. Also, [25] treats the state estimation problem for affine nonlinear differential games using a differential neural network observer, and in [26] a nearly optimal Nash equilibrium for classes of deterministic and stochastic nonlinear differential games is obtained using differential neural networks.

Moreover, speaking of recurrent neural networks and deterministic filtering, the work of [27] develops a recurrent neural network for robust optimal filter design in an $H_\infty$ approach, and in [28] algorithms are presented in order to obtain adaptive filtering in nonlinear dynamic systems approximated by neural networks.

Nevertheless, the idea of using differential neural networks for identification and filtering of a class of continuous-time nonlinear dynamic games is a new approach that, as far as the authors know, has not been treated before.

Hence, the main contribution of this paper is the proof that it is possible to identify and to filter the states of a certain class of nonlinear differential games through the designing of two differential neural networks. Moreover, this filtered identification process generates a feedback (perfect state) information pattern for the mentioned class of games.

More specifically, although the structure of this class is known, its mathematical model is not; that is to say, the only available information of the nonlinear differential game is the control actions of each player. So, by using only this available information, a first differential neural network will be designed for the identification of the nonlinear dynamic game with the undesired perturbations, and, similarly, the second differential neural network will identify the effect of these additive noises in the dynamics of the nonlinear differential game, which means that the perturbations are known but their power is not.

According to the above, one of these two differential neural networks will do the complete identification process of the class of nonlinear differential games and, noticeably, the filtering (or the canceling) of the undesired perturbations will be held by subtracting the state estimates of the two differential neural networks.

Finally, it is important to emphasize that these two differential neural networks have the structure of multilayer perceptrons (see [23, 24, 29]), and, by means of Lyapunov's second method of stability, the learning laws for their synaptic weights are derived.

2. Class of Nonlinear Differential Games

Consider the following continuous-time nonlinear dynamic game given by

$$\dot{x}_t = f(x_t) + \sum_{i=1}^{N} g^i(x_t)\,u^i_t + H\xi_t, \quad (1)$$

where $t\in[0,\infty)$, the index $i=1,2,3,\ldots,N$ denotes the number of players, $x_t\in\mathbb{R}^n$ is the state vector of the game, $u^i_t\in U^i_{\mathrm{adm}}\subseteq\mathbb{R}^{s_i}$ denotes the admissible control action vector of each player, the mappings $f:\mathbb{R}^n\to\mathbb{R}^n$ and $g^i:\mathbb{R}^n\to U^i_{\mathrm{adm}}$ are unknown nonlinear functions, $\xi_t\in\mathbb{R}^n$ denotes a known deterministic perturbation vector, and $H$ is an unknown constant matrix of adequate dimensions.

Similarly, consider now the following set of cost functions (or performance indexes) associated with each player and given by

$$J^i_t = \int_{0}^{\infty}\theta^i\!\left(x_t, u^i_t\right)dt, \quad (2)$$

where $\theta^i:\mathbb{R}^n\times U^i_{\mathrm{adm}}\to\mathbb{R}$ is well defined for the $i$ player. Moreover, the information structure of each player (denoted by $\eta^i_t$) has a standard feedback (perfect state) pattern, that is to say,

$$\eta^i_t = x_t, \quad (3)$$

and a permissible control strategy (or control policy) of the $i$ player is defined by the set of functions $\rho^i(x_t)$ satisfying

$$\rho^i:\mathbb{R}^n\to U^i_{\mathrm{adm}}. \quad (4)$$

Nevertheless, the class of nonlinear differential games (1)–(4) is not completely described if the following assumptions are not fulfilled.


Assumption 1. The admissible control actions $u^i_t$ are measurable and bounded for all time $t\in[0,\infty)$; that is,

$$\left\|u^i_t\right\|\le \bar{u}^i<\infty, \quad (5)$$

where $\bar{u}^i\in\mathbb{R}$ are known constants.

Assumption 2. In order to guarantee the global existence and uniqueness of the solution of (1), the unknown nonlinear functions $f(\cdot)$ and $g^i(\cdot)$ satisfy the Lipschitz condition; that is, there exist constants $0<\psi,\psi^i\in\mathbb{R}$ such that the equations

$$\left\|f(\chi_1)-f(\chi_2)\right\|\le\psi\left\|\chi_1-\chi_2\right\|,\qquad \left\|g^i(\chi_1)-g^i(\chi_2)\right\|\le\psi^i\left\|\chi_1-\chi_2\right\| \quad (6)$$

are fulfilled for all $\chi_1,\chi_2\in X\subset\mathbb{R}^n$. Moreover, $f(\cdot)$ and $g^i(\cdot)$ satisfy the following linear growth condition:

$$\left\|f(\chi)\right\|\le f_1+f_2\left\|\chi\right\|,\qquad \left\|g^i(\chi)\right\|\le g^i_1+g^i_2\left\|\chi\right\|, \quad (7)$$

where $f_1, f_2, g^i_1, g^i_2\in\mathbb{R}$ are known constants.

Assumption 3. Under the permissible control strategies $\rho^i(x_t)$, the (feedback) class of nonlinear differential games given by (1)–(4) is quadratically stable; that is, there exists a (maybe unknown) Lyapunov function $0\le E_t:\mathbb{R}^n\to\mathbb{R}$ such that the inequalities

$$\left[\frac{\partial E_t}{\partial x_t}\right]\left[f(x_t)+\sum_{i=1}^{N}g^i(x_t)\,u^i_t+H\xi_t\right]\le-\lambda_1\left\|x_t\right\|^2,\qquad \left\|\frac{\partial E_t}{\partial x_t}\right\|\le\lambda_2\left\|x_t\right\| \quad (8)$$

are satisfied for any known constants $0<\lambda_1,\lambda_2\in\mathbb{R}$.

Assumption 4. The deterministic perturbation $\xi_t$ is measurable and bounded for all time $t\in[0,\infty)$; that is,

$$\left\|\xi_t\right\|\le\bar{\xi}<\infty, \quad (9)$$

where $\bar{\xi}\in\mathbb{R}$ is a known constant.

3. Problem Statement

Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Then, if one makes the following change of variables,

$$\dot{y}_t = f(x_t)+\sum_{i=1}^{N}g^i(x_t)\,u^i_t+H\xi_t, \quad (10)$$

$$\dot{z}_t = H\xi_t, \quad (11)$$

it is clear that the expected or uncorrupted vector state of the class of nonlinear differential games (1)–(4) can be defined as

$$x_t = y_t - z_t. \quad (12)$$

Thereby, and in view of the fact that $f(\cdot)$, $g^i(\cdot)$, and $H$ are unknown, the tackled problem in this paper is to obtain a feedback (perfect state) information pattern $\eta^i_t=\hat{x}_t$, given that $\hat{x}_t\in\mathbb{R}^n$ satisfies the following filtering (or noise canceling) equation,

$$\hat{x}_t = \hat{y}_t - \hat{z}_t, \quad (13)$$

and where $\hat{y}_t,\hat{z}_t\in\mathbb{R}^n$ are the state estimates of differential equations (10) and (11), respectively.
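For clarity, a minimal sketch of how (12) and (13) relate sampled trajectories is given below; the arrays are hypothetical placeholders for the estimates produced by the two identifiers designed in the next section.

```python
import numpy as np

# hypothetical sampled trajectories (T samples, n = 2 states)
y_hat = np.zeros((1000, 2))   # estimate of the perturbed state y_t (first identifier)
z_hat = np.zeros((1000, 2))   # estimate of the perturbation state z_t (second identifier)

# filtering (noise canceling) equation (13): the filtered state estimate
x_hat = y_hat - z_hat
```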

4. Differential Neural Networks Design

In order to solve the problem described above, consider a first differential neural network given by the following equation:

$$\dot{\hat{y}}_t = W_{1,t}\,\sigma(V_{1,t}\hat{y}_t) + \sum_{i=1}^{N} W^i_{2,t}\,\varphi^i(V^i_{2,t}\hat{y}_t)\,u^i_t, \quad (14)$$

where $t\in[0,\infty)$, $i=1,2,3,\ldots,N$ denotes the number of players, $\hat{y}_t\in\mathbb{R}^n$ is the vector state of the neural network, the matrices $W_{1,t}\in\mathbb{R}^{n\times k}$, $V_{1,t}\in\mathbb{R}^{k\times n}$, $W^i_{2,t}\in\mathbb{R}^{n\times r_i}$, and $V^i_{2,t}\in\mathbb{R}^{r_i\times n}$ are synaptic weights of the neural network, and $\sigma:\mathbb{R}^k\to\mathbb{R}^k$ and $\varphi^i:\mathbb{R}^{r_i}\to\mathbb{R}^{r_i\times s_i}$ are activation functions of the neural network.

Remark 5. For simplicity (and from now on), $\sigma=\sigma(V_{1,t}\hat{y}_t)$ and $\varphi^i=\varphi^i(V^i_{2,t}\hat{y}_t)$.

According to [29], the differential neural network (14) is classified as a multilayer perceptron, and its structure was initially taken from [23, 24]. Also, this differential neural network only uses sigmoid activation functions; that is, $\sigma$ and $\varphi^i$ have a diagonal structure with elements

$$\sigma_p(w)=\left.\frac{a_p}{1+e^{-b_p w}}-c_p\right|_{w=V_{1,t}\hat{y}_t},\quad p=1,2,\ldots,k, \quad (15)$$

$$\varphi^i_{qq}(w^i)=\left.\frac{a_q}{1+e^{-b_q w^i}}-c_q\right|_{w^i=V^i_{2,t}\hat{y}_t},\quad q=1,2,\ldots,\min(r_i,s_i), \quad (16)$$

where $0<a_p,b_p,c_p\in\mathbb{R}$ and $0<a_q,b_q,c_q\in\mathbb{R}$ are known constants that manipulate the geometry of the sigmoid function.
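To make the structure of (14)–(16) concrete, here is a hedged Python sketch that evaluates the parametric sigmoid of (15)-(16) and the right-hand side of (14); the dimensions and parameter values (a = 4.5, b = 0.5, c = 3) are illustrative and assume $r_i=s_i$, so that $\varphi^i$ acts as a diagonal matrix.

```python
import numpy as np

def sigmoid(w, a=4.5, b=0.5, c=3.0):
    # elementwise sigmoid a/(1 + exp(-b*w)) - c, as in (15)-(16)
    return a / (1.0 + np.exp(-b * w)) - c

def dnn1_rhs(y_hat, u_list, W1, V1, W2_list, V2_list):
    # right-hand side of (14): W1*sigma(V1*y_hat) + sum_i W2_i*phi^i(V2_i*y_hat)*u_i
    dy = W1 @ sigmoid(V1 @ y_hat)
    for u_i, W2_i, V2_i in zip(u_list, W2_list, V2_list):
        phi_i = np.diag(sigmoid(V2_i @ y_hat))   # diagonal structure of phi^i
        dy += W2_i @ (phi_i @ u_i)
    return dy
```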

Thus, in view of the fact that $\sigma$ and $\varphi^i$ are sigmoid activation functions, they are bounded and they satisfy the following equations:

$$\left[\tilde{\sigma}-A\left[y_t-\hat{y}_t\right]\right]^T\Lambda_\sigma\left[\tilde{\sigma}-A\left[y_t-\hat{y}_t\right]\right]\le\left[y_t-\hat{y}_t\right]^T\bar{\Lambda}_\sigma\left[y_t-\hat{y}_t\right],$$

$$\left[\tilde{\varphi}^i u^i_t\right]^T\Lambda^i_\varphi\left[\tilde{\varphi}^i u^i_t\right]\le\left[\bar{u}^i\right]^2\left[y_t-\hat{y}_t\right]^T\bar{\Lambda}^i_\varphi\left[y_t-\hat{y}_t\right], \quad (17)$$

$$\tilde{\sigma}' = D_\sigma\left[V_{1,t}-V_{1,0}\right]\hat{y}_t+\nu_\sigma,\qquad \tilde{\varphi}^{i\prime} = \left\langle D^i_\varphi,u^i_t\right\rangle\left[V^i_{2,t}-V^i_{2,0}\right]\hat{y}_t+\nu^i_\varphi u^i_t, \quad (18)$$

$$\left[\nu_\sigma\right]^T\Lambda_{\sigma'}\left[\nu_\sigma\right]\le l_\sigma\left[\left[V_{1,t}-V_{1,0}\right]\hat{y}_t\right]^T\Lambda_{\nu\sigma}\left[\left[V_{1,t}-V_{1,0}\right]\hat{y}_t\right],$$

$$\left[\nu^i_\varphi u^i_t\right]^T\Lambda^i_{\varphi'}\left[\nu^i_\varphi u^i_t\right]\le l^i_\varphi\left[\bar{u}^i\right]^2\left[\left[V^i_{2,t}-V^i_{2,0}\right]\hat{y}_t\right]^T\Lambda^i_{\nu\varphi}\left[\left[V^i_{2,t}-V^i_{2,0}\right]\hat{y}_t\right], \quad (19)$$

where $A$, $\Lambda_\sigma$, $\Lambda_{\sigma'}$, $\Lambda_{\nu\sigma}$, $\bar{\Lambda}_\sigma$, $\Lambda^i_\varphi$, $\Lambda^i_{\varphi'}$, $\Lambda^i_{\nu\varphi}$, $\bar{\Lambda}^i_\varphi$, $l_\sigma$, and $l^i_\varphi$ are known constants of adequate dimensions and

$$\tilde{\sigma}=\sigma(V_{1,0}y_t)-\sigma(V_{1,0}\hat{y}_t),\qquad \tilde{\varphi}^i=\varphi^i(V^i_{2,0}y_t)-\varphi^i(V^i_{2,0}\hat{y}_t),$$

$$\tilde{\sigma}'=\sigma(V_{1,t}\hat{y}_t)-\sigma(V_{1,0}\hat{y}_t),\qquad \tilde{\varphi}^{i\prime}=\varphi^i(V^i_{2,t}\hat{y}_t)-\varphi^i(V^i_{2,0}\hat{y}_t),$$

$$D_\sigma=\frac{\partial}{\partial\psi}\sigma(\psi)\in\mathbb{R}^{k\times k},\qquad D^i_\varphi=\frac{\partial}{\partial\psi}\varphi^i(\psi)\in\mathbb{R}^{r_i}. \quad (20)$$

Nevertheless, there are some design conditions that this first differential neural network needs to satisfy.

Assumption 6. According to Assumption 2, the approximation or residual error of (14), corresponding to the unidentified dynamics of (10) and given by

$$F_t=\dot{y}_t-\left[W_{1,0}\,\sigma(V_{1,0}y_t)+\sum_{i=1}^{N}W^i_{2,0}\,\varphi^i(V^i_{2,0}y_t)\,u^i_t\right], \quad (21)$$

where $W_{1,0}$, $V_{1,0}$, $W^i_{2,0}$, and $V^i_{2,0}$ are initial synaptic weights (when $t=0$), is bounded and satisfies the following inequality:

$$\left[F_t\right]^T\Lambda_F\left[F_t\right]\le\bar{F}_1+\bar{F}_2\left[y_t\right]^T\Lambda_F\left[y_t\right], \quad (22)$$

where $0<\Lambda_F\in\mathbb{R}^{n\times n}$ is a known constant and $0<\bar{F}_1,\bar{F}_2\in\mathbb{R}$.

Remark 7. The constants $\bar{F}_1$ and $\bar{F}_2$ in (22) are not known a priori because of the fact that they depend on the performance of the differential neural network (14); that is to say, the residual error (21) will depend on the number of neurons used in (14) and on its parameters, and therefore $\bar{F}_1$ and $\bar{F}_2$ will depend on this too (see [24]).

Assumption 8. There exist values of $\Lambda_\sigma$, $\Lambda_{\sigma'}$, $\Lambda^i_\varphi$, $\Lambda^i_{\varphi'}$, $\bar{\Lambda}_\sigma$, $\bar{\Lambda}^i_\varphi$, $\Lambda_F$, $l_\sigma$, $l^i_\varphi$, $A$, and $Q$ such that they provide a solution $0<P=[P]^T\in\mathbb{R}^{n\times n}$ to the algebraic Riccati equation as follows:

$$P\bar{A}+\left[\bar{A}\right]^T P+P\bar{R}P+\bar{Q}=0, \quad (23)$$

where

$$\bar{A}=W_{1,0}A, \quad (24)$$

$$\bar{R}=\left[W_{1,0}\right]\left[\left[\Lambda_\sigma\right]^{-1}+\left[\Lambda_{\sigma'}\right]^{-1}\right]\left[W_{1,0}\right]^T+\left[\Lambda_F\right]^{-1}+\sum_{i=1}^{N}\left[W^i_{2,0}\right]\left[\left[\Lambda^i_\varphi\right]^{-1}+\left[\Lambda^i_{\varphi'}\right]^{-1}\right]\left[W^i_{2,0}\right]^T, \quad (25)$$

$$\bar{Q}=2Q+\bar{\Lambda}_\sigma+\sum_{i=1}^{N}\left[\bar{u}^i\right]^2\bar{\Lambda}^i_\varphi, \quad (26)$$

where $A$ is such that $\bar{A}\in\mathbb{R}^{n\times n}$ is Hurwitz and $Q$ is known.
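Since (23) carries the quadratic term with a positive sign through $\bar{R}$, standard continuous-time algebraic Riccati solvers do not apply directly. A simple, hedged way to search numerically for a symmetric positive-definite solution is to treat (23) as a root-finding problem, as sketched below with toy matrices (assumed values, not those of the example in Section 6).

```python
import numpy as np
from scipy.optimize import fsolve

def solve_design_riccati(A_bar, R_bar, Q_bar):
    # Look for symmetric P with P*A_bar + A_bar^T*P + P*R_bar*P + Q_bar = 0, cf. (23).
    n = A_bar.shape[0]
    def residual(p_vec):
        P = p_vec.reshape(n, n)
        P = 0.5 * (P + P.T)   # keep the iterate symmetric
        return (P @ A_bar + A_bar.T @ P + P @ R_bar @ P + Q_bar).ravel()
    P = fsolve(residual, np.eye(n).ravel()).reshape(n, n)
    return 0.5 * (P + P.T)

# toy illustration with an assumed Hurwitz A_bar
A_bar = np.diag([-13.0, -8.0])
R_bar = np.eye(2)
Q_bar = 2.0 * np.eye(2)
P = solve_design_riccati(A_bar, R_bar, Q_bar)
```

A candidate P returned this way should still be checked for positive definiteness, since the root finder may converge to an indefinite root.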

On the other hand, consider now a second differential neural network given by the following equation:

$$\dot{\hat{z}}_t = W_{3,t}\,\alpha(\hat{z}_t)+W_{4,t}\,\gamma(\hat{z}_t)\,\xi_t, \quad (27)$$

where $t\in[0,\infty)$, $\hat{z}_t\in\mathbb{R}^n$ is the vector state, the matrices $W_{3,t}\in\mathbb{R}^{n\times\delta}$ and $W_{4,t}\in\mathbb{R}^{n\times\mu}$ are synaptic weights, and $\alpha:\mathbb{R}^n\to\mathbb{R}^\delta$ and $\gamma:\mathbb{R}^n\to\mathbb{R}^{\mu\times n}$ are sigmoid activation functions.

Then, similar to $\sigma$ and $\varphi^i$, the activation functions $\alpha(\cdot)$ and $\gamma(\cdot)$ satisfy the following inequalities:

$$\left[\tilde{\alpha}-A\left[z_t-\hat{z}_t\right]\right]^T\Lambda_\alpha\left[\tilde{\alpha}-A\left[z_t-\hat{z}_t\right]\right]\le\left[z_t-\hat{z}_t\right]^T\bar{\Lambda}_\alpha\left[z_t-\hat{z}_t\right],$$

$$\left[\tilde{\gamma}\,\xi_t\right]^T\Lambda_\gamma\left[\tilde{\gamma}\,\xi_t\right]\le\left[\bar{\xi}\right]^2\left[z_t-\hat{z}_t\right]^T\bar{\Lambda}_\gamma\left[z_t-\hat{z}_t\right], \quad (28)$$

where $A$, $\Lambda_\alpha$, $\bar{\Lambda}_\alpha$, $\Lambda_\gamma$, and $\bar{\Lambda}_\gamma$ are known constants of adequate dimensions and

$$\tilde{\alpha}=\alpha(z_t)-\alpha(\hat{z}_t),\qquad \tilde{\gamma}=\gamma(z_t)-\gamma(\hat{z}_t). \quad (29)$$

Thereby, it is easy to confirm that, if $\alpha(\cdot)=\sigma$, if $\gamma(\cdot)=\varphi^i$, if $\xi_t=u^i_t$, and if $V_{1,t}=V^i_{2,t}=I$, where $i=1$, then the differential neural networks (14) and (27) coincide. Hence, two new assumptions (corresponding to Assumptions 6 and 8) must be satisfied for this second differential neural network.

Assumption 9. According (again) to Assumption 2, the approximation or residual error of (27), corresponding to the unidentified dynamics of (11) and given by

$$G_t=\dot{z}_t-\left[W_{3,0}\,\alpha(z_t)+W_{4,0}\,\gamma(z_t)\,\xi_t\right], \quad (30)$$

where $W_{3,0}$ and $W_{4,0}$ are initial synaptic weights (when $t=0$), is bounded and satisfies the inequality

$$\left[G_t\right]^T\Lambda_G\left[G_t\right]\le\bar{G}_1+\bar{G}_2\left[z_t\right]^T\Lambda_G\left[z_t\right], \quad (31)$$

where $0<\Lambda_G\in\mathbb{R}^{n\times n}$ is a known constant and $0<\bar{G}_1,\bar{G}_2\in\mathbb{R}$.


Remark 10. Similar to Remark 7, the constants $\bar{G}_1$ and $\bar{G}_2$ in (31) are not known a priori because they depend on the performance of the differential neural network (27).

Assumption 11. If there exist values of $\Lambda_\alpha$, $\Lambda_\gamma$, $\Lambda_G$, $W_{3,0}$, and $W_{4,0}$ such that

$$\left[W_{4,0}\right]\left[\Lambda_\gamma\right]^{-1}\left[W_{4,0}\right]^T=\sum_{i=1}^{N}\left[W^i_{2,0}\right]\left[\left[\Lambda^i_\varphi\right]^{-1}+\left[\Lambda^i_{\varphi'}\right]^{-1}\right]\left[W^i_{2,0}\right]^T,$$

$$W_{3,0}=W_{1,0},\qquad \left[\Lambda_\alpha\right]^{-1}=\left[\Lambda_\sigma\right]^{-1}+\left[\Lambda_{\sigma'}\right]^{-1},\qquad \Lambda_G=\Lambda_F, \quad (32)$$

and values of $\bar{\Lambda}_\alpha$ and $\bar{\Lambda}_\gamma$ such that

$$Q=\bar{\Lambda}_\alpha+\left[\bar{\xi}\right]^2\bar{\Lambda}_\gamma-\bar{\Lambda}_\sigma-\sum_{i=1}^{N}\left[\bar{u}^i\right]^2\bar{\Lambda}^i_\varphi, \quad (33)$$

then they provide a solution $0<P=[P]^T\in\mathbb{R}^{n\times n}$ to the algebraic Riccati equation (23), where

$$\hat{A}=W_{3,0}A, \quad (34)$$

$$\hat{R}=\left[W_{3,0}\right]\left[\Lambda_\alpha\right]^{-1}\left[W_{3,0}\right]^T+\left[\Lambda_G\right]^{-1}+\left[W_{4,0}\right]\left[\Lambda_\gamma\right]^{-1}\left[W_{4,0}\right]^T, \quad (35)$$

$$\hat{Q}=Q+\bar{\Lambda}_\alpha+\left[\bar{\xi}\right]^2\bar{\Lambda}_\gamma. \quad (36)$$

Remark 12. Notice that the equalities (32) and (33) were chosen in order to guarantee the same solution of the algebraic Riccati equation (23) in Assumptions 8 and 11. Also notice that the first term of the right-hand side of (26) will always be greater than the first term of the right-hand side of (36); that is, $2Q>Q$.

5. Main Result on Identification and Filtering

According to the above, the main result on identification and filtering for the class of nonlinear differential games (1)–(4) deals with both the development of an adaptive learning law for the synaptic weights of the differential neural networks (14) and (27) and the inference of a maximum value of identification error for the dynamics (10) and (11).

Moreover, the establishment of a maximum value of filtering error between the uncorrupted states and the identified ones is obtained; namely, an error is defined by

$$e_t = x_t-\hat{x}_t, \quad (37)$$

where $\hat{x}_t\in\mathbb{R}^n$ is given by the noise canceling equation (13) and $x_t\in\mathbb{R}^n$ is given by the expected or uncorrupted vector state (12). More formally, the main obtained result is described in the following three theorems.

Theorem 13. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural network (14) be such that Assumptions 6 and 8 are satisfied. If the synaptic weights of (14) are adjusted with the following learning law,

$$\dot{W}_{1,t}=-2K_{W1}P\varepsilon_t\left[\sigma\right]^T,$$

$$\dot{V}_{1,t}=-K_{V1}\left[2\left[D_\sigma\right]^T\left[W_{1,0}\right]^T P\varepsilon_t+l_\sigma\left[\Lambda_{\nu\sigma}\right]^T\tilde{V}_{1,t}\hat{y}_t\right]\left[\hat{y}_t\right]^T,$$

$$\dot{W}^i_{2,t}=-2K^i_{W2}P\varepsilon_t\left[u^i_t\right]^T\left[\varphi^i\right]^T,$$

$$\dot{V}^i_{2,t}=-K^i_{V2}\left[2\left\langle D^i_\varphi,u^i_t\right\rangle\left[W^i_{2,0}\right]^T P\varepsilon_t+l^i_\varphi\left[\bar{u}^i\right]^2\left[\Lambda^i_{\nu\varphi}\right]^T\tilde{V}^i_{2,t}\hat{y}_t\right]\left[\hat{y}_t\right]^T, \quad (38)$$

where

$$\varepsilon_t=y_t-\hat{y}_t \quad (39)$$

denotes the identification error, $\tilde{V}_{1,t}=V_{1,t}-V_{1,0}$, $\tilde{V}^i_{2,t}=V^i_{2,t}-V^i_{2,0}$, and $K_{W1}$, $K_{V1}$, $K^i_{W2}$, and $K^i_{V2}$ are known symmetric and positive-definite constant matrices, then it is possible to obtain the next maximum value of identification error in average sense as follows:

$$\limsup_{t\to\infty}\left(\frac{1}{t}\int_{0}^{t}\left(\left[\varepsilon_\tau\right]^T 2Q\left[\varepsilon_\tau\right]\right)d\tau\right)\le\bar{F}_1. \quad (40)$$

Proof. Taking into account the residual error (21), the differential equation (10) can be expressed as

$$\dot{y}_t=W_{1,0}\,\sigma(V_{1,0}y_t)+\sum_{i=1}^{N}W^i_{2,0}\,\varphi^i(V^i_{2,0}y_t)\,u^i_t+F_t. \quad (41)$$

Then, by substituting (14) and (41) into the derivative of (39) with respect to $t$ and by adding and subtracting the terms $\left[W_{1,0}\tilde{\sigma}\right]$, $\left[W_{1,0}\sigma(V_{1,0}\hat{y}_t)\right]$, $\left[W^i_{2,0}\tilde{\varphi}^i u^i_t\right]$, and $\left[W^i_{2,0}\varphi^i(V^i_{2,0}\hat{y}_t)u^i_t\right]$, it is easy to confirm that

$$\dot{\varepsilon}_t=\tilde{W}_{1,t}\sigma+W_{1,0}\tilde{\sigma}+W_{1,0}\tilde{\sigma}'-F_t+\sum_{i=1}^{N}\tilde{W}^i_{2,t}\varphi^i u^i_t+\sum_{i=1}^{N}W^i_{2,0}\tilde{\varphi}^i u^i_t+\sum_{i=1}^{N}W^i_{2,0}\tilde{\varphi}^{i\prime}u^i_t, \quad (42)$$

where $\tilde{W}_{1,t}=W_{1,t}-W_{1,0}$ and $\tilde{W}^i_{2,t}=W^i_{2,t}-W^i_{2,0}$. Now, let the Lyapunov (energetic) candidate function

$$L_t=E_t+\left[\varepsilon_t\right]^T P\left[\varepsilon_t\right]+\frac{1}{2}\operatorname{tr}\!\left(\left[\tilde{W}_{1,t}\right]^T\left[K_{W1}\right]^{-1}\left[\tilde{W}_{1,t}\right]\right)+\frac{1}{2}\operatorname{tr}\!\left(\left[\tilde{V}_{1,t}\right]^T\left[K_{V1}\right]^{-1}\left[\tilde{V}_{1,t}\right]\right)+\frac{1}{2}\sum_{i=1}^{N}\operatorname{tr}\!\left(\left[\tilde{W}^i_{2,t}\right]^T\left[K^i_{W2}\right]^{-1}\left[\tilde{W}^i_{2,t}\right]\right)+\frac{1}{2}\sum_{i=1}^{N}\operatorname{tr}\!\left(\left[\tilde{V}^i_{2,t}\right]^T\left[K^i_{V2}\right]^{-1}\left[\tilde{V}^i_{2,t}\right]\right) \quad (43)$$

be such that the inequalities (8) are fulfilled (see Assumption 3), and let

$$\dot{L}_t\le-\lambda_1\left\|y_t\right\|^2+2\left[\varepsilon_t\right]^T P\left[\dot{\varepsilon}_t\right]+\operatorname{tr}\!\left(\left[\tilde{W}_{1,t}\right]^T\left[K_{W1}\right]^{-1}\left[\dot{W}_{1,t}\right]\right)+\operatorname{tr}\!\left(\left[\tilde{V}_{1,t}\right]^T\left[K_{V1}\right]^{-1}\left[\dot{V}_{1,t}\right]\right)+\sum_{i=1}^{N}\operatorname{tr}\!\left(\left[\tilde{W}^i_{2,t}\right]^T\left[K^i_{W2}\right]^{-1}\left[\dot{W}^i_{2,t}\right]\right)+\sum_{i=1}^{N}\operatorname{tr}\!\left(\left[\tilde{V}^i_{2,t}\right]^T\left[K^i_{V2}\right]^{-1}\left[\dot{V}^i_{2,t}\right]\right) \quad (44)$$

be the derivative of (43) with respect to $t$. Then, by substituting (42) into the second term of the right-hand side of (44) and by adding and subtracting the term $2[\varepsilon_t]^T P\bar{A}\varepsilon_t$, one may get

$$2\left[\varepsilon_t\right]^T P\left[\dot{\varepsilon}_t\right]=2\left[\varepsilon_t\right]^T PW_{1,0}\left[\tilde{\sigma}-A\varepsilon_t\right]+2\left[\varepsilon_t\right]^T PW_{1,0}\tilde{\sigma}'+\sum_{i=1}^{N}2\left[\varepsilon_t\right]^T PW^i_{2,0}\tilde{\varphi}^i u^i_t+\sum_{i=1}^{N}2\left[\varepsilon_t\right]^T PW^i_{2,0}\tilde{\varphi}^{i\prime}u^i_t-2\left[\varepsilon_t\right]^T PF_t+2\left[\varepsilon_t\right]^T P\tilde{W}_{1,t}\sigma+\sum_{i=1}^{N}2\left[\varepsilon_t\right]^T P\tilde{W}^i_{2,t}\varphi^i u^i_t+\left[\varepsilon_t\right]^T\left[P\bar{A}+\left[\bar{A}\right]^T P\right]\left[\varepsilon_t\right]. \quad (45)$$

Next, by analyzing the first five terms of the right-hand side of (45) with the following inequality,

$$\left[X\right]^T Y+\left[Y\right]^T X\le\left[X\right]^T\left[\Lambda\right]^{-1}\left[X\right]+\left[Y\right]^T\Lambda\left[Y\right], \quad (46)$$

which is valid for any pair of matrices $X,Y\in\mathbb{R}^{\Omega\times\Gamma}$ and for any constant matrix $0<\Lambda\in\mathbb{R}^{\Omega\times\Omega}$, where $\Omega$ and $\Gamma$ are positive integers (see [24]), the following is obtained.

(i) Using (46) and (17) in the first term of the right-hand side of (45), then

$$2\left[\varepsilon_t\right]^T PW_{1,0}\left[\tilde{\sigma}-A\varepsilon_t\right]\le\left[\varepsilon_t\right]^T\left[P\left[W_{1,0}\right]\left[\Lambda_\sigma\right]^{-1}\left[W_{1,0}\right]^T P\right]\left[\varepsilon_t\right]+\left[\varepsilon_t\right]^T\bar{\Lambda}_\sigma\left[\varepsilon_t\right]. \quad (47)$$

(ii) Substituting (18) into the second term of the right-hand side of (45) and using (46) and (19), then

$$2\left[\varepsilon_t\right]^T PW_{1,0}\tilde{\sigma}'\le 2\left[\varepsilon_t\right]^T PW_{1,0}D_\sigma\tilde{V}_{1,t}\hat{y}_t+\left[\varepsilon_t\right]^T\left[P\left[W_{1,0}\right]\left[\Lambda_{\sigma'}\right]^{-1}\left[W_{1,0}\right]^T P\right]\left[\varepsilon_t\right]+l_\sigma\left[\tilde{V}_{1,t}\hat{y}_t\right]^T\Lambda_{\nu\sigma}\left[\tilde{V}_{1,t}\hat{y}_t\right]. \quad (48)$$

(iii) Using (46) and (17) in the third term of the right-hand side of (45), then

$$\sum_{i=1}^{N}2\left[\varepsilon_t\right]^T PW^i_{2,0}\tilde{\varphi}^i u^i_t\le\sum_{i=1}^{N}\left[\varepsilon_t\right]^T\left[P\left[W^i_{2,0}\right]\left[\Lambda^i_\varphi\right]^{-1}\left[W^i_{2,0}\right]^T P\right]\left[\varepsilon_t\right]+\sum_{i=1}^{N}\left[\bar{u}^i\right]^2\left[\varepsilon_t\right]^T\bar{\Lambda}^i_\varphi\left[\varepsilon_t\right]. \quad (49)$$

(iv) Substituting (18) into the fourth term of the right-hand side of (45) and using (46) and (19), then

$$\sum_{i=1}^{N}2\left[\varepsilon_t\right]^T PW^i_{2,0}\tilde{\varphi}^{i\prime}u^i_t\le\sum_{i=1}^{N}2\left[\varepsilon_t\right]^T PW^i_{2,0}\left\langle D^i_\varphi,u^i_t\right\rangle\tilde{V}^i_{2,t}\hat{y}_t+\sum_{i=1}^{N}\left[\varepsilon_t\right]^T\left[P\left[W^i_{2,0}\right]\left[\Lambda^i_{\varphi'}\right]^{-1}\left[W^i_{2,0}\right]^T P\right]\left[\varepsilon_t\right]+\sum_{i=1}^{N}l^i_\varphi\left[\bar{u}^i\right]^2\left[\tilde{V}^i_{2,t}\hat{y}_t\right]^T\Lambda^i_{\nu\varphi}\left[\tilde{V}^i_{2,t}\hat{y}_t\right]. \quad (50)$$

(v) Using (46) and (22) in the fifth term of the right-hand side of (45), then

$$-2\left[\varepsilon_t\right]^T PF_t\le\left[\varepsilon_t\right]^T\left[P\left[\Lambda_F\right]^{-1}P\right]\left[\varepsilon_t\right]+\bar{F}_1+\bar{F}_2\left[y_t\right]^T\Lambda_F\left[y_t\right]. \quad (51)$$

So, by substituting (47)–(51) into the right-hand side of (45) and by adding and subtracting the term $\left[\varepsilon_t\right]^T 2Q\left[\varepsilon_t\right]$, inequality (44) can be expressed as

$$\dot{L}_t\le\left[\varepsilon_t\right]^T\left[P\bar{A}+\left[\bar{A}\right]^T P+P\bar{R}P+\bar{Q}\right]\left[\varepsilon_t\right]-\left[\varepsilon_t\right]^T 2Q\left[\varepsilon_t\right]+\bar{F}_1+\bar{F}_2\left\|\Lambda_F\right\|\left\|y_t\right\|^2-\lambda_1\left\|y_t\right\|^2+\operatorname{tr}\left(\Upsilon_{W1}\right)+\operatorname{tr}\left(\Upsilon_{V1}\right)+\sum_{i=1}^{N}\operatorname{tr}\left(\Upsilon^i_{W2}\right)+\sum_{i=1}^{N}\operatorname{tr}\left(\Upsilon^i_{V2}\right), \quad (52)$$

where the algebraic Riccati equation in the first term of the right-hand side of (52) is described in (23) and in (24)–(26), and

$$\Upsilon_{W1}=\left[\tilde{W}_{1,t}\right]^T\left[K_{W1}\right]^{-1}\left[\dot{W}_{1,t}\right]+2\sigma\left[\varepsilon_t\right]^T P\tilde{W}_{1,t},$$

$$\Upsilon_{V1}=\left[\tilde{V}_{1,t}\right]^T\left[K_{V1}\right]^{-1}\left[\dot{V}_{1,t}\right]+2\hat{y}_t\left[\varepsilon_t\right]^T PW_{1,0}D_\sigma\tilde{V}_{1,t}+l_\sigma\hat{y}_t\left[\hat{y}_t\right]^T\left[\tilde{V}_{1,t}\right]^T\Lambda_{\nu\sigma}\left[\tilde{V}_{1,t}\right],$$

$$\Upsilon^i_{W2}=\left[\tilde{W}^i_{2,t}\right]^T\left[K^i_{W2}\right]^{-1}\left[\dot{W}^i_{2,t}\right]+2\varphi^i u^i_t\left[\varepsilon_t\right]^T P\tilde{W}^i_{2,t},$$

$$\Upsilon^i_{V2}=\left[\tilde{V}^i_{2,t}\right]^T\left[K^i_{V2}\right]^{-1}\left[\dot{V}^i_{2,t}\right]+2\hat{y}_t\left[\varepsilon_t\right]^T PW^i_{2,0}\left\langle D^i_\varphi,u^i_t\right\rangle\tilde{V}^i_{2,t}+l^i_\varphi\left[\bar{u}^i\right]^2\hat{y}_t\left[\hat{y}_t\right]^T\left[\tilde{V}^i_{2,t}\right]^T\Lambda^i_{\nu\varphi}\left[\tilde{V}^i_{2,t}\right]. \quad (53)$$

Thereby, by equating (53) to zero, that is,

$$\Upsilon_{W1}=\Upsilon_{V1}=\Upsilon^i_{W2}=\Upsilon^i_{V2}=0, \quad (54)$$

and by respectively solving for $\dot{W}_{1,t}$, $\dot{V}_{1,t}$, $\dot{W}^i_{2,t}$, and $\dot{V}^i_{2,t}$, the learning law given by (38) is obtained. Now, by choosing

$$\left\|\Lambda_F\right\|\le\frac{\lambda_1}{\bar{F}_2} \quad (55)$$

and by solving the algebraic Riccati equation (23), one may get

$$\dot{L}_t\le-\left[\varepsilon_t\right]^T 2Q\left[\varepsilon_t\right]+\bar{F}_1. \quad (56)$$

Thus, by integrating both sides of (56) on the time interval $[0,t]$, the following is obtained:

$$\int_{0}^{t}\left(\left[\varepsilon_\tau\right]^T 2Q\left[\varepsilon_\tau\right]\right)d\tau\le L_0-L_t+\left[\bar{F}_1\right]t\le L_0+\left[\bar{F}_1\right]t, \quad (57)$$

and by dividing (57) by $t$, it is easy to verify that

$$\frac{1}{t}\int_{0}^{t}\left(\left[\varepsilon_\tau\right]^T 2Q\left[\varepsilon_\tau\right]\right)d\tau\le\frac{L_0}{t}+\bar{F}_1. \quad (58)$$

Finally, by calculating the upper limit as $t\to\infty$, the maximum value of identification error in average sense is the one described in (40). This means that the identification error is bounded between zero and (40), and therefore this means that $\varepsilon_t$ is stable in the sense of Lyapunov.
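As an informal illustration of Theorem 13, the following Euler-integration sketch advances the identifier (14) together with the W-part of the learning law (38); the V updates of (38) follow the same pattern and are omitted for brevity, and the gains, step size, and activation parameters are assumptions made only for this sketch.

```python
import numpy as np

def sigmoid(w, a=4.5, b=0.5, c=3.0):
    return a / (1.0 + np.exp(-b * w)) - c

def identifier_step(y, y_hat, u_list, W1, V1, W2_list, V2_list, P, K_W1, K_W2_list, dt):
    """One Euler step of (14) with the W_1 and W_2^i updates of (38)."""
    sigma = sigmoid(V1 @ y_hat)
    phi = [np.diag(sigmoid(V2 @ y_hat)) for V2 in V2_list]
    # identifier dynamics (14)
    dy_hat = W1 @ sigma + sum(W2 @ (ph @ u) for W2, ph, u in zip(W2_list, phi, u_list))
    # identification error (39) and weight adaptation (38)
    eps = y - y_hat
    dW1 = -2.0 * K_W1 @ P @ np.outer(eps, sigma)
    dW2 = [-2.0 * K @ P @ np.outer(eps, ph @ u)
           for K, ph, u in zip(K_W2_list, phi, u_list)]
    return (y_hat + dt * dy_hat,
            W1 + dt * dW1,
            [W2 + dt * d for W2, d in zip(W2_list, dW2)])
```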

Theorem 14. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural network (27) be such that Assumptions 9 and 11 are satisfied. If the synaptic weights of (27) are adjusted with the following learning law,

$$\dot{W}_{3,t}=-2K_{W3}P\epsilon_t\left[\alpha(\hat{z}_t)\right]^T,\qquad \dot{W}_{4,t}=-2K_{W4}P\epsilon_t\left[\xi_t\right]^T\left[\gamma(\hat{z}_t)\right]^T, \quad (59)$$

where

$$\epsilon_t=z_t-\hat{z}_t \quad (60)$$

denotes the identification error and $K_{W3}$ and $K_{W4}$ are known symmetric and positive-definite constant matrices, then it is possible to obtain the next maximum value of identification error in average sense as follows:

$$\limsup_{t\to\infty}\left(\frac{1}{t}\int_{0}^{t}\left(\left[\epsilon_\tau\right]^T Q\left[\epsilon_\tau\right]\right)d\tau\right)\le\bar{G}_1. \quad (61)$$

Proof. Taking into account the residual error (30), the differential equation (11) can be expressed as

$$\dot{z}_t=W_{3,0}\,\alpha(z_t)+W_{4,0}\,\gamma(z_t)\,\xi_t+G_t. \quad (62)$$

Then, by substituting (27) and (62) into the derivative of (60) with respect to $t$ and by adding and subtracting the terms $\left[W_{3,0}\alpha(\hat{z}_t)\right]$ and $\left[W_{4,0}\gamma(\hat{z}_t)\xi_t\right]$, it is easy to confirm that

$$\dot{\epsilon}_t=\tilde{W}_{3,t}\,\alpha(\hat{z}_t)+W_{3,0}\tilde{\alpha}+\tilde{W}_{4,t}\,\gamma(\hat{z}_t)\,\xi_t+W_{4,0}\tilde{\gamma}\,\xi_t-G_t, \quad (63)$$

where $\tilde{W}_{3,t}=W_{3,t}-W_{3,0}$ and $\tilde{W}_{4,t}=W_{4,t}-W_{4,0}$. Now, let the Lyapunov (energetic) candidate function

$$M_t=E_t+\left[\epsilon_t\right]^T P\left[\epsilon_t\right]+\frac{1}{2}\operatorname{tr}\!\left(\left[\tilde{W}_{3,t}\right]^T\left[K_{W3}\right]^{-1}\left[\tilde{W}_{3,t}\right]\right)+\frac{1}{2}\operatorname{tr}\!\left(\left[\tilde{W}_{4,t}\right]^T\left[K_{W4}\right]^{-1}\left[\tilde{W}_{4,t}\right]\right) \quad (64)$$

be such that the inequalities (8) are fulfilled (see Assumption 3), and let

$$\dot{M}_t\le-\lambda_1\left\|z_t\right\|^2+2\left[\epsilon_t\right]^T P\left[\dot{\epsilon}_t\right]+\operatorname{tr}\!\left(\left[\tilde{W}_{3,t}\right]^T\left[K_{W3}\right]^{-1}\left[\dot{W}_{3,t}\right]\right)+\operatorname{tr}\!\left(\left[\tilde{W}_{4,t}\right]^T\left[K_{W4}\right]^{-1}\left[\dot{W}_{4,t}\right]\right) \quad (65)$$

be the derivative of (64) with respect to $t$. Then, by substituting (63) into the second term of the right-hand side of (65) and by adding and subtracting the term $2[\epsilon_t]^T P\hat{A}\epsilon_t$, one may get

$$2\left[\epsilon_t\right]^T P\left[\dot{\epsilon}_t\right]=2\left[\epsilon_t\right]^T PW_{3,0}\left[\tilde{\alpha}-A\epsilon_t\right]+2\left[\epsilon_t\right]^T PW_{4,0}\tilde{\gamma}\,\xi_t-2\left[\epsilon_t\right]^T PG_t+2\left[\epsilon_t\right]^T P\tilde{W}_{3,t}\,\alpha(\hat{z}_t)+2\left[\epsilon_t\right]^T P\tilde{W}_{4,t}\,\gamma(\hat{z}_t)\,\xi_t+\left[\epsilon_t\right]^T\left[P\hat{A}+\left[\hat{A}\right]^T P\right]\left[\epsilon_t\right]. \quad (66)$$

Next, by analyzing the first three terms of the right-hand side of (66) with the inequality (46), the following is obtained.

(i) Using (46) and (28) in the first term of the right-hand side of (66), then

$$2\left[\epsilon_t\right]^T PW_{3,0}\left[\tilde{\alpha}-A\epsilon_t\right]\le\left[\epsilon_t\right]^T\left[P\left[W_{3,0}\right]\left[\Lambda_\alpha\right]^{-1}\left[W_{3,0}\right]^T P\right]\left[\epsilon_t\right]+\left[\epsilon_t\right]^T\bar{\Lambda}_\alpha\left[\epsilon_t\right]. \quad (67)$$

(ii) Using (46) and (28) in the second term of the right-hand side of (66), then

$$2\left[\epsilon_t\right]^T PW_{4,0}\tilde{\gamma}\,\xi_t\le\left[\epsilon_t\right]^T\left[P\left[W_{4,0}\right]\left[\Lambda_\gamma\right]^{-1}\left[W_{4,0}\right]^T P\right]\left[\epsilon_t\right]+\left[\bar{\xi}\right]^2\left[\epsilon_t\right]^T\bar{\Lambda}_\gamma\left[\epsilon_t\right]. \quad (68)$$

(iii) Using (46) and (31) in the third term of the right-hand side of (66), then

$$-2\left[\epsilon_t\right]^T PG_t\le\left[\epsilon_t\right]^T\left[P\left[\Lambda_G\right]^{-1}P\right]\left[\epsilon_t\right]+\bar{G}_1+\bar{G}_2\left[z_t\right]^T\Lambda_G\left[z_t\right]. \quad (69)$$

So, by substituting (67)–(69) into the right-hand side of (66) and by adding and subtracting the term $\left[\epsilon_t\right]^T Q\left[\epsilon_t\right]$, inequality (65) can be expressed as

$$\dot{M}_t\le\left[\epsilon_t\right]^T\left[P\hat{A}+\left[\hat{A}\right]^T P+P\hat{R}P+\hat{Q}\right]\left[\epsilon_t\right]-\left[\epsilon_t\right]^T Q\left[\epsilon_t\right]+\bar{G}_1+\bar{G}_2\left\|\Lambda_G\right\|\left\|z_t\right\|^2-\lambda_1\left\|z_t\right\|^2+\operatorname{tr}\left(\Psi_{W3}\right)+\operatorname{tr}\left(\Psi_{W4}\right), \quad (70)$$

where the algebraic Riccati equation in the first term of the right-hand side of (70) is described in (23) and in (34)–(36), and

$$\Psi_{W3}=\left[\tilde{W}_{3,t}\right]^T\left[K_{W3}\right]^{-1}\left[\dot{W}_{3,t}\right]+2\alpha(\hat{z}_t)\left[\epsilon_t\right]^T P\tilde{W}_{3,t}, \quad (71)$$

$$\Psi_{W4}=\left[\tilde{W}_{4,t}\right]^T\left[K_{W4}\right]^{-1}\left[\dot{W}_{4,t}\right]+2\gamma(\hat{z}_t)\,\xi_t\left[\epsilon_t\right]^T P\tilde{W}_{4,t}. \quad (72)$$

Thereby, by equating (71) and (72) to zero ($\Psi_{W3}=\Psi_{W4}=0$) and by respectively solving for $\dot{W}_{3,t}$ and $\dot{W}_{4,t}$, the learning law given by (59) is obtained. Now, by choosing

$$\left\|\Lambda_G\right\|\le\frac{\lambda_1}{\bar{G}_2} \quad (73)$$

and by solving the algebraic Riccati equation (23), one may get

$$\dot{M}_t\le-\left[\epsilon_t\right]^T Q\left[\epsilon_t\right]+\bar{G}_1. \quad (74)$$

Thus, by integrating both sides of (74) on the time interval $[0,t]$, the following is obtained:

$$\int_{0}^{t}\left(\left[\epsilon_\tau\right]^T Q\left[\epsilon_\tau\right]\right)d\tau\le M_0-M_t+\left[\bar{G}_1\right]t\le M_0+\left[\bar{G}_1\right]t, \quad (75)$$

and by dividing (75) by $t$, it is easy to verify that

$$\frac{1}{t}\int_{0}^{t}\left(\left[\epsilon_\tau\right]^T Q\left[\epsilon_\tau\right]\right)d\tau\le\frac{M_0}{t}+\bar{G}_1. \quad (76)$$

Finally, by calculating the upper limit as $t\to\infty$, the maximum value of identification error in average sense is the one described in (61).

Theorem 15. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural networks (14) and (27) be such that Assumptions 6 and 8 and Assumptions 9 and 11 are satisfied. If the synaptic weights of (14) and (27) are, respectively, adjusted with the learning laws (38) and (59), then it is possible to obtain the following maximum value of filtering error in average sense:

$$\limsup_{t\to\infty}\left(\frac{1}{t}\int_{0}^{t}\left(\left[e_\tau\right]^T Q\left[e_\tau\right]\right)d\tau\right)\le\bar{F}_1-\bar{G}_1\quad \text{if } \bar{F}_1>\bar{G}_1, \quad (77)$$

$$\limsup_{t\to\infty}\left(\frac{1}{t}\int_{0}^{t}\left(\left[e_\tau\right]^T Q\left[e_\tau\right]\right)d\tau\right)=0\quad \text{if } \bar{F}_1\le\bar{G}_1, \quad (78)$$

where $e_t$ is given by (37).

Proof. Consider the following Lyapunov (energetic) candidate function:

$$\Phi_t=\left[e_t\right]^T P\left[e_t\right], \quad (79)$$

where $P$ is the solution of the algebraic Riccati equation (23) with $\bar{A}$, $\bar{R}$, and $\bar{Q}$ defined by (24)–(26) or (34)–(36). Then, by calculating the derivative of (79) with respect to $t$ and by considering equations (37), (13), and (12), it can be verified that

$$\dot{\Phi}_t=2\left[e_t\right]^T P\left[\dot{\varepsilon}_t-\dot{\epsilon}_t\right], \quad (80)$$

and, taking into account (42), (63), and the proofs of Theorems 13 and 14, one may get

$$\dot{\Phi}_t\le-\left[e_t\right]^T 2Q\left[e_t\right]+\bar{F}_1-\left[-\left[e_t\right]^T Q\left[e_t\right]+\bar{G}_1\right]\le-\left[e_t\right]^T\left[2Q-Q\right]\left[e_t\right]+\bar{F}_1-\bar{G}_1\le-\left[e_t\right]^T Q\left[e_t\right]+\bar{F}_1-\bar{G}_1. \quad (81)$$

Thus, by integrating both sides of (81) on the time interval $[0,t]$, the following is obtained:

$$\int_{0}^{t}\left(\left[e_\tau\right]^T Q\left[e_\tau\right]\right)d\tau\le\Phi_0-\Phi_t+\left[\bar{F}_1-\bar{G}_1\right]t\le\Phi_0+\left[\bar{F}_1-\bar{G}_1\right]t, \quad (82)$$

and by dividing (82) by $t$, it is easy to confirm that

$$\frac{1}{t}\int_{0}^{t}\left(\left[e_\tau\right]^T Q\left[e_\tau\right]\right)d\tau\le\frac{\Phi_0}{t}+\bar{F}_1-\bar{G}_1. \quad (83)$$

By calculating the upper limit as $t\to\infty$, equation (83) can be expressed as

$$\limsup_{t\to\infty}\left(\frac{1}{t}\int_{0}^{t}\left(\left[e_\tau\right]^T Q\left[e_\tau\right]\right)d\tau\right)\le\bar{F}_1-\bar{G}_1, \quad (84)$$

and finally it is clear that

(i) if $\bar{F}_1>\bar{G}_1$, the maximum value of filtering error in average sense is the one described in (77);

(ii) if $\bar{F}_1\le\bar{G}_1$, the maximum value of filtering error in average sense is the one described in (78), given that the left-hand side of the inequality (84) cannot be negative.

Hence, the theorem is proved.
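In a simulation, the time-averaged quadratic errors appearing in (40), (61), and (77)-(78) can be approximated from sampled trajectories by a simple Riemann sum, for example as follows (a uniform sampling step dt is assumed).

```python
import numpy as np

def averaged_quadratic_error(err_traj, Q, dt):
    # approximates (1/t) * integral_0^t e_tau^T Q e_tau d tau from samples;
    # err_traj has shape (T, n): one error vector per sampling instant.
    quad = np.einsum('ti,ij,tj->t', err_traj, Q, err_traj)
    t_final = dt * len(err_traj)
    return quad.sum() * dt / t_final
```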

6. Illustrating Example

Example 16. Consider a 2-player nonlinear differential game given by

$$\dot{x}_t=f(x_t)+\sum_{i=1}^{2}g^i(x_t)\,u^i_t+H\xi_t, \quad (85)$$

subject to the change of variables (10) and (11) (with $N=2$), where the control actions of each player are

$$u^1_t=\begin{bmatrix}0.5\,\mathrm{sawtooth}(0.5t)\\ \mathcal{H}(t)\end{bmatrix},\qquad u^2_t=\begin{bmatrix}\mathcal{H}(t)\\ 0.5\,\mathrm{sawtooth}(0.5t)\end{bmatrix}. \quad (86)$$

The undesired deterministic perturbations are

$$\xi_t=\begin{bmatrix}\sin(10t)\\ \sin(10t)\end{bmatrix}, \quad (87)$$

and $\mathcal{H}(t)$ denotes the Heaviside step function or unit step signal. Then, under Assumptions 1–4, 6, 8, 9, and 11, it is possible to obtain a filtered feedback (perfect state) information pattern $\eta^i_t=\hat{x}_t$.
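A possible way to generate the test signals (86)-(87) in Python is sketched below (scipy.signal.sawtooth and numpy.heaviside are used; the exact sawtooth convention of the original MATLAB simulation may differ slightly).

```python
import numpy as np
from scipy.signal import sawtooth

def u1(t):
    # u^1_t = [0.5*sawtooth(0.5*t); H(t)], cf. (86)
    return np.array([0.5 * sawtooth(0.5 * t), np.heaviside(t, 1.0)])

def u2(t):
    # u^2_t = [H(t); 0.5*sawtooth(0.5*t)], cf. (86)
    return np.array([np.heaviside(t, 1.0), 0.5 * sawtooth(0.5 * t)])

def xi(t):
    # xi_t = [sin(10*t); sin(10*t)], cf. (87)
    return np.array([np.sin(10.0 * t), np.sin(10.0 * t)])
```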

Remark 17. As told before, the functions $f(\cdot)$ and $g^i(\cdot)$, as well as the matrix $H$, are unknown, but in order to make an example of (85) and its simulation, the values of these "unknown" parameters will be shown in the Appendix of this paper.

Thereby, consider now a first differential neural network given by

$$\dot{\hat{y}}_t=W_{1,t}\,\sigma(V_{1,t}\hat{y}_t)+\sum_{i=1}^{2}W^i_{2,t}\,\varphi^i(V^i_{2,t}\hat{y}_t)\,u^i_t, \quad (88)$$

where

$$W_{1,0}=\begin{bmatrix}0.5 & 0.1 & 0 & 2\\ 0.2 & 0.5 & 0.6 & 0\end{bmatrix},\qquad V_{1,0}=\begin{bmatrix}1 & 0.3 & 0.5 & 0.2\\ 0.5 & 1.3 & 1 & 0.6\end{bmatrix}^T,$$

$$W^1_{2,0}=\begin{bmatrix}0.5 & 0.1\\ 0 & 0.2\end{bmatrix},\qquad W^2_{2,0}=\begin{bmatrix}1.3 & 0.8\\ 0.8 & 1.3\end{bmatrix},$$

$$V^1_{2,0}=\begin{bmatrix}1 & 0.1\\ 0.1 & 2\end{bmatrix},\qquad V^2_{2,0}=\begin{bmatrix}0.5 & 0.7\\ 0.9 & 0.1\end{bmatrix}, \quad (89)$$

and the activation functions are

$$\sigma_p(w)=\left.\frac{4.5}{1+e^{-0.5w}}-3\right|_{w=V_{1,t}\hat{y}_t},\quad p=1,2,3,4,$$

$$\varphi^1_{pq}(w^1)=\left.\frac{4.5}{1+e^{-0.5w^1}}-3\right|_{w^1=V^1_{2,t}\hat{y}_t},\quad p=q=1,2,$$

$$\varphi^2_{pq}(w^2)=\left.\frac{4.5}{1+e^{-0.5w^2}}-3\right|_{w^2=V^2_{2,t}\hat{y}_t},\quad p=q=1,2. \quad (90)$$

By proposing the values described in Assumption 8 as

$$\bar{\Lambda}_\sigma=\bar{\Lambda}^1_\varphi=\bar{\Lambda}^2_\varphi=\begin{bmatrix}1.3 & 0\\ 0 & 1.3\end{bmatrix},\qquad \Lambda^1_{\nu\varphi}=\Lambda^2_{\nu\varphi}=\begin{bmatrix}0.8 & 0\\ 0 & 0.8\end{bmatrix},$$

$$\Lambda^1_\varphi=\Lambda^2_\varphi=\Lambda^1_{\varphi'}=\Lambda^2_{\varphi'}=\begin{bmatrix}1.3 & 0\\ 0 & 1.3\end{bmatrix}^{-1},\qquad \Lambda_F=\begin{bmatrix}0.1 & 0\\ 0 & 0.1\end{bmatrix}^{-1},$$

$$\Lambda_\sigma=\Lambda_{\sigma'}=\begin{bmatrix}1.3 & 0 & 0 & 0\\ 0 & 1.3 & 0 & 0\\ 0 & 0 & 1.3 & 0\\ 0 & 0 & 0 & 1.3\end{bmatrix}^{-1},\qquad \Lambda_{\nu\sigma}=\begin{bmatrix}0.8 & 0 & 0 & 0\\ 0 & 0.8 & 0 & 0\\ 0 & 0 & 0.8 & 0\\ 0 & 0 & 0 & 0.8\end{bmatrix},$$

$$A=\begin{bmatrix}-1.3963 & -2.2632\\ 0.0473 & -6.1606\\ 0.426 & -7.4451\\ -6.1533 & 0.8738\end{bmatrix},\qquad Q=\begin{bmatrix}1.0 & 0\\ 0 & 1.0\end{bmatrix},$$

$$l_\sigma=l^1_\varphi=l^2_\varphi=1.3,\qquad \bar{u}^1=\bar{u}^2=5, \quad (91)$$

Figure 1: Comparison between $y_t$ and $\hat{y}_t$ for the 2-player nonlinear differential game (85). (Panels (a) and (b) plot $y_{1,t}$ against $\hat{y}_{1,t}$ and $y_{2,t}$ against $\hat{y}_{2,t}$ over 25 seconds.)

and by choosing $K_{W1}=100$ and $K_{V1}=K^1_{W2}=K^2_{W2}=K^1_{V2}=K^2_{V2}=1$, the solution of the algebraic Riccati equation (23) results in

$$P=\begin{bmatrix}2.0524 & -0.6374\\ -0.6374 & 3.1842\end{bmatrix}. \quad (92)$$

Then, by applying the learning law (38) described in Theorem 13, the value of the identification error in average sense (40) on a time period of 25 seconds is

$$\limsup_{t\to 25}\left(\frac{1}{t}\int_{0}^{t}\left(\left[\varepsilon_\tau\right]^T 2Q\left[\varepsilon_\tau\right]\right)d\tau\right)=0.03145. \quad (93)$$

Remark 18. Even though 0.03145 is the value of $\bar{F}_1$ on the time interval $[0,25]$, it is not the global minimum value that $\bar{F}_1$ can take, since $t$ does not tend to infinity. So, in order to find this global minimum value, it would be required to simulate an arbitrarily large time.

On the other hand, let a second differential neural network given by

$$\dot{\hat{z}}_t=W_{3,t}\,\alpha(\hat{z}_t)+W_{4,t}\,\gamma(\hat{z}_t)\,\xi_t \quad (94)$$

be such that the equalities (32) and (33) are fulfilled; that is,

$$W_{3,0}=\begin{bmatrix}0.5 & 0.1 & 0 & 2\\ 0.2 & 0.5 & 0.6 & 0\end{bmatrix},\qquad W_{4,0}=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix},$$

$$\bar{\Lambda}_\alpha=\begin{bmatrix}17.3 & 0\\ 0 & 17.3\end{bmatrix},\qquad \bar{\Lambda}_\gamma=\begin{bmatrix}2.0 & 0\\ 0 & 2.0\end{bmatrix},$$

$$\Lambda_\alpha=\begin{bmatrix}2.6 & 0 & 0 & 0\\ 0 & 2.6 & 0 & 0\\ 0 & 0 & 2.6 & 0\\ 0 & 0 & 0 & 2.6\end{bmatrix}^{-1},\qquad \Lambda_\gamma=\begin{bmatrix}6.734 & 5.46\\ 5.46 & 6.162\end{bmatrix}^{-1},$$

$$\Lambda_G=\begin{bmatrix}0.1 & 0\\ 0 & 0.1\end{bmatrix}^{-1},\qquad \bar{\xi}=5, \quad (95)$$

and the activation functions are

$$\alpha_p(w)=\left.\frac{4.5}{1+e^{-0.5w}}-3\right|_{w=\hat{z}_t},\quad p=1,2,3,4,$$

$$\gamma_{pq}(w)=\left.\frac{4.5}{1+e^{-0.5w}}-3\right|_{w=\hat{z}_t},\quad p=q=1,2. \quad (96)$$

By choosing $K_{W3}=100000$ and $K_{W4}=1$, the solution of the algebraic Riccati equation (23) results in the matrix (92), and, by applying the learning law (59) described in Theorem 14, the value of the identification error in average sense (61) on a time period of 25 seconds is

$$\limsup_{t\to 25}\left(\frac{1}{t}\int_{0}^{t}\left(\left[\epsilon_\tau\right]^T Q\left[\epsilon_\tau\right]\right)d\tau\right)=0.00006645. \quad (97)$$

Remark 19. As in Remark 18, in order to find the global minimum value of $\bar{G}_1$, it would be required to simulate an arbitrarily large time.

Finally taking into account the identification errors (93)and (97) the value of the filtering error in average sense (77)-(78) on a time period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) = 003139 (98)

The simulation of this example wasmade using theMATLABand Simulink platforms and its results are shown in Figures1ndash4

As is seen in Figure 1 the differential neural network(88) can perform the identification of the continuous-timenonlinear dynamic game (85) where 119910

1119905and 119910

2119905(or simi-

larly 1199091119905

and 1199092119905) denote the state variables with undesired

perturbations and 1199101119905

and 1199102119905

indicate their state estimatesOn the other hand according to Figure 2 the differential

neural network (94) identifies the dynamics of the additivedeterministic noises or in other words the dynamics of =119867120585

119905 Thus 119911

1119905and 119911

2119905represent the state variables of the

above differential equation and 1119905

and 2119905

are their stateestimates

In this way Figure 3 shows the performance of thefiltering process (13) in the nonlinear differential game (85)that is to say it shows the comparison between the expectedor uncorrupted state variables 119909

1119905and 119909

2119905and the filtered

state estimates 1199091119905

and 1199092119905

Mathematical Problems in Engineering 11

0

minus01

z1t

z1t

01

02

0 05 1 15 2 25 3 35

(a)

0

minus01

z2t

z2t

01

0

minus02

05 1 15 2 25 3 35

(b)

Figure 2 Comparison between 119911119905and

119905for the 2-player nonlinear differential game (85)

Figure 3: Comparison between $x_{t}$ and $\tilde{x}_{t}$ for the 2-player nonlinear differential game (85): (a) $x_{1t}$ versus $\tilde{x}_{1t}$; (b) $x_{2t}$ versus $\tilde{x}_{2t}$.

Figure 4: Comparison between $x_{t}$ and $\tilde{x}_{t}$ for the 2-player nonlinear differential game (85): (a) $x_{1t}$ versus $\tilde{x}_{1t}$; (b) $x_{2t}$ versus $\tilde{x}_{2t}$.

Finally, Figure 4 also exhibits the described filtering process, but now comparing the real state variables $x_{1t}$ and $x_{2t}$ with the filtered state estimates $\tilde{x}_{1t}$ and $\tilde{x}_{2t}$.

7. Results Analysis and Discussion

Although the differential neural networks (14) and (27) can perform the identification of the differential equations (10) and (11), it is important to remember that their performance depends on the number of neurons used and on the choice of all the constant values (or free parameters) that were described in Assumptions 8 and 11. Therefore, the values of $F_{1}$ and $G_{1}$ (at a fixed time) can change according to this fact.

In other words, because the differential neural networks are an approximation of a dynamic system (or game), there will always be a residual error that depends on this approximation.

Thus, in the particular case shown in Example 16, the design of (88) was made using only eight neurons: four for the layer of perceptrons without any relation to the players and two for the layer of perceptrons of each player. Similarly, for the design of (94) (which is subject to the equalities (32)-(33)), six neurons were used.

On the other hand, it is important to mention that (14) and (27) will operate properly only if Assumptions 1–4, 6, 8, 9, and 11 are satisfied; that is to say, there is no guarantee that these differential neural networks perform good identification and filtering processes if, for example, the class of nonlinear differential games (1)–(4) has a stochastic nature.

Finally, although this paper presents a new approach for identification and filtering of nonlinear dynamic games, it should be emphasized that there exist other techniques that might solve the problem treated here, for example, those in the publications cited in Section 1.


8. Conclusions and Future Work

According to the results of this paper, the differential neural networks (14) and (27) solve the problem of identifying and filtering the class of nonlinear differential games (1)–(4).

However, there is no guarantee that (14) and (27) identify and filter the states of (1)–(4) if Assumptions 1–4, 6, 8, 9, and 11 are not met.

Thereby, the proposed learning laws (38) and (59) guarantee the maximum values (upper bounds) of the identification error in average sense, (40) and (61), and consequently the maximum value of the filtering error, (77)-(78).

Nevertheless, these errors depend on both the number of neurons used in the differential neural networks (14) and (27) and the proposed values applied in their design conditions.

On the other hand, according to the simulation results of the illustrating example, the effectiveness of (14) and (27) is shown and the applicability of Theorems 13, 14, and 15 is verified.

Finally, regarding future work in this research field, one can analyze and discuss the use of (14) and (27) for obtaining equilibrium solutions in the class of nonlinear differential games (1)–(4), that is to say, the use of differential neural networks for controlling nonlinear differential games.

Appendix

The values of the "unknown" nonlinear functions $f(\cdot)$, $g_{1}(\cdot)$, and $g_{2}(\cdot)$ and the "unknown" constant matrix $H$ used in (85) of Example 16 are as follows:

$$f(x_{t})=\begin{bmatrix}x_{2t}\\ \sin(x_{1t})\end{bmatrix},$$
$$g_{1}(x_{t})=\begin{bmatrix}-x_{1t}\cos(x_{1t}) & 0\\ 0 & x_{2t}\sin(x_{1t})-x_{1t}\cos(x_{2t})\end{bmatrix},$$
$$g_{2}(x_{t})=\begin{bmatrix}\sin(x_{1t}) & 0\\ 0 & -[x_{1t}+x_{2t}]\end{bmatrix},$$
$$H=\begin{bmatrix}0.5 & 0\\ 0 & 0.5\end{bmatrix}. \tag{A.1}$$
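For completeness, the "unknown" data (A.1), together with the control actions (86) and the perturbation (87), can be coded directly to regenerate trajectories of the game (85). The Python sketch below does so; the sawtooth and Heaviside conventions (period-$2\pi$ sawtooth in $[-1,1]$, unit step equal to 1 at $t=0$) and the zero initial condition are assumptions.

```python
import numpy as np
from scipy.signal import sawtooth
from scipy.integrate import solve_ivp

# "Unknown" dynamics of (A.1)
def f(x):
    return np.array([x[1], np.sin(x[0])])

def g1(x):
    return np.array([[-x[0] * np.cos(x[0]), 0.0],
                     [0.0, x[1] * np.sin(x[0]) - x[0] * np.cos(x[1])]])

def g2(x):
    return np.array([[np.sin(x[0]), 0.0],
                     [0.0, -(x[0] + x[1])]])

H = np.array([[0.5, 0.0],
              [0.0, 0.5]])

# Control actions (86) and perturbation (87); the signal conventions are assumptions.
def u1(t):
    return np.array([0.5 * sawtooth(0.5 * t), np.heaviside(t, 1.0)])

def u2(t):
    return np.array([np.heaviside(t, 1.0), 0.5 * sawtooth(0.5 * t)])

def xi(t):
    return np.array([np.sin(10 * t), np.sin(10 * t)])

def game_rhs(t, x):
    # Right-hand side of the 2-player nonlinear differential game (85).
    return f(x) + g1(x) @ u1(t) + g2(x) @ u2(t) + H @ xi(t)

# Example: integrate the perturbed game on [0, 25] s from a zero initial state.
sol = solve_ivp(game_rhs, (0.0, 25.0), np.zeros(2), max_step=0.01)
print(sol.y[:, -1])
```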

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] I. Alvarez, A. Poznyak, and A. Malo, "Urban traffic control problem: a game theory approach," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 2168–2172, Cancun, Mexico, December 2008.

[2] T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, Academic Press, Great Britain, UK, 2nd edition, 1995.

[3] J. M. C. Clark and R. B. Vinter, "A differential dynamic games approach to flow control," in Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1228–1231, Maui, Hawaii, USA, December 2003.

[4] S. H. Tamaddoni, S. Taheri, and M. Ahmadian, "Optimal VSC design based on Nash strategy for differential 2-player games," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '09), pp. 2415–2420, San Antonio, Tex, USA, October 2009.

[5] F. Amato, M. Mattei, and A. Pironti, "Robust strategies for Nash linear quadratic games under uncertain dynamics," in Proceedings of the 37th IEEE Conference on Decision and Control (CDC '98), pp. 1869–1870, Tampa, Fla, USA, December 1998.

[6] X. Nian, S. Yang, and N. Chen, "Guaranteed cost strategies of uncertain LQ closed-loop differential games with multiple players," in Proceedings of the American Control Conference, pp. 1724–1729, Minneapolis, Minn, USA, June 2006.

[7] M. Jimenez-Lizarraga and A. Poznyak, "Equilibrium in LQ differential games with multiple scenarios," in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 4081–4086, New Orleans, La, USA, December 2007.

[8] M. Jimenez-Lizarraga, A. Poznyak, and M. A. Alcorta, "Leader-follower strategies for a multi-plant differential game," in Proceedings of the American Control Conference (ACC '08), pp. 3839–3844, Seattle, Wash, USA, June 2008.

[9] A. S. Poznyak and C. J. Gallegos, "Multimodel prey-predator LQ differential games," in Proceedings of the American Control Conference, pp. 5369–5374, Denver, Colo, USA, June 2003.

[10] Y. Li and L. Guo, "Convergence of adaptive linear stochastic differential games: nonzero-sum case," in Proceedings of the 10th World Congress on Intelligent Control and Automation (WCICA '12), pp. 3543–3548, Beijing, China, July 2012.

[11] D. Vrabie and F. Lewis, "Adaptive dynamic programming algorithm for finding online the equilibrium solution of the two-player zero-sum differential game," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '10), pp. 1–8, Barcelona, Spain, July 2010.

[12] B.-S. Chen, C.-S. Tseng, and H.-J. Uang, "Fuzzy differential games for nonlinear stochastic systems: suboptimal approach," IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 222–233, 2002.

[13] B. Widrow, J. R. Glover Jr., J. M. McCool, et al., "Adaptive noise cancelling: principles and applications," Proceedings of the IEEE, vol. 63, no. 12, pp. 1692–1716, 1975.

[14] C. E. de Souza, U. Shaked, and M. Fu, "Robust H-infinity filtering for continuous time varying uncertain systems with deterministic input signals," IEEE Transactions on Signal Processing, vol. 43, no. 3, pp. 709–719, 1995.

[15] A. Osorio-Cordero, A. S. Poznyak, and M. Taksar, "Robust deterministic filtering for linear uncertain time-varying systems," in Proceedings of the American Control Conference, pp. 2526–2530, Albuquerque, NM, USA, June 1997.

[16] M. Hou and R. J. Patton, "Optimal filtering for systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 43, no. 3, pp. 445–449, 1998.

[17] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.

[18] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou, and P. A. Ioannou, "High-order neural network structures for identification of dynamical systems," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 422–431, 1995.

[19] F. Abdollahi, H. A. Talebi, and R. V. Patel, "State estimation for flexible-joint manipulators using stable neural networks," in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 25–29, Kobe, Japan, 2003.

[20] A. Alessandri, C. Cervellera, and M. Sanguineti, "Design of asymptotic estimators: an approach based on neural networks and nonlinear programming," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 86–96, 2007.

[21] Y. H. Kim, F. L. Lewis, and C. T. Abdallah, "Nonlinear observer design using dynamic recurrent neural networks," in Proceedings of the 35th IEEE Conference on Decision and Control, pp. 949–954, Kobe, Japan, December 1996.

[22] F. L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 703–715, 1995.

[23] G. A. Rovithakis and M. A. Christodoulou, "Adaptive control of unknown plants using dynamical neural networks," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 3, pp. 400–412, 1994.

[24] A. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking, World Scientific, Singapore, 2001.

[25] E. García and D. A. Murano, "State estimation for a class of nonlinear differential games using differential neural networks," in Proceedings of the American Control Conference (ACC '11), pp. 2486–2491, San Francisco, Calif, USA, July 2011.

[26] D. A. Murano, Nash equilibrium for uncertain nonlinear differential games using differential neural networks: deterministic and stochastic cases [Ph.D. thesis], CINVESTAV-IPN, Distrito Federal, Mexico, 2004.

[27] D. Jiang and J. Wang, "A recurrent neural network for online design of robust optimal filters," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 6, pp. 921–926, 2000.

[28] A. G. Parlos, S. K. Menon, and A. F. Atiya, "An algorithmic approach to adaptive state filtering using recurrent neural networks," IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1411–1432, 2001.

[29] S. Haykin, Neural Networks: A Comprehensive Foundation, Pearson, Chennai, India, 1999.


Page 3: Research Article Differential Neural Networks for ...downloads.hindawi.com/journals/mpe/2014/306761.pdf · second di erential neural network will identify the e ect of these additive

Mathematical Problems in Engineering 3

Assumption 1 The admissible control actions 119906119894119905are measur-

able and bounded for all time 119905 isin [0infin) that is10038171003817100381710038171003817119906119894

119905

10038171003817100381710038171003817le 119906

119894

lt infin (5)

where 119906119894 isin R are known constants

Assumption 2 In order to guarantee the global existence anduniqueness of the solution of (1) the unknown nonlinearfunctions119891(sdot) and 119892119894(sdot) satisfy the Lipschitz condition that isthere exist 0 lt 120595 120595119894

isin R constants such that the equations1003817100381710038171003817119891 (1205941) minus 119891 (1205942)

1003817100381710038171003817 le 12059510038171003817100381710038171205941 minus 1205942

1003817100381710038171003817

10038171003817100381710038171003817119892119894

(1205941) minus 119892

119894

(1205942)10038171003817100381710038171003817le 120595

119894 10038171003817100381710038171205941 minus 12059421003817100381710038171003817

(6)

are fulfilled for all 1205941 120594

2isin X sub R119899 Moreover 119891(sdot) and 119892119894(sdot)

satisfy the following linear growth condition1003817100381710038171003817119891 (120594)

1003817100381710038171003817 le 1198911+ 119891

2

10038171003817100381710038171205941003817100381710038171003817

10038171003817100381710038171003817119892119894

(120594)10038171003817100381710038171003817le 119892

119894

1+ 119892

119894

2

10038171003817100381710038171205941003817100381710038171003817

(7)

where 1198911 119891

2 119892

119894

1 119892

119894

2isin R are known constants

Assumption 3 Under the permissible control strategies120588119894

(119909119905) the (feedback) class of nonlinear differential games

given by (1)ndash(4) is quadratically stable that is there exists aLyapunov (maybe unknown) function 0 le 119864

119905 R119899

rarr Rsuch that the inequalities

[120597119864

119905

120597119909119905

][119891 (119909119905) +

119873

sum

119894=1

119892119894

(119909119905) 119906

119894

119905+ 119867120585

119905] le minus120582

1

10038171003817100381710038171199091199051003817100381710038171003817

2

10038171003817100381710038171003817100381710038171003817

120597119864119905

120597119909119905

10038171003817100381710038171003817100381710038171003817

le 1205822

10038171003817100381710038171199091199051003817100381710038171003817

(8)

are satisfied for any known 0 lt 1205821 120582

2isin R constants

Assumption 4 The deterministic perturbation 120585119905is measur-

able and bounded for all time 119905 isin [0infin) that is10038171003817100381710038171205851199051003817100381710038171003817 le 120585 lt infin (9)

where 120585 isin R is a known constant

3 Problem Statement

Let the class of continuous-time nonlinear dynamic games(1)ndash(4) be such that Assumptions 1 to 4 are fulfilled Thenif one makes the following change of variables

119910119905= 119891 (119909

119905) +

119873

sum

119894=1

119892119894

(119909119905) 119906

119894

119905+ 119867120585

119905 (10)

119905= 119867120585

119905 (11)

it is clear that the expected or uncorrupted vector state of theclass of nonlinear differential games (1)ndash(4) can be defined as

119909119905= 119910

119905minus 119911

119905 (12)

Thereby and in view of the fact that 119891(sdot) 119892119894(sdot) and 119867

are unknown the tackled problem in this paper is to obtaina feedback (perfect state) information pattern 120578

119894

119905= 119909

119905

given that 119909119905isin R119899 satisfies the following filtering (or noise

canceling) equation

119909119905= 119910

119905minus

119905(13)

and where 119910119905

119905isin R119899 are the state estimates of differential

equations (10) and (11) respectively

4 Differential Neural Networks Design

In order to solve the problem described above consider a firstdifferential neural network given by the following equation

119910119905= 119882

1119905120590 (119881

1119905119910119905) +

119873

sum

119894=1

119882119894

2119905120593119894

(119881119894

2119905119910119905) 119906

119894

119905 (14)

where 119905 isin [0infin) 119894 = 1 2 3 119873 denotes the number ofplayers 119910

119905isin R119899 is the vector state of the neural network

the matrices 1198821119905

isin R119899times119896 1198811119905

isin R119896times119899 119882119894

2119905isin R119899times119903119894 and

119881119894

2119905isin R119903119894times119899 are synaptic weights of the neural network and

120590 R119896

rarr R119896 and120593119894 R119903119894 rarr R119903119894times119904119894 are activation functionsof the neural network

Remark 5 For simplicity (and from now on) 120590 = 120590(1198811119905119910119905)

and 120593119894 = 120593119894(119881119894

2119905119910119905)

According to [29] the differential neural network (14)is classified as multilayer perceptrons and its structure wasinitially taken from [23 24] Also this differential neuralnetwork only uses sigmoid activation functions that is 120590 and120593119894 have a diagonal structure with elements

120590119901(119908) =

119886119901

1 + 119890minus119887119901119908

minus 119888119901

10038161003816100381610038161003816100381610038161003816119908=1198811119905119910119905

119901 = 1 2 119896 (15)

120593119894

119902119902(119908

119894

) =

119886119902

1 + 119890minus119887119902119908119894minus 119888

119902

10038161003816100381610038161003816100381610038161003816119908119894=1198811198942119905119910119905

119902 = 1 2 min (119903119894 119904

119894)

(16)

where 0 lt 119886119901 119887

119901 119888119901

isin R and 0 lt 119886119902 119887

119902 119888119902

isin Rare known constants that manipulate the geometry of thesigmoid function

Thus in view of the fact that 120590 and 120593119894 are sigmoid

activation functions they are bounded and they satisfy thefollowing equations

[ minus 119860 [119910119905minus 119910

119905]]

119879

Λ120590[ minus 119860 [119910

119905minus 119910

119905]]

le [119910119905minus 119910

119905]119879

Λ120590[119910

119905minus 119910

119905]

[120593119894

119906119894

119905]119879

Λ119894

120593[120593

119894

119906119894

119905] le [119906

119894

]2

[119910119905minus 119910

119905]119879

Λ119894

120593[119910

119905minus 119910

119905]

(17)

1015840

= 119863120590[119881

1119905minus 119881

10] 119910

119905+ ]

120590

1205931198941015840

= ⟨119863119894

120593 119906

119894

119905⟩ [119881

119894

2119905minus 119881

119894

20] 119910

119905+ ]119894

120593119906119894

119905

(18)

4 Mathematical Problems in Engineering

[]120590]119879

Λ1205901015840 []

120590]

le 119897120590[[119881

1119905minus 119881

10] 119910

119905]119879

Λ ]120590 [[1198811119905 minus 11988110] 119910119905]

[]119894120593119906119894

119905]119879

Λ119894

1205931015840 []119894

120593119906119894

119905]

le 119897119894

120593[119906

119894

]2

[[119881119894

2119905minus 119881

119894

20] 119910

119905]119879

Λ119894

]120593 [[119881119894

2119905minus 119881

119894

20] 119910

119905]

(19)

where 119860 Λ120590 Λ

1205901015840 Λ ]120590 Λ 120590

Λ119894

120593 Λ119894

1205931015840 Λ119894

]120593 Λ119894

120593 119897120590 and 119897119894

120593are

known constants of adequate dimensions and

= 120590 (11988110119910119905) minus 120590 (119881

10119910119905)

120593119894

= 120593119894

(119881119894

20119910119905) minus 120593

119894

(119881119894

20119910119905)

1015840

= 120590 (1198811119905119910119905) minus 120590 (119881

10119910119905)

1205931198941015840

= 120593119894

(119881119894

2119905119910119905) minus 120593

119894

(119881119894

20119910119905)

119863120590=

120597

120597120595120590 (120595) isin R

119896times119896

119863119894

120593=

120597

120597120595120593119894

(120595) isin R119903119894

(20)

Nevertheless there are some design conditions that thisfirst differential neural network needs to satisfy

Assumption 6 According to Assumption 2 the approxima-tion or residual error of (14) corresponding to the unidenti-fied dynamics of (10) and that is given by

119865119905= 119910

119905minus [119882

10120590 (119881

10119910119905) +

119873

sum

119894=1

119882119894

20120593119894

(119881119894

20119910119905) 119906

119894

119905] (21)

where 11988210 119881

10 119882119894

20 and 119881

119894

20are initial synaptic weights

(when 119905 = 0) is bounded and satisfies the followinginequality

[119865119905]119879

Λ119865[119865

119905] le 119865

1+ 119865

2[119910

119905]119879

Λ119865[119910

119905] (22)

where 0 lt Λ119865isin R119899times119899 is a known constant and 0 lt 119865

1

1198652isin R

Remark 7 The constants 1198651and 119865

2in (22) are not known

a priori because of the fact that they depend on the perfor-mance of the differential neural network (14) that is to saythe residual error (21) will depend on the number of neuronsused in (14) and on its parameters and therefore 119865

1and 119865

2

will depend on this too (see [24])

Assumption 8 There exist values of Λ120590 Λ

1205901015840 Λ119894

120593 Λ119894

1205931015840 Λ 120590

Λ119894

120593 Λ

119865 119897

120590 119897119894

120593 119860 and 119876 such that they provide a 0 lt 119875 =

[119875]119879

isin R119899times119899 solution to the algebraic Riccati equation asfollows

119875119860 + [119860 ]119879

119875 + 119875119877119875 + 119876 = 0 (23)

where

119860 = 11988210119860 (24)

119877 = [11988210] [[Λ

120590]minus1

+ [Λ1205901015840]minus1

] [11988210]119879

+ [Λ119865]minus1

+

119873

sum

119894=1

[119882119894

20] [[Λ

119894

120593]minus1

+ [Λ119894

1205931015840]minus1

] [119882119894

20]119879

(25)

119876 = 2119876 + Λ120590+

119873

sum

119894=1

[119906119894

]2

Λ119894

120593 (26)

where 119860 such that 119860 isin R119899times119899 is Hurwitz and 119876 is known

On the other hand consider now a second differentialneural network given by the following equation

119911119905= 119882

3119905120572 (

119905) + 119882

4119905120574 (

119905) 120585

119905 (27)

where 119905 isin [0infin) 119905isin R119899 is the vector state the matrices

1198823119905

isin R119899times120575 and 1198824119905

isin R119899times120583 are synaptic weights and 120572

R119899

rarr R120575 and 120574 R119899

rarr R120583times119899 are sigmoid activationfunctions

Then similar to120590 and120593119894 the activation functions120572(sdot) and120574(sdot) satisfy the following inequalities

[ minus 119860 [119905minus 119911

119905]]

119879

Λ[ minus 119860 [

119905minus 119911

119905]]

le [119905minus 119911

119905]119879

Λ120572[

119905minus 119911

119905]

[120574 120585119905]119879

Λ120574[120574 120585

119905] le [ 120585 ]

2

[119905minus 119911

119905]119879

Λ120574[

119905minus 119911

119905]

(28)

where 119860 Λ Λ

120572 Λ

120574 and Λ

120574are known constants of

adequate dimensions and

= 120572 (119905) minus 120572 (119911

119905)

120574 = 120574 (119905) minus 120574 (119911

119905)

(29)

Thereby it is easy to confirm that if 120572(sdot) = 120590 if 120574(sdot) =120593119894 if 120585

119905= 119906

119894

119905 and if 119881

1119905= 119881

119894

2119905= 119868 where 119894 = 1

then the differential neural networks (14) and (27) coincideHence two new assumptions (corresponding toAssumptions6 and 8) must be satisfied for this second differential neuralnetwork

Assumption 9 According (again) to Assumption 2 theapproximation or residual error of (27) corresponding to theunidentified dynamics of (11) and that is given by

119866119905=

119905minus [119882

30120572 (119911

119905) + 119882

40120574 (119911

119905) 120585

119905] (30)

where11988230

and11988240

are initial synaptic weights (when 119905 = 0)is bounded and satisfies the inequality

[119866119905]119879

Λ119866[119866

119905] le 119866

1+ 119866

2[119911

119905]119879

Λ119866[119911

119905] (31)

where 0 lt Λ119866isin R119899times119899 is a known constant and 0 lt 119866

1

1198662isin R

Mathematical Problems in Engineering 5

Remark 10 Similar to Remark 7 the constants 1198661and 119866

2

in (31) are not known a priori because they depend on theperformance of the differential neural network (27)

Assumption 11 If there exist values of Λ Λ

120574 Λ

119866119882

30 and

11988240

such that

[11988240] [Λ

120574]minus1

[11988240]119879

=

119873

sum

119894=1

[119882119894

20] [[Λ

119894

120593]minus1

+ [Λ119894

1205931015840]minus1

] [119882119894

20]119879

11988230

= 11988210

Λ= Λ

120590+ Λ

1205901015840

Λ119866= Λ

119865

(32)

and values of Λ120572and Λ

120574such that

119876 = Λ120572+ [ 120585 ]

2

Λ120574minus Λ

120590minus

119873

sum

119894=1

[119906119894

]2

Λ119894

120593 (33)

then they provide a 0 lt 119875 = [119875]119879

isin R119899times119899 solution to thealgebraic Riccati equation (23) where

119860 = 11988230119860 (34)

119877 = [11988230] [Λ

]minus1

[11988230]119879

+ [Λ119866]minus1

+ [11988240] [Λ

119894

120593]minus1

[11988240]119879

(35)

119876 = 119876 + Λ120572+ [ 120585 ]

2

Λ120574 (36)

Remark 12 Notice that the equalities (32) and (33) werechosen in order to guarantee the same solution of thealgebraic Riccati equation (23) in Assumptions 8 and 11 Alsonotice that the first term of the right-hand side of (26) willalways be greater than the first term of the right-hand side of(36) that is 2119876 gt 119876

5 Main Result on Identification and Filtering

According to the above the main result on identification andfiltering for the class of nonlinear differential games (1)ndash(4)deals with both the development of an adaptive learning lawfor the synaptic weights of the differential neural networks(14) and (27) and the inference of a maximum value ofidentification error for the dynamics (10) and (11)

Moreover the establishment of a maximum value of fil-tering error between the uncorrupted states and the identifiedones is obtained namely an error is defined by

119890119905= 119909

119905minus 119909

119905 (37)

where 119909119905isin R119899 is given by noise canceling equation (13) and

119909119905isin R119899 is given by the expected or uncorrupted vector state

(12)More formally the main obtained result is described in

the following three theorems

Theorem 13 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural network (14) be suchthat Assumptions 6 and 8 are satisfied If the synaptic weightsof (14) are adjusted with the following learning law

1119905= minus2119870

1198821119875120576

119905[120590]

119879

1119905= minus 119870

1198811[2[119863

120590]119879

[11988210]119879

119875120576119905

+119897120590[Λ ]120590]

119879

1119905119910119905] [119910

119905]119879

119894

2119905= minus2119870

119894

1198822

119875120576119905[119906

119894

119905]119879

[120593119894

]119879

119894

2119905= minus119870

119894

1198812

[2 ⟨119863119894

120593 119906

119894

119905⟩ [119882

119894

20]119879

119875120576119905

+119897119894

120593[119906

119894

]2

[Λ119894

]120593]119879

119894

2119905119910119905] [119910

119905]119879

(38)

where120576119905= 119910

119905minus 119910

119905(39)

denotes the identification error 1119905= 119881

1119905minus119881

10 119894

2119905= 119881

119894

2119905minus

119881119894

20 and 119870

1198821 119870

1198811 119870119894

1198822

and 119870119894

1198812

are known symmetric andpositive-definite constant matrices then it is possible to obtainthe nextmaximumvalue of identification error in average sense119870

119894

1198822

as follows

lim sup119905rarrinfin

(1

119905int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591) le 119865

1 (40)

Proof Taking into account the residual error (21) the differ-ential equation (10) can be expressed as

119910119905= 119882

10120590 (119881

10119910119905) +

119873

sum

119894=1

119882119894

20120593119894

(119881119894

20119910119905) 119906

119894

119905+ 119865

119905 (41)

Then by substituting (14) and (41) into the derivative of (39)with respect to 119905 and by adding and subtracting the terms[119882

10120590] [119882

10120590(119881

10119910119905)] [119882119894

20120593119894

119906119894

119905] and [119882119894

20120593119894

(119881119894

20119910119905)119906

119894

119905] it

is easy to confirm that

120576119905=

1119905120590 +119882

10 + 119882

101015840

minus 119865119905

+

119873

sum

119894=1

119894

2119905120593119894

119906119894

119905+

119873

sum

119894=1

119882119894

20120593119894

119906119894

119905+

119873

sum

119894=1

119882119894

201205931198941015840

119906119894

119905

(42)

where 1119905= 119882

1119905minus119882

10and 119894

2119905= 119882

119894

2119905minus119882

119894

20 Now let the

Lyapunov (energetic) candidate function

119871119905= 119864

119905+ [120576

119905]119879

119875 [120576119905] +

1

2tr ([

1119905]119879

[1198701198821]minus1

[1119905])

+1

2tr ([

1119905]119879

[1198701198811]minus1

[1119905])

+1

2

119873

sum

119894=1

tr ([119894

2119905]119879

[119870119894

1198822

]minus1

[119894

2119905])

+1

2

119873

sum

119894=1

tr ([119894

2119905]119879

[119870119894

1198812

]minus1

[119894

2119905])

(43)

6 Mathematical Problems in Engineering

be such that the inequalities (8) are fulfilled (seeAssumption 3) and let

119905le minus120582

1

10038171003817100381710038171199101199051003817100381710038171003817

2

+ 2[120576119905]119879

119875 [ 120576119905]

+ tr ([1119905]119879

[1198701198821]minus1

[1119905])

+ tr ([1119905]119879

[1198701198811]minus1

[1119905])

+

119873

sum

119894=1

tr ([119894

2119905]119879

[119870119894

1198822

]minus1

[119894

2119905])

+

119873

sum

119894=1

tr ([119894

2119905]119879

[119870119894

1198812

]minus1

[119894

2119905])

(44)

be the derivative of (43) with respect to 119905 Then by substi-tuting (42) into the second term of the right-hand side of(44) and by adding and subtracting the term [2[120576

119905]119879

119875119860120576119905]

one may get

2[120576119905]119879

119875 [ 120576119905] = 2[120576

119905]119879

11987511988210[ minus 119860120576

119905] + 2[120576

119905]119879

119875119882101015840

+

119873

sum

119894=1

2[120576119905]119879

119875119882119894

20120593119894

119906119894

119905

+

119873

sum

119894=1

2[120576119905]119879

119875119882119894

201205931198941015840

119906119894

119905minus 2[120576

119905]119879

119875119865119905

+ 2[120576119905]119879

1198751119905120590 +

119873

sum

119894=1

2[120576119905]119879

119875119894

2119905120593119894

119906119894

119905

+ [120576119905]119879

[119875119860 + [119860 ]119879

119875] [120576119905]

(45)

Next by analyzing the first five terms of the right-hand sideof (45) with the following inequality

[119883]119879

119884 + [119884]119879

119883 le [119883]119879

[Λ]minus1

[119883] + [119884]119879

Λ [119884] (46)

which is valid for any pair of matrices 119883119884 isin RΩtimesΓ and forany constant matrix 0 lt Λ isin RΩtimesΩ where Ω and Γ arepositive integers (see [24]) the following is obtained

(i) Using (46) and (17) in the first term of the right-handside of (45) then

2[120576119905]119879

11987511988210[ minus 119860120576

119905]

le [120576119905]119879

[119875 [11988210] [Λ

120590]minus1

[11988210]119879

119875] [120576119905]

+ [120576119905]119879

Λ120590[120576

119905]

(47)

(ii) Substituting (18) into the second term of the right-hand side of (45) and using (46) and (19) then

2[120576119905]119879

119875119882101015840

le 2[120576119905]119879

11987511988210119863

1205901119905119910119905

+ [120576119905]119879

[119875 [11988210] [Λ

1205901015840]minus1

[11988210]119879

119875] [120576119905]

+ 119897120590[

1119905119910119905]119879

Λ ]120590 [1119905119910119905]

(48)

(iii) Using (46) and (17) in the third term of the right-handside of (45) then

119873

sum

119894=1

2[120576119905]119879

119875119882119894

20120593119894

119906119894

119905

le

119873

sum

119894=1

[120576119905]119879

[119875 [119882119894

20] [Λ

119894

120593]minus1

[119882119894

20]119879

119875] [120576119905]

+

119873

sum

119894=1

[119906119894

]2

[120576119905]119879

Λ119894

120593[120576

119905]

(49)

(iv) Substituting (18) into the fourth term of the right-hand side of (45) and using (46) and (19) then

119873

sum

119894=1

2[120576119905]119879

119875119882119894

201205931198941015840

119906119894

119905

le

119873

sum

119894=1

2[120576119905]119879

119875119882119894

20⟨119863

119894

120593 119906

119894

119905⟩

119894

2119905119910119905

+

119873

sum

119894=1

[120576119905]119879

[119875 [119882119894

20] [Λ

119894

1205931015840]minus1

[119882119894

20]119879

119875] [120576119905]

+

119873

sum

119894=1

119897119894

120593[119906

119894

]2

[119894

2119905119910119905]119879

Λ119894

]120593 [119894

2119905119910119905]

(50)

(v) Using (46) and (22) in the fifth term of the right-handside of (45) then

minus2[120576119905]119879

119875119865119905le [120576

119905]119879

[119875[Λ119865]minus1

119875] [120576119905]

+ 1198651+ 119865

2[119910

119905]119879

Λ119865[119910

119905]

(51)

So by substituting (47)ndash(51) into the right-hand side of(45) and by adding and subtracting the term [[120576

119905]119879

2119876[120576119905]]

inequality (44) can be expressed as

119905le [120576

119905]119879

[119875119860 + [119860 ]119879

119875 + 119875119877119875 + 119876] [120576119905]

minus [120576119905]119879

2119876 [120576119905] + 119865

1+ 119865

2

1003817100381710038171003817Λ 119865

1003817100381710038171003817

10038171003817100381710038171199101199051003817100381710038171003817

2

minus 1205821

10038171003817100381710038171199101199051003817100381710038171003817

2

+ tr (Υ1198821) + tr (Υ

1198811)

+

119873

sum

119894=1

tr (Υ119894

1198822

) +

119873

sum

119894=1

tr (Υ119894

1198812

)

(52)

Mathematical Problems in Engineering 7

where the algebraic Riccati equation in the first term of theright-hand side of (52) is described in (23) and in (24)ndash(26)and

Υ1198821

= [1119905]119879

[1198701198821]minus1

[1119905] + 2120590[120576

119905]119879

1198751119905

Υ1198811= [

1119905]119879

[1198701198811]minus1

[1119905] + 2119910

119905[120576

119905]119879

11987511988210119863

1205901119905

+ 119897120590119910119905[119910

119905]119879

[1119905]119879

Λ ]120590 [1119905]

Υ119894

1198822

= [119894

2119905]119879

[119870119894

1198822

]minus1

[119894

2119905] + 2120593

119894

119906119894

119905[120576

119905]119879

119875119894

2119905

Υ119894

1198812

= [119894

2119905]119879

[119870119894

1198812

]minus1

[119894

2119905]

+ 2119910119905[120576

119905]119879

119875119882119894

20⟨119863

119894

120593 119906

119894

119905⟩

119894

2119905

+ 119897119894

120593[119906

119894

]2

119910119905[119910

119905]119879

[119894

2119905]119879

Λ119894

]120593 [119894

2119905]

(53)

Thereby by equating (53) to zero that is

Υ1198821

= Υ1198811= Υ

119894

1198822

= Υ119894

1198812

= 0 (54)

and by respectively solving for 1119905

1119905 119894

2119905 and 119894

2119905 the

learning law given by (38) is obtained Now by choosing

1003817100381710038171003817Λ 119865

1003817100381710038171003817 le1205821

1198652

(55)

and by solving the algebraic Riccati equation (23) one mayget

119905le minus [120576

119905]119879

2119876 [120576119905] + 119865

1 (56)

Thus by integrating both sides of (56) on the time interval[0 119905] the following is obtained

int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591

le 1198710minus 119871

119905+ [119865

1] 119905 le 119871

0+ [119865

1] 119905

(57)

and by dividing (57) by 119905 it is easy to verify that

1

119905int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591 le

1198710

119905+ 119865

1 (58)

Finally by calculating the upper limit as 119905 rarr infin the max-imum value of identification error in average sense is the onedescribed in (40) This means that the identification error isbounded between zero and (40) and therefore this meansthat 120576

119905is stable in the sense of Lyapunov

Theorem 14 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural network (27) be suchthat Assumptions 9 and 11 are satisfied If the synaptic weightsof (27) are adjusted with the following learning law

3119905= minus2119870

1198823119872120598

119905[120572 (

119905)]

119879

119894

4119905= minus2 119870

119894

1198824

119872120598119905[120585

119905]119879

[120574 (119905)]

119879

(59)

where

120598119905=

119905minus 119911

119905(60)

denotes the identification error and 1198701198823

and 119870119894

1198824

are knownsymmetric and positive-definite constant matrices then it ispossible to obtain the next maximum value of identificationerror in average sense 119870119894

1198822

as follows

lim sup119905rarrinfin

(1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591) le 119866

1 (61)

Proof Taking into account the residual error (30) the differ-ential equation (11) can be expressed as

119905= 119882

30120572 (119911

119905) + 119882

40120574 (119911

119905) 120585

119905+ 119866

119905 (62)

Then by substituting (27) and (62) into the derivative of (60)with respect to 119905 and by adding and subtracting the terms[119882

30120572()] and [119882

40120574()120585

119905] it is easy to confirm that

120598119905=

3119905120572 () + 119882

30 +

4119905120574 () 120585

119905+119882

40120574120585

119905minus 119866

119905 (63)

where 3119905= 119882

3119905minus119882

30and

4119905= 119882

4119905minus119882

40 Now let the

Lyapunov (energetic) candidate function

119872119905= 119864

119905+ [120598

119905]119879

119875 [120598119905] +

1

2tr ([

3119905]119879

[1198701198823]minus1

[3119905])

+1

2tr ([

4119905]119879

[1198701198824]minus1

[4119905])

(64)

be such that the inequalities (8) are fulfilled (seeAssumption 3) and let

119905le minus120582

1

10038171003817100381710038171199111199051003817100381710038171003817

2

+ 2[120598119905]119879

119875 [ 120598119905]

+ tr ([3119905]119879

[1198701198823]minus1

[3119905])

+ tr ([4119905]119879

[1198701198824]minus1

[4119905])

(65)

be the derivative of (64) with respect to 119905 Then by substitut-ing (63) into the second term of the right-hand side of (65)and by adding and subtracting the term [2[120598

119905]119879

119875119860 120598119905] one

may get

2[120598119905]119879

119875 [ 120598119905] = 2[120598

119905]119879

11987511988230[ minus 119860120598

119905]

+ 2[120598119905]119879

11987511988240120574120585

119905minus 2[120598

119905]119879

119875119866119905

+ 2[120598119905]119879

1198753119905120572 ()

+ 2[120598119905]119879

1198754119905120574 () 120585

119905

+ [120598119905]119879

[119875119860 + [119860 ]119879

119875] [120598119905]

(66)

8 Mathematical Problems in Engineering

Next by analyzing the first three terms of the right-hand sideof (66) with the inequality (46) the following is obtained

(i) Using (46) and (28) in the first term of the right-handside of (66) then

2[120598119905]119879

11987511988230[ minus 119860120598

119905]

le [120598119905]119879

[119875 [11988230] [Λ

]minus1

[11988230]119879

119875] [120598119905]

+ [120598119905]119879

Λ120572[120598

119905]

(67)

(ii) Using (46) and (28) in the second term of the right-hand side of (66) then

2[120598119905]119879

11987511988240120574120585

119905

le [120598119905]119879

[119875 [11988240] [Λ

120574]minus1

[11988240]119879

119875] [120598119905]

+ [120585]2

[120598119905]119879

Λ120574[120598

119905]

(68)

(iii) Using (46) and (31) in the third term of the right-handside of (66) then

minus2[120598119905]119879

119875119866119905le [120598

119905]119879

[119875[Λ119866]minus1

119875] [120598119905]

+ 1198661+ 119866

2[119911

119905]119879

Λ119866[119911

119905]

(69)

So by substituting (67)ndash(69) into the right-hand side of(66) and by adding and subtracting the term [[120598

119905]119879

119876[120598119905]]

inequality (65) can be expressed as

119905le [120598

119905]119879

[119875119860 + [119860 ]119879

119875 + 119875119877119875 + 119876] [120598119905]

minus [120598119905]119879

119876 [120598119905] + 119866

1+ 119866

2

1003817100381710038171003817Λ119866

1003817100381710038171003817

10038171003817100381710038171199111199051003817100381710038171003817

2

minus 1205821

10038171003817100381710038171199111199051003817100381710038171003817

2

+ tr (Ψ1198823) + tr (Ψ

1198824)

(70)

where the algebraic Riccati equation in the first term of theright-hand side of (70) is described in (23) and in (34)ndash(36)and

Ψ1198823

= [3119905]119879

[1198701198823]minus1

[3119905] + 2120572 () [120598

119905]119879

1198753119905 (71)

Ψ1198824

= [4119905]119879

[1198701198824]minus1

[4119905] + 2120574 () 120585

119905[120598

119905]119879

1198754119905 (72)

Thereby by equating (71) and (72) to zero (Ψ1198823

= Ψ1198824

= 0)and by respectively solving for

3119905and

4119905 the learning law

given by (59) is obtained Now by choosing

1003817100381710038171003817Λ119866

1003817100381710038171003817 le1205821

1198662

(73)

and by solving the algebraic Riccati equation (23) one mayget

119905le minus [120598

119905]119879

119876 [120598119905] + 119866

1 (74)

Thus by integrating both sides of (74) on the time interval[0 119905] the following is obtained

int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591

le 1198720minus119872

119905+ [119866

1] 119905 le 119872

0+ [119866

1] 119905

(75)

and by dividing (75) by 119905 it is easy to verify that

1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591 le

1198720

119905+ 119866

1 (76)

Finally by calculating the upper limit as 119905 rarr infin themaximum value of identification error in average sense is theone described in (61)

Theorem 15 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural networks (14) and (27)be such that the Assumptions 6 and 8 and Assumptions 9 and11 are satisfied If the synaptic weights of (14) and (27) arerespectively adjusted with the learning laws (38) and (59) thenit is possible to obtain the following maximum value of filteringerror in average sense

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1

if 1198651gt 119866

1

(77)

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) = 0

if 1198651le 119866

1

(78)

where 119890119905is given by (37)

Proof Consider the following Lyapunov (energetic) candi-date function

Φ119905= [119890

119905]119879

119875 [119890119905] (79)

where 119875 is the solution of the algebraic Riccati equation (23)with 119860 119877 and 119876 defined by (24)ndash(26) or (34)ndash(36) Thenby calculating the derivative of (79) with respect to 119905 and byconsidering the equations (37) (13) and (12) it can be verifiedthat

Φ119905= 2[119890

119905]119879

119875 [ 120576119905minus 120598

119905] (80)

and taking into account (42) (63) and the proofs ofTheorems 13 and 14 one may get

Φ119905le minus [119890

119905]119879

2119876 [119890119905] + 119865

1minus [minus [119890

119905]119879

119876 [119890119905] + 119866

1]

le minus [119890119905]119879

[2119876 minus 119876] [119890119905] + 119865

1minus 119866

1

le minus [119890119905]119879

119876 [119890119905] + 119865

1minus 119866

1

(81)

Mathematical Problems in Engineering 9

Thus by integrating both sides of (81) on the time interval[0 119905] the following is obtained

int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le Φ

0minus Φ

119905+ [119865

1minus 119866

1] 119905

le Φ0+ [119865

1minus 119866

1] 119905

(82)

and by dividing (82) by 119905 it is easy to confirm that

1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le

Φ0

119905+ 119865

1minus 119866

1 (83)

By calculating the upper limit as 119905 rarr infin equation (83) canbe expressed as

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1(84)

and finally it is clear that

(i) if 1198651gt 119866

1 the maximum value of filtering error in

average sense is the one described in (77)(ii) if 119865

1le 119866

1 the maximum value of filtering error in

average sense is the one described in (78) given thatthe left-hand side of the inequality (84) cannot benegative

Hence the theorem is proved

6 Illustrating Example

Example 16 Consider a 2-player nonlinear differential gamegiven by

119905= 119891 (119909

119905) +

2

sum

119894=1

119892119894

(119909119905) 119906

119894

119905+ 119867120585

119905(85)

subject to the change of variables (10) and (11) (119894 = 2) wherethe control actions of each player are

1199061

119905= [

05 sawtooth (05119905)H (119905)

]

1199062

119905= [

H (119905)

05 sawtooth (05119905)] (86)

The undesired deterministic perturbations are

120585119905= [

sin (10119905)sin (10119905)] (87)

and H(119905) denotes the Heaviside step function or unit stepsignal Then under Assumptions 1ndash4 6 8 9 and 11 it is pos-sible to obtain a filtered feedback (perfect state) informationpattern 120578119894

119905= 119909

119905

Remark 17 As told before the functions 119891(sdot) and 119892119894

(sdot) aswell as the matrix 119867 are unknown but in order to makean example of (85) and its simulation the values of these

ldquounknownrdquo parameters will be shown in the Appendix of thispaper

Thereby consider now a first differential neural networkgiven by

119910119905= 119882

1119905120590 (119881

1119905119910119905) +

2

sum

119894=1

119882119894

2119905120593119894

(119881119894

2119905119910119905) 119906

119894

119905 (88)

where

11988210

= [05 01 0 2

02 05 06 0]

11988110

= [1 03 05 02

05 13 1 06]

119879

1198821

20= [

05 01

0 02] 119882

2

20= [

13 08

08 13]

1198811

20= [

1 01

01 2] 119881

2

20= [

05 07

09 01]

(89)

and the activation functions are

120590119901(119908) =

45

1 + 119890minus05119908minus 3

1003816100381610038161003816100381610038161003816119908=1198811119905119910119905

119901 = 1 2 3 4

1205931

119901119902(119908

1

) =45

1 + 119890minus051199081minus 3

100381610038161003816100381610038161003816100381610038161199081=11988112119905119910119905

119901 = 119902 = 1 2

1205932

119901119902(119908

2

) =45

1 + 119890minus051199082minus 3

100381610038161003816100381610038161003816100381610038161199082=11988122119905119910119905

119901 = 119902 = 1 2

(90)

By proposing the values described in Assumption 8 as

Λ120590= Λ

1

120593= Λ

2

120593= [

13 0

0 13]

Λ1

]120593 = Λ2

]120593 = [08 0

0 08]

Λ1

120593= Λ

2

120593= Λ

1

1205931015840 = Λ

2

1205931015840 = [

13 0

0 13]

minus1

Λ119865= [

01 0

0 01]

minus1

Λ120590= Λ

1205901015840 =

[[[

[

13 0 0 0

0 13 0 0

0 0 13 0

0 0 0 13

]]]

]

minus1

Λ ]120590 =[[[

[

08 0 0 0

0 08 0 0

0 0 08 0

0 0 0 08

]]]

]

119860 =

[[[

[

minus13963 minus22632

00473 minus61606

0426 minus74451

minus61533 08738

]]]

]

119876 = [10 0

0 10]

119897120590= 119897

1

120593= 119897

2

120593= 13 119906

1

= 1199062

= 5

(91)

10 Mathematical Problems in Engineering

1

0

minus1

y1t

y1t

minus2

0 5 10 15 20 25

(a)

15

1

05

0

minus05

minus1

minus15

0 5 10 15 20 25

y2t

y2t

(b)

Figure 1 Comparison between 119910119905and 119910

119905for the 2-player nonlinear differential game (85)

and by choosing 1198701198821

= 100 and 1198701198811= 119870

1

1198822

= 1198702

1198822

= 1198701

1198812

=

1198702

1198812

= 1 the solution of the algebraic Riccati equation (23)results in

119875 = [20524 minus06374

minus06374 31842] (92)

Then by applying the learning law (38) described inTheorem 13 the value of the identification error in averagesense (40) on a time period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591) = 003145 (93)

Remark 18 Even though 003145 is the value of 1198651on the

time interval [0 25] it is not the global minimum value that1198651can take since 119905does not tend to infinity So in order to find

this global minimum value it would be required to simulatean arbitrarily large time

On the other hand let a second differential neural net-work given by

119911119905= 119882

3119905120572 (

119905) + 119882

4119905120574 (

119905) 120585

119905(94)

be such that the equalities (32) and (33) are fulfilled that is

11988230

= [05 01 0 2

02 05 06 0] 119882

40= [

1 0

0 1]

Λ120572= [

173 0

0 173] Λ

120574= [

20 0

0 20]

Λ=

[[[

[

26 0 0 0

0 26 0 0

0 0 26 0

0 0 0 26

]]]

]

minus1

Λ120574= [

6734 546

546 6162]

minus1

Λ119866= [

01 0

0 01]

minus1

120585 = 5

(95)

and the activation functions are

120572119901(119908) =

45

1 + 119890minus05119908minus 3

1003816100381610038161003816100381610038161003816119908=119905

119901 = 1 2 3 4

120574119901119902(119908) =

45

1 + 119890minus051199081minus 3

10038161003816100381610038161003816100381610038161003816119908=119905

119901 = 119902 = 1 2

(96)

By choosing 1198701198823

= 100000 and 1198701198824

= 1 the solution of thealgebraic Riccati equation (23) results in the matrix (92) andby applying the learning law (59) described in Theorem 14the value of the identification error in average sense (61) on atime period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591) = 000006645 (97)

Remark 19 As in Remark 18 in order to find the globalminimum value of 119866

1 it would be required to simulate an

arbitrarily large time

Finally taking into account the identification errors (93)and (97) the value of the filtering error in average sense (77)-(78) on a time period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) = 003139 (98)

The simulation of this example wasmade using theMATLABand Simulink platforms and its results are shown in Figures1ndash4

As is seen in Figure 1 the differential neural network(88) can perform the identification of the continuous-timenonlinear dynamic game (85) where 119910

1119905and 119910

2119905(or simi-

larly 1199091119905

and 1199092119905) denote the state variables with undesired

perturbations and 1199101119905

and 1199102119905

indicate their state estimatesOn the other hand according to Figure 2 the differential

neural network (94) identifies the dynamics of the additivedeterministic noises or in other words the dynamics of =119867120585

119905 Thus 119911

1119905and 119911

2119905represent the state variables of the

above differential equation and 1119905

and 2119905

are their stateestimates

In this way Figure 3 shows the performance of thefiltering process (13) in the nonlinear differential game (85)that is to say it shows the comparison between the expectedor uncorrupted state variables 119909

1119905and 119909

2119905and the filtered

state estimates 1199091119905

and 1199092119905

Mathematical Problems in Engineering 11

0

minus01

z1t

z1t

01

02

0 05 1 15 2 25 3 35

(a)

0

minus01

z2t

z2t

01

0

minus02

05 1 15 2 25 3 35

(b)

Figure 2 Comparison between 119911119905and

119905for the 2-player nonlinear differential game (85)

1

0

minus1

minus2

x1t

x1t

0 5 10 15 20 25

(a)

1

0

minus1

x2t

x2t

0 5 10 15 20 25

(b)

Figure 3 Comparison between 119909119905and 119909

119905for the 2-player nonlinear differential game (85)

minus2

minus22

minus24

minus26

minus28

x1t

x1t

5 10 15 20 25

(a)

15

1

05

5 10 15 20 25

x2t

x2t

(b)

Figure 4 Comparison between 119909119905and 119909

119905for the 2-player nonlinear differential game (85)

Finally Figure 4 also exhibits the described filteringprocess but comparing the real state variables 119909

1119905and 119909

2119905

with filtered state estimates 1199091119905

and 1199092119905

7 Results Analysis and Discussion

Although the differential neural networks (14) and (27) canperform the identification of the differential equations (10)and (11) it is important to remember that their performancedepends on the number of neurons used and on the proposi-tion of all the constant values (or free parameters) that weredescribed in Assumptions 8 and 11 Therefore the values of1198651and 119866

1(at a fixed time) can change according to this fact

In other words due to the fact that the differential neuralnetworks are an approximation of a dynamic system (orgame) there always will be a residual error that will dependon this approximation

Thus in the particular case shown in Example 16 thedesign of (88) was made using only eight neurons four forthe layer of perceptrons without any relation to the playersand two for the layer of perceptrons of each player Similarlyfor the design of (94) (which is subject to the equalities (32)-(33)) six neurons were used

On the other hand it is important to mention that(14) and (27) will operate properly only if Assumptions 1ndash4 6 8 9 and 11 are satisfied that is to say there is noguarantee that these differential neural networks performgood identification and filtering processes if for examplethe class of nonlinear differential games (1)ndash(4) has a stochas-tic nature

Finally although this paper presents a new approach foridentification and filtering of nonlinear dynamic games itshould be emphasized that there exist other techniques thatmight solve the problem treated here for example the citedpublications in Section 1

12 Mathematical Problems in Engineering

8 Conclusions and Future Work

According to the results of this paper the differential neuralnetworks (14) and (27) solve the problem of identifying andfiltering the class of nonlinear differential games (1)ndash(4)

However there is no guarantee that (14) and (27) identifyand filter the states of (1)ndash(4) if Assumptions 1ndash4 6 8 9 and11 are not met

Thereby the proposed learning laws (38) and (59) obtainthe maximum values of identification error in average sense(40) and (61) that is to say the maximum value of filteringerror (77)-(78)

Nevertheless these errors depend on both the number ofneurons used in the differential neural networks (14) and (27)and the proposed values applied in their design conditions

On the other hand according to the simulation resultsof the illustrating example the effectiveness of (14) and (27)is shown and the applicability of Theorems 13 14 and 15 isverified

Finally speaking of future work in this research fieldone can analyze and discuss the use of (14) and (27) for theobtaining of equilibrium solutions in the class of nonlineardifferential games (1)ndash(4) that is to say the use of differentialneural networks for controlling nonlinear differential games

Appendix

The values of the ldquounknownrdquo nonlinear functions 119891(sdot) 1198921(sdot)and 1198922(sdot) and the ldquounknownrdquo constant matrix119867 used in (85)of Example 16 are shown as follows

119891 (119909119905) = [

1199092119905

sin (1199091119905)]

1198921

(119909119905)

= [minus 119909

1119905cos (119909

1119905) 0

0 1199092119905sin (119909

1119905) minus 119909

1119905cos (119909

2119905)]

1198922

(119909119905) = [

sin (1199091119905) 0

0 minus [1199091119905+ 119909

2119905]]

119867 = [05 0

0 05]

(A1)

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] I. Alvarez, A. Poznyak, and A. Malo, "Urban traffic control problem: a game theory approach," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 2168–2172, Cancun, Mexico, December 2008.
[2] T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, Academic Press, Great Britain, UK, 2nd edition, 1995.
[3] J. M. C. Clark and R. B. Vinter, "A differential dynamic games approach to flow control," in Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1228–1231, Maui, Hawaii, USA, December 2003.
[4] S. H. Tamaddoni, S. Taheri, and M. Ahmadian, "Optimal VSC design based on Nash strategy for differential 2-player games," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '09), pp. 2415–2420, San Antonio, Tex, USA, October 2009.
[5] F. Amato, M. Mattei, and A. Pironti, "Robust strategies for Nash linear quadratic games under uncertain dynamics," in Proceedings of the 37th IEEE Conference on Decision and Control (CDC '98), pp. 1869–1870, Tampa, Fla, USA, December 1998.
[6] X. Nian, S. Yang, and N. Chen, "Guaranteed cost strategies of uncertain LQ closed-loop differential games with multiple players," in Proceedings of the American Control Conference, pp. 1724–1729, Minneapolis, Minn, USA, June 2006.
[7] M. Jimenez-Lizarraga and A. Poznyak, "Equilibrium in LQ differential games with multiple scenarios," in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 4081–4086, New Orleans, La, USA, December 2007.
[8] M. Jimenez-Lizarraga, A. Poznyak, and M. A. Alcorta, "Leader-follower strategies for a multi-plant differential game," in Proceedings of the American Control Conference (ACC '08), pp. 3839–3844, Seattle, Wash, USA, June 2008.
[9] A. S. Poznyak and C. J. Gallegos, "Multimodel prey-predator LQ differential games," in Proceedings of the American Control Conference, pp. 5369–5374, Denver, Colo, USA, June 2003.
[10] Y. Li and L. Guo, "Convergence of adaptive linear stochastic differential games: nonzero-sum case," in Proceedings of the 10th World Congress on Intelligent Control and Automation (WCICA '12), pp. 3543–3548, Beijing, China, July 2012.
[11] D. Vrabie and F. Lewis, "Adaptive dynamic programming algorithm for finding online the equilibrium solution of the two-player zero-sum differential game," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '10), pp. 1–8, Barcelona, Spain, July 2010.
[12] B.-S. Chen, C.-S. Tseng, and H.-J. Uang, "Fuzzy differential games for nonlinear stochastic systems: suboptimal approach," IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 222–233, 2002.
[13] B. Widrow, J. R. Glover Jr., J. M. McCool et al., "Adaptive noise cancelling: principles and applications," Proceedings of the IEEE, vol. 63, no. 12, pp. 1692–1716, 1975.
[14] C. E. de Souza, U. Shaked, and M. Fu, "Robust H∞ filtering for continuous time varying uncertain systems with deterministic input signals," IEEE Transactions on Signal Processing, vol. 43, no. 3, pp. 709–719, 1995.
[15] A. Osorio-Cordero, A. S. Poznyak, and M. Taksar, "Robust deterministic filtering for linear uncertain time-varying systems," in Proceedings of the American Control Conference, pp. 2526–2530, Albuquerque, NM, USA, June 1997.
[16] M. Hou and R. J. Patton, "Optimal filtering for systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 43, no. 3, pp. 445–449, 1998.
[17] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
[18] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou, and P. A. Ioannou, "High-order neural network structures for identification of dynamical systems," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 422–431, 1995.
[19] F. Abdollahi, H. A. Talebi, and R. V. Patel, "State estimation for flexible-joint manipulators using stable neural networks," in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 25–29, Kobe, Japan, 2003.
[20] A. Alessandri, C. Cervellera, and M. Sanguineti, "Design of asymptotic estimators: an approach based on neural networks and nonlinear programming," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 86–96, 2007.
[21] Y. H. Kim, F. L. Lewis, and C. T. Abdallah, "Nonlinear observer design using dynamic recurrent neural networks," in Proceedings of the 35th IEEE Conference on Decision and Control, pp. 949–954, Kobe, Japan, December 1996.
[22] F. L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 703–715, 1995.
[23] G. A. Rovithakis and M. A. Christodoulou, "Adaptive control of unknown plants using dynamical neural networks," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 3, pp. 400–412, 1994.
[24] A. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking, World Scientific, Singapore, 2001.
[25] E. García and D. A. Murano, "State estimation for a class of nonlinear differential games using differential neural networks," in Proceedings of the American Control Conference (ACC '11), pp. 2486–2491, San Francisco, Calif, USA, July 2011.
[26] D. A. Murano, Nash equilibrium for uncertain nonlinear differential games using differential neural networks: deterministic and stochastic cases [Ph.D. thesis], CINVESTAV-IPN, Distrito Federal, Mexico, 2004.
[27] D. Jiang and J. Wang, "A recurrent neural network for online design of robust optimal filters," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 6, pp. 921–926, 2000.
[28] A. G. Parlos, S. K. Menon, and A. F. Atiya, "An algorithmic approach to adaptive state filtering using recurrent neural networks," IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1411–1432, 2001.
[29] S. Haykin, Neural Networks: A Comprehensive Foundation, Pearson, Chennai, India, 1999.






Mathematical Problems in Engineering 5

Remark 10 Similar to Remark 7 the constants 1198661and 119866

2

in (31) are not known a priori because they depend on theperformance of the differential neural network (27)

Assumption 11 If there exist values of Λ Λ

120574 Λ

119866119882

30 and

11988240

such that

[11988240] [Λ

120574]minus1

[11988240]119879

=

119873

sum

119894=1

[119882119894

20] [[Λ

119894

120593]minus1

+ [Λ119894

1205931015840]minus1

] [119882119894

20]119879

11988230

= 11988210

Λ= Λ

120590+ Λ

1205901015840

Λ119866= Λ

119865

(32)

and values of Λ120572and Λ

120574such that

119876 = Λ120572+ [ 120585 ]

2

Λ120574minus Λ

120590minus

119873

sum

119894=1

[119906119894

]2

Λ119894

120593 (33)

then they provide a 0 lt 119875 = [119875]119879

isin R119899times119899 solution to thealgebraic Riccati equation (23) where

119860 = 11988230119860 (34)

119877 = [11988230] [Λ

]minus1

[11988230]119879

+ [Λ119866]minus1

+ [11988240] [Λ

119894

120593]minus1

[11988240]119879

(35)

119876 = 119876 + Λ120572+ [ 120585 ]

2

Λ120574 (36)

Remark 12 Notice that the equalities (32) and (33) werechosen in order to guarantee the same solution of thealgebraic Riccati equation (23) in Assumptions 8 and 11 Alsonotice that the first term of the right-hand side of (26) willalways be greater than the first term of the right-hand side of(36) that is 2119876 gt 119876

5 Main Result on Identification and Filtering

According to the above the main result on identification andfiltering for the class of nonlinear differential games (1)ndash(4)deals with both the development of an adaptive learning lawfor the synaptic weights of the differential neural networks(14) and (27) and the inference of a maximum value ofidentification error for the dynamics (10) and (11)

Moreover the establishment of a maximum value of fil-tering error between the uncorrupted states and the identifiedones is obtained namely an error is defined by

119890119905= 119909

119905minus 119909

119905 (37)

where 119909119905isin R119899 is given by noise canceling equation (13) and

119909119905isin R119899 is given by the expected or uncorrupted vector state

(12)More formally the main obtained result is described in

the following three theorems

Theorem 13. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural network (14) be such that Assumptions 6 and 8 are satisfied. If the synaptic weights of (14) are adjusted with the following learning law:

\[
\dot W_{1t} = -2K_{W1}P\varepsilon_t[\sigma]^T,
\]
\[
\dot V_{1t} = -K_{V1}\left[2[D_\sigma]^T[W_{10}]^T P\varepsilon_t + l_\sigma\Lambda_{\tilde\sigma}\tilde V_{1t}y_t\right][y_t]^T,
\]
\[
\dot W^i_{2t} = -2K^i_{W2}P\varepsilon_t[u^i_t]^T[\varphi^i]^T,
\]
\[
\dot V^i_{2t} = -K^i_{V2}\left[2\langle D^i_\varphi, u^i_t\rangle[W^i_{20}]^T P\varepsilon_t + l^i_\varphi[\bar u^i]^2\Lambda^i_{\tilde\varphi}\tilde V^i_{2t}y_t\right][y_t]^T,
\tag{38}
\]

where

\[
\varepsilon_t = \hat y_t - y_t
\tag{39}
\]

denotes the identification error, $\tilde V_{1t} = V_{1t} - V_{10}$, $\tilde V^i_{2t} = V^i_{2t} - V^i_{20}$, and $K_{W1}$, $K_{V1}$, $K^i_{W2}$, and $K^i_{V2}$ are known symmetric and positive-definite constant matrices, then it is possible to obtain the next maximum value of identification error in average sense as follows:

\[
\limsup_{t\to\infty}\left(\frac{1}{t}\int_0^t [\varepsilon_\tau]^T 2Q[\varepsilon_\tau]\,d\tau\right) \le F_1.
\tag{40}
\]

Proof. Taking into account the residual error (21), the differential equation (10) can be expressed as

\[
\dot y_t = W_{10}\sigma(V_{10}y_t) + \sum_{i=1}^{N}W^i_{20}\varphi^i(V^i_{20}y_t)u^i_t + \tilde F_t.
\tag{41}
\]

Then, by substituting (14) and (41) into the derivative of (39) with respect to $t$ and by adding and subtracting the terms $[W_{10}\sigma]$, $[W_{10}\sigma(V_{10}y_t)]$, $[W^i_{20}\varphi^i u^i_t]$, and $[W^i_{20}\varphi^i(V^i_{20}y_t)u^i_t]$, it is easy to confirm that

\[
\dot\varepsilon_t = \tilde W_{1t}\sigma + W_{10}\tilde\sigma + W_{10}\tilde\sigma' - \tilde F_t + \sum_{i=1}^{N}\tilde W^i_{2t}\varphi^i u^i_t + \sum_{i=1}^{N}W^i_{20}\tilde\varphi^i u^i_t + \sum_{i=1}^{N}W^i_{20}\tilde\varphi^{i\prime} u^i_t,
\tag{42}
\]

where $\tilde W_{1t} = W_{1t} - W_{10}$ and $\tilde W^i_{2t} = W^i_{2t} - W^i_{20}$. Now, let the Lyapunov (energetic) candidate function

\[
L_t = E_t + [\varepsilon_t]^T P[\varepsilon_t] + \frac{1}{2}\operatorname{tr}\left([\tilde W_{1t}]^T[K_{W1}]^{-1}[\tilde W_{1t}]\right) + \frac{1}{2}\operatorname{tr}\left([\tilde V_{1t}]^T[K_{V1}]^{-1}[\tilde V_{1t}]\right) + \frac{1}{2}\sum_{i=1}^{N}\operatorname{tr}\left([\tilde W^i_{2t}]^T[K^i_{W2}]^{-1}[\tilde W^i_{2t}]\right) + \frac{1}{2}\sum_{i=1}^{N}\operatorname{tr}\left([\tilde V^i_{2t}]^T[K^i_{V2}]^{-1}[\tilde V^i_{2t}]\right)
\tag{43}
\]


be such that the inequalities (8) are fulfilled (see Assumption 3), and let

\[
\dot L_t \le -\lambda_1\|y_t\|^2 + 2[\varepsilon_t]^T P[\dot\varepsilon_t] + \operatorname{tr}\left([\tilde W_{1t}]^T[K_{W1}]^{-1}[\dot{\tilde W}_{1t}]\right) + \operatorname{tr}\left([\tilde V_{1t}]^T[K_{V1}]^{-1}[\dot{\tilde V}_{1t}]\right) + \sum_{i=1}^{N}\operatorname{tr}\left([\tilde W^i_{2t}]^T[K^i_{W2}]^{-1}[\dot{\tilde W}^i_{2t}]\right) + \sum_{i=1}^{N}\operatorname{tr}\left([\tilde V^i_{2t}]^T[K^i_{V2}]^{-1}[\dot{\tilde V}^i_{2t}]\right)
\tag{44}
\]

be the derivative of (43) with respect to $t$. Then, by substituting (42) into the second term of the right-hand side of (44) and by adding and subtracting the term $[2[\varepsilon_t]^T P\bar A\varepsilon_t]$, one may get

\[
2[\varepsilon_t]^T P[\dot\varepsilon_t] = 2[\varepsilon_t]^T PW_{10}[\tilde\sigma - A\varepsilon_t] + 2[\varepsilon_t]^T PW_{10}\tilde\sigma' + \sum_{i=1}^{N}2[\varepsilon_t]^T PW^i_{20}\tilde\varphi^i u^i_t + \sum_{i=1}^{N}2[\varepsilon_t]^T PW^i_{20}\tilde\varphi^{i\prime} u^i_t - 2[\varepsilon_t]^T P\tilde F_t + 2[\varepsilon_t]^T P\tilde W_{1t}\sigma + \sum_{i=1}^{N}2[\varepsilon_t]^T P\tilde W^i_{2t}\varphi^i u^i_t + [\varepsilon_t]^T\left[P\bar A + [\bar A]^T P\right][\varepsilon_t].
\tag{45}
\]

Next, by analyzing the first five terms of the right-hand side of (45) with the following inequality:

\[
[X]^T Y + [Y]^T X \le [X]^T[\Lambda]^{-1}[X] + [Y]^T\Lambda[Y],
\tag{46}
\]

which is valid for any pair of matrices $X, Y \in \mathbb{R}^{\Omega\times\Gamma}$ and for any constant matrix $0 < \Lambda \in \mathbb{R}^{\Omega\times\Omega}$, where $\Omega$ and $\Gamma$ are positive integers (see [24]), the following is obtained.
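Since inequality (46) is the single algebraic tool used repeatedly in the bounds (47)–(51) and (67)–(69), a quick numerical sanity check may help the reader. The following sketch, which assumes NumPy is available and is not part of the original derivation, verifies on random data that the right-hand side of (46) dominates the left-hand side in the positive semidefinite sense.

```python
# Numerical sanity check of the matrix inequality (46):
#   X^T Y + Y^T X  <=  X^T Lam^{-1} X + Y^T Lam Y   (positive semidefinite ordering)
# for X, Y in R^{Omega x Gamma} and any constant 0 < Lam in R^{Omega x Omega}.
import numpy as np

rng = np.random.default_rng(0)
Omega, Gamma = 4, 2

X = rng.standard_normal((Omega, Gamma))
Y = rng.standard_normal((Omega, Gamma))

# Build a random symmetric positive-definite Lam.
M = rng.standard_normal((Omega, Omega))
Lam = M @ M.T + Omega * np.eye(Omega)

lhs = X.T @ Y + Y.T @ X
rhs = X.T @ np.linalg.inv(Lam) @ X + Y.T @ Lam @ Y

# rhs - lhs must be positive semidefinite (eigenvalues >= 0 up to round-off), since it
# equals (Lam^{-1/2} X - Lam^{1/2} Y)^T (Lam^{-1/2} X - Lam^{1/2} Y).
eigs = np.linalg.eigvalsh(rhs - lhs)
print("smallest eigenvalue of (rhs - lhs):", eigs.min())
```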

(i) Using (46) and (17) in the first term of the right-hand side of (45), then

\[
2[\varepsilon_t]^T PW_{10}[\tilde\sigma - A\varepsilon_t] \le [\varepsilon_t]^T\left[P[W_{10}][\Lambda_{\bar\sigma}]^{-1}[W_{10}]^T P\right][\varepsilon_t] + [\varepsilon_t]^T\Lambda_\sigma[\varepsilon_t].
\tag{47}
\]

(ii) Substituting (18) into the second term of the right-hand side of (45) and using (46) and (19), then

\[
2[\varepsilon_t]^T PW_{10}\tilde\sigma' \le 2[\varepsilon_t]^T PW_{10}D_\sigma\tilde V_{1t}y_t + [\varepsilon_t]^T\left[P[W_{10}][\Lambda_{\bar\sigma'}]^{-1}[W_{10}]^T P\right][\varepsilon_t] + l_\sigma[\tilde V_{1t}y_t]^T\Lambda_{\tilde\sigma}[\tilde V_{1t}y_t].
\tag{48}
\]

(iii) Using (46) and (17) in the third term of the right-hand side of (45), then

\[
\sum_{i=1}^{N}2[\varepsilon_t]^T PW^i_{20}\tilde\varphi^i u^i_t \le \sum_{i=1}^{N}[\varepsilon_t]^T\left[P[W^i_{20}][\Lambda^i_{\bar\varphi}]^{-1}[W^i_{20}]^T P\right][\varepsilon_t] + \sum_{i=1}^{N}[\bar u^i]^2[\varepsilon_t]^T\Lambda^i_\varphi[\varepsilon_t].
\tag{49}
\]

(iv) Substituting (18) into the fourth term of the right-hand side of (45) and using (46) and (19), then

\[
\sum_{i=1}^{N}2[\varepsilon_t]^T PW^i_{20}\tilde\varphi^{i\prime} u^i_t \le \sum_{i=1}^{N}2[\varepsilon_t]^T PW^i_{20}\langle D^i_\varphi, u^i_t\rangle\tilde V^i_{2t}y_t + \sum_{i=1}^{N}[\varepsilon_t]^T\left[P[W^i_{20}][\Lambda^i_{\bar\varphi'}]^{-1}[W^i_{20}]^T P\right][\varepsilon_t] + \sum_{i=1}^{N}l^i_\varphi[\bar u^i]^2[\tilde V^i_{2t}y_t]^T\Lambda^i_{\tilde\varphi}[\tilde V^i_{2t}y_t].
\tag{50}
\]

(v) Using (46) and (22) in the fifth term of the right-hand side of (45), then

\[
-2[\varepsilon_t]^T P\tilde F_t \le [\varepsilon_t]^T\left[P[\Lambda_F]^{-1}P\right][\varepsilon_t] + F_1 + F_2[y_t]^T\Lambda_F[y_t].
\tag{51}
\]

So, by substituting (47)–(51) into the right-hand side of (45) and by adding and subtracting the term $[[\varepsilon_t]^T 2Q[\varepsilon_t]]$, inequality (44) can be expressed as

\[
\dot L_t \le [\varepsilon_t]^T\left[P\bar A + [\bar A]^T P + P\bar R P + \bar Q\right][\varepsilon_t] - [\varepsilon_t]^T 2Q[\varepsilon_t] + F_1 + F_2\|\Lambda_F\|\|y_t\|^2 - \lambda_1\|y_t\|^2 + \operatorname{tr}(\Upsilon_{W1}) + \operatorname{tr}(\Upsilon_{V1}) + \sum_{i=1}^{N}\operatorname{tr}(\Upsilon^i_{W2}) + \sum_{i=1}^{N}\operatorname{tr}(\Upsilon^i_{V2}),
\tag{52}
\]


where the algebraic Riccati equation in the first term of the right-hand side of (52) is described in (23) and in (24)–(26), and

\[
\Upsilon_{W1} = [\tilde W_{1t}]^T[K_{W1}]^{-1}[\dot{\tilde W}_{1t}] + 2\sigma[\varepsilon_t]^T P\tilde W_{1t},
\]
\[
\Upsilon_{V1} = [\tilde V_{1t}]^T[K_{V1}]^{-1}[\dot{\tilde V}_{1t}] + 2y_t[\varepsilon_t]^T PW_{10}D_\sigma\tilde V_{1t} + l_\sigma y_t[y_t]^T[\tilde V_{1t}]^T\Lambda_{\tilde\sigma}[\tilde V_{1t}],
\]
\[
\Upsilon^i_{W2} = [\tilde W^i_{2t}]^T[K^i_{W2}]^{-1}[\dot{\tilde W}^i_{2t}] + 2\varphi^i u^i_t[\varepsilon_t]^T P\tilde W^i_{2t},
\]
\[
\Upsilon^i_{V2} = [\tilde V^i_{2t}]^T[K^i_{V2}]^{-1}[\dot{\tilde V}^i_{2t}] + 2y_t[\varepsilon_t]^T PW^i_{20}\langle D^i_\varphi, u^i_t\rangle\tilde V^i_{2t} + l^i_\varphi[\bar u^i]^2 y_t[y_t]^T[\tilde V^i_{2t}]^T\Lambda^i_{\tilde\varphi}[\tilde V^i_{2t}].
\tag{53}
\]

Thereby, by equating (53) to zero, that is,

\[
\Upsilon_{W1} = \Upsilon_{V1} = \Upsilon^i_{W2} = \Upsilon^i_{V2} = 0,
\tag{54}
\]

and by respectively solving for $\dot{\tilde W}_{1t}$, $\dot{\tilde V}_{1t}$, $\dot{\tilde W}^i_{2t}$, and $\dot{\tilde V}^i_{2t}$, the learning law given by (38) is obtained. Now, by choosing

\[
\|\Lambda_F\| \le \frac{\lambda_1}{F_2}
\tag{55}
\]

and by solving the algebraic Riccati equation (23), one may get

\[
\dot L_t \le -[\varepsilon_t]^T 2Q[\varepsilon_t] + F_1.
\tag{56}
\]

Thus, by integrating both sides of (56) on the time interval $[0, t]$, the following is obtained:

\[
\int_0^t\left([\varepsilon_\tau]^T 2Q[\varepsilon_\tau]\right)d\tau \le L_0 - L_t + [F_1]t \le L_0 + [F_1]t,
\tag{57}
\]

and by dividing (57) by $t$, it is easy to verify that

\[
\frac{1}{t}\int_0^t\left([\varepsilon_\tau]^T 2Q[\varepsilon_\tau]\right)d\tau \le \frac{L_0}{t} + F_1.
\tag{58}
\]

Finally, by calculating the upper limit as $t \to \infty$, the maximum value of identification error in average sense is the one described in (40). This means that the identification error is bounded between zero and (40), and therefore $\varepsilon_t$ is stable in the sense of Lyapunov.

Theorem 14. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural network (27) be such that Assumptions 9 and 11 are satisfied. If the synaptic weights of (27) are adjusted with the following learning law:

\[
\dot W_{3t} = -2K_{W3}P\epsilon_t[\alpha(\hat z_t)]^T,
\qquad
\dot W_{4t} = -2K_{W4}P\epsilon_t[\xi_t]^T[\gamma(\hat z_t)]^T,
\tag{59}
\]

where

\[
\epsilon_t = \hat z_t - z_t
\tag{60}
\]

denotes the identification error and $K_{W3}$ and $K_{W4}$ are known symmetric and positive-definite constant matrices, then it is possible to obtain the next maximum value of identification error in average sense as follows:

\[
\limsup_{t\to\infty}\left(\frac{1}{t}\int_0^t [\epsilon_\tau]^T Q[\epsilon_\tau]\,d\tau\right) \le G_1.
\tag{61}
\]

Proof. Taking into account the residual error (30), the differential equation (11) can be expressed as

\[
\dot z_t = W_{30}\alpha(z_t) + W_{40}\gamma(z_t)\xi_t + \tilde G_t.
\tag{62}
\]

Then, by substituting (27) and (62) into the derivative of (60) with respect to $t$ and by adding and subtracting the terms $[W_{30}\alpha(\hat z_t)]$ and $[W_{40}\gamma(\hat z_t)\xi_t]$, it is easy to confirm that

\[
\dot\epsilon_t = \tilde W_{3t}\alpha(\hat z_t) + W_{30}\tilde\alpha + \tilde W_{4t}\gamma(\hat z_t)\xi_t + W_{40}\tilde\gamma\xi_t - \tilde G_t,
\tag{63}
\]

where $\tilde W_{3t} = W_{3t} - W_{30}$ and $\tilde W_{4t} = W_{4t} - W_{40}$. Now, let the Lyapunov (energetic) candidate function

\[
M_t = E_t + [\epsilon_t]^T P[\epsilon_t] + \frac{1}{2}\operatorname{tr}\left([\tilde W_{3t}]^T[K_{W3}]^{-1}[\tilde W_{3t}]\right) + \frac{1}{2}\operatorname{tr}\left([\tilde W_{4t}]^T[K_{W4}]^{-1}[\tilde W_{4t}]\right)
\tag{64}
\]

be such that the inequalities (8) are fulfilled (see Assumption 3), and let

\[
\dot M_t \le -\lambda_1\|z_t\|^2 + 2[\epsilon_t]^T P[\dot\epsilon_t] + \operatorname{tr}\left([\tilde W_{3t}]^T[K_{W3}]^{-1}[\dot{\tilde W}_{3t}]\right) + \operatorname{tr}\left([\tilde W_{4t}]^T[K_{W4}]^{-1}[\dot{\tilde W}_{4t}]\right)
\tag{65}
\]

be the derivative of (64) with respect to $t$. Then, by substituting (63) into the second term of the right-hand side of (65) and by adding and subtracting the term $[2[\epsilon_t]^T P\bar A\epsilon_t]$, one may get

\[
2[\epsilon_t]^T P[\dot\epsilon_t] = 2[\epsilon_t]^T PW_{30}[\tilde\alpha - A\epsilon_t] + 2[\epsilon_t]^T PW_{40}\tilde\gamma\xi_t - 2[\epsilon_t]^T P\tilde G_t + 2[\epsilon_t]^T P\tilde W_{3t}\alpha(\hat z_t) + 2[\epsilon_t]^T P\tilde W_{4t}\gamma(\hat z_t)\xi_t + [\epsilon_t]^T\left[P\bar A + [\bar A]^T P\right][\epsilon_t].
\tag{66}
\]


Next, by analyzing the first three terms of the right-hand side of (66) with the inequality (46), the following is obtained.

(i) Using (46) and (28) in the first term of the right-hand side of (66), then

\[
2[\epsilon_t]^T PW_{30}[\tilde\alpha - A\epsilon_t] \le [\epsilon_t]^T\left[P[W_{30}][\Lambda_{\bar\alpha}]^{-1}[W_{30}]^T P\right][\epsilon_t] + [\epsilon_t]^T\Lambda_\alpha[\epsilon_t].
\tag{67}
\]

(ii) Using (46) and (28) in the second term of the right-hand side of (66), then

\[
2[\epsilon_t]^T PW_{40}\tilde\gamma\xi_t \le [\epsilon_t]^T\left[P[W_{40}][\Lambda_{\bar\gamma}]^{-1}[W_{40}]^T P\right][\epsilon_t] + [\bar\xi]^2[\epsilon_t]^T\Lambda_\gamma[\epsilon_t].
\tag{68}
\]

(iii) Using (46) and (31) in the third term of the right-hand side of (66), then

\[
-2[\epsilon_t]^T P\tilde G_t \le [\epsilon_t]^T\left[P[\Lambda_G]^{-1}P\right][\epsilon_t] + G_1 + G_2[z_t]^T\Lambda_G[z_t].
\tag{69}
\]

So, by substituting (67)–(69) into the right-hand side of (66) and by adding and subtracting the term $[[\epsilon_t]^T Q[\epsilon_t]]$, inequality (65) can be expressed as

\[
\dot M_t \le [\epsilon_t]^T\left[P\bar A + [\bar A]^T P + P\bar R P + \bar Q\right][\epsilon_t] - [\epsilon_t]^T Q[\epsilon_t] + G_1 + G_2\|\Lambda_G\|\|z_t\|^2 - \lambda_1\|z_t\|^2 + \operatorname{tr}(\Psi_{W3}) + \operatorname{tr}(\Psi_{W4}),
\tag{70}
\]

where the algebraic Riccati equation in the first term of the right-hand side of (70) is described in (23) and in (34)–(36), and

\[
\Psi_{W3} = [\tilde W_{3t}]^T[K_{W3}]^{-1}[\dot{\tilde W}_{3t}] + 2\alpha(\hat z_t)[\epsilon_t]^T P\tilde W_{3t},
\tag{71}
\]
\[
\Psi_{W4} = [\tilde W_{4t}]^T[K_{W4}]^{-1}[\dot{\tilde W}_{4t}] + 2\gamma(\hat z_t)\xi_t[\epsilon_t]^T P\tilde W_{4t}.
\tag{72}
\]

Thereby, by equating (71) and (72) to zero ($\Psi_{W3} = \Psi_{W4} = 0$) and by respectively solving for $\dot{\tilde W}_{3t}$ and $\dot{\tilde W}_{4t}$, the learning law given by (59) is obtained. Now, by choosing

\[
\|\Lambda_G\| \le \frac{\lambda_1}{G_2}
\tag{73}
\]

and by solving the algebraic Riccati equation (23), one may get

\[
\dot M_t \le -[\epsilon_t]^T Q[\epsilon_t] + G_1.
\tag{74}
\]

Thus, by integrating both sides of (74) on the time interval $[0, t]$, the following is obtained:

\[
\int_0^t\left([\epsilon_\tau]^T Q[\epsilon_\tau]\right)d\tau \le M_0 - M_t + [G_1]t \le M_0 + [G_1]t,
\tag{75}
\]

and by dividing (75) by $t$, it is easy to verify that

\[
\frac{1}{t}\int_0^t\left([\epsilon_\tau]^T Q[\epsilon_\tau]\right)d\tau \le \frac{M_0}{t} + G_1.
\tag{76}
\]

Finally, by calculating the upper limit as $t \to \infty$, the maximum value of identification error in average sense is the one described in (61).

Theorem 15. Let the class of continuous-time nonlinear dynamic games (1)–(4) be such that Assumptions 1 to 4 are fulfilled. Also, let the differential neural networks (14) and (27) be such that Assumptions 6 and 8 and Assumptions 9 and 11 are satisfied. If the synaptic weights of (14) and (27) are respectively adjusted with the learning laws (38) and (59), then it is possible to obtain the following maximum value of filtering error in average sense:

\[
\limsup_{t\to\infty}\left(\frac{1}{t}\int_0^t [e_\tau]^T Q[e_\tau]\,d\tau\right) \le F_1 - G_1 \quad \text{if } F_1 > G_1,
\tag{77}
\]
\[
\limsup_{t\to\infty}\left(\frac{1}{t}\int_0^t [e_\tau]^T Q[e_\tau]\,d\tau\right) = 0 \quad \text{if } F_1 \le G_1,
\tag{78}
\]

where $e_t$ is given by (37).

Proof. Consider the following Lyapunov (energetic) candidate function:

\[
\Phi_t = [e_t]^T P[e_t],
\tag{79}
\]

where $P$ is the solution of the algebraic Riccati equation (23) with $\bar A$, $\bar R$, and $\bar Q$ defined by (24)–(26) or (34)–(36). Then, by calculating the derivative of (79) with respect to $t$ and by considering the equations (37), (13), and (12), it can be verified that

\[
\dot\Phi_t = 2[e_t]^T P[\dot\varepsilon_t - \dot\epsilon_t],
\tag{80}
\]

and taking into account (42), (63), and the proofs of Theorems 13 and 14, one may get

\[
\dot\Phi_t \le -[e_t]^T 2Q[e_t] + F_1 - \left[-[e_t]^T Q[e_t] + G_1\right] \le -[e_t]^T\left[2Q - Q\right][e_t] + F_1 - G_1 \le -[e_t]^T Q[e_t] + F_1 - G_1.
\tag{81}
\]


Thus, by integrating both sides of (81) on the time interval $[0, t]$, the following is obtained:

\[
\int_0^t\left([e_\tau]^T Q[e_\tau]\right)d\tau \le \Phi_0 - \Phi_t + [F_1 - G_1]t \le \Phi_0 + [F_1 - G_1]t,
\tag{82}
\]

and by dividing (82) by $t$, it is easy to confirm that

\[
\frac{1}{t}\int_0^t\left([e_\tau]^T Q[e_\tau]\right)d\tau \le \frac{\Phi_0}{t} + F_1 - G_1.
\tag{83}
\]

By calculating the upper limit as $t \to \infty$, equation (83) can be expressed as

\[
\limsup_{t\to\infty}\left(\frac{1}{t}\int_0^t [e_\tau]^T Q[e_\tau]\,d\tau\right) \le F_1 - G_1,
\tag{84}
\]

and finally it is clear that

(i) if $F_1 > G_1$, the maximum value of filtering error in average sense is the one described in (77);

(ii) if $F_1 \le G_1$, the maximum value of filtering error in average sense is the one described in (78), given that the left-hand side of the inequality (84) cannot be negative.

Hence, the theorem is proved.

6. Illustrating Example

Example 16. Consider a 2-player nonlinear differential game given by

\[
\dot x_t = f(x_t) + \sum_{i=1}^{2}g^i(x_t)u^i_t + H\xi_t,
\tag{85}
\]

subject to the change of variables (10) and (11) ($N = 2$), where the control actions of each player are

\[
u^1_t = \begin{bmatrix} 0.5\,\mathrm{sawtooth}(0.5t) \\ \mathcal{H}(t) \end{bmatrix},
\qquad
u^2_t = \begin{bmatrix} \mathcal{H}(t) \\ 0.5\,\mathrm{sawtooth}(0.5t) \end{bmatrix}.
\tag{86}
\]

The undesired deterministic perturbations are

\[
\xi_t = \begin{bmatrix} \sin(10t) \\ \sin(10t) \end{bmatrix},
\tag{87}
\]

and $\mathcal{H}(t)$ denotes the Heaviside step function or unit step signal. Then, under Assumptions 1–4, 6, 8, 9, and 11, it is possible to obtain a filtered feedback (perfect state) information pattern $\eta^i_t = \hat x_t$.
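For reproducibility, the signals (86)-(87) can be generated as in the following sketch. It assumes the MATLAB-style sawtooth convention (period $2\pi$, range $[-1, 1]$), which is also the convention of `scipy.signal.sawtooth`, and takes $\mathcal{H}(0) = 1$; these conventions are assumptions made here, not statements from the paper.

```python
# Sketch of the exogenous signals of Example 16, assuming the MATLAB-style
# sawtooth convention (period 2*pi, range [-1, 1]) and H(0) = 1.
import numpy as np
from scipy.signal import sawtooth

def u1(t):
    """Control action of player 1, equation (86): [0.5*sawtooth(0.5 t); H(t)]."""
    return np.array([0.5 * sawtooth(0.5 * t), np.heaviside(t, 1.0)])

def u2(t):
    """Control action of player 2, equation (86): [H(t); 0.5*sawtooth(0.5 t)]."""
    return np.array([np.heaviside(t, 1.0), 0.5 * sawtooth(0.5 * t)])

def xi(t):
    """Undesired deterministic perturbation, equation (87)."""
    return np.array([np.sin(10.0 * t), np.sin(10.0 * t)])

# Example evaluation on the 25-second horizon used later in this example.
t_grid = np.linspace(0.0, 25.0, 2501)
U1 = np.array([u1(t) for t in t_grid])   # shape (2501, 2)
```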

Remark 17. As mentioned before, the functions $f(\cdot)$ and $g^i(\cdot)$, as well as the matrix $H$, are unknown; however, in order to build an instance of (85) and simulate it, the values of these "unknown" parameters are shown in the Appendix of this paper.

Thereby, consider now a first differential neural network given by

\[
\dot{\hat y}_t = W_{1t}\sigma(V_{1t}\hat y_t) + \sum_{i=1}^{2}W^i_{2t}\varphi^i(V^i_{2t}\hat y_t)u^i_t,
\tag{88}
\]

where

\[
W_{10} = \begin{bmatrix} 0.5 & 0.1 & 0 & 2 \\ 0.2 & 0.5 & 0.6 & 0 \end{bmatrix},
\qquad
V_{10} = \begin{bmatrix} 1 & 0.3 & 0.5 & 0.2 \\ 0.5 & 1.3 & 1 & 0.6 \end{bmatrix}^T,
\]
\[
W^1_{20} = \begin{bmatrix} 0.5 & 0.1 \\ 0 & 0.2 \end{bmatrix},
\qquad
W^2_{20} = \begin{bmatrix} 1.3 & 0.8 \\ 0.8 & 1.3 \end{bmatrix},
\qquad
V^1_{20} = \begin{bmatrix} 1 & 0.1 \\ 0.1 & 2 \end{bmatrix},
\qquad
V^2_{20} = \begin{bmatrix} 0.5 & 0.7 \\ 0.9 & 0.1 \end{bmatrix},
\tag{89}
\]

and the activation functions are

\[
\sigma_p(w) = \left.\frac{4.5}{1 + e^{-0.5w}} - 3\right|_{w = V_{1t}\hat y_t}, \quad p = 1, 2, 3, 4,
\]
\[
\varphi^1_{pq}(w^1) = \left.\frac{4.5}{1 + e^{-0.5w^1}} - 3\right|_{w^1 = V^1_{2t}\hat y_t}, \quad p, q = 1, 2,
\]
\[
\varphi^2_{pq}(w^2) = \left.\frac{4.5}{1 + e^{-0.5w^2}} - 3\right|_{w^2 = V^2_{2t}\hat y_t}, \quad p, q = 1, 2.
\tag{90}
\]
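A minimal numerical sketch of the identifier (88) with the activations (90) and the $W$-part of the learning law (38) is given below, using the initial weights (89), the gains quoted later in the text, and the matrix $P$ of (92). The $V$-weights are frozen at their initial values purely to keep the sketch short, the indexing used to build the $2\times 2$ matrix $\varphi^i(V^i_{2t}\hat y_t)$ is an assumption made only for this illustration, and `y_meas` is a stand-in for the measured state $y_t$ of (10).

```python
import numpy as np

def sig(w):
    # Sigmoid of (90)/(96): 4.5 / (1 + exp(-0.5 w)) - 3, applied elementwise.
    return 4.5 / (1.0 + np.exp(-0.5 * np.asarray(w, dtype=float))) - 3.0

# Initial weights from (89) and the Riccati solution (92).
W1 = np.array([[0.5, 0.1, 0.0, 2.0], [0.2, 0.5, 0.6, 0.0]])      # W_10 (2x4)
V1 = np.array([[1.0, 0.3, 0.5, 0.2], [0.5, 1.3, 1.0, 0.6]]).T    # V_10 (4x2)
W2 = [np.array([[0.5, 0.1], [0.0, 0.2]]),                        # W^1_20
      np.array([[1.3, 0.8], [0.8, 1.3]])]                        # W^2_20
V2 = [np.array([[1.0, 0.1], [0.1, 2.0]]),                        # V^1_20
      np.array([[0.5, 0.7], [0.9, 0.1]])]                        # V^2_20
P  = np.array([[2.0524, -0.6374], [-0.6374, 3.1842]])            # eq. (92)
K_W1, K_W2 = 100.0, 1.0                                          # scalar gains (times identity)

def phi(i, y_hat):
    # 2x2 activation matrix phi^i(V^i_2t y_hat); filling every row with the vector
    # sig(V^i_2 y_hat) is an assumption made only for this sketch.
    return np.tile(sig(V2[i] @ y_hat), (2, 1))

def identifier_step(y_hat, y_meas, u, dt):
    """One Euler step of the identifier (88) and of the W-updates in (38).

    y_meas is the measured state y_t of (10); u = [u^1_t, u^2_t] are 2-vectors.
    """
    global W1
    eps = y_hat - y_meas                                  # identification error (39)
    s   = sig(V1 @ y_hat)                                 # sigma(V_1 y_hat) in R^4
    y_hat_dot = W1 @ s + sum(W2[i] @ (phi(i, y_hat) @ u[i]) for i in range(2))
    # W-part of the learning law (38): dW_1t/dt = -2 K_W1 P eps sigma^T, etc.
    W1 = W1 + dt * (-2.0 * K_W1 * np.outer(P @ eps, s))
    for i in range(2):
        W2[i] = W2[i] + dt * (-2.0 * K_W2 * np.outer(P @ eps, u[i]) @ phi(i, y_hat).T)
    return y_hat + dt * y_hat_dot
```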

By proposing the values described in Assumption 8 as

\[
\Lambda_\sigma = \Lambda^1_\varphi = \Lambda^2_\varphi = \begin{bmatrix} 1.3 & 0 \\ 0 & 1.3 \end{bmatrix},
\qquad
\Lambda^1_{\tilde\varphi} = \Lambda^2_{\tilde\varphi} = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.8 \end{bmatrix},
\]
\[
\Lambda^1_{\bar\varphi} = \Lambda^2_{\bar\varphi} = \Lambda^1_{\bar\varphi'} = \Lambda^2_{\bar\varphi'} = \begin{bmatrix} 1.3 & 0 \\ 0 & 1.3 \end{bmatrix}^{-1},
\qquad
\Lambda_F = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}^{-1},
\]
\[
\Lambda_{\bar\sigma} = \Lambda_{\bar\sigma'} = \operatorname{diag}(1.3, 1.3, 1.3, 1.3)^{-1},
\qquad
\Lambda_{\tilde\sigma} = \operatorname{diag}(0.8, 0.8, 0.8, 0.8),
\]
\[
A = \begin{bmatrix} -1.3963 & -2.2632 \\ 0.0473 & -6.1606 \\ 0.426 & -7.4451 \\ -6.1533 & 0.8738 \end{bmatrix},
\qquad
Q = \begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix},
\qquad
l_\sigma = l^1_\varphi = l^2_\varphi = 1.3,
\qquad
\bar u^1 = \bar u^2 = 5,
\tag{91}
\]


Figure 1: Comparison between $y_t$ and $\hat y_t$ for the 2-player nonlinear differential game (85): (a) $y_{1t}$ versus $\hat y_{1t}$; (b) $y_{2t}$ versus $\hat y_{2t}$ (time in seconds, $0 \le t \le 25$).

and by choosing $K_{W1} = 100$ and $K_{V1} = K^1_{W2} = K^2_{W2} = K^1_{V2} = K^2_{V2} = 1$, the solution of the algebraic Riccati equation (23) results in

\[
P = \begin{bmatrix} 2.0524 & -0.6374 \\ -0.6374 & 3.1842 \end{bmatrix}.
\tag{92}
\]
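The solution (92) can only be reproduced with the exact matrices $\bar A$, $\bar R$, and $\bar Q$ of (24)–(26), which are not restated here. Still, the following sketch shows one generic way to solve a Riccati equation of the form $P\bar A + [\bar A]^T P + P\bar R P + \bar Q = 0$, which is the form appearing in (52) and (70), numerically; the $2\times 2$ data at the bottom are hypothetical and serve only to exercise the routine.

```python
# Sketch of a numerical solver for an algebraic Riccati equation of the form
#   P A_bar + A_bar^T P + P R_bar P + Q_bar = 0,
# parametrized by the upper triangle of the symmetric unknown P.
import numpy as np
from scipy.optimize import fsolve

def solve_are(A_bar, R_bar, Q_bar):
    n = A_bar.shape[0]
    iu = np.triu_indices(n)

    def unpack(v):
        P = np.zeros((n, n))
        P[iu] = v
        return P + np.triu(P, 1).T          # symmetric P from its upper triangle

    def residual(v):
        P = unpack(v)
        F = P @ A_bar + A_bar.T @ P + P @ R_bar @ P + Q_bar
        return F[iu]                        # independent entries of the symmetric residual

    v0 = np.eye(n)[iu]                      # initial guess P = I; the root found depends on it
    return unpack(fsolve(residual, v0))

# Hypothetical data only (not the matrices behind (92)).
A_bar = np.array([[-2.0, 0.3], [0.1, -3.0]])
R_bar = 0.5 * np.eye(2)
Q_bar = np.eye(2)
print(solve_are(A_bar, R_bar, Q_bar))
```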

Then, by applying the learning law (38) described in Theorem 13, the value of the identification error in average sense (40) on a time period of 25 seconds is

\[
\limsup_{t\to 25}\left(\frac{1}{t}\int_0^t [\varepsilon_\tau]^T 2Q[\varepsilon_\tau]\,d\tau\right) = 0.03145.
\tag{93}
\]

Remark 18. Even though 0.03145 is the value of $F_1$ on the time interval $[0, 25]$, it is not the global minimum value that $F_1$ can take, since $t$ does not tend to infinity. So, in order to find this global minimum value, it would be required to simulate an arbitrarily large time.

On the other hand, let a second differential neural network given by

\[
\dot{\hat z}_t = W_{3t}\alpha(\hat z_t) + W_{4t}\gamma(\hat z_t)\xi_t
\tag{94}
\]

be such that the equalities (32) and (33) are fulfilled; that is,

\[
W_{30} = \begin{bmatrix} 0.5 & 0.1 & 0 & 2 \\ 0.2 & 0.5 & 0.6 & 0 \end{bmatrix},
\qquad
W_{40} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
\qquad
\Lambda_\alpha = \begin{bmatrix} 1.73 & 0 \\ 0 & 1.73 \end{bmatrix},
\qquad
\Lambda_\gamma = \begin{bmatrix} 20 & 0 \\ 0 & 20 \end{bmatrix},
\]
\[
\Lambda_{\bar\alpha} = \operatorname{diag}(2.6, 2.6, 2.6, 2.6)^{-1},
\qquad
\Lambda_{\bar\gamma} = \begin{bmatrix} 6.734 & 5.46 \\ 5.46 & 6.162 \end{bmatrix}^{-1},
\qquad
\Lambda_G = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}^{-1},
\qquad
\bar\xi = 5,
\tag{95}
\]

and the activation functions are

\[
\alpha_p(w) = \left.\frac{4.5}{1 + e^{-0.5w}} - 3\right|_{w = \hat z_t}, \quad p = 1, 2, 3, 4,
\qquad
\gamma_{pq}(w) = \left.\frac{4.5}{1 + e^{-0.5w}} - 3\right|_{w = \hat z_t}, \quad p, q = 1, 2.
\tag{96}
\]

By choosing $K_{W3} = 100000$ and $K_{W4} = 1$, the solution of the algebraic Riccati equation (23) results in the matrix (92), and by applying the learning law (59) described in Theorem 14, the value of the identification error in average sense (61) on a time period of 25 seconds is

\[
\limsup_{t\to 25}\left(\frac{1}{t}\int_0^t [\epsilon_\tau]^T Q[\epsilon_\tau]\,d\tau\right) = 0.00006645.
\tag{97}
\]

Remark 19. As in Remark 18, in order to find the global minimum value of $G_1$, it would be required to simulate an arbitrarily large time.

Finally, taking into account the identification errors (93) and (97), the value of the filtering error in average sense (77)-(78) on a time period of 25 seconds is

\[
\limsup_{t\to 25}\left(\frac{1}{t}\int_0^t [e_\tau]^T Q[e_\tau]\,d\tau\right) = 0.03139.
\tag{98}
\]

The simulation of this example was made using the MATLAB and Simulink platforms, and its results are shown in Figures 1–4.
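The quantities (93), (97), and (98) are time averages of quadratic forms of the logged error trajectories. One possible way to evaluate them from simulation data is sketched below; a uniform time grid and NumPy are assumed, and the synthetic error used in the usage lines is not the data of Figures 1–4.

```python
# Sketch: time-averaged quadratic error (1/T) * integral_0^T err(t)^T M err(t) dt,
# with M = 2Q for (93) and M = Q for (97)-(98), via the trapezoidal rule.
import numpy as np

def averaged_quadratic_error(t_grid, err, M):
    """err has one row per sample; returns (1/T) * integral of err^T M err."""
    integrand = np.einsum('ti,ij,tj->t', err, M, err)     # err_t^T M err_t per sample
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t_grid))
    return integral / t_grid[-1]

# Tiny synthetic usage example: a decaying 2-D error signal over 25 seconds.
Q = 10.0 * np.eye(2)
t_grid = np.linspace(0.0, 25.0, 2501)
err = 0.05 * np.exp(-0.3 * t_grid)[:, None] * np.ones((1, 2))
print(averaged_quadratic_error(t_grid, err, 2.0 * Q))      # analogue of (93), with M = 2Q
```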

As is seen in Figure 1, the differential neural network (88) can perform the identification of the continuous-time nonlinear dynamic game (85), where $y_{1t}$ and $y_{2t}$ (or, similarly, $x_{1t}$ and $x_{2t}$) denote the state variables with undesired perturbations and $\hat y_{1t}$ and $\hat y_{2t}$ indicate their state estimates.

On the other hand, according to Figure 2, the differential neural network (94) identifies the dynamics of the additive deterministic noises or, in other words, the dynamics of $\dot z_t = H\xi_t$. Thus, $z_{1t}$ and $z_{2t}$ represent the state variables of the above differential equation, and $\hat z_{1t}$ and $\hat z_{2t}$ are their state estimates.

In this way, Figure 3 shows the performance of the filtering process (13) in the nonlinear differential game (85); that is to say, it shows the comparison between the expected or uncorrupted state variables $x_{1t}$ and $x_{2t}$ and the filtered state estimates $\hat x_{1t}$ and $\hat x_{2t}$.


Figure 2: Comparison between $z_t$ and $\hat z_t$ for the 2-player nonlinear differential game (85): (a) $z_{1t}$ versus $\hat z_{1t}$; (b) $z_{2t}$ versus $\hat z_{2t}$ (time in seconds, $0 \le t \le 3.5$).

Figure 3: Comparison between $x_t$ and $\hat x_t$ for the 2-player nonlinear differential game (85): (a) $x_{1t}$ versus $\hat x_{1t}$; (b) $x_{2t}$ versus $\hat x_{2t}$ (time in seconds, $0 \le t \le 25$).

Figure 4: Comparison between $x_t$ and $\hat x_t$ for the 2-player nonlinear differential game (85): (a) $x_{1t}$ versus $\hat x_{1t}$; (b) $x_{2t}$ versus $\hat x_{2t}$ (time in seconds, $5 \le t \le 25$).

Finally, Figure 4 also exhibits the described filtering process, but comparing the real state variables $x_{1t}$ and $x_{2t}$ with the filtered state estimates $\hat x_{1t}$ and $\hat x_{2t}$.

7. Results Analysis and Discussion

Although the differential neural networks (14) and (27) can perform the identification of the differential equations (10) and (11), it is important to remember that their performance depends on the number of neurons used and on the choice of all the constant values (or free parameters) described in Assumptions 8 and 11. Therefore, the values of $F_1$ and $G_1$ (at a fixed time) can change according to this fact. In other words, since the differential neural networks are only an approximation of a dynamic system (or game), there will always be a residual error that depends on this approximation.

Thus, in the particular case shown in Example 16, the design of (88) was made using only eight neurons: four for the layer of perceptrons without any relation to the players and two for the layer of perceptrons of each player. Similarly, for the design of (94) (which is subject to the equalities (32)-(33)), six neurons were used.

On the other hand, it is important to mention that (14) and (27) will operate properly only if Assumptions 1–4, 6, 8, 9, and 11 are satisfied; that is to say, there is no guarantee that these differential neural networks perform good identification and filtering if, for example, the class of nonlinear differential games (1)–(4) has a stochastic nature.

Finally, although this paper presents a new approach for identification and filtering of nonlinear dynamic games, it should be emphasized that there exist other techniques that might solve the problem treated here, for example, those in the publications cited in Section 1.


8. Conclusions and Future Work

According to the results of this paper, the differential neural networks (14) and (27) solve the problem of identifying and filtering the class of nonlinear differential games (1)–(4). However, there is no guarantee that (14) and (27) identify and filter the states of (1)–(4) if Assumptions 1–4, 6, 8, 9, and 11 are not met.

Thereby, the proposed learning laws (38) and (59) yield the maximum values of identification error in average sense (40) and (61) and, consequently, the maximum value of filtering error (77)-(78). Nevertheless, these errors depend on both the number of neurons used in the differential neural networks (14) and (27) and the values chosen for their design conditions.

On the other hand, the simulation results of the illustrating example show the effectiveness of (14) and (27) and verify the applicability of Theorems 13, 14, and 15.

Finally, speaking of future work in this research field, one can analyze and discuss the use of (14) and (27) for obtaining equilibrium solutions in the class of nonlinear differential games (1)–(4), that is to say, the use of differential neural networks for controlling nonlinear differential games.

Appendix

The values of the "unknown" nonlinear functions $f(\cdot)$, $g^1(\cdot)$, and $g^2(\cdot)$ and the "unknown" constant matrix $H$ used in (85) of Example 16 are the following:

\[
f(x_t) = \begin{bmatrix} x_{2t} \\ \sin(x_{1t}) \end{bmatrix},
\qquad
g^1(x_t) = \begin{bmatrix} -x_{1t}\cos(x_{1t}) & 0 \\ 0 & x_{2t}\sin(x_{1t}) - x_{1t}\cos(x_{2t}) \end{bmatrix},
\]
\[
g^2(x_t) = \begin{bmatrix} \sin(x_{1t}) & 0 \\ 0 & -[x_{1t} + x_{2t}] \end{bmatrix},
\qquad
H = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}.
\tag{A.1}
\]
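For completeness, a sketch of how the "unknown" right-hand side (A.1) and the game dynamics (85) can be coded for simulation purposes only is given below; `u1`, `u2`, and `xi` are the signal functions of (86)-(87) (for instance, the ones sketched after equation (87)), and the identifier itself never uses this code.

```python
# Sketch of the "unknown" dynamics (A.1) and the right-hand side of (85),
# intended only for generating the simulation data of Example 16.
import numpy as np

H = np.array([[0.5, 0.0], [0.0, 0.5]])

def f(x):
    return np.array([x[1], np.sin(x[0])])

def g1(x):
    return np.array([[-x[0] * np.cos(x[0]), 0.0],
                     [0.0, x[1] * np.sin(x[0]) - x[0] * np.cos(x[1])]])

def g2(x):
    return np.array([[np.sin(x[0]), 0.0],
                     [0.0, -(x[0] + x[1])]])

def x_dot(t, x, u1, u2, xi):
    """Right-hand side of (85): f(x) + g1(x) u^1_t + g2(x) u^2_t + H xi_t."""
    return f(x) + g1(x) @ u1(t) + g2(x) @ u2(t) + H @ xi(t)
```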

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] I. Alvarez, A. Poznyak, and A. Malo, "Urban traffic control problem: a game theory approach," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 2168–2172, Cancun, Mexico, December 2008.
[2] T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, Academic Press, Great Britain, UK, 2nd edition, 1995.
[3] J. M. C. Clark and R. B. Vinter, "A differential dynamic games approach to flow control," in Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1228–1231, Maui, Hawaii, USA, December 2003.
[4] S. H. Tamaddoni, S. Taheri, and M. Ahmadian, "Optimal VSC design based on Nash strategy for differential 2-player games," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '09), pp. 2415–2420, San Antonio, Tex, USA, October 2009.
[5] F. Amato, M. Mattei, and A. Pironti, "Robust strategies for Nash linear quadratic games under uncertain dynamics," in Proceedings of the 37th IEEE Conference on Decision and Control (CDC '98), pp. 1869–1870, Tampa, Fla, USA, December 1998.
[6] X. Nian, S. Yang, and N. Chen, "Guaranteed cost strategies of uncertain LQ closed-loop differential games with multiple players," in Proceedings of the American Control Conference, pp. 1724–1729, Minneapolis, Minn, USA, June 2006.
[7] M. Jimenez-Lizarraga and A. Poznyak, "Equilibrium in LQ differential games with multiple scenarios," in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 4081–4086, New Orleans, La, USA, December 2007.
[8] M. Jimenez-Lizarraga, A. Poznyak, and M. A. Alcorta, "Leader-follower strategies for a multi-plant differential game," in Proceedings of the American Control Conference (ACC '08), pp. 3839–3844, Seattle, Wash, USA, June 2008.
[9] A. S. Poznyak and C. J. Gallegos, "Multimodel prey-predator LQ differential games," in Proceedings of the American Control Conference, pp. 5369–5374, Denver, Colo, USA, June 2003.
[10] Y. Li and L. Guo, "Convergence of adaptive linear stochastic differential games: nonzero-sum case," in Proceedings of the 10th World Congress on Intelligent Control and Automation (WCICA '12), pp. 3543–3548, Beijing, China, July 2012.
[11] D. Vrabie and F. Lewis, "Adaptive dynamic programming algorithm for finding online the equilibrium solution of the two-player zero-sum differential game," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '10), pp. 1–8, Barcelona, Spain, July 2010.
[12] B.-S. Chen, C.-S. Tseng, and H.-J. Uang, "Fuzzy differential games for nonlinear stochastic systems: suboptimal approach," IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 222–233, 2002.
[13] B. Widrow, J. R. Glover Jr., J. M. McCool, et al., "Adaptive noise cancelling: principles and applications," Proceedings of the IEEE, vol. 63, no. 12, pp. 1692–1716, 1975.
[14] C. E. de Souza, U. Shaked, and M. Fu, "Robust $H_\infty$ filtering for continuous time varying uncertain systems with deterministic input signals," IEEE Transactions on Signal Processing, vol. 43, no. 3, pp. 709–719, 1995.
[15] A. Osorio-Cordero, A. S. Poznyak, and M. Taksar, "Robust deterministic filtering for linear uncertain time-varying systems," in Proceedings of the American Control Conference, pp. 2526–2530, Albuquerque, NM, USA, June 1997.
[16] M. Hou and R. J. Patton, "Optimal filtering for systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 43, no. 3, pp. 445–449, 1998.
[17] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
[18] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou, and P. A. Ioannou, "High-order neural network structures for identification of dynamical systems," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 422–431, 1995.
[19] F. Abdollahi, H. A. Talebi, and R. V. Patel, "State estimation for flexible-joint manipulators using stable neural networks," in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 25–29, Kobe, Japan, 2003.
[20] A. Alessandri, C. Cervellera, and M. Sanguineti, "Design of asymptotic estimators: an approach based on neural networks and nonlinear programming," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 86–96, 2007.
[21] Y. H. Kim, F. L. Lewis, and C. T. Abdallah, "Nonlinear observer design using dynamic recurrent neural networks," in Proceedings of the 35th IEEE Conference on Decision and Control, pp. 949–954, Kobe, Japan, December 1996.
[22] F. L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 703–715, 1995.
[23] G. A. Rovithakis and M. A. Christodoulou, "Adaptive control of unknown plants using dynamical neural networks," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 3, pp. 400–412, 1994.
[24] A. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking, World Scientific, Singapore, 2001.
[25] E. García and D. A. Murano, "State estimation for a class of nonlinear differential games using differential neural networks," in Proceedings of the American Control Conference (ACC '11), pp. 2486–2491, San Francisco, Calif, USA, July 2011.
[26] D. A. Murano, Nash equilibrium for uncertain nonlinear differential games using differential neural networks: deterministic and stochastic cases [Ph.D. thesis], CINVESTAV-IPN, Distrito Federal, Mexico, 2004.
[27] D. Jiang and J. Wang, "A recurrent neural network for online design of robust optimal filters," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 6, pp. 921–926, 2000.
[28] A. G. Parlos, S. K. Menon, and A. F. Atiya, "An algorithmic approach to adaptive state filtering using recurrent neural networks," IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1411–1432, 2001.
[29] S. Haykin, Neural Networks: A Comprehensive Foundation, Pearson, Chennai, India, 1999.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 6: Research Article Differential Neural Networks for ...downloads.hindawi.com/journals/mpe/2014/306761.pdf · second di erential neural network will identify the e ect of these additive

6 Mathematical Problems in Engineering

be such that the inequalities (8) are fulfilled (seeAssumption 3) and let

119905le minus120582

1

10038171003817100381710038171199101199051003817100381710038171003817

2

+ 2[120576119905]119879

119875 [ 120576119905]

+ tr ([1119905]119879

[1198701198821]minus1

[1119905])

+ tr ([1119905]119879

[1198701198811]minus1

[1119905])

+

119873

sum

119894=1

tr ([119894

2119905]119879

[119870119894

1198822

]minus1

[119894

2119905])

+

119873

sum

119894=1

tr ([119894

2119905]119879

[119870119894

1198812

]minus1

[119894

2119905])

(44)

be the derivative of (43) with respect to 119905 Then by substi-tuting (42) into the second term of the right-hand side of(44) and by adding and subtracting the term [2[120576

119905]119879

119875119860120576119905]

one may get

2[120576119905]119879

119875 [ 120576119905] = 2[120576

119905]119879

11987511988210[ minus 119860120576

119905] + 2[120576

119905]119879

119875119882101015840

+

119873

sum

119894=1

2[120576119905]119879

119875119882119894

20120593119894

119906119894

119905

+

119873

sum

119894=1

2[120576119905]119879

119875119882119894

201205931198941015840

119906119894

119905minus 2[120576

119905]119879

119875119865119905

+ 2[120576119905]119879

1198751119905120590 +

119873

sum

119894=1

2[120576119905]119879

119875119894

2119905120593119894

119906119894

119905

+ [120576119905]119879

[119875119860 + [119860 ]119879

119875] [120576119905]

(45)

Next by analyzing the first five terms of the right-hand sideof (45) with the following inequality

[119883]119879

119884 + [119884]119879

119883 le [119883]119879

[Λ]minus1

[119883] + [119884]119879

Λ [119884] (46)

which is valid for any pair of matrices 119883119884 isin RΩtimesΓ and forany constant matrix 0 lt Λ isin RΩtimesΩ where Ω and Γ arepositive integers (see [24]) the following is obtained

(i) Using (46) and (17) in the first term of the right-handside of (45) then

2[120576119905]119879

11987511988210[ minus 119860120576

119905]

le [120576119905]119879

[119875 [11988210] [Λ

120590]minus1

[11988210]119879

119875] [120576119905]

+ [120576119905]119879

Λ120590[120576

119905]

(47)

(ii) Substituting (18) into the second term of the right-hand side of (45) and using (46) and (19) then

2[120576119905]119879

119875119882101015840

le 2[120576119905]119879

11987511988210119863

1205901119905119910119905

+ [120576119905]119879

[119875 [11988210] [Λ

1205901015840]minus1

[11988210]119879

119875] [120576119905]

+ 119897120590[

1119905119910119905]119879

Λ ]120590 [1119905119910119905]

(48)

(iii) Using (46) and (17) in the third term of the right-handside of (45) then

119873

sum

119894=1

2[120576119905]119879

119875119882119894

20120593119894

119906119894

119905

le

119873

sum

119894=1

[120576119905]119879

[119875 [119882119894

20] [Λ

119894

120593]minus1

[119882119894

20]119879

119875] [120576119905]

+

119873

sum

119894=1

[119906119894

]2

[120576119905]119879

Λ119894

120593[120576

119905]

(49)

(iv) Substituting (18) into the fourth term of the right-hand side of (45) and using (46) and (19) then

119873

sum

119894=1

2[120576119905]119879

119875119882119894

201205931198941015840

119906119894

119905

le

119873

sum

119894=1

2[120576119905]119879

119875119882119894

20⟨119863

119894

120593 119906

119894

119905⟩

119894

2119905119910119905

+

119873

sum

119894=1

[120576119905]119879

[119875 [119882119894

20] [Λ

119894

1205931015840]minus1

[119882119894

20]119879

119875] [120576119905]

+

119873

sum

119894=1

119897119894

120593[119906

119894

]2

[119894

2119905119910119905]119879

Λ119894

]120593 [119894

2119905119910119905]

(50)

(v) Using (46) and (22) in the fifth term of the right-handside of (45) then

minus2[120576119905]119879

119875119865119905le [120576

119905]119879

[119875[Λ119865]minus1

119875] [120576119905]

+ 1198651+ 119865

2[119910

119905]119879

Λ119865[119910

119905]

(51)

So by substituting (47)ndash(51) into the right-hand side of(45) and by adding and subtracting the term [[120576

119905]119879

2119876[120576119905]]

inequality (44) can be expressed as

119905le [120576

119905]119879

[119875119860 + [119860 ]119879

119875 + 119875119877119875 + 119876] [120576119905]

minus [120576119905]119879

2119876 [120576119905] + 119865

1+ 119865

2

1003817100381710038171003817Λ 119865

1003817100381710038171003817

10038171003817100381710038171199101199051003817100381710038171003817

2

minus 1205821

10038171003817100381710038171199101199051003817100381710038171003817

2

+ tr (Υ1198821) + tr (Υ

1198811)

+

119873

sum

119894=1

tr (Υ119894

1198822

) +

119873

sum

119894=1

tr (Υ119894

1198812

)

(52)

Mathematical Problems in Engineering 7

where the algebraic Riccati equation in the first term of theright-hand side of (52) is described in (23) and in (24)ndash(26)and

Υ1198821

= [1119905]119879

[1198701198821]minus1

[1119905] + 2120590[120576

119905]119879

1198751119905

Υ1198811= [

1119905]119879

[1198701198811]minus1

[1119905] + 2119910

119905[120576

119905]119879

11987511988210119863

1205901119905

+ 119897120590119910119905[119910

119905]119879

[1119905]119879

Λ ]120590 [1119905]

Υ119894

1198822

= [119894

2119905]119879

[119870119894

1198822

]minus1

[119894

2119905] + 2120593

119894

119906119894

119905[120576

119905]119879

119875119894

2119905

Υ119894

1198812

= [119894

2119905]119879

[119870119894

1198812

]minus1

[119894

2119905]

+ 2119910119905[120576

119905]119879

119875119882119894

20⟨119863

119894

120593 119906

119894

119905⟩

119894

2119905

+ 119897119894

120593[119906

119894

]2

119910119905[119910

119905]119879

[119894

2119905]119879

Λ119894

]120593 [119894

2119905]

(53)

Thereby by equating (53) to zero that is

Υ1198821

= Υ1198811= Υ

119894

1198822

= Υ119894

1198812

= 0 (54)

and by respectively solving for 1119905

1119905 119894

2119905 and 119894

2119905 the

learning law given by (38) is obtained Now by choosing

1003817100381710038171003817Λ 119865

1003817100381710038171003817 le1205821

1198652

(55)

and by solving the algebraic Riccati equation (23) one mayget

119905le minus [120576

119905]119879

2119876 [120576119905] + 119865

1 (56)

Thus by integrating both sides of (56) on the time interval[0 119905] the following is obtained

int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591

le 1198710minus 119871

119905+ [119865

1] 119905 le 119871

0+ [119865

1] 119905

(57)

and by dividing (57) by 119905 it is easy to verify that

1

119905int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591 le

1198710

119905+ 119865

1 (58)

Finally by calculating the upper limit as 119905 rarr infin the max-imum value of identification error in average sense is the onedescribed in (40) This means that the identification error isbounded between zero and (40) and therefore this meansthat 120576

119905is stable in the sense of Lyapunov

Theorem 14 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural network (27) be suchthat Assumptions 9 and 11 are satisfied If the synaptic weightsof (27) are adjusted with the following learning law

3119905= minus2119870

1198823119872120598

119905[120572 (

119905)]

119879

119894

4119905= minus2 119870

119894

1198824

119872120598119905[120585

119905]119879

[120574 (119905)]

119879

(59)

where

120598119905=

119905minus 119911

119905(60)

denotes the identification error and 1198701198823

and 119870119894

1198824

are knownsymmetric and positive-definite constant matrices then it ispossible to obtain the next maximum value of identificationerror in average sense 119870119894

1198822

as follows

lim sup119905rarrinfin

(1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591) le 119866

1 (61)

Proof Taking into account the residual error (30) the differ-ential equation (11) can be expressed as

119905= 119882

30120572 (119911

119905) + 119882

40120574 (119911

119905) 120585

119905+ 119866

119905 (62)

Then by substituting (27) and (62) into the derivative of (60)with respect to 119905 and by adding and subtracting the terms[119882

30120572()] and [119882

40120574()120585

119905] it is easy to confirm that

120598119905=

3119905120572 () + 119882

30 +

4119905120574 () 120585

119905+119882

40120574120585

119905minus 119866

119905 (63)

where 3119905= 119882

3119905minus119882

30and

4119905= 119882

4119905minus119882

40 Now let the

Lyapunov (energetic) candidate function

119872119905= 119864

119905+ [120598

119905]119879

119875 [120598119905] +

1

2tr ([

3119905]119879

[1198701198823]minus1

[3119905])

+1

2tr ([

4119905]119879

[1198701198824]minus1

[4119905])

(64)

be such that the inequalities (8) are fulfilled (seeAssumption 3) and let

119905le minus120582

1

10038171003817100381710038171199111199051003817100381710038171003817

2

+ 2[120598119905]119879

119875 [ 120598119905]

+ tr ([3119905]119879

[1198701198823]minus1

[3119905])

+ tr ([4119905]119879

[1198701198824]minus1

[4119905])

(65)

be the derivative of (64) with respect to 119905 Then by substitut-ing (63) into the second term of the right-hand side of (65)and by adding and subtracting the term [2[120598

119905]119879

119875119860 120598119905] one

may get

2[120598119905]119879

119875 [ 120598119905] = 2[120598

119905]119879

11987511988230[ minus 119860120598

119905]

+ 2[120598119905]119879

11987511988240120574120585

119905minus 2[120598

119905]119879

119875119866119905

+ 2[120598119905]119879

1198753119905120572 ()

+ 2[120598119905]119879

1198754119905120574 () 120585

119905

+ [120598119905]119879

[119875119860 + [119860 ]119879

119875] [120598119905]

(66)

8 Mathematical Problems in Engineering

Next by analyzing the first three terms of the right-hand sideof (66) with the inequality (46) the following is obtained

(i) Using (46) and (28) in the first term of the right-handside of (66) then

2[120598119905]119879

11987511988230[ minus 119860120598

119905]

le [120598119905]119879

[119875 [11988230] [Λ

]minus1

[11988230]119879

119875] [120598119905]

+ [120598119905]119879

Λ120572[120598

119905]

(67)

(ii) Using (46) and (28) in the second term of the right-hand side of (66) then

2[120598119905]119879

11987511988240120574120585

119905

le [120598119905]119879

[119875 [11988240] [Λ

120574]minus1

[11988240]119879

119875] [120598119905]

+ [120585]2

[120598119905]119879

Λ120574[120598

119905]

(68)

(iii) Using (46) and (31) in the third term of the right-handside of (66) then

minus2[120598119905]119879

119875119866119905le [120598

119905]119879

[119875[Λ119866]minus1

119875] [120598119905]

+ 1198661+ 119866

2[119911

119905]119879

Λ119866[119911

119905]

(69)

So by substituting (67)ndash(69) into the right-hand side of(66) and by adding and subtracting the term [[120598

119905]119879

119876[120598119905]]

inequality (65) can be expressed as

119905le [120598

119905]119879

[119875119860 + [119860 ]119879

119875 + 119875119877119875 + 119876] [120598119905]

minus [120598119905]119879

119876 [120598119905] + 119866

1+ 119866

2

1003817100381710038171003817Λ119866

1003817100381710038171003817

10038171003817100381710038171199111199051003817100381710038171003817

2

minus 1205821

10038171003817100381710038171199111199051003817100381710038171003817

2

+ tr (Ψ1198823) + tr (Ψ

1198824)

(70)

where the algebraic Riccati equation in the first term of theright-hand side of (70) is described in (23) and in (34)ndash(36)and

Ψ1198823

= [3119905]119879

[1198701198823]minus1

[3119905] + 2120572 () [120598

119905]119879

1198753119905 (71)

Ψ1198824

= [4119905]119879

[1198701198824]minus1

[4119905] + 2120574 () 120585

119905[120598

119905]119879

1198754119905 (72)

Thereby by equating (71) and (72) to zero (Ψ1198823

= Ψ1198824

= 0)and by respectively solving for

3119905and

4119905 the learning law

given by (59) is obtained Now by choosing

1003817100381710038171003817Λ119866

1003817100381710038171003817 le1205821

1198662

(73)

and by solving the algebraic Riccati equation (23) one mayget

119905le minus [120598

119905]119879

119876 [120598119905] + 119866

1 (74)

Thus by integrating both sides of (74) on the time interval[0 119905] the following is obtained

int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591

le 1198720minus119872

119905+ [119866

1] 119905 le 119872

0+ [119866

1] 119905

(75)

and by dividing (75) by 119905 it is easy to verify that

1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591 le

1198720

119905+ 119866

1 (76)

Finally by calculating the upper limit as 119905 rarr infin themaximum value of identification error in average sense is theone described in (61)

Theorem 15 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural networks (14) and (27)be such that the Assumptions 6 and 8 and Assumptions 9 and11 are satisfied If the synaptic weights of (14) and (27) arerespectively adjusted with the learning laws (38) and (59) thenit is possible to obtain the following maximum value of filteringerror in average sense

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1

if 1198651gt 119866

1

(77)

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) = 0

if 1198651le 119866

1

(78)

where 119890119905is given by (37)

Proof Consider the following Lyapunov (energetic) candi-date function

Φ119905= [119890

119905]119879

119875 [119890119905] (79)

where 119875 is the solution of the algebraic Riccati equation (23)with 119860 119877 and 119876 defined by (24)ndash(26) or (34)ndash(36) Thenby calculating the derivative of (79) with respect to 119905 and byconsidering the equations (37) (13) and (12) it can be verifiedthat

Φ119905= 2[119890

119905]119879

119875 [ 120576119905minus 120598

119905] (80)

and taking into account (42) (63) and the proofs ofTheorems 13 and 14 one may get

Φ119905le minus [119890

119905]119879

2119876 [119890119905] + 119865

1minus [minus [119890

119905]119879

119876 [119890119905] + 119866

1]

le minus [119890119905]119879

[2119876 minus 119876] [119890119905] + 119865

1minus 119866

1

le minus [119890119905]119879

119876 [119890119905] + 119865

1minus 119866

1

(81)

Mathematical Problems in Engineering 9

Thus by integrating both sides of (81) on the time interval[0 119905] the following is obtained

int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le Φ

0minus Φ

119905+ [119865

1minus 119866

1] 119905

le Φ0+ [119865

1minus 119866

1] 119905

(82)

and by dividing (82) by 119905 it is easy to confirm that

1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le

Φ0

119905+ 119865

1minus 119866

1 (83)

By calculating the upper limit as 119905 rarr infin equation (83) canbe expressed as

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1(84)

and finally it is clear that

(i) if 1198651gt 119866

1 the maximum value of filtering error in

average sense is the one described in (77)(ii) if 119865

1le 119866

1 the maximum value of filtering error in

average sense is the one described in (78) given thatthe left-hand side of the inequality (84) cannot benegative

Hence the theorem is proved

6 Illustrating Example

Example 16 Consider a 2-player nonlinear differential gamegiven by

119905= 119891 (119909

119905) +

2

sum

119894=1

119892119894

(119909119905) 119906

119894

119905+ 119867120585

119905(85)

subject to the change of variables (10) and (11) (119894 = 2) wherethe control actions of each player are

1199061

119905= [

05 sawtooth (05119905)H (119905)

]

1199062

119905= [

H (119905)

05 sawtooth (05119905)] (86)

The undesired deterministic perturbations are

120585119905= [

sin (10119905)sin (10119905)] (87)

and H(119905) denotes the Heaviside step function or unit stepsignal Then under Assumptions 1ndash4 6 8 9 and 11 it is pos-sible to obtain a filtered feedback (perfect state) informationpattern 120578119894

119905= 119909

119905

Remark 17 As told before the functions 119891(sdot) and 119892119894

(sdot) aswell as the matrix 119867 are unknown but in order to makean example of (85) and its simulation the values of these

ldquounknownrdquo parameters will be shown in the Appendix of thispaper

Thereby consider now a first differential neural networkgiven by

119910119905= 119882

1119905120590 (119881

1119905119910119905) +

2

sum

119894=1

119882119894

2119905120593119894

(119881119894

2119905119910119905) 119906

119894

119905 (88)

where

11988210

= [05 01 0 2

02 05 06 0]

11988110

= [1 03 05 02

05 13 1 06]

119879

1198821

20= [

05 01

0 02] 119882

2

20= [

13 08

08 13]

1198811

20= [

1 01

01 2] 119881

2

20= [

05 07

09 01]

(89)

and the activation functions are

120590119901(119908) =

45

1 + 119890minus05119908minus 3

1003816100381610038161003816100381610038161003816119908=1198811119905119910119905

119901 = 1 2 3 4

1205931

119901119902(119908

1

) =45

1 + 119890minus051199081minus 3

100381610038161003816100381610038161003816100381610038161199081=11988112119905119910119905

119901 = 119902 = 1 2

1205932

119901119902(119908

2

) =45

1 + 119890minus051199082minus 3

100381610038161003816100381610038161003816100381610038161199082=11988122119905119910119905

119901 = 119902 = 1 2

(90)

By proposing the values described in Assumption 8 as

Λ120590= Λ

1

120593= Λ

2

120593= [

13 0

0 13]

Λ1

]120593 = Λ2

]120593 = [08 0

0 08]

Λ1

120593= Λ

2

120593= Λ

1

1205931015840 = Λ

2

1205931015840 = [

13 0

0 13]

minus1

Λ119865= [

01 0

0 01]

minus1

Λ120590= Λ

1205901015840 =

[[[

[

13 0 0 0

0 13 0 0

0 0 13 0

0 0 0 13

]]]

]

minus1

Λ ]120590 =[[[

[

08 0 0 0

0 08 0 0

0 0 08 0

0 0 0 08

]]]

]

119860 =

[[[

[

minus13963 minus22632

00473 minus61606

0426 minus74451

minus61533 08738

]]]

]

119876 = [10 0

0 10]

119897120590= 119897

1

120593= 119897

2

120593= 13 119906

1

= 1199062

= 5

(91)

10 Mathematical Problems in Engineering

1

0

minus1

y1t

y1t

minus2

0 5 10 15 20 25

(a)

15

1

05

0

minus05

minus1

minus15

0 5 10 15 20 25

y2t

y2t

(b)

Figure 1 Comparison between 119910119905and 119910

119905for the 2-player nonlinear differential game (85)

and by choosing 1198701198821

= 100 and 1198701198811= 119870

1

1198822

= 1198702

1198822

= 1198701

1198812

=

1198702

1198812

= 1 the solution of the algebraic Riccati equation (23)results in

119875 = [20524 minus06374

minus06374 31842] (92)

Then, by applying the learning law (38) described in Theorem 13, the value of the identification error in average sense (40) on a time period of 25 seconds is
$$\limsup_{t \to 25} \left( \frac{1}{t} \int_0^t [\varepsilon_\tau]^T\, 2Q\, [\varepsilon_\tau]\, d\tau \right) = 0.03145. \quad (93)$$
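The indices (93), (97), and (98) are time averages of a weighted quadratic error. Given sampled error trajectories from a simulation, they can be approximated with a trapezoidal rule, as in the following sketch (illustrative only; `err` and `Q` stand for whichever error signal and weighting matrix the corresponding index uses).

```python
import numpy as np

def averaged_quadratic_error(t, err, Q):
    """Approximate (1/T) * integral_0^T err(tau)^T Q err(tau) dtau from samples.

    t   : (N,) increasing sample times with t[-1] = T > 0
    err : (N, n) array; err[k] is the error vector at time t[k]
    Q   : (n, n) weighting matrix (2*Q for (93), Q for (97) and (98))
    """
    integrand = np.einsum('ki,ij,kj->k', err, Q, err)                       # err_k^T Q err_k
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))  # trapezoidal rule
    return integral / t[-1]
```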

Remark 18. Even though 0.03145 is the value of $F_1$ on the time interval $[0, 25]$, it is not the global minimum value that $F_1$ can take, since $t$ does not tend to infinity. So, in order to find this global minimum value, it would be necessary to simulate over an arbitrarily large time.

On the other hand, let a second differential neural network, given by
$$\dot{\hat{z}}_t = W_{3t}\,\alpha(\hat{z}_t) + W_{4t}\,\gamma(\hat{z}_t)\,\xi_t, \quad (94)$$
be such that the equalities (32) and (33) are fulfilled; that is,
$$W_{30} = \begin{bmatrix} 0.5 & 0.1 & 0 & 2 \\ 0.2 & 0.5 & 0.6 & 0 \end{bmatrix}, \qquad W_{40} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},$$
$$\Lambda_\alpha = \begin{bmatrix} 1.73 & 0 \\ 0 & 1.73 \end{bmatrix}, \qquad \Lambda_\gamma = \begin{bmatrix} 20 & 0 \\ 0 & 20 \end{bmatrix},$$
$$\Lambda = \begin{bmatrix} 26 & 0 & 0 & 0 \\ 0 & 26 & 0 & 0 \\ 0 & 0 & 26 & 0 \\ 0 & 0 & 0 & 26 \end{bmatrix}^{-1}, \qquad \Lambda_\gamma = \begin{bmatrix} 67.34 & 5.46 \\ 5.46 & 61.62 \end{bmatrix}^{-1},$$
$$\Lambda_G = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}^{-1}, \qquad \bar{\xi} = 5, \quad (95)$$

and the activation functions are
$$\alpha_p(w) = \left.\frac{4.5}{1 + e^{-0.5 w}} - 3\right|_{w = \hat{z}_t}, \quad p = 1, 2, 3, 4,$$
$$\gamma_{pq}(w) = \left.\frac{4.5}{1 + e^{-0.5 w}} - 3\right|_{w = \hat{z}_t}, \quad p = q = 1, 2. \quad (96)$$
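Analogously to (88), the right-hand side of the second identifier (94) with the initial weights from (95) and the activations (96) can be sketched as follows. This is an illustrative reconstruction: since (96) does not state how the two components of $\hat{z}_t$ are distributed over the four inputs of $\alpha$, the stacking used below is only an assumption made so that the dimensions match $W_{30}$; in the actual scheme the weights are adapted online by the learning law (59).

```python
import numpy as np

def act(w):
    """Same sigmoid-type activation as in (90)/(96)."""
    return 4.5 / (1.0 + np.exp(-0.5 * np.asarray(w))) - 3.0

W3 = np.array([[0.5, 0.1, 0.0, 2.0],
               [0.2, 0.5, 0.6, 0.0]])   # W_{30} from (95)
W4 = np.eye(2)                          # W_{40} from (95)

def dnn2_rhs(z_hat, xi_t, W3=W3, W4=W4):
    """Right-hand side of the second identifier (94):
    d/dt z_hat = W3 alpha(z_hat) + W4 gamma(z_hat) xi_t."""
    # Assumption: the 2-vector z_hat is stacked twice to feed the four alpha components.
    alpha = act(np.tile(z_hat, 2))
    # gamma_{pq} with p = q = 1, 2 is taken as a 2x2 diagonal activation matrix.
    gamma = np.diag(act(z_hat))
    return W3 @ alpha + W4 @ (gamma @ xi_t)
```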

By choosing $K_{W3} = 100000$ and $K_{W4} = 1$, the solution of the algebraic Riccati equation (23) results in the matrix (92), and by applying the learning law (59) described in Theorem 14, the value of the identification error in average sense (61) on a time period of 25 seconds is
$$\limsup_{t \to 25} \left( \frac{1}{t} \int_0^t [\epsilon_\tau]^T\, Q\, [\epsilon_\tau]\, d\tau \right) = 0.00006645. \quad (97)$$

Remark 19. As in Remark 18, in order to find the global minimum value of $G_1$, it would be necessary to simulate over an arbitrarily large time.

Finally, taking into account the identification errors (93) and (97), the value of the filtering error in average sense (77)-(78) on a time period of 25 seconds is
$$\limsup_{t \to 25} \left( \frac{1}{t} \int_0^t [e_\tau]^T\, Q\, [e_\tau]\, d\tau \right) = 0.03139. \quad (98)$$

The simulation of this example was made using the MATLAB and Simulink platforms, and its results are shown in Figures 1–4.

As seen in Figure 1, the differential neural network (88) can perform the identification of the continuous-time nonlinear dynamic game (85), where $y_{1t}$ and $y_{2t}$ (or, similarly, $x_{1t}$ and $x_{2t}$) denote the state variables with undesired perturbations and $\hat{y}_{1t}$ and $\hat{y}_{2t}$ indicate their state estimates. On the other hand, according to Figure 2, the differential neural network (94) identifies the dynamics of the additive deterministic noises or, in other words, the dynamics of $\dot{z}_t = H\xi_t$. Thus, $z_{1t}$ and $z_{2t}$ represent the state variables of this differential equation and $\hat{z}_{1t}$ and $\hat{z}_{2t}$ are their state estimates.

In this way, Figure 3 shows the performance of the filtering process (13) in the nonlinear differential game (85); that is to say, it shows the comparison between the expected or uncorrupted state variables $x_{1t}$ and $x_{2t}$ and the filtered state estimates $\hat{x}_{1t}$ and $\hat{x}_{2t}$.


[Figure 2: Comparison between $z_t$ and $\hat{z}_t$ for the 2-player nonlinear differential game (85); panels (a) and (b) show $z_{1t}$ versus $\hat{z}_{1t}$ and $z_{2t}$ versus $\hat{z}_{2t}$ over 0–3.5 s.]

[Figure 3: Comparison between $x_t$ and $\hat{x}_t$ for the 2-player nonlinear differential game (85); panels (a) and (b) show $x_{1t}$ versus $\hat{x}_{1t}$ and $x_{2t}$ versus $\hat{x}_{2t}$ over 0–25 s.]

[Figure 4: Comparison between $x_t$ and $\hat{x}_t$ for the 2-player nonlinear differential game (85); panels (a) and (b) show $x_{1t}$ versus $\hat{x}_{1t}$ and $x_{2t}$ versus $\hat{x}_{2t}$ over 5–25 s.]

Finally, Figure 4 also exhibits the described filtering process, but comparing the real state variables $x_{1t}$ and $x_{2t}$ with the filtered state estimates $\hat{x}_{1t}$ and $\hat{x}_{2t}$.

7. Results Analysis and Discussion

Although the differential neural networks (14) and (27) can perform the identification of the differential equations (10) and (11), it is important to remember that their performance depends on the number of neurons used and on the choice of all the constant values (or free parameters) described in Assumptions 8 and 11. Therefore, the values of $F_1$ and $G_1$ (at a fixed time) can change accordingly.

In other words, since the differential neural networks are only an approximation of a dynamic system (or game), there will always be a residual error that depends on this approximation.

Thus, in the particular case shown in Example 16, the design of (88) was made using only eight neurons: four for the layer of perceptrons without any relation to the players and two for the layer of perceptrons of each player. Similarly, for the design of (94) (which is subject to the equalities (32)-(33)), six neurons were used.

On the other hand, it is important to mention that (14) and (27) will operate properly only if Assumptions 1–4, 6, 8, 9, and 11 are satisfied; that is to say, there is no guarantee that these differential neural networks perform good identification and filtering if, for example, the class of nonlinear differential games (1)–(4) has a stochastic nature.

Finally, although this paper presents a new approach for the identification and filtering of nonlinear dynamic games, it should be emphasized that other techniques might also solve the problem treated here, for example, those in the publications cited in Section 1.


8. Conclusions and Future Work

According to the results of this paper, the differential neural networks (14) and (27) solve the problem of identifying and filtering the class of nonlinear differential games (1)–(4).

However, there is no guarantee that (14) and (27) identify and filter the states of (1)–(4) if Assumptions 1–4, 6, 8, 9, and 11 are not met.

Thereby, the proposed learning laws (38) and (59) yield the maximum values of the identification error in average sense, (40) and (61), and, consequently, the maximum value of the filtering error, (77)-(78).

Nevertheless, these errors depend on both the number of neurons used in the differential neural networks (14) and (27) and the values proposed in their design conditions.

On the other hand, according to the simulation results of the illustrating example, the effectiveness of (14) and (27) is shown and the applicability of Theorems 13, 14, and 15 is verified.

Finally, as future work in this research field, one can analyze and discuss the use of (14) and (27) for obtaining equilibrium solutions in the class of nonlinear differential games (1)–(4), that is to say, the use of differential neural networks for controlling nonlinear differential games.

Appendix

The values of the “unknown” nonlinear functions $f(\cdot)$, $g^1(\cdot)$, and $g^2(\cdot)$ and the “unknown” constant matrix $H$ used in (85) of Example 16 are as follows:

$$f(x_t) = \begin{bmatrix} x_{2t} \\ \sin(x_{1t}) \end{bmatrix},$$
$$g^1(x_t) = \begin{bmatrix} -x_{1t}\cos(x_{1t}) & 0 \\ 0 & x_{2t}\sin(x_{1t}) - x_{1t}\cos(x_{2t}) \end{bmatrix},$$
$$g^2(x_t) = \begin{bmatrix} \sin(x_{1t}) & 0 \\ 0 & -\left[x_{1t} + x_{2t}\right] \end{bmatrix},$$
$$H = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}. \quad (A.1)$$
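For completeness, the “unknown” right-hand side (A.1) of the game (85) can be transcribed directly. The sketch below is an illustrative Python version (the original simulations were made in MATLAB/Simulink), where `u1`, `u2`, and `xi` stand for the signal functions of (86)-(87).

```python
import numpy as np

H = np.array([[0.5, 0.0],
              [0.0, 0.5]])

def f(x):
    """f(x_t) from (A.1)."""
    return np.array([x[1], np.sin(x[0])])

def g1(x):
    """g^1(x_t) from (A.1)."""
    return np.array([[-x[0] * np.cos(x[0]), 0.0],
                     [0.0, x[1] * np.sin(x[0]) - x[0] * np.cos(x[1])]])

def g2(x):
    """g^2(x_t) from (A.1)."""
    return np.array([[np.sin(x[0]), 0.0],
                     [0.0, -(x[0] + x[1])]])

def game_rhs(t, x, u1, u2, xi):
    """Right-hand side of the 2-player game (85):
    dx/dt = f(x) + g^1(x) u^1_t + g^2(x) u^2_t + H xi_t."""
    return f(x) + g1(x) @ u1(t) + g2(x) @ u2(t) + H @ xi(t)
```

With these definitions, `game_rhs` can be passed to a standard ODE solver such as `scipy.integrate.solve_ivp` over the 25-second interval used in the example.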

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.



Page 7: Research Article Differential Neural Networks for ...downloads.hindawi.com/journals/mpe/2014/306761.pdf · second di erential neural network will identify the e ect of these additive

Mathematical Problems in Engineering 7

where the algebraic Riccati equation in the first term of theright-hand side of (52) is described in (23) and in (24)ndash(26)and

Υ1198821

= [1119905]119879

[1198701198821]minus1

[1119905] + 2120590[120576

119905]119879

1198751119905

Υ1198811= [

1119905]119879

[1198701198811]minus1

[1119905] + 2119910

119905[120576

119905]119879

11987511988210119863

1205901119905

+ 119897120590119910119905[119910

119905]119879

[1119905]119879

Λ ]120590 [1119905]

Υ119894

1198822

= [119894

2119905]119879

[119870119894

1198822

]minus1

[119894

2119905] + 2120593

119894

119906119894

119905[120576

119905]119879

119875119894

2119905

Υ119894

1198812

= [119894

2119905]119879

[119870119894

1198812

]minus1

[119894

2119905]

+ 2119910119905[120576

119905]119879

119875119882119894

20⟨119863

119894

120593 119906

119894

119905⟩

119894

2119905

+ 119897119894

120593[119906

119894

]2

119910119905[119910

119905]119879

[119894

2119905]119879

Λ119894

]120593 [119894

2119905]

(53)

Thereby by equating (53) to zero that is

Υ1198821

= Υ1198811= Υ

119894

1198822

= Υ119894

1198812

= 0 (54)

and by respectively solving for 1119905

1119905 119894

2119905 and 119894

2119905 the

learning law given by (38) is obtained Now by choosing

1003817100381710038171003817Λ 119865

1003817100381710038171003817 le1205821

1198652

(55)

and by solving the algebraic Riccati equation (23) one mayget

119905le minus [120576

119905]119879

2119876 [120576119905] + 119865

1 (56)

Thus by integrating both sides of (56) on the time interval[0 119905] the following is obtained

int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591

le 1198710minus 119871

119905+ [119865

1] 119905 le 119871

0+ [119865

1] 119905

(57)

and by dividing (57) by 119905 it is easy to verify that

1

119905int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591 le

1198710

119905+ 119865

1 (58)

Finally by calculating the upper limit as 119905 rarr infin the max-imum value of identification error in average sense is the onedescribed in (40) This means that the identification error isbounded between zero and (40) and therefore this meansthat 120576

119905is stable in the sense of Lyapunov

Theorem 14 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural network (27) be suchthat Assumptions 9 and 11 are satisfied If the synaptic weightsof (27) are adjusted with the following learning law

3119905= minus2119870

1198823119872120598

119905[120572 (

119905)]

119879

119894

4119905= minus2 119870

119894

1198824

119872120598119905[120585

119905]119879

[120574 (119905)]

119879

(59)

where

120598119905=

119905minus 119911

119905(60)

denotes the identification error and 1198701198823

and 119870119894

1198824

are knownsymmetric and positive-definite constant matrices then it ispossible to obtain the next maximum value of identificationerror in average sense 119870119894

1198822

as follows

lim sup119905rarrinfin

(1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591) le 119866

1 (61)

Proof Taking into account the residual error (30) the differ-ential equation (11) can be expressed as

119905= 119882

30120572 (119911

119905) + 119882

40120574 (119911

119905) 120585

119905+ 119866

119905 (62)

Then by substituting (27) and (62) into the derivative of (60)with respect to 119905 and by adding and subtracting the terms[119882

30120572()] and [119882

40120574()120585

119905] it is easy to confirm that

120598119905=

3119905120572 () + 119882

30 +

4119905120574 () 120585

119905+119882

40120574120585

119905minus 119866

119905 (63)

where 3119905= 119882

3119905minus119882

30and

4119905= 119882

4119905minus119882

40 Now let the

Lyapunov (energetic) candidate function

119872119905= 119864

119905+ [120598

119905]119879

119875 [120598119905] +

1

2tr ([

3119905]119879

[1198701198823]minus1

[3119905])

+1

2tr ([

4119905]119879

[1198701198824]minus1

[4119905])

(64)

be such that the inequalities (8) are fulfilled (seeAssumption 3) and let

119905le minus120582

1

10038171003817100381710038171199111199051003817100381710038171003817

2

+ 2[120598119905]119879

119875 [ 120598119905]

+ tr ([3119905]119879

[1198701198823]minus1

[3119905])

+ tr ([4119905]119879

[1198701198824]minus1

[4119905])

(65)

be the derivative of (64) with respect to 119905 Then by substitut-ing (63) into the second term of the right-hand side of (65)and by adding and subtracting the term [2[120598

119905]119879

119875119860 120598119905] one

may get

2[120598119905]119879

119875 [ 120598119905] = 2[120598

119905]119879

11987511988230[ minus 119860120598

119905]

+ 2[120598119905]119879

11987511988240120574120585

119905minus 2[120598

119905]119879

119875119866119905

+ 2[120598119905]119879

1198753119905120572 ()

+ 2[120598119905]119879

1198754119905120574 () 120585

119905

+ [120598119905]119879

[119875119860 + [119860 ]119879

119875] [120598119905]

(66)

8 Mathematical Problems in Engineering

Next by analyzing the first three terms of the right-hand sideof (66) with the inequality (46) the following is obtained

(i) Using (46) and (28) in the first term of the right-handside of (66) then

2[120598119905]119879

11987511988230[ minus 119860120598

119905]

le [120598119905]119879

[119875 [11988230] [Λ

]minus1

[11988230]119879

119875] [120598119905]

+ [120598119905]119879

Λ120572[120598

119905]

(67)

(ii) Using (46) and (28) in the second term of the right-hand side of (66) then

2[120598119905]119879

11987511988240120574120585

119905

le [120598119905]119879

[119875 [11988240] [Λ

120574]minus1

[11988240]119879

119875] [120598119905]

+ [120585]2

[120598119905]119879

Λ120574[120598

119905]

(68)

(iii) Using (46) and (31) in the third term of the right-handside of (66) then

minus2[120598119905]119879

119875119866119905le [120598

119905]119879

[119875[Λ119866]minus1

119875] [120598119905]

+ 1198661+ 119866

2[119911

119905]119879

Λ119866[119911

119905]

(69)

So by substituting (67)ndash(69) into the right-hand side of(66) and by adding and subtracting the term [[120598

119905]119879

119876[120598119905]]

inequality (65) can be expressed as

119905le [120598

119905]119879

[119875119860 + [119860 ]119879

119875 + 119875119877119875 + 119876] [120598119905]

minus [120598119905]119879

119876 [120598119905] + 119866

1+ 119866

2

1003817100381710038171003817Λ119866

1003817100381710038171003817

10038171003817100381710038171199111199051003817100381710038171003817

2

minus 1205821

10038171003817100381710038171199111199051003817100381710038171003817

2

+ tr (Ψ1198823) + tr (Ψ

1198824)

(70)

where the algebraic Riccati equation in the first term of theright-hand side of (70) is described in (23) and in (34)ndash(36)and

Ψ1198823

= [3119905]119879

[1198701198823]minus1

[3119905] + 2120572 () [120598

119905]119879

1198753119905 (71)

Ψ1198824

= [4119905]119879

[1198701198824]minus1

[4119905] + 2120574 () 120585

119905[120598

119905]119879

1198754119905 (72)

Thereby by equating (71) and (72) to zero (Ψ1198823

= Ψ1198824

= 0)and by respectively solving for

3119905and

4119905 the learning law

given by (59) is obtained Now by choosing

1003817100381710038171003817Λ119866

1003817100381710038171003817 le1205821

1198662

(73)

and by solving the algebraic Riccati equation (23) one mayget

119905le minus [120598

119905]119879

119876 [120598119905] + 119866

1 (74)

Thus by integrating both sides of (74) on the time interval[0 119905] the following is obtained

int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591

le 1198720minus119872

119905+ [119866

1] 119905 le 119872

0+ [119866

1] 119905

(75)

and by dividing (75) by 119905 it is easy to verify that

1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591 le

1198720

119905+ 119866

1 (76)

Finally by calculating the upper limit as 119905 rarr infin themaximum value of identification error in average sense is theone described in (61)

Theorem 15 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural networks (14) and (27)be such that the Assumptions 6 and 8 and Assumptions 9 and11 are satisfied If the synaptic weights of (14) and (27) arerespectively adjusted with the learning laws (38) and (59) thenit is possible to obtain the following maximum value of filteringerror in average sense

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1

if 1198651gt 119866

1

(77)

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) = 0

if 1198651le 119866

1

(78)

where 119890119905is given by (37)

Proof Consider the following Lyapunov (energetic) candi-date function

Φ119905= [119890

119905]119879

119875 [119890119905] (79)

where 119875 is the solution of the algebraic Riccati equation (23)with 119860 119877 and 119876 defined by (24)ndash(26) or (34)ndash(36) Thenby calculating the derivative of (79) with respect to 119905 and byconsidering the equations (37) (13) and (12) it can be verifiedthat

Φ119905= 2[119890

119905]119879

119875 [ 120576119905minus 120598

119905] (80)

and taking into account (42) (63) and the proofs ofTheorems 13 and 14 one may get

Φ119905le minus [119890

119905]119879

2119876 [119890119905] + 119865

1minus [minus [119890

119905]119879

119876 [119890119905] + 119866

1]

le minus [119890119905]119879

[2119876 minus 119876] [119890119905] + 119865

1minus 119866

1

le minus [119890119905]119879

119876 [119890119905] + 119865

1minus 119866

1

(81)

Mathematical Problems in Engineering 9

Thus by integrating both sides of (81) on the time interval[0 119905] the following is obtained

int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le Φ

0minus Φ

119905+ [119865

1minus 119866

1] 119905

le Φ0+ [119865

1minus 119866

1] 119905

(82)

and by dividing (82) by 119905 it is easy to confirm that

1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le

Φ0

119905+ 119865

1minus 119866

1 (83)

By calculating the upper limit as 119905 rarr infin equation (83) canbe expressed as

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1(84)

and finally it is clear that

(i) if 1198651gt 119866

1 the maximum value of filtering error in

average sense is the one described in (77)(ii) if 119865

1le 119866

1 the maximum value of filtering error in

average sense is the one described in (78) given thatthe left-hand side of the inequality (84) cannot benegative

Hence the theorem is proved

6 Illustrating Example

Example 16 Consider a 2-player nonlinear differential gamegiven by

119905= 119891 (119909

119905) +

2

sum

119894=1

119892119894

(119909119905) 119906

119894

119905+ 119867120585

119905(85)

subject to the change of variables (10) and (11) (119894 = 2) wherethe control actions of each player are

1199061

119905= [

05 sawtooth (05119905)H (119905)

]

1199062

119905= [

H (119905)

05 sawtooth (05119905)] (86)

The undesired deterministic perturbations are

120585119905= [

sin (10119905)sin (10119905)] (87)

and H(119905) denotes the Heaviside step function or unit stepsignal Then under Assumptions 1ndash4 6 8 9 and 11 it is pos-sible to obtain a filtered feedback (perfect state) informationpattern 120578119894

119905= 119909

119905

Remark 17 As told before the functions 119891(sdot) and 119892119894

(sdot) aswell as the matrix 119867 are unknown but in order to makean example of (85) and its simulation the values of these

ldquounknownrdquo parameters will be shown in the Appendix of thispaper

Thereby consider now a first differential neural networkgiven by

119910119905= 119882

1119905120590 (119881

1119905119910119905) +

2

sum

119894=1

119882119894

2119905120593119894

(119881119894

2119905119910119905) 119906

119894

119905 (88)

where

11988210

= [05 01 0 2

02 05 06 0]

11988110

= [1 03 05 02

05 13 1 06]

119879

1198821

20= [

05 01

0 02] 119882

2

20= [

13 08

08 13]

1198811

20= [

1 01

01 2] 119881

2

20= [

05 07

09 01]

(89)

and the activation functions are

120590119901(119908) =

45

1 + 119890minus05119908minus 3

1003816100381610038161003816100381610038161003816119908=1198811119905119910119905

119901 = 1 2 3 4

1205931

119901119902(119908

1

) =45

1 + 119890minus051199081minus 3

100381610038161003816100381610038161003816100381610038161199081=11988112119905119910119905

119901 = 119902 = 1 2

1205932

119901119902(119908

2

) =45

1 + 119890minus051199082minus 3

100381610038161003816100381610038161003816100381610038161199082=11988122119905119910119905

119901 = 119902 = 1 2

(90)

By proposing the values described in Assumption 8 as

Λ120590= Λ

1

120593= Λ

2

120593= [

13 0

0 13]

Λ1

]120593 = Λ2

]120593 = [08 0

0 08]

Λ1

120593= Λ

2

120593= Λ

1

1205931015840 = Λ

2

1205931015840 = [

13 0

0 13]

minus1

Λ119865= [

01 0

0 01]

minus1

Λ120590= Λ

1205901015840 =

[[[

[

13 0 0 0

0 13 0 0

0 0 13 0

0 0 0 13

]]]

]

minus1

Λ ]120590 =[[[

[

08 0 0 0

0 08 0 0

0 0 08 0

0 0 0 08

]]]

]

119860 =

[[[

[

minus13963 minus22632

00473 minus61606

0426 minus74451

minus61533 08738

]]]

]

119876 = [10 0

0 10]

119897120590= 119897

1

120593= 119897

2

120593= 13 119906

1

= 1199062

= 5

(91)

10 Mathematical Problems in Engineering

1

0

minus1

y1t

y1t

minus2

0 5 10 15 20 25

(a)

15

1

05

0

minus05

minus1

minus15

0 5 10 15 20 25

y2t

y2t

(b)

Figure 1 Comparison between 119910119905and 119910

119905for the 2-player nonlinear differential game (85)

and by choosing 1198701198821

= 100 and 1198701198811= 119870

1

1198822

= 1198702

1198822

= 1198701

1198812

=

1198702

1198812

= 1 the solution of the algebraic Riccati equation (23)results in

119875 = [20524 minus06374

minus06374 31842] (92)

Then by applying the learning law (38) described inTheorem 13 the value of the identification error in averagesense (40) on a time period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591) = 003145 (93)

Remark 18 Even though 003145 is the value of 1198651on the

time interval [0 25] it is not the global minimum value that1198651can take since 119905does not tend to infinity So in order to find

this global minimum value it would be required to simulatean arbitrarily large time

On the other hand let a second differential neural net-work given by

119911119905= 119882

3119905120572 (

119905) + 119882

4119905120574 (

119905) 120585

119905(94)

be such that the equalities (32) and (33) are fulfilled that is

11988230

= [05 01 0 2

02 05 06 0] 119882

40= [

1 0

0 1]

Λ120572= [

173 0

0 173] Λ

120574= [

20 0

0 20]

Λ=

[[[

[

26 0 0 0

0 26 0 0

0 0 26 0

0 0 0 26

]]]

]

minus1

Λ120574= [

6734 546

546 6162]

minus1

Λ119866= [

01 0

0 01]

minus1

120585 = 5

(95)

and the activation functions are

120572119901(119908) =

45

1 + 119890minus05119908minus 3

1003816100381610038161003816100381610038161003816119908=119905

119901 = 1 2 3 4

120574119901119902(119908) =

45

1 + 119890minus051199081minus 3

10038161003816100381610038161003816100381610038161003816119908=119905

119901 = 119902 = 1 2

(96)

By choosing 1198701198823

= 100000 and 1198701198824

= 1 the solution of thealgebraic Riccati equation (23) results in the matrix (92) andby applying the learning law (59) described in Theorem 14the value of the identification error in average sense (61) on atime period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591) = 000006645 (97)

Remark 19 As in Remark 18 in order to find the globalminimum value of 119866

1 it would be required to simulate an

arbitrarily large time

Finally taking into account the identification errors (93)and (97) the value of the filtering error in average sense (77)-(78) on a time period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) = 003139 (98)

The simulation of this example wasmade using theMATLABand Simulink platforms and its results are shown in Figures1ndash4

As is seen in Figure 1 the differential neural network(88) can perform the identification of the continuous-timenonlinear dynamic game (85) where 119910

1119905and 119910

2119905(or simi-

larly 1199091119905

and 1199092119905) denote the state variables with undesired

perturbations and 1199101119905

and 1199102119905

indicate their state estimatesOn the other hand according to Figure 2 the differential

neural network (94) identifies the dynamics of the additivedeterministic noises or in other words the dynamics of =119867120585

119905 Thus 119911

1119905and 119911

2119905represent the state variables of the

above differential equation and 1119905

and 2119905

are their stateestimates

In this way Figure 3 shows the performance of thefiltering process (13) in the nonlinear differential game (85)that is to say it shows the comparison between the expectedor uncorrupted state variables 119909

1119905and 119909

2119905and the filtered

state estimates 1199091119905

and 1199092119905

Mathematical Problems in Engineering 11

0

minus01

z1t

z1t

01

02

0 05 1 15 2 25 3 35

(a)

0

minus01

z2t

z2t

01

0

minus02

05 1 15 2 25 3 35

(b)

Figure 2 Comparison between 119911119905and

119905for the 2-player nonlinear differential game (85)

1

0

minus1

minus2

x1t

x1t

0 5 10 15 20 25

(a)

1

0

minus1

x2t

x2t

0 5 10 15 20 25

(b)

Figure 3 Comparison between 119909119905and 119909

119905for the 2-player nonlinear differential game (85)

minus2

minus22

minus24

minus26

minus28

x1t

x1t

5 10 15 20 25

(a)

15

1

05

5 10 15 20 25

x2t

x2t

(b)

Figure 4 Comparison between 119909119905and 119909

119905for the 2-player nonlinear differential game (85)

Finally Figure 4 also exhibits the described filteringprocess but comparing the real state variables 119909

1119905and 119909

2119905

with filtered state estimates 1199091119905

and 1199092119905

7 Results Analysis and Discussion

Although the differential neural networks (14) and (27) canperform the identification of the differential equations (10)and (11) it is important to remember that their performancedepends on the number of neurons used and on the proposi-tion of all the constant values (or free parameters) that weredescribed in Assumptions 8 and 11 Therefore the values of1198651and 119866

1(at a fixed time) can change according to this fact

In other words due to the fact that the differential neuralnetworks are an approximation of a dynamic system (orgame) there always will be a residual error that will dependon this approximation

Thus in the particular case shown in Example 16 thedesign of (88) was made using only eight neurons four forthe layer of perceptrons without any relation to the playersand two for the layer of perceptrons of each player Similarlyfor the design of (94) (which is subject to the equalities (32)-(33)) six neurons were used

On the other hand it is important to mention that(14) and (27) will operate properly only if Assumptions 1ndash4 6 8 9 and 11 are satisfied that is to say there is noguarantee that these differential neural networks performgood identification and filtering processes if for examplethe class of nonlinear differential games (1)ndash(4) has a stochas-tic nature

Finally although this paper presents a new approach foridentification and filtering of nonlinear dynamic games itshould be emphasized that there exist other techniques thatmight solve the problem treated here for example the citedpublications in Section 1

12 Mathematical Problems in Engineering

8 Conclusions and Future Work

According to the results of this paper the differential neuralnetworks (14) and (27) solve the problem of identifying andfiltering the class of nonlinear differential games (1)ndash(4)

However there is no guarantee that (14) and (27) identifyand filter the states of (1)ndash(4) if Assumptions 1ndash4 6 8 9 and11 are not met

Thereby the proposed learning laws (38) and (59) obtainthe maximum values of identification error in average sense(40) and (61) that is to say the maximum value of filteringerror (77)-(78)

Nevertheless these errors depend on both the number ofneurons used in the differential neural networks (14) and (27)and the proposed values applied in their design conditions

On the other hand according to the simulation resultsof the illustrating example the effectiveness of (14) and (27)is shown and the applicability of Theorems 13 14 and 15 isverified

Finally speaking of future work in this research fieldone can analyze and discuss the use of (14) and (27) for theobtaining of equilibrium solutions in the class of nonlineardifferential games (1)ndash(4) that is to say the use of differentialneural networks for controlling nonlinear differential games

Appendix

The values of the ldquounknownrdquo nonlinear functions 119891(sdot) 1198921(sdot)and 1198922(sdot) and the ldquounknownrdquo constant matrix119867 used in (85)of Example 16 are shown as follows

119891 (119909119905) = [

1199092119905

sin (1199091119905)]

1198921

(119909119905)

= [minus 119909

1119905cos (119909

1119905) 0

0 1199092119905sin (119909

1119905) minus 119909

1119905cos (119909

2119905)]

1198922

(119909119905) = [

sin (1199091119905) 0

0 minus [1199091119905+ 119909

2119905]]

119867 = [05 0

0 05]

(A1)

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] I Alvarez A Poznyak and A Malo ldquoUrban traffic controlproblem a game theory approachrdquo in Proceedings of the 47thIEEE Conference on Decision and Control (CDC rsquo08) pp 2168ndash2172 Cancun Mexico December 2008

[2] T Basar and G J Olsder Dynamic Noncooperative GameTheory Academic Press Great Britain UK 2nd edition 1995

[3] J M C Clark and R B Vinter ldquoA differential dynamicgames approach to flow controlrdquo in Proceedings of the 42nd

IEEE Conference on Decision and Control pp 1228ndash1231 MauiHawaii USA December 2003

[4] S H Tamaddoni S Taheri and M Ahmadian ldquoOptimal VSCdesign based on nash strategy for differential 2-player gamesrdquoin Proceedings of the IEEE International Conference on SystemsMan and Cybernetics (SMC rsquo09) pp 2415ndash2420 San AntonioTex USA October 2009

[5] F Amato M Mattei and A Pironti ldquoRobust strategies forNash linear quadratic games under uncertain dynamicsrdquo inProceedings of the 37th IEEE Conference on Decision and Control(CDC rsquo98) pp 1869ndash1870 Tampa Fla USA December 1998

[6] X Nian S Yang and N Chen ldquoGuaranteed cost strategiesof uncertain LQ closed-loop differential games with multipleplayersrdquo in Proceedings of the American Control Conference pp1724ndash1729 Minneapolis Minn USA June 2006

[7] M Jimenez-Lizarraga and A Poznyak ldquoEquilibrium in LQdifferential games withmultiple scenariosrdquo in Proceedings of the46th IEEE Conference on Decision and Control (CDC rsquo07) pp4081ndash4086 New Orleans La USA December 2007

[8] M Jimenez-Lizarraga A Poznyak andM A Alcorta ldquoLeader-follower strategies for a multi-plant differential gamerdquo inProceedings of the American Control Conference (ACC rsquo08) pp3839ndash3844 Seattle Wash USA June 2008

[9] A S Poznyak and C J Gallegos ldquoMultimodel prey-predatorLQ differential gamesrdquo in Proceedings of the American ControlConference pp 5369ndash5374 Denver Colo USA June 2003

[10] Y Li and L Guo ldquoConvergence of adaptive linear stochasticdifferential games nonzero-sum caserdquo in Proceedings of the 10thWorld Congress on Intelligent Control and Automation (WCICArsquo12) pp 3543ndash3548 Beijing China July 2012

[11] D Vrabie and F Lewis ldquoAdaptive Dynamic Programmingalgorithm for finding online the equilibrium solution of thetwo-player zero-sum differential gamerdquo in Proceedings of theInternational Joint Conference on Neural Networks (IJCNN rsquo10)pp 1ndash8 Barcelona Spain July 2010

[12] B-S Chen C-S Tseng and H-J Uang ldquoFuzzy differentialgames for nonlinear stochastic systems suboptimal approachrdquoIEEE Transactions on Fuzzy Systems vol 10 no 2 pp 222ndash2332002

[13] B Widrow J R Glover Jr J M McCool et al ldquoAdaptive noisecancelling principles and applicationsrdquo Proceedings of the IEEEvol 63 no 12 pp 1692ndash1716 1975

[14] C E de Souza U Shaked and M Fu ldquoRobust119867infinfiltering for

continuous time varying uncertain systems with deterministicinput signalsrdquo IEEE Transactions on Signal Processing vol 43no 3 pp 709ndash719 1995

[15] A Osorio-Cordero A S Poznyak and M Taksar ldquoRobustdeterministic filtering for linear uncertain time-varying sys-temsrdquo in Proceedings of the American Control Conference pp2526ndash2530 Albuquerque NM USA June 1997

[16] M Hou and R J Patton ldquoOptimal filtering for systems withunknown inputsrdquo IEEE Transactions on Automatic Control vol43 no 3 pp 445ndash449 1998

[17] J J Hopfield ldquoNeurons with graded response have collectivecomputational properties like those of two-state neuronsrdquoProceedings of the National Academy of Sciences of the UnitedStates of America vol 81 no 10 pp 3088ndash3092 1984

[18] E B Kosmatopoulos M M Polycarpou M A Christodoulouand P A Ioannou ldquoHigh-order neural network structuresfor identification of dynamical systemsrdquo IEEE Transactions onNeural Networks vol 6 no 2 pp 422ndash431 1995

Mathematical Problems in Engineering 13

[19] F Abdollahi H A Talebi and R V Patel ldquoState estimationfor flexible-joint manipulators using stable neural networksrdquo inProceedings of the IEEE International Symposium on Computa-tional Intelligence in Robotics and Automation pp 25ndash29 KobeJapan 2003

[20] A Alessandri C Cervellera and M Sanguineti ldquoDesign ofasymptotic estimators an approach based on neural networksand nonlinear programmingrdquo IEEE Transactions on NeuralNetworks vol 18 no 1 pp 86ndash96 2007

[21] Y H Kim F L Lewis and C T Abdallah ldquoNonlinear observerdesign using dynamic recurrent neural networksrdquo in Proceed-ings of the 35th IEEE Conference on Decision and Control pp949ndash954 Kobe Japan December 1996

[22] F L Lewis K Liu and A Yesildirek ldquoNeural net robotcontroller with guaranteed tracking performancerdquo IEEE Trans-actions on Neural Networks vol 6 no 3 pp 703ndash715 1995

[23] G A Rovithakis and M A Christodoulou ldquoAdaptive controlof unknown plants using dynamical neural networksrdquo IEEETransactions on Systems Man and Cybernetics vol 24 no 3 pp400ndash412 1994

[24] A Poznyak E N Sanchez and W Yu Differential NeuralNetworks for Robust Nonlinear Control Identification StateEstimation and Trajectory TrackingWorld Scientific Singapore2001

[25] E Garcıa and D A Murano ldquoState estimation for a class ofnonlinear differential games using differential neural networksrdquoin Proceedings of the American Control Conference (ACC rsquo11) pp2486ndash2491 San Francisco Calif USA July 2011

[26] D A Murano Nash equilibrium for uncertain nonlinear dif-ferential games using differential neural networks deterministicand stochastic cases [PhD thesis] CINVESTAV-IPN DistritoFederal Mexico 2004

[27] D Jiang and J Wang ldquoA recurrent neural network for onlinedesign of robust optimal filtersrdquo IEEE Transactions on Circuitsand Systems I FundamentalTheory and Applications vol 47 no6 pp 921ndash926 2000

[28] A G Parlos S K Menon and A F Atiya ldquoAn algorithmicapproach to adaptive state filtering using recurrent neuralnetworksrdquo IEEE Transactions on Neural Networks vol 12 no6 pp 1411ndash1432 2001

[29] S Haykin Neural Networks A Comprehensive FoundationPearson Chennai India 1999

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 8: Research Article Differential Neural Networks for ...downloads.hindawi.com/journals/mpe/2014/306761.pdf · second di erential neural network will identify the e ect of these additive

8 Mathematical Problems in Engineering

Next by analyzing the first three terms of the right-hand sideof (66) with the inequality (46) the following is obtained

(i) Using (46) and (28) in the first term of the right-handside of (66) then

2[120598119905]119879

11987511988230[ minus 119860120598

119905]

le [120598119905]119879

[119875 [11988230] [Λ

]minus1

[11988230]119879

119875] [120598119905]

+ [120598119905]119879

Λ120572[120598

119905]

(67)

(ii) Using (46) and (28) in the second term of the right-hand side of (66) then

2[120598119905]119879

11987511988240120574120585

119905

le [120598119905]119879

[119875 [11988240] [Λ

120574]minus1

[11988240]119879

119875] [120598119905]

+ [120585]2

[120598119905]119879

Λ120574[120598

119905]

(68)

(iii) Using (46) and (31) in the third term of the right-handside of (66) then

minus2[120598119905]119879

119875119866119905le [120598

119905]119879

[119875[Λ119866]minus1

119875] [120598119905]

+ 1198661+ 119866

2[119911

119905]119879

Λ119866[119911

119905]

(69)

So by substituting (67)ndash(69) into the right-hand side of(66) and by adding and subtracting the term [[120598

119905]119879

119876[120598119905]]

inequality (65) can be expressed as

119905le [120598

119905]119879

[119875119860 + [119860 ]119879

119875 + 119875119877119875 + 119876] [120598119905]

minus [120598119905]119879

119876 [120598119905] + 119866

1+ 119866

2

1003817100381710038171003817Λ119866

1003817100381710038171003817

10038171003817100381710038171199111199051003817100381710038171003817

2

minus 1205821

10038171003817100381710038171199111199051003817100381710038171003817

2

+ tr (Ψ1198823) + tr (Ψ

1198824)

(70)

where the algebraic Riccati equation in the first term of theright-hand side of (70) is described in (23) and in (34)ndash(36)and

Ψ1198823

= [3119905]119879

[1198701198823]minus1

[3119905] + 2120572 () [120598

119905]119879

1198753119905 (71)

Ψ1198824

= [4119905]119879

[1198701198824]minus1

[4119905] + 2120574 () 120585

119905[120598

119905]119879

1198754119905 (72)

Thereby by equating (71) and (72) to zero (Ψ1198823

= Ψ1198824

= 0)and by respectively solving for

3119905and

4119905 the learning law

given by (59) is obtained Now by choosing

1003817100381710038171003817Λ119866

1003817100381710038171003817 le1205821

1198662

(73)

and by solving the algebraic Riccati equation (23) one mayget

119905le minus [120598

119905]119879

119876 [120598119905] + 119866

1 (74)

Thus by integrating both sides of (74) on the time interval[0 119905] the following is obtained

int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591

le 1198720minus119872

119905+ [119866

1] 119905 le 119872

0+ [119866

1] 119905

(75)

and by dividing (75) by 119905 it is easy to verify that

1

119905int

119905

0

([120598120591]119879

119876 [120598120591]) 119889120591 le

1198720

119905+ 119866

1 (76)

Finally by calculating the upper limit as 119905 rarr infin themaximum value of identification error in average sense is theone described in (61)

Theorem 15 Let the class of continuous-time nonlineardynamic games (1)ndash(4) be such that Assumptions 1 to 4 arefulfilled Also let the differential neural networks (14) and (27)be such that the Assumptions 6 and 8 and Assumptions 9 and11 are satisfied If the synaptic weights of (14) and (27) arerespectively adjusted with the learning laws (38) and (59) thenit is possible to obtain the following maximum value of filteringerror in average sense

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1

if 1198651gt 119866

1

(77)

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) = 0

if 1198651le 119866

1

(78)

where 119890119905is given by (37)

Proof Consider the following Lyapunov (energetic) candi-date function

Φ119905= [119890

119905]119879

119875 [119890119905] (79)

where 119875 is the solution of the algebraic Riccati equation (23)with 119860 119877 and 119876 defined by (24)ndash(26) or (34)ndash(36) Thenby calculating the derivative of (79) with respect to 119905 and byconsidering the equations (37) (13) and (12) it can be verifiedthat

Φ119905= 2[119890

119905]119879

119875 [ 120576119905minus 120598

119905] (80)

and taking into account (42) (63) and the proofs ofTheorems 13 and 14 one may get

Φ119905le minus [119890

119905]119879

2119876 [119890119905] + 119865

1minus [minus [119890

119905]119879

119876 [119890119905] + 119866

1]

le minus [119890119905]119879

[2119876 minus 119876] [119890119905] + 119865

1minus 119866

1

le minus [119890119905]119879

119876 [119890119905] + 119865

1minus 119866

1

(81)

Mathematical Problems in Engineering 9

Thus by integrating both sides of (81) on the time interval[0 119905] the following is obtained

int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le Φ

0minus Φ

119905+ [119865

1minus 119866

1] 119905

le Φ0+ [119865

1minus 119866

1] 119905

(82)

and by dividing (82) by 119905 it is easy to confirm that

1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591 le

Φ0

119905+ 119865

1minus 119866

1 (83)

By calculating the upper limit as 119905 rarr infin equation (83) canbe expressed as

lim sup119905rarrinfin

(1

119905int

119905

0

([119890120591]119879

119876 [119890120591]) 119889120591) le 119865

1minus 119866

1(84)

and finally it is clear that

(i) if 1198651gt 119866

1 the maximum value of filtering error in

average sense is the one described in (77)(ii) if 119865

1le 119866

1 the maximum value of filtering error in

average sense is the one described in (78) given thatthe left-hand side of the inequality (84) cannot benegative

Hence the theorem is proved

6 Illustrating Example

Example 16 Consider a 2-player nonlinear differential gamegiven by

119905= 119891 (119909

119905) +

2

sum

119894=1

119892119894

(119909119905) 119906

119894

119905+ 119867120585

119905(85)

subject to the change of variables (10) and (11) (119894 = 2) wherethe control actions of each player are

1199061

119905= [

05 sawtooth (05119905)H (119905)

]

1199062

119905= [

H (119905)

05 sawtooth (05119905)] (86)

The undesired deterministic perturbations are

120585119905= [

sin (10119905)sin (10119905)] (87)

and H(119905) denotes the Heaviside step function or unit stepsignal Then under Assumptions 1ndash4 6 8 9 and 11 it is pos-sible to obtain a filtered feedback (perfect state) informationpattern 120578119894

119905= 119909

119905

Remark 17 As told before the functions 119891(sdot) and 119892119894

(sdot) aswell as the matrix 119867 are unknown but in order to makean example of (85) and its simulation the values of these

ldquounknownrdquo parameters will be shown in the Appendix of thispaper

Thereby consider now a first differential neural networkgiven by

119910119905= 119882

1119905120590 (119881

1119905119910119905) +

2

sum

119894=1

119882119894

2119905120593119894

(119881119894

2119905119910119905) 119906

119894

119905 (88)

where

11988210

= [05 01 0 2

02 05 06 0]

11988110

= [1 03 05 02

05 13 1 06]

119879

1198821

20= [

05 01

0 02] 119882

2

20= [

13 08

08 13]

1198811

20= [

1 01

01 2] 119881

2

20= [

05 07

09 01]

(89)

and the activation functions are

120590119901(119908) =

45

1 + 119890minus05119908minus 3

1003816100381610038161003816100381610038161003816119908=1198811119905119910119905

119901 = 1 2 3 4

1205931

119901119902(119908

1

) =45

1 + 119890minus051199081minus 3

100381610038161003816100381610038161003816100381610038161199081=11988112119905119910119905

119901 = 119902 = 1 2

1205932

119901119902(119908

2

) =45

1 + 119890minus051199082minus 3

100381610038161003816100381610038161003816100381610038161199082=11988122119905119910119905

119901 = 119902 = 1 2

(90)

By proposing the values described in Assumption 8 as

Λ120590= Λ

1

120593= Λ

2

120593= [

13 0

0 13]

Λ1

]120593 = Λ2

]120593 = [08 0

0 08]

Λ1

120593= Λ

2

120593= Λ

1

1205931015840 = Λ

2

1205931015840 = [

13 0

0 13]

minus1

Λ119865= [

01 0

0 01]

minus1

Λ120590= Λ

1205901015840 =

[[[

[

13 0 0 0

0 13 0 0

0 0 13 0

0 0 0 13

]]]

]

minus1

Λ ]120590 =[[[

[

08 0 0 0

0 08 0 0

0 0 08 0

0 0 0 08

]]]

]

119860 =

[[[

[

minus13963 minus22632

00473 minus61606

0426 minus74451

minus61533 08738

]]]

]

119876 = [10 0

0 10]

119897120590= 119897

1

120593= 119897

2

120593= 13 119906

1

= 1199062

= 5

(91)

10 Mathematical Problems in Engineering

1

0

minus1

y1t

y1t

minus2

0 5 10 15 20 25

(a)

15

1

05

0

minus05

minus1

minus15

0 5 10 15 20 25

y2t

y2t

(b)

Figure 1 Comparison between 119910119905and 119910

119905for the 2-player nonlinear differential game (85)

and by choosing 1198701198821

= 100 and 1198701198811= 119870

1

1198822

= 1198702

1198822

= 1198701

1198812

=

1198702

1198812

= 1 the solution of the algebraic Riccati equation (23)results in

119875 = [20524 minus06374

minus06374 31842] (92)

Then by applying the learning law (38) described inTheorem 13 the value of the identification error in averagesense (40) on a time period of 25 seconds is

lim sup119905rarr25

(1

119905int

119905

0

([120576120591]119879

2119876 [120576120591]) 119889120591) = 003145 (93)

Remark 18 Even though 003145 is the value of 1198651on the

time interval [0 25] it is not the global minimum value that1198651can take since 119905does not tend to infinity So in order to find

this global minimum value it would be required to simulatean arbitrarily large time

On the other hand let a second differential neural net-work given by

119911119905= 119882

3119905120572 (

119905) + 119882

4119905120574 (

119905) 120585

119905(94)

be such that the equalities (32) and (33) are fulfilled that is

11988230

= [05 01 0 2

02 05 06 0] 119882

40= [

1 0

0 1]

Λ120572= [

173 0

0 173] Λ

120574= [

20 0

0 20]

Λ=

[[[

[

26 0 0 0

0 26 0 0

0 0 26 0

0 0 0 26

]]]

]

minus1

Λ120574= [

6734 546

546 6162]

minus1

Λ119866= [

01 0

0 01]

minus1

120585 = 5

(95)

and the activation functions are

120572119901(119908) =

45

1 + 119890minus05119908minus 3

1003816100381610038161003816100381610038161003816119908=119905

119901 = 1 2 3 4

120574119901119902(119908) =

45

1 + 119890minus051199081minus 3

10038161003816100381610038161003816100381610038161003816119908=119905

119901 = 119902 = 1 2

(96)

By choosing $K_{W_3} = 100000$ and $K_{W_4} = 1$, the solution of the algebraic Riccati equation (23) results in the matrix (92), and, by applying the learning law (59) described in Theorem 14, the value of the identification error in average sense (61) on a time period of 25 seconds is
\[
\limsup_{t \to 25} \left( \frac{1}{t} \int_0^t [\epsilon_\tau]^T Q\, [\epsilon_\tau]\, d\tau \right) = 0.00006645. \tag{97}
\]

Remark 19. As in Remark 18, in order to find the global minimum value of $G_1$, it would be required to simulate an arbitrarily large time.

Finally, taking into account the identification errors (93) and (97), the value of the filtering error in average sense (77)-(78) on a time period of 25 seconds is
\[
\limsup_{t \to 25} \left( \frac{1}{t} \int_0^t [e_\tau]^T Q\, [e_\tau]\, d\tau \right) = 0.03139. \tag{98}
\]
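It is worth noting, as a consistency check, that the difference between the two identification errors reported in (93) and (97),
\[
0.03145 - 0.00006645 \approx 0.03138,
\]
agrees closely with the filtering error 0.03139 in (98), which is what the upper bound of the form $F_1 - G_1$ obtained in the filtering analysis (cf. (77)-(78)) would suggest.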

The simulation of this example was made using the MATLAB and Simulink platforms, and its results are shown in Figures 1–4.

As is seen in Figure 1, the differential neural network (88) can perform the identification of the continuous-time nonlinear dynamic game (85), where $y_{1t}$ and $y_{2t}$ (or, similarly, $x_{1t}$ and $x_{2t}$) denote the state variables with undesired perturbations and $\hat y_{1t}$ and $\hat y_{2t}$ indicate their state estimates.

On the other hand, according to Figure 2, the differential neural network (94) identifies the dynamics of the additive deterministic noises, or, in other words, the dynamics of $\dot z_t = H \xi_t$. Thus, $z_{1t}$ and $z_{2t}$ represent the state variables of the above differential equation, and $\hat z_{1t}$ and $\hat z_{2t}$ are their state estimates.
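To make the role of the second network concrete, the signal it identifies can be generated directly: with $H$ as in the Appendix and $\xi_t$ from (87), the perturbation state obeys $\dot z_t = H \xi_t$. The minimal sketch below integrates this equation under an assumed zero initial condition; it illustrates only the signal being identified, not the neural identifier or its learning law.

```python
import numpy as np
from scipy.integrate import solve_ivp

H = np.array([[0.5, 0.0],
              [0.0, 0.5]])                 # "unknown" matrix H (see the Appendix)

def xi(t):
    # Deterministic perturbation (87): both components equal sin(10 t).
    return np.array([np.sin(10.0 * t), np.sin(10.0 * t)])

def z_dot(t, z):
    # Dynamics of the perturbation state identified by the second network.
    return H @ xi(t)

# Zero initial condition assumed; the time span matches the horizon of Figure 2.
sol = solve_ivp(z_dot, (0.0, 3.5), y0=np.zeros(2), max_step=0.01)
print(sol.y[:, -1])                        # z_{1t}, z_{2t} at t = 3.5 s
```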

In this way, Figure 3 shows the performance of the filtering process (13) in the nonlinear differential game (85); that is to say, it shows the comparison between the expected or uncorrupted state variables $x_{1t}$ and $x_{2t}$ and the filtered state estimates $\hat x_{1t}$ and $\hat x_{2t}$.


Figure 2: Comparison between $z_t$ and $\hat z_t$ for the 2-player nonlinear differential game (85): (a) $z_{1t}$ and $\hat z_{1t}$; (b) $z_{2t}$ and $\hat z_{2t}$, over $t \in [0, 3.5]$ s.

Figure 3: Comparison between $x_t$ and $\hat x_t$ for the 2-player nonlinear differential game (85): (a) $x_{1t}$ and $\hat x_{1t}$; (b) $x_{2t}$ and $\hat x_{2t}$, over $t \in [0, 25]$ s.

Figure 4: Comparison between $x_t$ and $\hat x_t$ for the 2-player nonlinear differential game (85): (a) $x_{1t}$ and $\hat x_{1t}$; (b) $x_{2t}$ and $\hat x_{2t}$.

Finally, Figure 4 also exhibits the described filtering process but comparing the real state variables $x_{1t}$ and $x_{2t}$ with the filtered state estimates $\hat x_{1t}$ and $\hat x_{2t}$.

7. Results Analysis and Discussion

Although the differential neural networks (14) and (27) can perform the identification of the differential equations (10) and (11), it is important to remember that their performance depends on the number of neurons used and on the choice of all the constant values (or free parameters) that were described in Assumptions 8 and 11. Therefore, the values of $F_1$ and $G_1$ (at a fixed time) can change according to these choices. In other words, because the differential neural networks are an approximation of a dynamic system (or game), there will always be a residual error that depends on this approximation.

Thus, in the particular case shown in Example 16, the design of (88) was made using only eight neurons: four for the layer of perceptrons without any relation to the players and two for the layer of perceptrons of each player. Similarly, for the design of (94) (which is subject to the equalities (32)-(33)), six neurons were used.

On the other hand, it is important to mention that (14) and (27) will operate properly only if Assumptions 1–4, 6, 8, 9, and 11 are satisfied; that is to say, there is no guarantee that these differential neural networks perform good identification and filtering processes if, for example, the class of nonlinear differential games (1)–(4) has a stochastic nature.

Finally, although this paper presents a new approach for identification and filtering of nonlinear dynamic games, it should be emphasized that there exist other techniques that might solve the problem treated here, for example, the publications cited in Section 1.


8. Conclusions and Future Work

According to the results of this paper, the differential neural networks (14) and (27) solve the problem of identifying and filtering the class of nonlinear differential games (1)–(4).

However, there is no guarantee that (14) and (27) identify and filter the states of (1)–(4) if Assumptions 1–4, 6, 8, 9, and 11 are not met.

Thereby, the proposed learning laws (38) and (59) yield the maximum values of the identification errors in average sense (40) and (61), that is to say, the maximum value of the filtering error (77)-(78).

Nevertheless, these errors depend on both the number of neurons used in the differential neural networks (14) and (27) and the values proposed in their design conditions.

On the other hand, the simulation results of the illustrating example show the effectiveness of (14) and (27) and verify the applicability of Theorems 13, 14, and 15.

Finally, speaking of future work in this research field, one can analyze and discuss the use of (14) and (27) for obtaining equilibrium solutions in the class of nonlinear differential games (1)–(4), that is to say, the use of differential neural networks for controlling nonlinear differential games.

Appendix

The values of the "unknown" nonlinear functions $f(\cdot)$, $g_1(\cdot)$, and $g_2(\cdot)$ and the "unknown" constant matrix $H$ used in (85) of Example 16 are as follows:
\[
f(x_t) = \begin{bmatrix} x_{2t} \\ \sin(x_{1t}) \end{bmatrix}, \qquad
g_1(x_t) = \begin{bmatrix} -x_{1t}\cos(x_{1t}) & 0 \\ 0 & x_{2t}\sin(x_{1t}) - x_{1t}\cos(x_{2t}) \end{bmatrix},
\]
\[
g_2(x_t) = \begin{bmatrix} \sin(x_{1t}) & 0 \\ 0 & -\left[x_{1t} + x_{2t}\right] \end{bmatrix}, \qquad
H = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}. \tag{A.1}
\]
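With these values, the right-hand side of (85) can be assembled directly for simulation. The sketch below is a minimal reconstruction under stated assumptions: the controls follow (86) with MATLAB's sawtooth convention (period $2\pi$, range $[-1, 1]$), the perturbation follows (87), and the initial state is chosen arbitrarily; it is not the authors' Simulink model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import sawtooth

def f(x):
    return np.array([x[1], np.sin(x[0])])

def g1(x):
    return np.array([[-x[0] * np.cos(x[0]), 0.0],
                     [0.0, x[1] * np.sin(x[0]) - x[0] * np.cos(x[1])]])

def g2(x):
    return np.array([[np.sin(x[0]), 0.0],
                     [0.0, -(x[0] + x[1])]])

H = np.array([[0.5, 0.0], [0.0, 0.5]])

def u1(t):  # control of player 1, cf. (86)
    return np.array([0.5 * sawtooth(0.5 * t), np.heaviside(t, 1.0)])

def u2(t):  # control of player 2, cf. (86)
    return np.array([np.heaviside(t, 1.0), 0.5 * sawtooth(0.5 * t)])

def xi(t):  # deterministic perturbation, cf. (87)
    return np.array([np.sin(10.0 * t), np.sin(10.0 * t)])

def x_dot(t, x):  # right-hand side of (85)
    return f(x) + g1(x) @ u1(t) + g2(x) @ u2(t) + H @ xi(t)

# Arbitrary initial state; 25 s horizon as in Figures 1, 3, and 4.
sol = solve_ivp(x_dot, (0.0, 25.0), y0=np.array([0.5, -0.5]), max_step=0.01)
print(sol.y[:, -1])
```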

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] I. Alvarez, A. Poznyak, and A. Malo, "Urban traffic control problem: a game theory approach," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 2168–2172, Cancun, Mexico, December 2008.

[2] T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, Academic Press, Great Britain, UK, 2nd edition, 1995.

[3] J. M. C. Clark and R. B. Vinter, "A differential dynamic games approach to flow control," in Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1228–1231, Maui, Hawaii, USA, December 2003.

[4] S. H. Tamaddoni, S. Taheri, and M. Ahmadian, "Optimal VSC design based on Nash strategy for differential 2-player games," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '09), pp. 2415–2420, San Antonio, Tex, USA, October 2009.

[5] F. Amato, M. Mattei, and A. Pironti, "Robust strategies for Nash linear quadratic games under uncertain dynamics," in Proceedings of the 37th IEEE Conference on Decision and Control (CDC '98), pp. 1869–1870, Tampa, Fla, USA, December 1998.

[6] X. Nian, S. Yang, and N. Chen, "Guaranteed cost strategies of uncertain LQ closed-loop differential games with multiple players," in Proceedings of the American Control Conference, pp. 1724–1729, Minneapolis, Minn, USA, June 2006.

[7] M. Jimenez-Lizarraga and A. Poznyak, "Equilibrium in LQ differential games with multiple scenarios," in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 4081–4086, New Orleans, La, USA, December 2007.

[8] M. Jimenez-Lizarraga, A. Poznyak, and M. A. Alcorta, "Leader-follower strategies for a multi-plant differential game," in Proceedings of the American Control Conference (ACC '08), pp. 3839–3844, Seattle, Wash, USA, June 2008.

[9] A. S. Poznyak and C. J. Gallegos, "Multimodel prey-predator LQ differential games," in Proceedings of the American Control Conference, pp. 5369–5374, Denver, Colo, USA, June 2003.

[10] Y. Li and L. Guo, "Convergence of adaptive linear stochastic differential games: nonzero-sum case," in Proceedings of the 10th World Congress on Intelligent Control and Automation (WCICA '12), pp. 3543–3548, Beijing, China, July 2012.

[11] D. Vrabie and F. Lewis, "Adaptive dynamic programming algorithm for finding online the equilibrium solution of the two-player zero-sum differential game," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '10), pp. 1–8, Barcelona, Spain, July 2010.

[12] B.-S. Chen, C.-S. Tseng, and H.-J. Uang, "Fuzzy differential games for nonlinear stochastic systems: suboptimal approach," IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 222–233, 2002.

[13] B. Widrow, J. R. Glover Jr., J. M. McCool et al., "Adaptive noise cancelling: principles and applications," Proceedings of the IEEE, vol. 63, no. 12, pp. 1692–1716, 1975.

[14] C. E. de Souza, U. Shaked, and M. Fu, "Robust $H_\infty$ filtering for continuous time varying uncertain systems with deterministic input signals," IEEE Transactions on Signal Processing, vol. 43, no. 3, pp. 709–719, 1995.

[15] A. Osorio-Cordero, A. S. Poznyak, and M. Taksar, "Robust deterministic filtering for linear uncertain time-varying systems," in Proceedings of the American Control Conference, pp. 2526–2530, Albuquerque, NM, USA, June 1997.

[16] M. Hou and R. J. Patton, "Optimal filtering for systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 43, no. 3, pp. 445–449, 1998.

[17] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.

[18] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou, and P. A. Ioannou, "High-order neural network structures for identification of dynamical systems," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 422–431, 1995.

[19] F. Abdollahi, H. A. Talebi, and R. V. Patel, "State estimation for flexible-joint manipulators using stable neural networks," in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 25–29, Kobe, Japan, 2003.

[20] A. Alessandri, C. Cervellera, and M. Sanguineti, "Design of asymptotic estimators: an approach based on neural networks and nonlinear programming," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 86–96, 2007.

[21] Y. H. Kim, F. L. Lewis, and C. T. Abdallah, "Nonlinear observer design using dynamic recurrent neural networks," in Proceedings of the 35th IEEE Conference on Decision and Control, pp. 949–954, Kobe, Japan, December 1996.

[22] F. L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 703–715, 1995.

[23] G. A. Rovithakis and M. A. Christodoulou, "Adaptive control of unknown plants using dynamical neural networks," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 3, pp. 400–412, 1994.

[24] A. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking, World Scientific, Singapore, 2001.

[25] E. García and D. A. Murano, "State estimation for a class of nonlinear differential games using differential neural networks," in Proceedings of the American Control Conference (ACC '11), pp. 2486–2491, San Francisco, Calif, USA, July 2011.

[26] D. A. Murano, Nash equilibrium for uncertain nonlinear differential games using differential neural networks: deterministic and stochastic cases [Ph.D. thesis], CINVESTAV-IPN, Distrito Federal, Mexico, 2004.

[27] D. Jiang and J. Wang, "A recurrent neural network for online design of robust optimal filters," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 6, pp. 921–926, 2000.

[28] A. G. Parlos, S. K. Menon, and A. F. Atiya, "An algorithmic approach to adaptive state filtering using recurrent neural networks," IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1411–1432, 2001.

[29] S. Haykin, Neural Networks: A Comprehensive Foundation, Pearson, Chennai, India, 1999.




[19] F. Abdollahi, H. A. Talebi, and R. V. Patel, "State estimation for flexible-joint manipulators using stable neural networks," in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 25–29, Kobe, Japan, 2003.
[20] A. Alessandri, C. Cervellera, and M. Sanguineti, "Design of asymptotic estimators: an approach based on neural networks and nonlinear programming," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 86–96, 2007.
[21] Y. H. Kim, F. L. Lewis, and C. T. Abdallah, "Nonlinear observer design using dynamic recurrent neural networks," in Proceedings of the 35th IEEE Conference on Decision and Control, pp. 949–954, Kobe, Japan, December 1996.
[22] F. L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance," IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 703–715, 1995.
[23] G. A. Rovithakis and M. A. Christodoulou, "Adaptive control of unknown plants using dynamical neural networks," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 3, pp. 400–412, 1994.
[24] A. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control: Identification, State Estimation and Trajectory Tracking, World Scientific, Singapore, 2001.
[25] E. García and D. A. Murano, "State estimation for a class of nonlinear differential games using differential neural networks," in Proceedings of the American Control Conference (ACC '11), pp. 2486–2491, San Francisco, Calif, USA, July 2011.
[26] D. A. Murano, Nash equilibrium for uncertain nonlinear differential games using differential neural networks: deterministic and stochastic cases [Ph.D. thesis], CINVESTAV-IPN, Distrito Federal, Mexico, 2004.
[27] D. Jiang and J. Wang, "A recurrent neural network for online design of robust optimal filters," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 6, pp. 921–926, 2000.
[28] A. G. Parlos, S. K. Menon, and A. F. Atiya, "An algorithmic approach to adaptive state filtering using recurrent neural networks," IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1411–1432, 2001.
[29] S. Haykin, Neural Networks: A Comprehensive Foundation, Pearson, Chennai, India, 1999.
