Research Article

Stability Analysis of a Class of Neural Networks with State-Dependent State Delay

Yue Chen and Jin-E Zhang

Hubei Normal University, Huangshi 435002, China

Correspondence should be addressed to Jin-E Zhang; [email protected]

Received 13 February 2020; Accepted 2 April 2020; Published 1 May 2020

Academic Editor: Jianquan Lu

Hindawi, Discrete Dynamics in Nature and Society, Volume 2020, Article ID 4820351, 9 pages. https://doi.org/10.1155/2020/4820351

Copyright © 2020 Yue Chen and Jin-E Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Differential equations with state-dependent delay are important because they can describe some problems in the real world more accurately. The complexity of state-dependent delay, however, also poses challenges for the analysis. What distinguishes state-dependent delay from time-dependent delay is that the value of the delay varies with the state: it is impossible to know exactly in advance how far back the historical state information is needed, so problems with state-dependent delay are more complicated than those with time-dependent delay. The main contribution of this paper is to solve the stability problem for neural networks equipped with state-dependent state delay. We use a purely analytical method to deduce sufficient conditions for local exponential stability of the zero solution. Finally, a few numerical examples are presented to demonstrate the applicability of our results.

1. Introduction

A neural network is an information processing system that simulates the structure and function of the human brain. As far as we know, the functions and structures of an individual neuron are simple, but the dynamic behavior of a neural network is very rich and intricate [1, 2]. In a neural network, many simple processing units are connected to each other in a certain way. A neural network system is highly complex: it not only shares the common characteristics of general nonlinear systems but also has its own unique features [3–6]. For example, it has been proved that neural networks can approximate nonlinear mappings: any continuous nonlinear function can be approximated by a multilayer neural network with arbitrary precision. This gives neural networks good application prospects in the challenging field of nonlinear control [3]. Also, the parallel, distributed, integrated processing of information makes neural networks well suited to large-scale real-time computing problems in system control [4]. At the same time, some neural network models automatically search for the extremum of an energy function, which is very useful in adaptive control design [5]. In addition, neural networks possess high fault tolerance, generalization ability, and adaptive learning ability [6].

Over the past seventy years, neural networks have been applied in combinatorial optimization, pattern recognition, image processing, robot control, signal processing, and other scientific fields with extensive success [7–10]. In applications of neural networks, on the one hand, information transmission between neurons takes time; on the other hand, because of hardware limitations in practice (such as finite switching speed), the time delay phenomenon is inevitable [11]. The existence of time delay may induce instability, oscillation, and poor performance, but the unfavorable effects of some types of time delay can be overcome by suitable control frameworks, for example, frameworks based on the Halanay inequality [1, 2, 11, 12]. At present, many different delays have been considered in neural networks [2, 12–18], for instance, time-varying delay [12, 14], distributed delay [13, 15, 16], discrete constant delay [2, 13], proportional delay [17], and unbounded delay [18].



State-dependent delay (SDD) is all around us [19–22]. With limited natural resources, Antarctic whales and seals tend to take longer to mature when their populations are large [19]. In the car-following problem, it is inevitable to encounter delays that change with the state, including physiological time delay, mechanical time delay, and motion time delay [20, 21]. In addition, in the blood circulation system, the concentration of nutrients regulates the mitotic cycle of hematopoietic stem cells; thus, the mitotic cycle of stem cells is affected by the concentration of cells in the region [22]. In these cases, in order to describe the change and evolution of such phenomena more accurately and to make the research results more realistic, we must adopt differential equations with SDD.

Neurodynamics helps to identify the highly complex and precise multilevel nonlinear brain system. The processing of neural information involves the coupling and cooperation of multiple levels and regions. Thus, the neural activity of structural neural networks deserves study from the perspective of models and evolution; to some extent, the cognitive function of specific functional neural networks is realized by evolutive neurodynamics [13]. Work on evolutive neurodynamics will help us understand the information processing mechanisms and neural energy coding rules of the nervous system and also provides a basis for research into the underlying mechanisms of cognitive function.

In this paper, the local evolution characteristics of neural networks with state-dependent state delay (SDSD) are discussed. The topic introduced here may interest researchers engaged in the theory and application of new neural network models with kinematic and dynamic features. Prior to this work, there have been many studies on the stability of nonlinear systems with SDSD. Hartung [23] described a type of nonlinear functional differential equation with SDSD and analyzed stability conditions for periodic solutions based on the linearization method. Exponential stability conditions for nonlinear systems with SDSD were reported in [24] via comparison with time-dependent delay systems. Fiter and Fridman [25] developed the Lyapunov–Krasovskii functional method to discuss asymptotic stability for some particular linear systems with SDSD. Li and Wu [26] considered a class of nonlinear differential systems with SDD impulses; by impulse control theory, uniformly stable, uniformly asymptotically stable, and exponentially stable results were presented. To derive stability criteria for nonlinear systems with SDSD, Li and Yang [27] initially created a purely analytical framework. To the best of the authors' knowledge, although neural networks are widely used and the theoretical results are abundant, research on the stability of neural networks with SDSD is still blank. Moreover, since time delay is an inevitable factor in neural networks, it is of great significance to study neural networks with SDSD. The primary contributions of this paper are summarized as follows: (1) A general neural network model with SDSD is established. Such a model may interest professionals and academics in processing operations who wish to exploit the ability of control systems to capture rich historical information so that cost-effective yet robust events can be portrayed. This research also contributes to revealing the influence of SDSD characteristics on neurodynamics. (2) Sufficient conditions for local exponential stability of neural networks with SDSD are obtained. The purely analytical method employed here demonstrates that computational neurodynamics can be analyzed with few additional restrictions, and the method itself is intended as a universal analysis framework; it may be extended to a more general class of nonlinear systems with SDSD.

The rest of this paper is arranged as follows. In Section 2, we present the specific neural network model. The main results are described in Section 3. In Section 4, two numerical examples and simulation results are given to verify the validity of our results. Finally, Section 5 summarizes the paper and outlines future work.

    2. Preliminaries and Model Description

2.1. Notations. Let $\mathbb{R}$ and $\mathbb{R}_+$ represent the sets of real numbers and nonnegative real numbers, respectively, and let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space. For a matrix $E$, $\lambda_{\max}(E)$ is used to denote its maximum eigenvalue, and $P(E)$ stands for the minimum value of all elements of $E$. The vector 1-norm and 2-norm are written $\|\cdot\|_1$ and $\|\cdot\|_2$, respectively.

2.2. Some Preliminaries and Problem Formulation. Based on the work of Li and Yang [27], we put forward the following neural network model with SDSD, described by

$$\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij}\, g_j(x_j(t)) + \sum_{j=1}^{n} d_{ij}\, f_j\big(x_j(t - \tau(t, X))\big), \quad i = 1, 2, \ldots, n, \; t \ge t_0. \tag{1}$$

For ease of presentation, we also give the compact form of system (1):

$$\dot{X}(t) = -A X(t) + B g(X(t)) + D f\big(X(t - \tau(t, X))\big), \tag{2}$$

where $n$ stands for the number of neurons in the network, $\dot{X}(t)$ denotes the upper right derivative of $X(t)$, $X = X(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$, and $x_i(t)$ represents the state of the $i$th neuron. $A$ is a diagonal matrix with $a_i > 0$ for $i = 1, 2, \ldots, n$, and $B$ and $D$ are constant matrices with corresponding dimensions. $g(X(t)) = (g_1(x_1(t)), g_2(x_2(t)), \ldots, g_n(x_n(t)))^T$ and $f(X(t - \tau(t, X))) = (f_1(x_1(t - \tau(t, X))), f_2(x_2(t - \tau(t, X))), \ldots, f_n(x_n(t - \tau(t, X))))^T$ are the excitation functions of the neurons at times $t$ and $t - \tau(t, X)$, respectively.
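System (2) can be integrated numerically once the history is stored on a grid and the delayed state is looked up at each step. The following forward-Euler sketch is not from the paper; the step size, horizon, and constant-history choice in it are our own assumptions:

```python
import numpy as np

def simulate_sdsd(A, B, D, g, f, tau, psi, t0=0.0, T=5.0, h=1e-3, eta=1.0):
    """Forward-Euler sketch of dX/dt = -A X + B g(X) + D f(X(t - tau(t, X)))."""
    n_hist = int(round(eta / h))              # grid points covering [t0 - eta, t0]
    n_steps = int(round((T - t0) / h))
    ts = t0 + h * np.arange(-n_hist, n_steps + 1)
    X = np.zeros((len(ts), len(psi(t0))))
    for k in range(n_hist + 1):               # fill the initial history from psi
        X[k] = psi(ts[k])
    for k in range(n_hist, len(ts) - 1):
        t, x = ts[k], X[k]
        d = tau(t, x)                         # state-dependent delay at the current state
        j = max(0, k - int(round(d / h)))     # nearest stored index for t - tau
        X[k + 1] = x + h * (-A @ x + B @ g(x) + D @ f(X[j]))
    return ts[n_hist:], X[n_hist:]
```

A fixed grid makes the lookup of $X(t-\tau)$ a simple index shift; for serious use an interpolating DDE integrator would be preferable.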


Furthermore, we use $X(s) = \Psi(s)$, $s \in [t_0 - \eta, t_0]$, to denote the initial value of system (2), where $\Psi = \Psi(s) = (\psi_1(s), \psi_2(s), \ldots, \psi_n(s))^T \in C([t_0 - \eta, t_0], \mathbb{R}^n)$. Here $C([t_0 - \eta, t_0], \mathbb{R}^n)$ is the Banach space of continuous vector-valued functions mapping the interval $[t_0 - \eta, t_0]$ into $\mathbb{R}^n$. Let $\|\Psi\|_\alpha = \sup_{t_0 - \eta \le s \le t_0} \|\Psi(s)\|$ stand for the norm of a function $\Psi(\cdot) \in C([t_0 - \eta, t_0], \mathbb{R}^n)$, where $\|\cdot\|$ is the vector norm matching the context of the paper.

Remark 1. $X(t)$ has an upper right derivative, which implies that the solution of system (2) can be continuous but not smooth. The state delay $\tau(t, X)$ is related to the state of each neuron.

For the subsequent analysis, we need the following assumptions on systems (1) and (2).

Assumption 1. The functions $g(\cdot), f(\cdot) \colon \mathbb{R}^n \to \mathbb{R}^n$ satisfy $f(0) = 0$ and $g(0) = 0$.

Assumption 1 ensures that $X = 0$ is a constant solution of systems (1) and (2).

Assumption 2. $g(\cdot)$ and $f(\cdot)$ are locally Lipschitz continuous; in other words, for all $\beta_1, \beta_2 \in \mathbb{R}$,

$$|g_i(\beta_1) - g_i(\beta_2)| \le \ell_i |\beta_1 - \beta_2|, \qquad |f_i(\beta_1) - f_i(\beta_2)| \le \hat{\ell}_i |\beta_1 - \beta_2|, \qquad \forall i \in \{1, 2, \ldots, n\}, \tag{3}$$

where $\ell_i > 0$ and $\hat{\ell}_i > 0$. According to Assumption 2, we obtain two constant sets $\{\ell_1, \ell_2, \ldots, \ell_n\}$ and $\{\hat{\ell}_1, \hat{\ell}_2, \ldots, \hat{\ell}_n\}$. Let $l_g = \max\{\ell_1, \ell_2, \ldots, \ell_n\}$ and $l_f = \max\{\hat{\ell}_1, \hat{\ell}_2, \ldots, \hat{\ell}_n\}$.

Assumption 3. The state delay $\tau(t, X) \in C(\mathbb{R}_+ \times \mathbb{R}^n, [0, \eta])$ is locally Lipschitz continuous; namely, for any $\Gamma_1, \Gamma_2 \in \mathbb{R}^n$, there always exists a constant $\ell_\tau > 0$ such that

$$|\tau(t, \Gamma_1) - \tau(t, \Gamma_2)| \le \ell_\tau \|\Gamma_1 - \Gamma_2\|. \tag{4}$$

Assumption 4. When $X = 0$, $\tau(t, X)$ has a supremum $\tau_0 (\le \eta)$: $\sup\{\tau(t, 0), t \ge t_0\} = \tau_0$.

For ease of expression, let

$$\pi_1 = \max_{1 \le i \le n} \Big(-a_i + \sum_{j=1}^{n} |b_{ji}|\, \ell_i\Big), \qquad \pi_2 = \max_{1 \le i \le n} \sum_{j=1}^{n} |d_{ji}|\, \hat{\ell}_i. \tag{5}$$
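The constants in (5) are simple column-sum expressions and are easy to evaluate in code. A small sketch of our own (the array arguments below are assumptions for illustration, not values from this section):

```python
import numpy as np

def pi_constants(a, B, D, lips_g, lips_f):
    """pi1 = max_i(-a_i + sum_j |b_ji| * l_i), pi2 = max_i(sum_j |d_ji| * lhat_i)."""
    a = np.asarray(a, float)
    col_B = np.abs(B).sum(axis=0)     # column sums: sum_j |b_ji| for each i
    col_D = np.abs(D).sum(axis=0)
    pi1 = float(np.max(-a + col_B * np.asarray(lips_g, float)))
    pi2 = float(np.max(col_D * np.asarray(lips_f, float)))
    return pi1, pi2
```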

Definition 1 (see [28]). The zero solution of system (2) is said to be locally exponentially stable (LES) in a region $\mathcal{M}$ if there exist a constant $c > 0$ and a Lyapunov exponent $\zeta > 0$ such that, for any $t \ge t_0$,

$$\|X(t; t_0, \Psi)\| \le c \|\Psi\|_\alpha e^{-\zeta(t - t_0)}, \tag{6}$$

where $X(t; t_0, \Psi)$ is a solution of system (2) with the initial condition $\Psi \in C([t_0 - \eta, t_0], \mathcal{M})$, $\mathcal{M} \subset \mathbb{R}^n$, and $\mathcal{M}$ is called a local exponential attraction set of the zero solution.

Lemma 1 (see [29]). Let $\Gamma_1, \Gamma_2 \in \mathbb{R}^n$. Then

$$\Gamma_1^T \Gamma_2 + \Gamma_2^T \Gamma_1 \le \varpi\, \Gamma_1^T \Gamma_1 + \varpi^{-1}\, \Gamma_2^T \Gamma_2, \tag{7}$$

for any $\varpi > 0$.

3. Main Results

Theorem 1. Under Assumptions 1–4, the zero equilibrium of system (2) is LES if

$$\pi_1 + \pi_2 < 0, \tag{8}$$

and the Lyapunov exponent $\zeta > 0$ satisfies

$$\zeta + \pi_1 + \pi_2\, e^{\zeta(\ell_\tau \|\Psi\|_\alpha + \tau_0)} \le 0. \tag{9}$$

Proof. We assume that $X(t; t_0, \Psi)$ is a trajectory of system (2) with initial value $(t_0, \Psi)$, where $\Psi \in C([t_0 - \eta, t_0], \mathbb{R}^n)$ and $\Psi \ne 0$. For convenience, let $V(t) = V(t, X) = \|X(t)\|_1 = \sum_{i=1}^{n} |x_i(t)|$ and $V_0 = \sup\{V(s), s \in [t_0 - \eta, t_0]\}$. Then, for any $\epsilon \in (0, \zeta)$, we claim that

$$e^{(\zeta - \epsilon)(t - t_0)} V(t) \le V_0, \quad \forall t \ge t_0. \tag{10}$$

Firstly, (10) holds at $t = t_0$. Next, we prove that (10) holds on $(t_0, +\infty)$. Suppose, to the contrary, that there are some instants on $(t_0, +\infty)$ at which (10) fails. Then we can find an instant $t_q \ge t_0$ at which the following three events occur:

(1) $e^{(\zeta - \epsilon)(t_q - t_0)} V(t_q) = V_0$.
(2) $e^{(\zeta - \epsilon)(t - t_0)} V(t) \le V_0$ for all $t \in [t_0 - \eta, t_q]$.
(3) There exists a right neighborhood $U_{0^+}(t_q, \xi)$ of $t_q$ such that $e^{(\zeta - \epsilon)(t_\xi - t_0)} V(t_\xi) > V_0$ for all $t_\xi \in U_{0^+}(t_q, \xi)$.

On the other hand, by Assumptions 1–4 and (2), the derivative of $e^{(\zeta - \epsilon)(t - t_0)} V(t)$ at time $t_q$ satisfies:


$$
\begin{aligned}
\frac{d}{dt}\Big[e^{(\zeta-\epsilon)(t-t_0)}V(t)\Big]\Big|_{t=t_q}
={}& (\zeta-\epsilon)e^{(\zeta-\epsilon)(t_q-t_0)}V(t_q) + e^{(\zeta-\epsilon)(t_q-t_0)}\dot{V}(t)\big|_{t=t_q} \\
\le{}& (\zeta-\epsilon)V_0 + e^{(\zeta-\epsilon)(t_q-t_0)} \sum_{i=1}^{n} \operatorname{sgn}(x_i(t_q))\,\dot{x}_i(t_q) \\
={}& (\zeta-\epsilon)V_0 + e^{(\zeta-\epsilon)(t_q-t_0)} \sum_{i=1}^{n} \operatorname{sgn}(x_i(t_q))\Big[-a_i x_i(t_q) + \sum_{j=1}^{n} b_{ij}\, g_j(x_j(t_q)) + \sum_{j=1}^{n} d_{ij}\, f_j\big(x_j(t_q-\tau(t_q,X(t_q)))\big)\Big] \\
\le{}& (\zeta-\epsilon)V_0 + e^{(\zeta-\epsilon)(t_q-t_0)}\sum_{i=1}^{n}\big(-a_i |x_i(t_q)|\big) + e^{(\zeta-\epsilon)(t_q-t_0)}\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}|\,\ell_j |x_j(t_q)| \\
&+ e^{(\zeta-\epsilon)(t_q-t_0)}\sum_{i=1}^{n}\sum_{j=1}^{n}|d_{ij}|\,\hat{\ell}_j \big|x_j(t_q-\tau(t_q,X(t_q)))\big| \\
\le{}& (\zeta-\epsilon)V_0 + e^{(\zeta-\epsilon)(t_q-t_0)}\max_{1\le i\le n}\Big(-a_i+\sum_{j=1}^{n}|b_{ji}|\,\ell_i\Big)\|X(t_q)\|_1 + e^{(\zeta-\epsilon)(t_q-t_0)}\max_{1\le i\le n}\Big(\sum_{j=1}^{n}|d_{ji}|\,\hat{\ell}_i\Big)\|X(t_q-\tau(t_q,X))\|_1 \\
={}& (\zeta-\epsilon+\pi_1)V_0 + e^{(\zeta-\epsilon)(t_q-\tau(t_q,X)-t_0)}\|X(t_q-\tau(t_q,X))\|_1 \times \pi_2\, e^{(\zeta-\epsilon)\tau(t_q,X)} \\
\le{}& \big(\zeta-\epsilon+\pi_1+\pi_2\, e^{(\zeta-\epsilon)\tau(t_q,X)}\big)V_0 \\
={}& \big(\zeta-\epsilon+\pi_1+\pi_2\, e^{(\zeta-\epsilon)(\tau(t_q,X)-\tau(t_q,0))}\, e^{(\zeta-\epsilon)\tau(t_q,0)}\big)V_0 \\
\le{}& \big(\zeta-\epsilon+\pi_1+\pi_2\, e^{(\zeta-\epsilon)\ell_\tau \|X(t_q)\|_1}\, e^{(\zeta-\epsilon)\tau(t_q,0)}\big)V_0 \\
\le{}& \big(\zeta-\epsilon+\pi_1+\pi_2\, e^{(\zeta-\epsilon)(\ell_\tau \|X(t_q)\|_1+\tau_0)}\big)V_0.
\end{aligned}
\tag{11}
$$

Together with the definitions of $V_0$, $V(t)$, $t_q$ and event (1), we have

$$\|X(t_q)\|_1 = V(t_q) \le V_0 = \|\Psi\|_\alpha, \tag{12}$$

and then from (9) and (11) we obtain

$$\frac{d}{dt}\Big[e^{(\zeta - \epsilon)(t - t_0)} V(t)\Big]\Big|_{t = t_q} \le \Big(\zeta - \epsilon + \pi_1 + \pi_2\, e^{(\zeta - \epsilon)(\ell_\tau \|\Psi\|_\alpha + \tau_0)}\Big) \|\Psi\|_\alpha < 0, \tag{13}$$

which contradicts event (3), and thus (10) holds.

By the arbitrariness of $\epsilon$, letting $\epsilon \to 0$ we obtain

$$e^{\zeta(t - t_0)} V(t) \le V_0, \quad \forall t \ge t_0, \tag{14}$$

i.e.,

$$\|X(t)\|_1 = V(t) \le \|\Psi\|_\alpha\, e^{-\zeta(t - t_0)}, \quad \forall t \ge t_0. \tag{15}$$

The proof of Theorem 1 is completed. □

Theorem 2. Let $B$ and $D$ be symmetric matrices. Then, under Assumptions 1–4, the zero equilibrium of system (2) is LES if

$$-2P(A) + \lambda_{\max}(B^2) + l_g^2 + \lambda_{\max}(D^2) + l_f^2 < 0, \tag{16}$$

and $\mu = 2\zeta > 0$ satisfies

$$\mu - 2P(A) + \lambda_{\max}(B^2) + l_g^2 + \lambda_{\max}(D^2) + l_f^2\, e^{\mu(\ell_\tau \|\Psi\|_\alpha + \tau_0)} \le 0, \tag{17}$$

where $\zeta$ is the Lyapunov exponent.
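Condition (16) involves only eigenvalues of $B^2$ and $D^2$ and $P(A)$, so it can be checked mechanically. A sketch of our own (we take $P(A)$ to be the smallest diagonal entry $a_i$, which matches how the paper evaluates it in its examples; the test matrices are assumptions):

```python
import numpy as np

def theorem2_lhs(A, B, D, l_g, l_f):
    """Left-hand side of condition (16); a negative value means the LES test passes."""
    p_A = np.diag(A).min()                    # P(A): smallest diagonal entry a_i
    lam_B2 = np.linalg.eigvalsh(B @ B).max()  # lambda_max(B^2), B symmetric
    lam_D2 = np.linalg.eigvalsh(D @ D).max()  # lambda_max(D^2), D symmetric
    return -2.0 * p_A + lam_B2 + l_g**2 + lam_D2 + l_f**2
```

`eigvalsh` is appropriate here because $B^2$ and $D^2$ are symmetric whenever $B$ and $D$ are.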

Proof. Suppose $X(t; t_0, \Psi)$ is a solution of system (2) with initial state $(t_0, \Psi)$, where $\Psi \in C([t_0 - \eta, t_0], \mathbb{R}^n)$ and $\Psi \ne 0$. Let $V(t, X) = (\|X(t)\|_2)^2 = X^T(t) X(t)$. For convenience, set $V(t) = V(t, X)$ and $V_0 = \sup\{V(s), s \in [t_0 - \eta, t_0]\}$. Then, for any $\epsilon \in (0, \mu)$, we claim that

$$e^{(\mu - \epsilon)(t - t_0)} V(t) \le V_0, \quad \forall t \ge t_0. \tag{18}$$

Firstly, (18) holds at $t = t_0$. Next, we prove that (18) holds on $(t_0, +\infty)$. Suppose, to the contrary, that there are some instants on $(t_0, +\infty)$ at which (18) fails. Then we can find an instant $t_q \ge t_0$ satisfying the following three conditions:

(1) $e^{(\mu - \epsilon)(t_q - t_0)} V(t_q) = V_0$.
(2) $e^{(\mu - \epsilon)(t - t_0)} V(t) \le V_0$ for all $t \in [t_0 - \eta, t_q]$.
(3) There exists a right neighborhood $U_{0^+}(t_q, \xi)$ of $t_q$ such that $e^{(\mu - \epsilon)(t_\xi - t_0)} V(t_\xi) > V_0$ for all $t_\xi \in U_{0^+}(t_q, \xi)$.

On the other hand, by Assumptions 1–4 and Lemma 1, we compute the derivative of $e^{(\mu - \epsilon)(t - t_0)} V(t)$ at time $t_q$:

$$
\begin{aligned}
\frac{d}{dt}\Big[e^{(\mu-\epsilon)(t-t_0)}V(t)\Big]\Big|_{t=t_q}
={}& (\mu-\epsilon)e^{(\mu-\epsilon)(t_q-t_0)}V(t_q) + e^{(\mu-\epsilon)(t_q-t_0)}\dot{V}(t)\big|_{t=t_q} \\
\le{}& (\mu-\epsilon)V_0 + e^{(\mu-\epsilon)(t_q-t_0)}\, 2X^T(t_q)\dot{X}(t_q) \\
={}& (\mu-\epsilon)V_0 + e^{(\mu-\epsilon)(t_q-t_0)}\, 2X^T(t_q)\big[-AX(t_q) + Bg(X(t_q)) + Df(X(t_q-\tau(t_q,X)))\big] \\
={}& (\mu-\epsilon)V_0 + e^{(\mu-\epsilon)(t_q-t_0)}\big(-2X^T(t_q)AX(t_q)\big) + e^{(\mu-\epsilon)(t_q-t_0)}\,2X^T(t_q)Bg(X(t_q)) \\
&+ e^{(\mu-\epsilon)(t_q-t_0)}\,2X^T(t_q)Df(X(t_q-\tau(t_q,X))) \\
\le{}& (\mu-\epsilon)V_0 + e^{(\mu-\epsilon)(t_q-t_0)}(-2P(A))X^T(t_q)X(t_q) + e^{(\mu-\epsilon)(t_q-t_0)}\big(\lambda_{\max}(B^2)+l_g^2\big)X^T(t_q)X(t_q) \\
&+ e^{(\mu-\epsilon)(t_q-t_0)}\lambda_{\max}(D^2)X^T(t_q)X(t_q) + e^{(\mu-\epsilon)(t_q-t_0)}\,l_f^2\, X^T(t_q-\tau(t_q,X))X(t_q-\tau(t_q,X)) \\
={}& \big(\mu-\epsilon-2P(A)+\lambda_{\max}(B^2)+l_g^2+\lambda_{\max}(D^2)\big)V_0 \\
&+ e^{(\mu-\epsilon)(t_q-\tau(t_q,X)-t_0)}\,l_f^2\,X^T(t_q-\tau(t_q,X))X(t_q-\tau(t_q,X)) \times e^{(\mu-\epsilon)\tau(t_q,X)} \\
\le{}& \big(\mu-\epsilon-2P(A)+\lambda_{\max}(B^2)+l_g^2+\lambda_{\max}(D^2)\big)V_0 + l_f^2\, e^{(\mu-\epsilon)\tau(t_q,X)}V_0 \\
={}& \big(\mu-\epsilon-2P(A)+\lambda_{\max}(B^2)+l_g^2+\lambda_{\max}(D^2)\big)V_0 + l_f^2\, e^{(\mu-\epsilon)(\tau(t_q,X)-\tau(t_q,0))}\,e^{(\mu-\epsilon)\tau(t_q,0)}V_0 \\
\le{}& \Big(\mu-\epsilon-2P(A)+\lambda_{\max}(B^2)+l_g^2+\lambda_{\max}(D^2)+l_f^2\, e^{(\mu-\epsilon)(\ell_\tau\|X(t_q)\|_2+\tau_0)}\Big)V_0.
\end{aligned}
\tag{19}
$$

Combining the definitions of $V_0$, $V(t)$, $t_q$ and condition (1), we have

$$\|X(t_q)\|_2 = V(t_q)^{1/2} \le V_0^{1/2} = \|\Psi\|_\alpha, \tag{20}$$

and then from (17) and (19) we obtain

$$\frac{d}{dt}\Big[e^{(\mu - \epsilon)(t - t_0)} V(t)\Big]\Big|_{t = t_q} \le \Big(\mu - \epsilon - 2P(A) + \lambda_{\max}(B^2) + l_g^2 + \lambda_{\max}(D^2) + l_f^2\, e^{(\mu - \epsilon)(\ell_\tau \|\Psi\|_\alpha + \tau_0)}\Big) \big(\|\Psi\|_\alpha\big)^2 < 0. \tag{21}$$

This contradicts condition (3), so (18) holds. By the arbitrariness of $\epsilon$, letting $\epsilon \to 0$ we obtain

$$e^{\mu(t - t_0)} V(t) \le V_0, \quad \forall t \ge t_0, \tag{22}$$

i.e.,

$$\|X(t)\|_2 = (V(t))^{1/2} \le \|\Psi\|_\alpha\, e^{-(\mu/2)(t - t_0)}, \quad \forall t \ge t_0. \tag{23}$$

The proof of Theorem 2 is completed. □

Remark 2. In Theorem 2, taking the value of $\varpi$ in Lemma 1 to be 1, we get the following inequality:

$$
\begin{aligned}
2X^T(t_q) B g(X(t_q)) &= X^T(t_q) B g(X(t_q)) + X^T(t_q) B g(X(t_q)) \\
&= X^T(t_q) B^T g(X(t_q)) + g^T(X(t_q)) B X(t_q) \\
&= (B X(t_q))^T g(X(t_q)) + g^T(X(t_q)) B X(t_q) \\
&\le (B X(t_q))^T B X(t_q) + g^T(X(t_q))\, g(X(t_q)) \\
&= X^T(t_q) B^T B X(t_q) + g^T(X(t_q))\, g(X(t_q)) \\
&= X^T(t_q) B^2 X(t_q) + g^T(X(t_q))\, g(X(t_q)) \\
&\le \lambda_{\max}(B^2)\, X^T(t_q) X(t_q) + l_g^2\, X^T(t_q) X(t_q) \\
&= \big(\lambda_{\max}(B^2) + l_g^2\big) X^T(t_q) X(t_q).
\end{aligned}
\tag{24}
$$

We give a particular case of system (2) by considering the following one-dimensional system:

$$\dot{x}(t) = a x(t) + b g(x(t)) + d f(x(t - \tau(t, x))), \quad t \ge 0, \qquad \tau(t, x) = \delta + \lambda x(t), \tag{25}$$

where $a, b, d, \delta, \lambda \in \mathbb{R}$.

Corollary 1. If system (25) satisfies the following conditions:

(1) $f(0) = 0$, $g(0) = 0$, and $g(\cdot)$, $f(\cdot)$ are locally Lipschitz continuous functions with Lipschitz constants $l_1$, $l_2$, respectively;
(2) $a < 0$ and $a + |b| l_1 + |d| l_2 < 0$;
(3) $\delta > 0$, and the initial condition of system (25) satisfies $|\psi(\cdot)| \le M < \delta / |\lambda|$, where $|\psi(\cdot)| = \max_{-\eta \le t \le 0} |\psi(t)|$;

then we obtain

$$|x(t)| \le M e^{-\zeta t}, \quad t \ge 0, \tag{26}$$

where $x(t) = x(t; 0, \psi)$ represents the trajectory of system (25) with initial state $(0, \psi)$, and the Lyapunov exponent $\zeta > 0$ satisfies

$$\zeta + a + |b| l_1 + |d| l_2\, e^{\zeta(|\lambda| M + \delta)} \le 0. \tag{27}$$
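The scalar case (25) is easy to experiment with numerically. The sketch below, written by us, integrates it with forward Euler; the parameters used in the test ($a = -2$, $b = d = 0.5$, $g = f = \tanh$, $\delta = 0.5$, $\lambda = 0.2$, constant history $\psi \equiv 0.5$) are our own illustrative choices satisfying conditions (1)–(3):

```python
def simulate_scalar_sdsd(a, b, d, delta, lam, g, f, psi0, T=10.0, h=1e-3):
    """Euler sketch of x'(t) = a x + b g(x) + d f(x(t - tau)), tau(t, x) = delta + lam * x(t),
    with constant initial history psi(t) = psi0."""
    xs = [psi0]
    for k in range(int(T / h)):
        x = xs[-1]
        tau = delta + lam * x           # state-dependent delay
        j = k - int(round(tau / h))     # grid index of t - tau
        xd = xs[j] if j >= 0 else psi0  # fall back to the history when t - tau < 0
        xs.append(x + h * (a * x + b * g(x) + d * f(xd)))
    return xs
```

With tanh activations, $a + |b| l_1 + |d| l_2 = -1 < 0$, and the computed trajectory decays toward zero, consistent with the exponential bound (26).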

Proof. To prove Corollary 1, we only need to show that its hypotheses imply Assumptions 1–4. Obviously, from condition (1), system (25) satisfies Assumptions 1 and 2. Next, we prove that system (25) satisfies Assumptions 3 and 4.

We assume that $x(t; 0, \psi)$ is a trajectory of system (25) with initial value $(0, \psi)$, where $\psi \ne 0$. From $\tau(t, x) = \delta + \lambda x(t)$, we know that $\tau(t, x)$ is continuous in $t$. Owing to $|\psi| \le M$ and $M < \delta / |\lambda|$, there exists $\rho > 0$ such that $\tau(t, x) \ge 0$ for all $t \in [0, \rho]$. In this situation, set $V(t) = |x(t)|$. Under conditions (1)–(3) and by Theorem 1, we get $|x(t)| \le M e^{-\zeta t}$ for all $t \in [0, \rho]$, where the Lyapunov exponent $\zeta$ satisfies (27).

Then we claim that $\tau(t, x) \ge 0$ for all $t \in [0, +\infty)$. Suppose, to the contrary, that there is some instant on $(\rho, +\infty)$ at which $\tau(t, x(t)) < 0$. Then we can find an instant $t_q \in [\rho, +\infty)$ meeting the following three conditions:

(1) $\tau(t_q, x(t_q)) = \delta + \lambda x(t_q) = 0$.
(2) $\tau(t, x(t)) \ge 0$ for all $t \in [0, t_q]$.
(3) There exists a right neighborhood $U_{0^+}(t_q, \xi)$ of $t_q$ such that $\tau(t_\xi, x(t_\xi)) < 0$ for any $t_\xi \in U_{0^+}(t_q, \xi)$.

It follows from Theorem 1 that $|x(t)| \le M e^{-\zeta t}$ for any $t \in [0, t_q]$, which indicates that $|x(t_q)| \le M e^{-\zeta t_q} < \delta / |\lambda|$. On the basis of the continuity of $x(t)$, there exists $\xi_0 > 0$ such that $|x(t)| < \delta / |\lambda|$ for any $t \in [t_q, t_q + \xi_0)$. This means that $\tau(t, x(t)) = \delta + \lambda x(t) > 0$ holds for all $t \in [t_q, t_q + \xi_0)$, which contradicts condition (3). Therefore, $\tau(t, x(t)) \ge 0$ for all $t \in [0, +\infty)$, which further indicates that Assumptions 3 and 4 are all satisfied. Therefore, by Theorem 1, Corollary 1 is valid. The proof is completed. □

Remark 3. Actually, in Theorem 1, when $\pi_1 + \pi_2 < 0$, we can always obtain $\zeta + \pi_1 + \pi_2\, e^{\zeta(\ell_\tau \|\Psi\|_\alpha + \tau_0)} \le 0$ as long as $\zeta$ is small enough. Theorem 2 admits a similar conclusion.

Remark 4. In Theorem 1, by treating $t_q - \tau(t_q, X)$ as a whole and then using event (2), we are able to get $e^{(\zeta - \epsilon)(t_q - \tau(t_q, X) - t_0)} \|X(t_q - \tau(t_q, X))\|_1 \le V_0$. Theorem 2 admits a similar conclusion.

Remark 5. The proofs of Theorems 1 and 2 also provide an estimate of the local exponential convergence rate $\zeta$, which can be obtained by solving the transcendental equation (9) or (17).
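For instance, the largest $\zeta$ allowed by (9) can be found by bisection, since $\phi(\zeta) = \zeta + \pi_1 + \pi_2 e^{\zeta(\ell_\tau \|\Psi\|_\alpha + \tau_0)}$ is strictly increasing when $\pi_2 \ge 0$ and $\phi(0) = \pi_1 + \pi_2 < 0$. A sketch of our own (the numeric inputs in the test are illustrative assumptions):

```python
import math

def lyapunov_exponent(pi1, pi2, l_tau, psi_norm, tau0, tol=1e-10):
    """Largest zeta > 0 with zeta + pi1 + pi2*exp(zeta*(l_tau*psi_norm + tau0)) <= 0.
    Bisection on the increasing function phi; requires pi1 + pi2 < 0 and pi2 >= 0."""
    c = l_tau * psi_norm + tau0
    phi = lambda z: z + pi1 + pi2 * math.exp(z * c)
    lo, hi = 0.0, 1.0
    while phi(hi) < 0:                  # grow the bracket until phi changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
    return lo                           # phi(lo) <= 0, so (9) holds at zeta = lo
```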

Remark 6. Remarkably, the Lyapunov exponent in Theorems 1 and 2 is state-dependent, so only when the initial value is bounded can we find a common $\zeta$ such that $e^{(\zeta - \epsilon)(t - t_0)} V(t) \le V_0$. Furthermore, due to the effect of SDSD, the results in this paper are local features rather than global ones.

4. Illustrative Examples

To demonstrate the effectiveness of our results, two numerical examples are given in this section.

Example 1. Consider a 2-dimensional neural network with SDSD described by

$$\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = -\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} + \begin{pmatrix} 0.25 & 0.25 \\ 0.02 & 0.01 \end{pmatrix} \begin{pmatrix} g_1(x_1(t)) \\ g_2(x_2(t)) \end{pmatrix} + \begin{pmatrix} 0.3 & 0.2 \\ 0.01 & 0.02 \end{pmatrix} \begin{pmatrix} f_1(x_1(t - \tau(t, X))) \\ f_2(x_2(t - \tau(t, X))) \end{pmatrix}, \tag{28}$$

where $t_0 = 0$ and

$$g_i(\cdot) = |x_i(t) + 1| - |x_i(t) - 1|, \quad f_i(\cdot) = \sin\big(x_i(t - |\sin(x_1(t) + x_2(t))|)\big), \quad i = 1, 2, \qquad \tau(t, X) = |\sin(x_1(t) + x_2(t))|. \tag{29}$$

Evidently, $\ell_1 = \ell_2 = 2$, $\hat{\ell}_1 = \hat{\ell}_2 = 1$, $\ell_\tau = 1$, $\tau(t, 0) = 0$, and $\tau(t, X) \in [0, 1]$. By calculation,

$$\pi_1 = \max_{1 \le i \le n} \Big(-a_i + \sum_{j=1}^{n} |b_{ji}|\, \ell_i\Big) = -0.46, \qquad \pi_2 = \max_{1 \le i \le n} \sum_{j=1}^{n} |d_{ji}|\, \hat{\ell}_i = 0.31. \tag{30}$$

Since $\pi_1 + \pi_2 = -0.15 < 0$, Theorem 1 shows that system (28) is LES. The trajectories of the solution from a random initial value are shown in Figure 1; as shown there, $x_1(t)$ and $x_2(t)$ in neural network model (28) are convergent. Figure 2 shows the phase diagram of system (28) evolving with time.
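The numbers in (30) can be reproduced mechanically; a short check written by us, using the matrices of system (28):

```python
import numpy as np

a = np.array([1.0, 1.0])                        # diagonal entries of A in system (28)
B = np.array([[0.25, 0.25], [0.02, 0.01]])
D = np.array([[0.30, 0.20], [0.01, 0.02]])
ell, ell_hat = 2.0, 1.0                         # Lipschitz constants of g_i and f_i

pi1 = np.max(-a + np.abs(B).sum(axis=0) * ell)  # column sums give sum_j |b_ji|
pi2 = np.max(np.abs(D).sum(axis=0) * ell_hat)
print(round(pi1, 2), round(pi2, 2))             # prints -0.46 0.31
```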

Example 2. Consider another 2-dimensional neural network with SDSD, depicted by

$$\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = -\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} + \begin{pmatrix} 0.5 & 0.2 \\ 0.2 & 0.5 \end{pmatrix} \begin{pmatrix} g_1(x_1(t)) \\ g_2(x_2(t)) \end{pmatrix} + \begin{pmatrix} 0.3 & 0.2 \\ 0.2 & 0.3 \end{pmatrix} \begin{pmatrix} f_1(x_1(t - \tau(t, X))) \\ f_2(x_2(t - \tau(t, X))) \end{pmatrix}, \tag{31}$$

where $t_0 = 0$ and

$$g_i(\cdot) = \frac{\cos(x_i(t) + (\pi/2))}{2}, \quad f_i(\cdot) = \tanh\big(x_i(t - |\sin(x_1(t) + x_2(t))|)\big), \quad i = 1, 2, \qquad \tau(t, X) = |\sin(x_1(t) + x_2(t))|. \tag{32}$$

Obviously, $l_g = 1/2$, $l_f = 1$, $\ell_\tau = 1$, $\tau(t, 0) = 0$, $\tau(t, X) \in [0, 1]$, and $P(A) = 2$. By computation, $\lambda_{\max}(B^2) = 0.49$, $\lambda_{\max}(D^2) = 0.25$, and $-2P(A) + \lambda_{\max}(B^2) + l_g^2 + \lambda_{\max}(D^2) + l_f^2 = -4 + 0.49 + (1/4) + 0.25 + 1 = -2.01 < 0$. Then, from Theorem 2, system (31) is LES. The trajectories of the solution from a random initial value are shown in Figure 3; as shown there, $x_1(t)$ and $x_2(t)$ in neural network model (31) are

[Figure 1: Transient behavior of (a) x1(t) and (b) x2(t) in system (28).]

[Figure 2: Transient behavior of (x1(t), x2(t)) in system (28).]

[Figure 3: Transient behavior of (a) x1(t) and (b) x2(t) in system (31).]

[Figure 4: Transient behavior of (x1(t), x2(t)) in system (31).]


convergent. Figure 4 shows the phase diagram of system (31) evolving with time.

5. Concluding Remarks

This paper has been devoted to the local dynamic properties of neural networks with SDSD. Through a purely analytical method and proof by contradiction, we obtained a number of sufficient conditions for local exponential stability of neural networks with SDSD. Based on our results, the Lyapunov exponent depends on the state, on account of the effect of SDSD; this also indicates that the derived exponential stability results are local rather than global. Consequently, the global dynamics can be taken as a topic for prospective research. In addition, SDSD system methods, such as event-triggered control, can also be developed further.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work of the first author, master's student Yue Chen, was done under the direction of Assistant Professor Jin-E Zhang. This work was supported by the Natural Science Foundation of China under Grants 61976084 and 61773152.

    References

    [1] L. Teng and D. Xu, “Global attracting set for non-autonomousneutral type neural networks with distributed delays,” Neu-rocomputing, vol. 94, pp. 64–67, 2012.

    [2] J. Cao and J. Wang, “Global exponential stability and peri-odicity of recurrent neural networks with time delays,” IEEETransactions on Circuits and Systems I: Regular Papers, vol. 52,no. 5, pp. 920–931, 2005.

    [3] Z. Yang, J. Peng, and Y. Liu, “Adaptive neural network forcetracking impedance control for uncertain robotic manipulatorbased on nonlinear velocity observer,” Neurocomputing,vol. 331, pp. 263–280, 2019.

    [4] H.-G. Han, Y. Li, Y.-N. Guo, and J.-F. Qiao, “A soft com-puting method to predict sludge volume index based on arecurrent self-organizing neural network,” Applied SoftComputing, vol. 38, pp. 477–486, 2016.

    [5] W. S. Chen and J. M. Li, “Adaptive neural network trackingcontrol for a class of unknown nonlinear time-delay systems,”Journal of Systems Engineering and Electronics, vol. 17, no. 3,pp. 611–618, 2006.

    [6] T. Zan, Z. Liu, H.Wang, M.Wang, and X. Gao, “Control chartpattern recognition using the convolutional neural network,”Journal of Intelligent Manufacturing, vol. 31, no. 3, pp. 703–716, 2020.

    [7] V. Narayanan, S. Jagannathan, and K. Ramkumar, “Event-sampled output feedback control of robot manipulators using neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 6, pp. 1651–1658, 2019.

    [8] B. L. Chen, H. D. Li, W. Q. Luo, and J. W. Huang, “Image processing operations identification via convolutional neural network,” Science China Information Sciences, vol. 63, no. 3, p. 139109, 2020.

    [9] M. M. Van Hulle and J. Larsen, “Neural networks in signal processing,” Neurocomputing, vol. 69, no. 1–3, pp. 1–2, 2005.

    [10] Y. Hayakawa and K. Nakajima, “Design of the inverse function delayed neural network for solving combinatorial optimization problems,” IEEE Transactions on Neural Networks, vol. 21, no. 2, pp. 224–237, 2010.

    [11] Z. Zeng and J. Wang, “Analysis and design of associative memories based on recurrent neural networks with linear saturation activation functions and time-varying delays,” Neural Computation, vol. 19, no. 8, pp. 2149–2182, 2007.

    [12] Z. Li, L. Liu, and Q. Zhu, “Mean-square exponential input-to-state stability of delayed Cohen-Grossberg neural networks with Markovian switching based on vector Lyapunov functions,” Neural Networks, vol. 84, pp. 39–46, 2016.

    [13] A. Wu and Z. Zeng, “Lagrange stability of memristive neural networks with discrete and distributed delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 4, pp. 690–703, 2014.

    [14] T. Ensari and S. Arik, “Global stability of a class of neural networks with time-varying delay,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 52, no. 3, pp. 126–130, 2005.

    [15] L. Li, Y. Yang, and F. Wang, “The sampled-data exponential stability of BAM with distributed leakage delays,” Neural Processing Letters, vol. 46, no. 2, pp. 537–547, 2017.

    [16] H. Liu, L. Zhao, Z. Zhang, and Y. Ou, “Stochastic stability of Markovian jumping Hopfield neural networks with constant and distributed delays,” Neurocomputing, vol. 72, no. 16–18, pp. 3669–3674, 2009.

    [17] C. Zheng, N. Li, and J. Cao, “Matrix measure based stability criteria for high-order neural networks with proportional delay,” Neurocomputing, vol. 149, pp. 1149–1154, 2015.

    [18] J. Y. Zhang, “Global stability analysis in cellular neural networks with unbounded time delays,” Applied Mathematics and Mechanics, vol. 25, no. 6, pp. 686–693, 2004.

    [19] R. Gambell, “Birds and Mammals-Antarctic whales,” W. Bonner and D. Walton, Eds., pp. 223–241, Pergamon Press, New York, NY, USA, 1985.

    [20] L. Liu, L. Zhu, and D. Yang, “Modeling and simulation of the car-truck heterogeneous traffic flow based on a nonlinear car-following model,” Applied Mathematics and Computation, vol. 273, pp. 706–717, 2016.

    [21] S. Yu and Z. Shi, “An improved car-following model considering relative velocity fluctuation,” Communications in Nonlinear Science and Numerical Simulation, vol. 36, pp. 319–326, 2016.

    [22] P. Getto and M. Waurick, “A differential equation with state-dependent delay from cell population biology,” Journal of Differential Equations, vol. 260, no. 7, pp. 6176–6200, 2016.

    [23] F. Hartung, “Linearized stability in periodic functional differential equations with state-dependent delays,” Journal of Computational and Applied Mathematics, vol. 174, no. 2, pp. 201–211, 2005.

    [24] I. Györi and F. Hartung, “Exponential stability of a state-dependent delay system,” Discrete & Continuous Dynamical Systems-A, vol. 18, no. 4, pp. 773–791, 2007.

    [25] C. Fiter and E. Fridman, “Stability of piecewise affine systems with state-dependent delay, and application to congestion control,” in Proceedings of the 52nd IEEE Annual Conference on Decision and Control, pp. 1572–1577, Florence, Italy, December 2013.

    [26] X. Li and J. Wu, “Stability of nonlinear differential systems with state-dependent delayed impulses,” Automatica, vol. 64, pp. 63–69, 2016.

    [27] X. Li and X. Yang, “Lyapunov stability analysis for nonlinear systems with state-dependent state delay,” Automatica, vol. 112, p. 108674, 2020.

    [28] C. Zhou, X. Zeng, J. Yu, and H. Jiang, “A unified associative memory model based on external inputs of continuous recurrent neural networks,” Neurocomputing, vol. 186, pp. 44–53, 2016.

    [29] L. Liu, X. Ding, W. Zhou, and X. Li, “Global mean square exponential stability and stabilization of uncertain switched delay systems with Lévy noise and flexible switching signals,” Journal of the Franklin Institute, vol. 356, no. 18, pp. 11520–11545, 2019.