
International Journal of Bifurcation and Chaos, Vol. 17, No. 9 (2007) 3127–3150
© World Scientific Publishing Company

GLOBAL CONSISTENCY OF DECISIONS AND CONVERGENCE OF COMPETITIVE CELLULAR NEURAL NETWORKS

M. DI MARCO, M. FORTI, M. GRAZZINI, P. NISTRI and L. PANCIONI

Dipartimento di Ingegneria dell'Informazione, Università di Siena, v. Roma 56 — 53100 Siena, Italy
[email protected]

Received September 28, 2006; Revised November 10, 2006

In a series of papers published in the seventies, Grossberg developed a geometric approach for analyzing the global dynamical behavior and convergence properties of a class of competitive dynamical systems. The approach is based on the property that it is possible to associate a decision scheme with each competitive system in that class, and that global consistency of the decision scheme implies convergence of each solution toward some stationary state. In this paper, the Grossberg approach is extended to the class of competitive standard Cellular Neural Networks (CNNs), and it is used to investigate convergence under the hypothesis that the competitive CNN has a globally consistent decision scheme. The extension is nonobvious and requires dealing with the set-valued vector field describing the dynamics of the CNN output solutions. It is also stressed that the extended approach does not require the existence of a Lyapunov function, hence it is applicable to address convergence in the general case where the CNN neuron interconnections are not necessarily symmetric. By means of the extended approach, a number of classes of third-order nonsymmetric competitive CNNs are discovered, which have a globally consistent decision scheme and are convergent. Moreover, global consistency and convergence hold for interconnection parameters belonging to sets with nonempty interior, and thus they represent physically robust properties. The paper also shows that when the dimension is higher than three, there are fundamental differences between the convergence properties of competitive CNNs implied by a globally consistent decision scheme, and those of the class of competitive dynamical systems considered by Grossberg. These differences lead to the need to introduce a stronger notion of global consistency of decisions, with respect to that proposed by Grossberg, in order to guarantee convergence of competitive CNNs with more than three neurons.

Keywords: Cellular Neural Networks; convergence; competitive dynamical systems; neural decision schemes.

1. Introduction

In a series of papers [Grossberg, 1978a, 1978b, 1980], Grossberg developed a geometric approach for analyzing the global dynamical behavior and convergence properties of a class of nonlinear competitive systems, which includes as a special case the classical Volterra–Lotka system for competing species. It has been proved that each competitive system in that class induces a decision scheme, and if the scheme is globally consistent, then


each solution is forced through a series of local decisions (or jumps) which eventually lead to a final global decision (or global consensus). This corresponds to the fact that the solution has settled into an equilibrium point, i.e. the system is convergent. In other circumstances, it may happen that the decision scheme is globally inconsistent; then the series of local decisions never terminates and the system can sustain nonvanishing oscillations. This is true, for example, in the "voting paradox" [May & Leonard, 1975], i.e. a Volterra–Lotka model with three competing species x_1, x_2 and x_3, which has a contradictory decision scheme where x_1 beats x_2, x_2 beats x_3, and x_3 beats x_1.

The goal of this paper is to extend the Grossberg approach in order to address convergence of standard competitive cellular neural networks (CNNs), i.e. CNNs with inhibitory (nonpositive) interconnections between distinct neurons. What makes the Grossberg approach really attractive in the CNN framework is that it does not require the existence of a Lyapunov function, hence it is in principle applicable also to address convergence of nonsymmetric CNNs. In this regard, we recall that the Lyapunov method developed by [Chua & Yang, 1988] is applicable to symmetric (reciprocal) CNNs only.

The extension of the Grossberg approach to standard CNNs is not straightforward. First of all, we need to put the CNN equations in a form that is structurally analogous to that of the competitive systems considered by Grossberg. It is shown in the paper that this is possible if we write the CNN equations with respect to the neuron outputs. However, one difficulty is that the dynamical system satisfied by the CNN output solutions is characterized by a set-valued vector field, where the velocity is not uniquely defined at the neuron saturation levels but rather can assume an entire set of different values. From a physical viewpoint, this is in agreement with the fact that in saturation there are multiple feasible velocities for the CNN output solutions.

The paper then shows that it is possible to associate a decision scheme with the competitive dynamical system satisfied by the CNN output solutions, and to globally analyze the CNN dynamics and convergence properties on the basis of the consistency or inconsistency of the scheme. In particular, the paper investigates in detail the CNN convergence properties implied by a globally consistent decision scheme in the case where there are three competing neurons. By means of the proposed method, new classes of nonsymmetric third-order competitive CNNs are discovered which enjoy the property of convergence. For such classes, convergence is shown to be physically robust, since it holds in polyhedral sets of interconnection parameters with nonempty interior. The paper also shows that when a competitive CNN has more than three neurons, there are crucial differences in the convergence properties implied by a globally consistent decision scheme, with respect to the class of competitive systems considered by Grossberg. These differences lead to the need to introduce a stronger notion of global consistency of decisions, with respect to that originally proposed by Grossberg, in order to guarantee convergence of competitive CNNs with more than three neurons.

Section 2 briefly reviews the Grossberg approach, while Sec. 3 introduces the competitive CNN model. Sections 4 and 5 extend the Grossberg approach to competitive CNNs, and Secs. 6 to 8 give the main results on convergence for competitive CNNs inducing a globally consistent decision scheme. The paper ends with some concluding remarks in Sec. 9.

2. Grossberg Approach

In [Grossberg, 1980], Grossberg considered a class of nonlinear competitive dynamical systems that are described by the system of ordinary differential equations

\[
\dot{x}_i = \alpha(x_i)\,M_i(x), \quad i = 1, 2, \ldots, n \tag{1}
\]

where x = (x_i)_{i=1,2,...,n} ∈ R^n is the vector of state variables. The continuously differentiable function M_i : R^n → R satisfies

\[
\frac{\partial M_i}{\partial x_j}(x) \le 0 \quad \forall\, j \neq i,\ \forall\, x \in \mathbb{R}^n
\]

namely, M_i represents the competitive balance at the state x_i. The continuous function α : R → R is such that α(x_i) > 0 when x_i > 0, and α(0) = 0, i.e. α is an amplification function that converts the competitive balance into the growth rate ẋ_i. Moreover, since α(0) = 0, the hyperplanes x_i = 0 are invariant for the dynamics of (1), and α keeps the state variable x_i positive. In other words, there is a subset of R^n, namely the positive orthant O_+ = {x ∈ R^n : x_i > 0, i = 1, 2, ..., n}, where the state variables evolve: for initial conditions x_i(0) > 0, i = 1, 2, ..., n, the solution x(t) of (1) is such that x(t) ∈ O_+ for all t ≥ 0. An important special case is


the classical Volterra–Lotka system for n competing species

\[
\dot{x}_i = x_i\Big(1 - \sum_{k=1}^{n} B_{ik}\,x_k\Big), \quad i = 1, 2, \ldots, n \tag{2}
\]

where B_{ik} ≥ 0 for all i ≠ k.

In the quoted paper, Grossberg developed a method for studying the global dynamical behavior of (1), which "explicates a main theme about competitive dynamical systems: who is winning the competition?" First of all, it has been proved that (1) enjoys a property of ignition: if at some time t_+ some variable x_i is enhanced (i.e. we have dx_i(t)/dt|_{t=t_+} ≥ 0), then for all subsequent times t ≥ t_+, at least one variable x_j, possibly different from x_i, will be enhanced. Roughly speaking, and in a more suggestive way, this means that once the competition starts for (1), it thereafter never turns off. Then, one keeps track of which variable is winning the competition, i.e. which variable is maximally enhanced at any time. If at some time a different variable begins to be maximally enhanced, the system decides to enhance this different variable by jumping between the two variables. These jumps, or local decisions, may happen only on certain surfaces (jump sets) in the state space, which depend on the balance functions M_i in (1). The geometric structure of the jump sets makes it possible to define a decision scheme for the competitive system. When the decision scheme is globally consistent, i.e. it does not contain recurrent jump cycles, the local decisions eventually terminate and the system is forced to settle into a stationary state corresponding to a global decision. Hence, (1) is convergent.

A more detailed description of this approach can be found in [Grossberg, 1978b, 1980]; see also the review paper [Grossberg, 1988].
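As a concrete illustration of a globally inconsistent scheme, the following minimal Python sketch (our addition, not taken from the paper) integrates the May–Leonard special case of (2) with a cyclic competition matrix B, in the classical voting-paradox regime 0 < α < 1 < β, α + β > 2: the index of the maximally enhanced species keeps rotating, so the local decisions never terminate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Voting-paradox regime of May & Leonard [1975]: 0 < alpha < 1 < beta, alpha + beta > 2.
alpha, beta = 0.8, 1.4
B = np.array([[1.0, alpha, beta],
              [beta, 1.0, alpha],
              [alpha, beta, 1.0]])

def may_leonard(t, x):
    """Volterra-Lotka (2) with cyclic competition: dx_i/dt = x_i * (1 - (B x)_i)."""
    return x * (1.0 - B @ x)

sol = solve_ivp(may_leonard, (0.0, 300.0), [0.20, 0.21, 0.19], max_step=0.05)
winners = np.argmax(1.0 - B @ sol.y, axis=0)   # maximally enhanced species at each time
print(np.unique(winners))   # all of 0, 1, 2 occur: the lead keeps rotating
```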

3. Competitive CNNs

The standard CNNs, introduced by [Chua & Yang, 1988], are governed by the system of nonlinear differential equations

\[
\dot{x} = -Dx + TG(x) + I \tag{N}
\]

where x = (x_i)_{i=1,2,...,n} ∈ R^n is the vector of neuron state variables, D = diag(d_1, d_2, ..., d_n) ∈ R^{n×n}, with d_i > 0, i = 1, 2, ..., n, is a diagonal matrix of neuron self-inhibitions, T = (T_{ij})_{i,j=1,2,...,n} ∈ R^{n×n} is the neuron interconnection matrix, I = (I_i)_{i=1,2,...,n} ∈ R^n is the vector of biasing inputs, and G(x) = (g(x_1), ..., g(x_n))' : R^n → R^n, where the prime means transpose, is a diagonal mapping where

\[
g(\rho) = \frac{1}{2}\big(|\rho + 1| - |\rho - 1|\big) : \mathbb{R} \to \mathbb{R}
\]

is the piecewise-linear neuron activation.

Given any x_0 ∈ R^n, there exists a unique solution x(t) of (N) with initial condition x(0) = x_0, which is bounded and continuously differentiable for t ≥ 0. Furthermore, there is a unique corresponding output solution of (N), given by y(t) = G(x(t)), t ≥ 0, which satisfies

y(t) ∈ K^n = [-1, 1]^n

for t ≥ 0. Since y(t) is the composition of a continuously differentiable function x(t) and a locally Lipschitz function G(x), it follows that y(t) is absolutely continuous on any compact interval in [0, +∞); hence y(t) is differentiable for almost all (a.a.) t ∈ [0, +∞) (in the sense of Lebesgue measure).
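To make the model concrete, here is a minimal simulation sketch (ours, assuming NumPy/SciPy; the names g, cnn_rhs are illustrative). It integrates (N) for the third-order competitive interconnection matrix used later in the example of Sec. 5.2.5.

```python
import numpy as np
from scipy.integrate import solve_ivp

def g(rho):
    """Piecewise-linear activation g(rho) = (|rho + 1| - |rho - 1|) / 2."""
    return 0.5 * (np.abs(rho + 1.0) - np.abs(rho - 1.0))

def cnn_rhs(t, x, D, T, I):
    """Right-hand side of (N): dx/dt = -D x + T G(x) + I."""
    return -D @ x + T @ g(x) + I

# A third-order competitive CNN (off-diagonal entries of T are nonpositive);
# this is the interconnection matrix of system (7) in Sec. 5.2.5.
D = np.eye(3)
T = np.array([[ 1.0, -2.3, -1.0],
              [-3.5,  1.1, -0.5],
              [-0.8, -0.4, -0.5]])
I = np.zeros(3)

x0 = np.array([0.2, -0.4, 0.1])
sol = solve_ivp(cnn_rhs, (0.0, 50.0), x0, args=(D, T, I), max_step=0.01)
y = g(sol.y)   # output solution y(t) = G(x(t)), confined to K^3 = [-1, 1]^3
print("x(50) ≈", np.round(sol.y[:, -1], 3))
```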

By an equilibrium point (EP) of (N), we mean a vector x^e ∈ R^n satisfying the algebraic equation

\[
0 = -Dx^e + TG(x^e) + I.
\]

Note that x^e ∈ R^n is an EP of (N) if and only if x(t) = x^e, t ≥ 0, is a stationary solution of (N).

Definition 1. The CNN (N) is said to be convergent (or completely stable) if and only if, for any solution x(t) of (N), we have lim_{t→+∞} x(t) = x^e, where x^e is an EP of (N).

The next property shows that it actually suffices to study convergence of the output solutions of (N), since this in turn implies convergence of the (state) solutions of (N) as well.

Property 1. Let x(t) be a solution of the CNN (N), and y(t) = G(x(t)) the corresponding output solution. If the limit lim_{t→+∞} y(t) = y(∞) exists, then the limit lim_{t→+∞} x(t) = x(∞) = D^{-1}(Ty(∞) + I) also exists, and x(∞) is an EP of (N).

Proof. We have ẋ(t) = -Dx(t) + f(t), where f(t) = Ty(t) + I → Ty(∞) + I as t → +∞. Since the linear system ẋ = -Dx is exponentially stable, it follows by an argument as in the proof of [Forti & Tesi, 2001, Th. 2] that lim_{t→+∞} x(t) = D^{-1}(Ty(∞) + I). Moreover, x(∞) = D^{-1}(Ty(∞) + I) is necessarily an EP of (N); see e.g. [Hale & Koçak, 1991]. □


Henceforth, we suppose that the neuron interconnection matrix T of (N) satisfies the next hypothesis.

Assumption 1. The CNN (N) is competitive, i.e. we have

T_{ij} ≤ 0 ∀ i ≠ j.

Competitive CNNs are of great importance for applications; see e.g. [Chua & Roska, 1990; Thiran, 1997; Thiran et al., 1998; Shi & Boahen, 2002; Barreto et al., 2005] and references therein. Under the assumption that T is symmetric, it is well known that a competitive CNN admits a global Lyapunov function and turns out to be convergent by the general Lyapunov theory developed in [Chua & Yang, 1988]. On the contrary, nonsymmetric competitive CNNs may exhibit nonconvergent dynamics, including nonvanishing oscillations and chaos. For example, in [Di Marco et al., 2000] a third-order nonsymmetric competitive CNN has been studied which displays Hopf bifurcations originating a large-amplitude and globally attracting stable limit cycle in a wide range of parameters. In [Di Marco et al., 2005], a fourth-order nonsymmetric competitive CNN is presented which exhibits a cascade of period-doubling bifurcations leading to the birth of a complex attractor. Up to now, no general method is available for determining conditions under which a nonsymmetric competitive CNN is convergent.

The goal of this paper is to extend to competitive CNNs (N) the method developed by Grossberg for studying convergence of the class of competitive dynamical systems (1). This extension involves two main steps: (i) first of all, we write the CNN equations with respect to the neuron outputs y_i, and show that in this way we are brought back to a dynamical system that is structurally similar to the class of competitive systems (1) (Sec. 4); (ii) we analyze the convergence properties of the dynamical system satisfied by the CNN outputs, by generalizing the Grossberg approach to this system (Secs. 5–7). We stress that the extended method does not require the existence of a Lyapunov function and, as such, it is applicable to address convergence in the general case where the interconnection matrix T of the competitive CNN is not necessarily symmetric.

We end this section with some notation and properties used throughout the paper. Let Ξ = {ξ = (ξ_1, ξ_2, ..., ξ_n)' ∈ R^n : ξ_i ∈ {-1, 0, 1}, i = 1, 2, ..., n}. As usual, the state space R^n can be subdivided into 3^n subsets

Λ_ξ = {x ∈ R^n : x_i ≤ -1 if ξ_i = -1; x_i ≥ 1 if ξ_i = 1; x_i ∈ (-1, 1) if ξ_i = 0; i = 1, 2, ..., n}

where the right-hand side of (N) is an affine vector field, each subset being identified by a vector ξ ∈ Ξ. Note that Λ_{ξ¹} ∩ Λ_{ξ²} = ∅ for any ξ¹, ξ² ∈ Ξ such that ξ¹ ≠ ξ², and ∪_{ξ∈Ξ} Λ_ξ = R^n. Accordingly, the output space K^n = [-1, 1]^n can also be subdivided into 3^n subsets

Λ°_ξ = G(Λ_ξ) = {y ∈ R^n : y_i = ξ_i if ξ_i ∈ {-1, 1}; y_i ∈ (-1, 1) if ξ_i = 0; i = 1, 2, ..., n}

and we have Λ°_{ξ¹} ∩ Λ°_{ξ²} = ∅ for any ξ¹, ξ² ∈ Ξ with ξ¹ ≠ ξ², and ∪_{ξ∈Ξ} Λ°_ξ = K^n.

Given any ξ ∈ Ξ, we denote by S_ξ = {i ∈ {1, 2, ..., n} : ξ_i ∈ {-1, 1}} the set of indexes of saturated variables of (N) in Λ_ξ, Λ°_ξ, and by L_ξ = {i ∈ {1, 2, ..., n} : ξ_i = 0} the set of indexes of nonsaturated variables in the same subsets.

The following can be proved.

Property 2. Let x(t) be a solution of the CNN (N), and y(t) = G(x(t)) the corresponding output solution. Then, for any t_0 ≥ 0 there exist ξ ∈ Ξ and δ > 0 such that x(t) ∈ Λ_ξ and y(t) ∈ Λ°_ξ for t ∈ (t_0, t_0 + δ).

Proof. The proof consists in verifying that the conditions of [Sussmann, 1982, Theorems I and II] are all satisfied by our dynamical system (N). Indeed, the vector field describing the dynamics of (N) is piecewise affine and is defined on a partition of the state space R^n into subsets verifying the conditions of [Sussmann, 1982, App. IV]. Thus, the claim follows from the fact that the quoted results in [Sussmann, 1982] ensure that any solution x(t) of the CNN (N) has a finite number of switchings between different subsets Λ_ξ in any finite time interval. □

4. Dynamic System for CNN Outputs

As noticed in Sec. 2, the competitive dynamical systems (1) considered by Grossberg are characterized by: (a) functions M_i, i = 1, 2, ..., n, which represent the competitive balance at each neuron state; (b) a non-negative amplification function α, which converts the competitive balance into the growth rate of the state variables, and (c) a subset of R^n where the solutions of (1) are constrained to evolve, which coincides with the positive orthant O_+ = {x ∈ R^n : x_i > 0, i = 1, 2, ..., n}.


The CNN (N) is quite different in structure from (1), and the Grossberg approach is not directly applicable to analyze (N). In particular, a competitive balance and an amplification function with properties analogous to those of model (1) are not identifiable for (N). Moreover, the state space of (N) is the whole space R^n. This notwithstanding, we prove in what follows that it is possible to put the CNN equations in a form structurally analogous to that of system (1), by writing the same equations with respect to the neuron outputs.

Let us consider the following system of differential inclusions

\[
\dot{y} \in H(y)M(y), \quad y \in \mathbb{R}^n \tag{3}
\]

where

\[
M(y) = Ay + I = (-D + T)y + I, \quad y \in \mathbb{R}^n
\]

is the affine vector field satisfied by the CNN (N) in the linear region {y ∈ R^n : |y_i| < 1, i = 1, 2, ..., n}, extended to the whole space R^n. Furthermore, H(y) = diag(h(y_1), ..., h(y_n)) : R^n ⇉ R^n is a diagonal set-valued map, where h(ρ) : R ⇉ R is the non-negative set-valued map defined as

\[
h(\rho) =
\begin{cases}
1, & |\rho| < 1 \\
[0, 1], & |\rho| = 1 \\
0, & |\rho| > 1
\end{cases}
\]

see Fig. 1. Note that under Assumption 1 we have

\[
\frac{\partial M_i}{\partial y_j}(y) = A_{ij} = T_{ij} \le 0 \quad \forall\, i \neq j,\ \forall\, y = (y_i)_{i=1,2,\ldots,n} \in \mathbb{R}^n.
\]

The following holds.

Fig. 1. Set-valued map h.

Property 3. Let y(t), t ≥ 0, be an output solution of the CNN (N). Then y(t), t ≥ 0, is also a solution of the differential inclusion (3).

Before giving the proof, we observe that by writing (3) in components, we obtain

\[
\dot{y}_i \in h(y_i)M_i(y) = h(y_i)\Big(\sum_{k=1}^{n} A_{ik}\,y_k + I_i\Big), \quad i = 1, 2, \ldots, n \tag{4}
\]

for any y = (y_i)_{i=1,2,...,n} ∈ R^n, whose form is analogous to that of the class of competitive systems (1) and, in particular, to the Volterra–Lotka system (2). Indeed, (4) is characterized by: (a) functions M_i, i = 1, 2, ..., n, which are the competitive balance at each CNN neuron output y_i;¹ (b) a non-negative function h, which plays the role of an amplification function converting the competitive balance into the growth rate ẏ_i, and (c) a subset of R^n, namely the hypercube K^n = [-1, 1]^n, where the solutions of (4) starting in K^n are constrained to evolve.

Although (4) is structurally analogous to model (1), there is however a basic difference. The amplification h in (4) is a set-valued map assuming multiple values at the saturation levels ρ = ±1,² while the amplification function α in model (1) is a conventional single-valued function. Consequently, we need to develop ad hoc techniques to analyze the dynamics of the differential inclusion (4), which differ substantially from those used to analyze the ordinary differential equation in model (1).

Proof of Property 3. The output solution y(t) is an absolutely continuous function on every compact interval in [0, +∞), hence y(t) is differentiable for a.a. t ∈ [0, +∞). Property 2 ensures that for any t_0 ∈ [0, +∞) there exist ξ ∈ Ξ and δ > 0 such that x(t) ∈ Λ_ξ and y(t) ∈ Λ°_ξ for all t ∈ (t_0, t_0 + δ).

For any i ∈ L_ξ we have y_i(t) = x_i(t), t ∈ (t_0, t_0 + δ), hence

\[
\dot{y}_i(t) = \dot{x}_i(t) = -d_i x_i(t) + \sum_{k=1}^{n} T_{ik}\,y_k(t) + I_i
= -d_i y_i(t) + \sum_{k=1}^{n} T_{ik}\,y_k(t) + I_i
= \sum_{k=1}^{n} A_{ik}\,y_k(t) + I_i
= h(y_i(t))\Big(\sum_{k=1}^{n} A_{ik}\,y_k(t) + I_i\Big)
= [H(y(t))(Ay(t) + I)]_i
\]

where we have taken into account that, y_i(t) being in (-1, 1), we have h(y_i(t)) = 1. It is clear that if t_0 is an instant at which y(t) is differentiable, then passing to the limit as t → t_0^+ we obtain ẏ_i(t_0) ∈ [H(y(t_0))(Ay(t_0) + I)]_i for any i ∈ L_ξ.

Moreover, for any i ∈ S_ξ we have y_i(t) = ξ_i ∈ {-1, 1}, t ∈ (t_0, t_0 + δ), hence

\[
\dot{y}_i(t) = 0 \in h(y_i(t))\Big(\sum_{k=1}^{n} A_{ik}\,y_k(t) + I_i\Big) = [H(y(t))(Ay(t) + I)]_i
\]

where we have considered that, y_i(t) being in {-1, 1}, we have h(y_i(t)) = [0, 1]. Once more, if t_0 is an instant at which y(t) is differentiable, then passing to the limit as t → t_0^+ we obtain ẏ_i(t_0) ∈ [H(y(t_0))(Ay(t_0) + I)]_i for any i ∈ S_ξ.

Therefore, y(t) is an absolutely continuous function on any compact interval in [0, +∞), and for a.a. t ∈ [0, +∞) we have

\[
\dot{y}(t) \in H(y(t))(Ay(t) + I) = H(y(t))M(y(t)).
\]

Hence, y(t), t ≥ 0, is a solution of the differential inclusion (3) [Aubin & Cellina, 1984]. □

¹ It is of importance to remark that M_i(y) = Σ_{k=1}^{n} A_{ik} y_k + I_i = -d_i y_i + Σ_{k=1}^{n} T_{ik} y_k + I_i coincides with the ith component of the affine system satisfied by the CNN (N) in the linear region {y ∈ R^n : |y_i| < 1, i = 1, 2, ..., n}.
² This agrees with the fact that the velocity ẏ_i(t) of the CNN output solution y(t) is not uniquely defined at the saturation levels y_i(t) = ±1; rather, there are multiple feasible velocities when y_i(t) = ±1.

5. Decision Scheme for Competitive CNNs

We have seen in Property 3 that any output solution of the CNN (N) is also a solution of the differential inclusion (3). In Sec. 5.1 it is shown that (3) enjoys an ignition property analogous to that established by Grossberg for the competitive systems (1) (see Sec. 2). On this basis, in Sec. 5.2 it is shown that we can associate a decision scheme with a competitive CNN.

5.1. Ignition property

The inclusion (3) is defined by means of the affine vector field M(y) = Ay + I satisfied by the CNN in the linear region, whose components represent the competitive balance at each neuron output. By means of M, we define the following relevant functions and subsets of K^n. Let

\[
M^+(y) = \max_{i=1,2,\ldots,n} M_i(y) = \max_{i=1,2,\ldots,n}\Big(\sum_{j=1}^{n} A_{ij}\,y_j + I_i\Big) : K^n \to \mathbb{R}
\]

be the maximal balance function, and

\[
M^-(y) = \min_{i=1,2,\ldots,n} M_i(y) = \min_{i=1,2,\ldots,n}\Big(\sum_{j=1}^{n} A_{ij}\,y_j + I_i\Big) : K^n \to \mathbb{R}
\]

the minimal balance function. Furthermore, consider the subsets of K^n

R^+ = {y ∈ K^n : M^+(y) ≥ 0}
R^- = {y ∈ K^n : M^-(y) ≤ 0}
R^⋆ = R^+ ∩ R^-.
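These objects are straightforward to evaluate numerically. The following short sketch (ours; the function names are illustrative) computes the balances and tests membership in R^⋆ for a given A, I and y:

```python
import numpy as np

def balance(A, I, y):
    """Competitive balance vector M(y) = A y + I."""
    return A @ y + I

def M_plus(A, I, y):
    return balance(A, I, y).max()    # maximal balance M+(y)

def M_minus(A, I, y):
    return balance(A, I, y).min()    # minimal balance M-(y)

def in_R_star(A, I, y):
    """y in R* iff M+(y) >= 0 and M-(y) <= 0 (y in K^n assumed)."""
    return M_plus(A, I, y) >= 0.0 and M_minus(A, I, y) <= 0.0
```

For instance, with the matrix A of the example in Sec. 5.2.5 and I = 0, the origin trivially belongs to R^⋆, since M(0) = I = 0 gives M^+(0) = M^-(0) = 0.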

The following can be proved.

Property 4. If the CNN (N) satisfies Assumption 1, then R^+ is positively invariant for the output solutions of (N). This means that, given any output solution y(t) of (N) such that y(t_+) ∈ R^+ for some t_+ ≥ 0, we have y(t) ∈ R^+ for all t ≥ t_+. The sets R^- and R^⋆ are positively invariant for the output solutions of (N) as well.

Proof. Let x(t), t ≥ 0, be a solution of (N), y(t) = G(x(t)) the corresponding output solution, and suppose that y(t_+) ∈ R^+. Also assume without loss of generality that t_+ = 0, hence M^+(y(0)) ≥ 0. We wish to prove that there exists t̄ > 0 such that M^+(y(t)) ≥ 0, hence y(t) ∈ R^+, for t ∈ [0, t̄). This implies that R^+ is positively invariant for the output solutions of (N).

We begin by noting that if M_i(y(0)) > 0 for some i = 1, 2, ..., n, hence M^+(y(0)) > 0, then by the continuity of M_i(y(t)) it follows that M_i(y(t)) > 0, t ∈ [0, t̄), for some t̄ > 0. This in turn implies that y(t) ∈ R^+, t ∈ [0, t̄). Therefore, let M^+(y(0)) = 0 and thus I_0 = {i = 1, 2, ..., n : M_i(y(0)) = 0} ≠ ∅.

From Property 2, there exist ξ ∈ Ξ and t_ξ > 0 such that for t ∈ (0, t_ξ) we have x(t) ∈ Λ_ξ and y(t) ∈ Λ°_ξ. Arguing as in the proof of Property 3, it follows that for t ∈ (0, t_ξ) we have

\[
\dot{y}_i(t) = M_i(y(t)) = \sum_{j=1}^{n} A_{ij}\,y_j(t) + I_i, \quad i \in L_\xi
\]

and

\[
\dot{y}_i(t) = 0, \quad i \in S_\xi.
\]


For any i ∈ L_ξ and t ∈ (0, t_ξ) we thus obtain Ṁ_i(y(t)) = Σ_{j=1}^{n} A_{ij} ẏ_j(t) = Σ_{j∈L_ξ} A_{ij} ẏ_j(t), hence

\[
\dot{M}_i(y(t)) = \sum_{j \in L_\xi} A_{ij}\,M_j(y(t)), \quad i \in L_\xi. \tag{5}
\]

Similarly, for any i ∈ S_ξ and t ∈ (0, t_ξ) we obtain Ṁ_i(y(t)) = Σ_{j=1}^{n} A_{ij} ẏ_j(t) = Σ_{j∈L_ξ} A_{ij} ẏ_j(t), and

\[
\dot{M}_i(y(t)) = \sum_{j \in L_\xi} A_{ij}\,M_j(y(t)), \quad i \in S_\xi. \tag{6}
\]

Observe that if I_0 = {1, 2, ..., n}, i.e. M_i(y(0)) = 0 for any i = 1, 2, ..., n, then, since the origin is an EP of (5)–(6), we have M_i(y(t)) = 0 for any t ∈ [0, t_ξ) and any i = 1, 2, ..., n. Hence y(t) ∈ R^+ for t ∈ [0, t_ξ). Therefore, assume that I_- = {i = 1, 2, ..., n : M_i(y(0)) < 0} ≠ ∅. The following three cases are possible.

(a) L_ξ ∩ I_- = ∅, i.e. M_i(y(0)) = 0 for all i ∈ L_ξ. Then, since the origin is an EP of (5), we obtain M_i(y(t)) = 0, t ∈ [0, t_ξ), for any i ∈ L_ξ. Hence, y(t) ∈ R^+, t ∈ [0, t_ξ).

(b) L_ξ ∩ I_0 = ∅, i.e. M_i(y(0)) < 0 for all i ∈ L_ξ. Then, S_ξ ∩ I_0 ≠ ∅ and passing to the limit as t → 0^+ in (6) we obtain

\[
\delta_i = \frac{d}{dt} M_i(y(t)) \Big|_{t=0^+} = \sum_{j \in L_\xi} A_{ij}\,M_j(y(0)), \quad i \in S_\xi \cap I_0.
\]

Suppose that there exist i ∈ S_ξ ∩ I_0 and j ∈ L_ξ such that A_{ij} ≠ 0. Since i ≠ j, we have A_{ij} < 0. Then δ_i > 0 and, since M_i(y(0)) = 0, we obtain M_i(y(t)) > 0, t ∈ (0, t̄), and y(t) ∈ R^+, t ∈ [0, t̄), for some t̄ > 0.

Otherwise, if for any i ∈ S_ξ ∩ I_0 and j ∈ L_ξ we have A_{ij} = 0, it turns out that for t ∈ (0, t_ξ)

\[
\dot{M}_i(y(t)) = 0, \quad i \in S_\xi \cap I_0.
\]

Then, since M_i(y(0)) = 0, i ∈ S_ξ ∩ I_0, we have M_i(y(t)) = 0 and y(t) ∈ R^+, for t ∈ [0, t_ξ).

(c) L_ξ ∩ I_0 ≠ ∅ and L_ξ ∩ I_- ≠ ∅. From (5) we obtain that for t ∈ (0, t_ξ) we have Ṁ_i(y(t)) = Σ_{j∈L_ξ} A_{ij} M_j(y(t)) for i ∈ L_ξ ∩ I_0; hence, passing to the limit as t → 0^+,

\[
\eta_i = \frac{d}{dt} M_i(y(t)) \Big|_{t=0^+} = \sum_{j \in L_\xi \cap I_-} A_{ij}\,M_j(y(0)), \quad i \in L_\xi \cap I_0.
\]

Suppose that there exist i ∈ L_ξ ∩ I_0 and j ∈ L_ξ ∩ I_- such that A_{ij} ≠ 0. Since i ≠ j, we have A_{ij} < 0. Then η_i > 0 and, since M_i(y(0)) = 0, we obtain M_i(y(t)) > 0, t ∈ (0, t̄), and y(t) ∈ R^+, t ∈ [0, t̄), for some t̄ > 0.

Otherwise, if for any i ∈ L_ξ ∩ I_0 and j ∈ L_ξ ∩ I_- we have A_{ij} = 0, then for t ∈ (0, t_ξ)

\[
\dot{M}_i(y(t)) = \sum_{j \in L_\xi \cap I_0} A_{ij}\,M_j(y(t)), \quad i \in L_\xi \cap I_0.
\]

Since M_i(y(0)) = 0, i ∈ L_ξ ∩ I_0, we then obtain M_i(y(t)) = 0, i ∈ L_ξ ∩ I_0, and y(t) ∈ R^+ for t ∈ [0, t_ξ).

This concludes the proof that R^+ is positively invariant for the output solutions of (N). An analogous proof can be repeated to show that R^- is positively invariant for the output solutions of (N). Then, the same invariance property holds for R^⋆ = R^+ ∩ R^- as well. □

Property 4 represents an ignition property for (3) and the competitive CNN (N). To see this, first note that, since h is a map assuming non-negative values, on the basis of (4) M_i(y(t)) and ẏ_i(t) have the same sign. Suppose that at some instant t_+ the output of neuron i is enhanced, i.e. dy_i(t)/dt|_{t=t_+} ≥ 0 and hence M^+(y(t_+)) ≥ 0 (y(t_+) ∈ R^+). Property 4 implies that M^+(y(t)) ≥ 0 for all t ≥ t_+, and then for a.a. t ≥ t_+ there is at least one neuron j = j(t) ∈ {1, 2, ..., n}, the index j depending on t, such that ẏ_{j(t)}(t) ≥ 0. This means that at least one neuron output is enhanced at any time t ≥ t_+. In other words, once the competition between neurons starts, it thereafter never turns off. An analogous ignition property holds with respect to the negative balance M^- and R^-.

We also have the following.

Property 5. Suppose that the CNN (N) satisfies Assumption 1, and let y(t) be an output solution of (N) such that y(t) ∉ R^⋆ for all t ≥ 0. Then, all outputs y_i(t), i = 1, 2, ..., n, are either monotonically increasing or monotonically decreasing for t ≥ 0; hence the limit

lim_{t→+∞} y(t) = y(∞)

exists.

Proof. The output solution y(t) is differentiable for a.a. t ≥ 0 and, considering that h is a map assuming non-negative values, it follows that M_i(y(t)) and ẏ_i(t) have the same sign for a.a. t ≥ 0 (see (4)). Since y(t) ∉ R^⋆ for any t ≥ 0, we thus have that for a.a. t ≥ 0 either ẏ_i(t) ≤ 0, i = 1, 2, ..., n, or ẏ_i(t) ≥ 0, i = 1, 2, ..., n. In either case the components of y(t), t ≥ 0, are all monotone increasing or all monotone decreasing, and so lim_{t→+∞} y(t) exists. □

Property 5 implies that interesting dynamics for (N), where the neuron outputs are not necessarily monotone increasing or decreasing functions, occur only within R^⋆. In what follows we thus restrict our analysis to output solutions y(t) of (N) that hit R^⋆ at some finite instant t_⋆ and are such that y(t) ∈ R^⋆ for all t ≥ t_⋆ (see Property 4).

5.2. Jumps, jump sets and decision scheme

Let y(t) be an output solution of the competitive CNN (N), such that y(t) ∈ R^⋆ for t ≥ t_⋆. In analogy with the Grossberg approach delineated in Sec. 2, we wish to analyze the dynamical behavior of y(t) in R^⋆ by keeping track of which neuron is winning the competition, i.e. by tracking the index w = w(t) ∈ {1, 2, ..., n}, in general depending on t, such that M_{w(t)}(y(t)) = M^+(y(t)). To this end, we will follow the jumps of y(t) between the subsets

R^+_i = {y ∈ R^⋆ : M_i(y) = M^+(y)}, i = 1, 2, ..., n.

Note that R^+_i is the subset of R^⋆ where the competitive balance at neuron i is maximal; we call it the winning subset of the ith neuron.

5.2.1. Jumps

Definition 2. We say that an output solution y(t) of (N) makes no jump (between subsets R^+_i) in the time interval (t_a, t_b) ⊂ (t_⋆, +∞), if and only if there exists an index w ∈ {1, 2, ..., n} such that M_w(y(t)) = M^+(y(t)) for all t ∈ (t_a, t_b).

Let

t_1 = sup{τ > t_⋆ : y(t) makes no jump in (t_⋆, τ)}.

From Property 11 in Appendix A, it follows that t_⋆ < t_1 ≤ +∞. Moreover, let w(1) ∈ {1, 2, ..., n} be such that M_{w(1)}(y(t)) = M^+(y(t)) for all t ∈ (t_⋆, t_1).

If t_1 = +∞, then w(1) is the winning neuron for all t ≥ t_⋆. Otherwise, if t_1 < +∞ we let

t_2 = sup{τ > t_1 : y(t) makes no jump in (t_1, τ)}

where t_1 < t_2 ≤ +∞. Also, let w(2) ∈ {1, 2, ..., n} be such that M_{w(2)}(y(t)) = M^+(y(t)) for all t ∈ (t_1, t_2). Of course, we have w(2) ≠ w(1). If the case t_1 < +∞ occurs, we will say that y(t) jumps from the winning subset R^+_{w(1)} to the winning subset R^+_{w(2)} at the instant t_1.

If t_2 = +∞, then w(2) is the winning neuron for all t ≥ t_1. Otherwise, if t_2 < +∞ we let

t_3 = sup{τ > t_2 : y(t) makes no jump in (t_2, τ)}

where t_2 < t_3 ≤ +∞. Moreover, let w(3) ∈ {1, 2, ..., n} be such that M_{w(3)}(y(t)) = M^+(y(t)) for all t ∈ (t_2, t_3). We have w(3) ≠ w(2). If t_2 < +∞, we will say that y(t) jumps from R^+_{w(2)} to R^+_{w(3)} at the instant t_2.

Proceeding in this way, we can construct a sequence of instants t_⋆ < t_1 < t_2 < t_3 < ⋯, and a corresponding sequence of indexes w(1), w(2), w(3), ..., in the set {1, 2, ..., n}, such that y(t) does not jump in the intervals (t_{k-1}, t_k), k = 1, 2, ... (we have let t_0 = t_⋆), whereas y(t) jumps from R^+_{w(k)} to R^+_{w(k+1)} at the instants t_k, k = 1, 2, ....

Such a sequence of jumps may be finite or infinite. If it is finite, then there exist an index w ∈ {1, 2, ..., n} and an instant t_w < +∞, such that M_w(y(t)) = M^+(y(t)) for all t ≥ t_w, namely y_w is the eventual winning neuron.

5.2.2. Jump sets

Suppose that y(t) jumps from R^+_i to R^+_j, j ≠ i, at the instant t = t_j. We equivalently say that the CNN takes a local decision, where the CNN decides to maximally enhance neuron j instead of neuron i at t = t_j. Of course, the jump can only occur on the positive jump set between the winning subsets R^+_i and R^+_j, which is given by

J^+_{ij} = R^+_i ∩ R^+_j = {y ∈ R^⋆ : M_i(y) = M_j(y) = M^+(y)}.

Definition 3. We say that the positive jump set J^+_{ij}, j ≠ i, is crossable from i to j, if and only if there exists at least one solution y(t) of the CNN (N) such that y(t) jumps from R^+_i to R^+_j on J^+_{ij} at some instant t = t_j. Otherwise, we say that the jump set J^+_{ij} is noncrossable from i to j.

Clearly, if J^+_{ij} = ∅, then J^+_{ij} is noncrossable both from i to j and from j to i.

5.2.3. Decision scheme

It is possible to associate with the competitive CNN (N) a positive directed decision graph G^+. First,


Fig. 2. (a) Planar surfaces π_i, i = 1, 2, 3; (b) set R^+, obtained by subtracting from K^3 = [-1, 1]^3 the polyhedral set delimited by the red planar surfaces; (c) set R^-, obtained by subtracting from K^3 the polyhedral set delimited by the green planar surfaces, and (d) set R^⋆ = R^+ ∩ R^-.


Fig. 3. (a) Positive jump sets J^+_{12}, J^+_{13} and J^+_{23}; (b) negative jump sets J^-_{12}, J^-_{13} and J^-_{23}; (c) positive jump sets together with the boundary of R^⋆, and (d) a view of the same sets with a different orientation.


consider a directed graph specified by the set of nodes {1, 2, ..., n}, each node corresponding to a neuron, which is fully connected. Then, we construct a (reduced) positive directed decision graph G^+ as follows: for each i, j ∈ {1, 2, ..., n}, with j ≠ i, we remove the branch oriented from node i to node j if and only if J^+_{ij} is noncrossable from i to j. The graph G^+ is identified with the positive decision scheme induced by (N).

Definition 4. Suppose that the CNN (N) satisfies Assumption 1. The positive directed decision graph G^+ associated with (N) is said to be globally consistent if and only if G^+ is acyclic, i.e. G^+ has no directed jump cycle.
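Checking global consistency of a given decision graph thus reduces to a standard acyclicity test. The sketch below (our addition; not from the paper) encodes the surviving branches as an adjacency list and detects directed cycles by depth-first search.

```python
def is_globally_consistent(n, branches):
    """Acyclicity test for a directed decision graph on nodes 0..n-1.

    branches: iterable of (i, j) pairs, one for each surviving branch i -> j
    (i.e. each jump set J+_ij that is crossable from i to j).
    """
    adj = [[] for _ in range(n)]
    for i, j in branches:
        adj[i].append(j)

    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on the DFS stack / done
    color = [WHITE] * n

    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:          # back edge: directed jump cycle
                return False
            if color[v] == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True

    return all(dfs(u) for u in range(n) if color[u] == WHITE)

# Scheme of the example in Sec. 5.2.5 (2 -> 1, 3 -> 1, 3 -> 2), 0-based: acyclic.
print(is_globally_consistent(3, [(1, 0), (2, 0), (2, 1)]))   # True
# Cyclic scheme 1 -> 2, 2 -> 3, 3 -> 1 of Sec. 7.2: not globally consistent.
print(is_globally_consistent(3, [(0, 1), (1, 2), (2, 0)]))   # False
```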

5.2.4. Negative decision scheme

If y(t) is an output solution of the competitive CNN (N) such that y(t) ∈ R^⋆ for all t ≥ t_⋆, we can also consider for t ≥ t_⋆ the index ℓ = ℓ(t) ∈ {1, 2, ..., n}, in general depending on t, such that M_{ℓ(t)}(y(t)) = M^-(y(t)), i.e. the index of the (maximally) losing neuron. Then, it is possible to track which neuron is losing the competition, by following the jumps of y(t) between the subsets

R^-_i = {y ∈ R^⋆ : M_i(y) = M^-(y)}, i = 1, 2, ..., n.

We can immediately generalize the previous Definitions 3 and 4 to the jumps of y(t) between subsets R^-_j, i.e. introduce the concepts of crossable/noncrossable negative jump set J^-_{ij}, where

J^-_{ij} = R^-_i ∩ R^-_j = {y ∈ R^⋆ : M_i(y) = M_j(y) = M^-(y)}

and of globally consistent negative directed decision graph G^-. These are obvious extensions of Definitions 3 and 4, and are not given here explicitly for brevity.

5.2.5. An example

Consider the third-order nonsymmetric CNN

\[
\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix}
= -\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
+ \begin{pmatrix} 1 & -2.3 & -1 \\ -3.5 & 1.1 & -0.5 \\ -0.8 & -0.4 & -0.5 \end{pmatrix}
\begin{pmatrix} g(x_1) \\ g(x_2) \\ g(x_3) \end{pmatrix}
\tag{7}
\]

which is such that

\[
A = -D + T = \begin{pmatrix} 0 & -2.3 & -1 \\ -3.5 & 0.1 & -0.5 \\ -0.8 & -0.4 & -1.5 \end{pmatrix}.
\]

Figure 2(a) depicts the three planar surfaces π_i = {y = (y_1, y_2, y_3)' ∈ K^3 : M_i(y) = (Ay)_i = 0}, i = 1, 2, 3, where each competitive balance M_i(y) vanishes, while Figs. 2(b)–2(d) show the three sets R^+, R^- and R^⋆. Note that the boundary of each set is obtained as the union of suitable portions of the planes π_i.

Figure 3(a) then shows the positive jump sets J^+_{12}, J^+_{13} and J^+_{23}, which partition R^⋆ into the three winning subsets R^+_1, R^+_2 and R^+_3, while Fig. 3(b) shows the negative jump sets J^-_{12}, J^-_{13} and J^-_{23}. Finally, in Fig. 3(c) we have depicted the positive jump sets together with the boundary of R^⋆, while Fig. 3(d) is a view of Fig. 3(c) with a different orientation, for better visibility of the location of the positive jump sets.

It can be verified that (7) induces the globally consistent positive and negative decision schemes

G^+ = G^- : 2 → 1, 3 → 1, 3 → 2. □
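As an informal numerical cross-check (our sketch, not part of the paper), one can integrate (7) from random initial conditions and record the jumps of the winning index argmax_i M_i(y(t)) while the output lies in R^⋆; apart from transients before the solution reaches R^⋆, the recorded jumps should only follow the branches 2 → 1, 3 → 1, 3 → 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = np.array([[ 1.0, -2.3, -1.0],
              [-3.5,  1.1, -0.5],
              [-0.8, -0.4, -0.5]])
A = T - np.eye(3)            # A = -D + T with D = identity, as in (7)
I = np.zeros(3)
g = lambda x: 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

rng = np.random.default_rng(0)
jumps = set()
for _ in range(20):
    sol = solve_ivp(lambda t, x: -x + T @ g(x) + I, (0.0, 40.0),
                    rng.uniform(-1.0, 1.0, 3), max_step=0.005)
    M = A @ g(sol.y) + I[:, None]                  # balances M(y(t)) along the run
    in_star = (M.max(axis=0) >= 0) & (M.min(axis=0) <= 0)
    w = np.argmax(M, axis=0) + 1                   # winning index, 1-based
    for k in np.flatnonzero(np.diff(w) != 0):
        if in_star[k] and in_star[k + 1]:
            jumps.add((w[k], w[k + 1]))
print(sorted(jumps))   # expected to stay within {(2, 1), (3, 1), (3, 2)}
```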

6. General Result for a Globally Consistent Decision Scheme

In the remaining part of this paper, we analyze the convergence properties of the competitive CNN (N) implied by a globally consistent decision scheme. First of all, in this section we establish a general result on the asymptotic behavior of the output solutions of (N) (Theorem 1); then, in Sec. 7, we exploit Theorem 1 to address convergence of CNNs with two and three neurons, and with more than three neurons.

When the positive decision scheme associated with the competitive CNN (N) is globally consistent, the next general result holds.

Theorem 1. Suppose that the CNN (N) satisfies Assumption 1, and that the positive directed decision graph G^+ associated with the CNN is globally consistent. Let y(t), t ≥ 0, be an output solution of (N) such that y(t) ∈ R^⋆ for all t ≥ t_⋆. Then, the following hold.

(a) The solution y(t) undergoes at most n - 1 jumps between subsets R^+_j for t ≥ t_⋆; hence there exist an index w ∈ {1, 2, ..., n} and t_w > t_⋆, such that y(t) ∈ R^+_w, t ≥ t_w (neuron w is the eventual winning neuron). Furthermore, the limit

lim_{t→+∞} y_w(t) = y_w(∞) ∈ [-1, 1]

exists.

(b) If we have

-1 < y_w(t) < 1, t ≥ t_w

then the limit

lim_{t→+∞} y(t) = y(∞) ∈ K^n

exists. Otherwise, we have

y_w(t) = 1 or y_w(t) = -1, t ≥ t_{w1}

for some t_{w1} ≥ t_w.

Proof. Let y(t), t ≥ 0, be an output solution of (N) such that y(t) ∈ R^⋆ for t ≥ t_⋆. By Property 11 in Appendix A and the definition of jump in Sec. 5.2, it is easily seen that there exists a sequence of instants t_⋆ = t_0 < t_1 < t_2 < ⋯ < t_m < ⋯ with the following properties:

(i) t ∈ [t_{i-1}, t_i] implies y(t) ∈ R^+_{h(i)} for some h(i) ∈ {1, 2, ..., n}, i = 1, 2, ..., m, ...;
(ii) y(t) jumps from R^+_{h(i)} to R^+_{h(i+1)}, where h(i) ≠ h(i+1), at t = t_i, i = 1, 2, ..., m, ....

Since y(t) can jump from R^+_i to R^+_j, i ≠ j, if and only if G^+ has a branch directed from i to j, and G^+ has no directed jump cycles, it follows that the previous sequence is finite and that y(t) makes at most n - 1 jumps. As a consequence, there exist an index w ∈ {1, 2, ..., n} and t_w > t_⋆, such that y(t) ∈ R^+_w for t ≥ t_w, and y(t) ∉ R^+_w for t < t_w. Moreover, since h is a map assuming non-negative values, by (4) we have ẏ_w(t) ≥ 0 for a.a. t ≥ t_w; thus y_w is monotone nondecreasing for t ≥ t_w and the limit lim_{t→+∞} y_w(t) = y_w(∞) ∈ [-1, 1] exists.

Now, there are two possibilities: either

-1 < y_w(t) < 1, t ≥ t_w

or there exists t_{w1} ≥ t_w such that y_w(t) = 1 or y_w(t) = -1 for t ≥ t_{w1}, and -1 < y_w(t) < 1 for t ∈ [t_w, t_{w1}).

Assume that -1 < y_w(t) < 1 for t ≥ t_w, and let i ∈ {1, 2, ..., n} with i ≠ w. We have

\[
\dot{y}_i(t) = h(y_i(t))\,M_i(y(t)) \le M^+(y(t)) = \dot{y}_w(t) \tag{8}
\]

for a.a. t ≥ t_w. Since y_i is an absolutely continuous function on any interval [t_w, τ], we can define [Royden, 1988]

\[
\mathcal{P}^{\tau}_{t_w}(y_i) = \int_{t_w}^{\tau} \dot{y}_i^{+}(t)\,dt
\quad \text{and} \quad
\mathcal{N}^{\tau}_{t_w}(y_i) = \int_{t_w}^{\tau} \dot{y}_i^{-}(t)\,dt
\]

where ẏ_i^±(t) = max{0, ±ẏ_i(t)} ≥ 0 for a.a. t ∈ [t_w, τ], and P^τ_{t_w}(y_i), N^τ_{t_w}(y_i) denote the positive and negative variations of y_i in [t_w, τ], respectively. The functions P^τ_{t_w}(y_i), N^τ_{t_w}(y_i) are non-negative and nondecreasing for τ ≥ t_w and, since ẏ_i(t) = ẏ_i^+(t) - ẏ_i^-(t) for a.a. t ≥ t_w, we have

\[
y_i(\tau) - y_i(t_w) = \mathcal{P}^{\tau}_{t_w}(y_i) - \mathcal{N}^{\tau}_{t_w}(y_i), \quad \tau \ge t_w. \tag{9}
\]

By (8), it turns out that for a.a. t ≥ t_w we have ẏ_i^+(t) ≤ ẏ_w(t). Hence, considering that y_w(t) ∈ [-1, 1] is monotone nondecreasing for any t ≥ t_w, it follows that P^τ_{t_w}(y_i) ≤ 2 for all τ ≥ t_w. Furthermore, since y_i(t) ∈ [-1, 1] for any t ≥ t_w, (9) implies that we also have N^τ_{t_w}(y_i) ≤ 4 for all τ > t_w. Therefore, we have proved the existence of the limit

\[
\lim_{\tau\to+\infty} y_i(\tau) = y_i(t_w) + \lim_{\tau\to+\infty}\mathcal{P}^{\tau}_{t_w}(y_i) - \lim_{\tau\to+\infty}\mathcal{N}^{\tau}_{t_w}(y_i). \qquad \square
\]

An analogous result holds with respect to a negative decision scheme.

Theorem 2. Suppose that the CNN (N) satisfies Assumption 1, and that the negative directed decision graph G^- associated with the CNN is globally consistent. Let y(t), t ≥ 0, be an output solution of (N) such that y(t) ∈ R^⋆ for all t ≥ t_⋆. Then, the following hold.

(a) The solution y(t) undergoes at most n - 1 jumps between subsets R^-_j; hence there exist an index ℓ ∈ {1, 2, ..., n} and t_ℓ > t_⋆, such that y(t) ∈ R^-_ℓ, t ≥ t_ℓ (neuron ℓ is the eventual losing neuron). Furthermore, the limit

lim_{t→+∞} y_ℓ(t) = y_ℓ(∞) ∈ [-1, 1]

exists.

(b) If we have

-1 < y_ℓ(t) < 1, t ≥ t_ℓ

then the limit

lim_{t→+∞} y(t) = y(∞) ∈ K^n

exists. Otherwise, we have

y_ℓ(t) = 1 or y_ℓ(t) = -1, t ≥ t_{ℓ1}

for some t_{ℓ1} ≥ t_ℓ.


Theorem 1 simply states that, when the competitive CNN (N) induces a globally consistent positive decision scheme, at least one component y_w(t) of each output solution y(t) of the CNN is convergent.³ We stress, however, that convergence of y(t) is in general not guaranteed (see also Sec. 7.3). We also remark that, on the basis of Theorem 1, convergence of y(t) fails to be guaranteed only for those solutions y(t) such that y_w(t) = 1, or y_w(t) = -1, for all large t (the output of the winning neuron w is eventually saturated). An analogous interpretation holds for Theorem 2.

7. Convergence Results

7.1. n = 2

The next theorem shows that competitive CNNs with two neurons always induce a globally consistent decision scheme and enjoy strong convergence properties.

Theorem 3. Consider a second-order CNN (N) satisfying Assumption 1. Then, J^+_{12} and J^-_{12} are noncrossable from 1 to 2 and from 2 to 1; hence the positive directed decision graph G^+ and the negative directed decision graph G^- are globally consistent. Moreover, (N) is convergent.

Proof. Consider the directed decision graph G^+. If J^+_{12} = ∅, then by Definition 3 it follows that J^+_{12} is noncrossable from 1 to 2 and from 2 to 1. Assume that J^+_{12} ≠ ∅ and ȳ ∈ J^+_{12}, hence M_1(ȳ) = M_2(ȳ) ≥ 0. On the other hand, since ȳ ∈ R^⋆ we also have M_k(ȳ) ≤ 0 for an index k ∈ {1, 2}. Thus M_1(ȳ) = M_2(ȳ) = 0, and so ȳ is an EP of (N). Therefore, if y(t̄) = ȳ for some t̄ ≥ 0, then y(t) = ȳ for any t ≥ t̄, and by Definition 3 it follows once more that J^+_{12} is noncrossable from 1 to 2 and from 2 to 1. In conclusion, G^+ is globally consistent. Clearly, the same arguments apply to G^- as well.

Let y(t), t ≥ 0, be an output solution of (N). If y(t) ∈ R^⋆ for t sufficiently large, then Theorems 1 and 2 apply to conclude the convergence of y(t) = (y_w(t), y_ℓ(t))' to y(∞) = (y_w(∞), y_ℓ(∞))' (without loss of generality we have let w = 1, ℓ = 2). On the other hand, if y(t) ∉ R^⋆ for any t ≥ 0 (recall that R^⋆ is a positively invariant set), then Property 5 applies to conclude the convergence of y(t). Therefore, (N) is convergent by Property 1. □

Theorem 3 highlights that second-order competitive CNNs are characterized by a strong geometric structure of the jump sets, which forbids the CNN to make any local decision (jump). This in turn implies that such CNNs are always convergent. The convergence result in Theorem 3 is in agreement with that obtainable on the basis of the Lyapunov method [Chua & Yang, 1988], since the interconnection matrix of a nonsymmetric second-order competitive CNN can in general be symmetrized by right-multiplying it by a diagonal, positive definite matrix. We have preferred to explicitly give a convergence proof for two-dimensional competitive CNNs by means of the method proposed in this paper, since we are interested not only in proving convergence when n = 2, but also in understanding how convergence is implied by the geometric structure of decisions of the competitive CNN. Such an interpretation is not possible by exploiting the Lyapunov approach.

7.2. n = 3

Consider a third-order competitive CNN. Differently from second-order competitive CNNs, a third-order CNN in general does not have a globally consistent decision scheme (G^+ or G^- is in general not acyclic), and it may be nonconvergent. An example of this kind is given next.

Example. In [Di Marco et al., 2000], the following third-order competitive CNN has been considered

\[
\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix}
= -\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
+ \begin{pmatrix} 0 & -\alpha & -\beta \\ -\beta & 0 & -\alpha \\ -\alpha & -\beta & 0 \end{pmatrix}
\begin{pmatrix} g(x_1) \\ g(x_2) \\ g(x_3) \end{pmatrix}
\tag{10}
\]

where α, β > 0. It can be verified that when the parameters α, β belong to the open region

\[
R_{\mathrm{osc}} = \left\{ (\alpha, \beta) : \alpha + \beta > 2;\ \alpha < \frac{\beta^2 + 1}{\beta + 1};\ \beta < \frac{\alpha^2 + 1}{\alpha + 1} \right\} \tag{11}
\]

then (10) induces the positive directed decision scheme G^+ : 1 → 2, 2 → 3, and 3 → 1, which has a directed jump cycle. It is shown in [Di Marco et al., 2000] that for these parameters the solutions of (10) display large-size nonvanishing oscillations as t → +∞.

³ The index w of the eventual winning neuron depends on the output solution y(t) of (N).
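A quick numerical illustration of the nonconvergent behavior follows (our sketch, not from [Di Marco et al., 2000]; the pair α = 1, β = 2 is our illustrative choice, for which one can check that the origin is the only equilibrium point of (10) and the Jacobian in the linear region has unstable complex eigenvalues 0.5 ± i√3/2, so bounded solutions cannot converge).

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 2.0   # illustrative pair (our choice), alpha + beta > 2
T = np.array([[ 0.0,  -alpha, -beta],
              [-beta,  0.0,  -alpha],
              [-alpha, -beta,  0.0]])
g = lambda x: 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

sol = solve_ivp(lambda t, x: -x + T @ g(x), (0.0, 200.0),
                [0.10, -0.05, 0.02], max_step=0.01)
tail = sol.y[:, sol.t > 150.0]                    # discard the transient
print("late-time amplitude of x_1:", tail[0].max() - tail[0].min())
# A non-negligible amplitude signals a nonvanishing oscillation; tracking
# argmax_i M_i(y(t)) shows the winner cycling through the three neurons.
```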


The next basic result shows that, when a third-order CNN has a globally consistent decision scheme, it also enjoys the property of convergence.

Theorem 4. Consider a third-order CNN (N) satisfying Assumption 1, and assume that the positive directed decision graph G^+, or the negative directed decision graph G^-, is globally consistent. Then, (N) is convergent.

Proof. Let y(t), t ≥ 0, be an output solution of (N). If y(t) ∉ R^⋆ for any t ≥ 0, then by Property 5 the solution y(t) is convergent. Therefore, assume that y(t) ∈ R^⋆ for any t ≥ t_⋆, for some t_⋆ ≥ 0, and that G^+ is globally consistent. By Theorem 1 we conclude again the convergence of y(t) if y_w(t) ∈ (-1, 1) for t ≥ t_w, for some t_w ≥ t_⋆. On the other hand, if y_w(t) ∈ {-1, 1} for t ≥ t_{w1} ≥ t_⋆, then the two variables x_i(t), i = 1, 2, 3, i ≠ w, are solutions of the following reduced second-order competitive CNN

\[
\dot{x}_i = -d_i x_i + \sum_{j \neq w} T_{ij}\,g(x_j) + T_{iw}\,y_w(\infty) + I_i, \quad i \in \{1, 2, 3\}\setminus\{w\}
\]

where y_w(∞) = y_w(t_{w1}) ∈ {-1, 1}, which is convergent by Theorem 3. Clearly, the same conclusion holds in the case where G^- is globally consistent. □

Roughly speaking, the proof of Theorem 4 relies on the fact that, on the basis of Theorem 1, any solution of (N) is either convergent or the nonwinning variables are eventually a solution of a reduced second-order competitive CNN, which is convergent by Theorem 3.

In Sec. 8, we further study the case of three-dimensional competitive CNNs and, exploiting Theorem 4, analytically characterize a number of classes of convergent CNNs inducing a globally consistent decision scheme.

7.3. n ≥ 4

Consider now a fourth-order competitive CNN. We begin by showing, with the next example, that differently from the three-dimensional and two-dimensional cases, when n = 4 the hypothesis of a globally consistent decision scheme as in Definition 4 does not necessarily guarantee convergence. An analogous conclusion holds for any n ≥ 4.

We stress that this is a crucial difference with respect to previous results by Grossberg for a different class of competitive systems (see Sec. 2), where global consistency of the decision scheme always implies convergence, no matter how large the dimension of those systems is.

Example. Consider the fourth-order competitive CNN

\[
\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{pmatrix}
= -\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
+ \begin{pmatrix}
0 & -\alpha & -\beta & -1 \\
-\beta & 0 & -\alpha & -1 \\
-\alpha & -\beta & 0 & -1 \\
-1 & -1 & -1 & 0
\end{pmatrix}
\begin{pmatrix} g(x_1) \\ g(x_2) \\ g(x_3) \\ g(x_4) \end{pmatrix}
+ \begin{pmatrix} 1 \\ 1 \\ 1 \\ I_4 \end{pmatrix}
\tag{12}
\]

and suppose that the parameters α, β > 0 belong to the region R_osc in (11) (see the example in Sec. 7.2), while I_4 is a sufficiently large positive biasing input such that

\[
I_4 > 7 + \alpha + \beta. \tag{13}
\]

Under these assumptions, it can immediately be verified that we have

M^+(y) = M_4(y); M_i(y) < M^+(y), i = 1, 2, 3

for all y = (y_1, y_2, y_3, y_4)' ∈ K^4. Hence,

J^+_{ij} = ∅, i, j = 1, 2, 3, 4; i ≠ j

and thus the fourth-order competitive CNN (12) obviously has a globally consistent positive decision scheme.

Accounting for (13), it easily follows that the index of the winning neuron is w(t) = 4 for all t ≥ t_⋆ and that y_4(t) = 1 for all large t ≥ t̄, in accordance with Theorem 1. Therefore, for all t ≥ t̄ the state variables x_1(t), x_2(t) and x_3(t) obey the reduced third-order system (10) considered in the example of Sec. 7.2. We have seen in that example that when (α, β) ∈ R_osc the third-order system (10) induces a decision scheme which is not globally consistent; moreover, x_1(t), x_2(t) and x_3(t) display nonvanishing oscillations for a.a. initial conditions. Hence, the fourth-order CNN (12) is not convergent.
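The claim M^+(y) = M_4(y) on all of K^4 can be verified mechanically: each difference M_4 - M_i is affine in y, so its minimum over the hypercube is attained at a vertex. The following sketch (ours; the parameter values are illustrative) checks all 16 vertices.

```python
import itertools
import numpy as np

alpha, beta = 1.0, 2.0
I4 = 7.0 + alpha + beta + 0.1        # any I4 > 7 + alpha + beta, cf. (13)
T = np.array([[ 0.0,  -alpha, -beta, -1.0],
              [-beta,  0.0,  -alpha, -1.0],
              [-alpha, -beta,  0.0,  -1.0],
              [-1.0,  -1.0,  -1.0,   0.0]])
A = T - np.eye(4)                    # A = -D + T, D = identity
I = np.array([1.0, 1.0, 1.0, I4])

# Each M_4 - M_i is affine in y, so its minimum over K^4 = [-1, 1]^4 is
# attained at a vertex; checking all 16 vertices proves M_4 > M_i on K^4.
margin = min((M := A @ np.array(v) + I)[3] - M[:3].max()
             for v in itertools.product([-1.0, 1.0], repeat=4))
print("min over vertices of (M_4 - max_i M_i) =", round(margin, 2), "> 0")
```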

The example shows that it may be possible that a fourth-order competitive CNN has a globally consistent decision scheme, according to Definition 4, while the reduced third-order system satisfied for all large times by the three nonwinning variables is


not globally consistent, since its decision graph displays a directed jump cycle. In turn, this leads to nonvanishing oscillations of the fourth-order CNN.

Based on these considerations, it follows that to guarantee convergence of competitive CNNs with n ≥ 4, we need a stronger notion of global consistency of decisions with respect to that in Definition 4. This notion is made precise in the next definition.

Definition 5. Suppose that n ≥ 4 and that the CNN (N) satisfies Assumption 1. The CNN (N) is said to induce a strongly globally consistent decision scheme if and only if the following hold: (a) the positive directed decision graph G^+ associated with (N) is globally consistent, and (b) each reduced-order competitive CNN

\[
\dot{x}_i = -d_i x_i + \sum_{j \in Q} T_{ij}\,g(x_j) + \sum_{j \in P} T_{ij}\,\xi_j + I_i, \quad i \in Q
\]

where ξ_j = 1 or ξ_j = -1, ∅ ≠ P ⊂ {1, 2, ..., n} with n - card(P) ≥ 3, and Q = {1, 2, ..., n} \ P, has the property that its positive directed decision graph G^+_Q is globally consistent.

An analogous definition of strong global consistency can be given with respect to the negative directed decision graph G^-.

The following convergence theorem holds.

Theorem 5. Consider a competitive CNN (N) with n ≥ 4. If the CNN induces a strongly globally consistent positive (or negative) decision scheme, then (N) is convergent.

Proof. We give a proof for a positive decision scheme; the proof is quite analogous for a negative decision scheme. Let y(t) be an output solution of (N). If y(t) ∉ R^⋆ for any t ≥ 0, then by Property 5 the solution y(t) turns out to be convergent. Therefore, assume that y(t) ∈ R^⋆ for any t ≥ t_⋆, for some t_⋆ ≥ 0, and suppose that G^+ is globally consistent. From Theorem 1 we have convergence of y(t) if y_w(t) ∈ (-1, 1) for t ≥ t_w, for some t_w ≥ t_⋆. Otherwise, we have y_w(t) ∈ {-1, 1} for t ≥ t_{w1}, where t_{w1} is given in Theorem 1.

Define P_1 = {i ∈ {1, 2, ..., n} : |x_i(t)| ≥ 1, t > t_{w1}} and Q_1 = {1, 2, ..., n} \ P_1. Obviously P_1 ≠ ∅, since w ∈ P_1. Let m_1 = card(P_1) ≥ 1 and ξ^1_j = y_j(t) = g(x_j(t)) ∈ {-1, 1}, t ≥ t_{w1}, for all j ∈ P_1. We have

that the n - m_1 variables x_i(t), i ∈ Q_1, satisfy for t > t_{w1} the reduced system

\[
\dot{x}_i = -d_i x_i + \sum_{j \in Q_1} T_{ij}\,g(x_j) + \sum_{j \in P_1} T_{ij}\,\xi^1_j + I_i, \quad i \in Q_1. \tag{14}
\]

Such a reduced system of dimension n - m_1 is competitive and, by assumption, it has a globally consistent decision scheme G^+_{Q_1}. If n - m_1 ≤ 3, then Theorem 3 or Theorem 4 implies that any y_i(t), i ∈ Q_1, is convergent (the case n - m_1 = 1 being obvious). If instead n - m_1 > 3, we proceed as follows.

The vector y1(t) = (g(xj(t)))_{j∈Q1}, t ≥ tw1, is an output of (14). By repeating the arguments at the beginning of the proof we have that either y1(t) converges as t → +∞, or there exists an index w1 ∈ Q1 and an instant tw2 > tw1 such that the winning neuron w1 satisfies yw1(t) ∈ {−1, 1} for any t ≥ tw2. Arguing as before we define P2 = {i ∈ Q1 : |xi(t)| ≥ 1, t > tw2} and Q2 = Q1\P2. Let m2 = card(P2) ≥ 1. We have that the n − m1 − m2 variables xi(t), i ∈ Q2, satisfy for t ≥ tw2 the reduced system

$$\dot x_i = -d_i x_i + \sum_{j\in Q_2} T_{ij}\, g(x_j) + \sum_{j\in P_1} T_{ij}\, \xi^1_j + \sum_{j\in P_2} T_{ij}\, \xi^2_j + I_i, \qquad i \in Q_2$$

where $\xi^2_j$ = yj(t) = g(xj(t)) ∈ {−1, 1}, t ≥ tw2, for all j ∈ P2. If n − m1 − m2 ≤ 3 then yi(t), i ∈ Q2, are convergent. Otherwise, we proceed in this way, and after a finite number of steps we obtain $n - \sum_{i=1}^{k} m_i \le 3$. In conclusion, (N) is convergent by Property 1. ∎
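In practice, checking strong global consistency is a finite, if combinatorial, task: one enumerates every admissible saturated set P together with a sign pattern ξ ∈ {−1, 1}^card(P), forms the corresponding reduced-order CNN of Definition 5, and tests global consistency of its decision graph. The sketch below (Python) follows this enumeration; the predicate is_globally_consistent, which would test global consistency of the positive decision graph of a given CNN, is a hypothetical user-supplied routine, not part of the paper.

import numpy as np
from itertools import combinations, product

def reduced_systems(T, d, I):
    """Enumerate the reduced-order CNNs appearing in Definition 5.

    For each nonempty P with n - card(P) >= 3 and each sign pattern
    xi in {-1, 1}^card(P), the surviving neurons Q obey a CNN with
    interconnections T[Q, Q] and inputs I_i + sum_{j in P} T_ij * xi_j.
    """
    n = len(I)
    for p_size in range(1, n - 2):            # enforces n - card(P) >= 3
        for P in combinations(range(n), p_size):
            Q = np.array([i for i in range(n) if i not in P])
            for xi in product((-1.0, 1.0), repeat=p_size):
                I_eff = np.array([I[i] + sum(T[i, j] * s
                                             for j, s in zip(P, xi))
                                  for i in Q])
                yield Q, T[np.ix_(Q, Q)], d[Q], I_eff

def strongly_globally_consistent(T, d, I, is_globally_consistent):
    """Condition (a) is the consistency test on the full CNN; condition
    (b) applies the same test to every reduced-order CNN."""
    if not is_globally_consistent(T, d, I):
        return False
    return all(is_globally_consistent(Tq, dq, Iq)
               for _, Tq, dq, Iq in reduced_systems(T, d, I))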

8. Third-Order Competitive CNNs (Continued)

In this section we consider again third-order competitive CNNs (N), which are described by the system of differential equations

$$\begin{pmatrix} \dot x_1 \\ \dot x_2 \\ \dot x_3 \end{pmatrix} = -\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{pmatrix} \begin{pmatrix} g(x_1) \\ g(x_2) \\ g(x_3) \end{pmatrix} + \begin{pmatrix} I_1 \\ I_2 \\ I_3 \end{pmatrix} \tag{15}$$

where Tij ≤ 0 for all i, j ∈ {1, 2, 3}, i ≠ j, and we have supposed that di = 1, i = 1, 2, 3. As it was done in Sec. 4, define the matrix

$$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} = \begin{pmatrix} -1+T_{11} & T_{12} & T_{13} \\ T_{21} & -1+T_{22} & T_{23} \\ T_{31} & T_{32} & -1+T_{33} \end{pmatrix}$$

and denote by Ai, i = 1, 2, 3, the rows of A. The competitive balance for (15) is given by M(y) = Ay + I, where I = (I1, I2, I3)′ ∈ R³ and y = (y1, y2, y3)′ ∈ K³.
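For later reference, the competitive balance and the associated winning neuron are immediate to evaluate numerically. A minimal sketch (Python; the function names are ours) computes M(y) = Ay + I and the index attaining M+(y) = max_i Mi(y), here for the matrix of the nominal CNN (21) of Sec. 8.3 with zero inputs:

import numpy as np

def competitive_balance(A, I, y):
    """Competitive balance M(y) = A y + I at an output point y."""
    return A @ y + I

def winning_neuron(A, I, y):
    """Index (0-based) attaining the maximum balance M+(y)."""
    return int(np.argmax(competitive_balance(A, I, y)))

T = np.array([[ 3.0, -1.0, -0.7],
              [-1.0,  1.9, -0.2],
              [-0.7, -0.2,  1.2]])
A = T - np.eye(3)          # A = -D + T with d_i = 1
I = np.zeros(3)            # the example CNN has zero bias inputs
print(winning_neuron(A, I, np.array([0.2, -0.5, 0.1])))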

In Sec. 8.1 we provide a test for checking when a positive jump set of (15) is noncrossable (cf. Definition 3), while in Sec. 8.2 we apply this test and the results in Sec. 7.2 in order to find classes of third-order competitive CNNs which induce a globally consistent decision scheme and are convergent.

8.1. Test for jump sets

Consider the third-order competitive CNN (15) and the positive jump set

$$J^+_{ij} = R^+_i \cap R^+_j = \{x \in R^\star : M_i(x) = M_j(x) = M^+(x)\}$$

where i, j ∈ {1, 2, 3}, j ≠ i. It can be easily verified that $J^+_{ij}$ is the set of points x = (x1, x2, x3)′ ∈ R³ satisfying

$$J^+_{ij} = \begin{cases} -1 \le x_1 \le 1;\ -1 \le x_2 \le 1;\ -1 \le x_3 \le 1 \\ (A_{ii} - A_{ji})x_i + (A_{ij} - A_{jj})x_j + (A_{iq} - A_{jq})x_q + (I_i - I_j) = 0 \\ A_{ii}x_i + A_{ij}x_j + A_{iq}x_q + I_i \ge 0 \\ A_{qi}x_i + A_{qj}x_j + A_{qq}x_q + I_q \le 0 \end{cases}$$

where q ∈ {1, 2, 3} and i ≠ q ≠ j, from which it follows that $J^+_{ij}$ is a bounded and convex polyhedron, i.e. a polytope.

It is known that the polytope $J^+_{ij}$ is completely characterized by the set of its vertexes Vij (a polytope is the convex hull of its vertexes). In the next property we give a test of practical applicability on vertexes in Vij, which ensures that $J^+_{ij}$ is noncrossable from i to j.
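Since $J^+_{ij}$ is carved out of the unit cube by one hyperplane and two half-spaces, its vertexes can be enumerated by brute force: intersect the hyperplane Mi = Mj with every pair of the remaining constraint boundaries and keep the points satisfying all constraints. A minimal sketch of this enumeration (Python; our construction from the inequality description above, not the authors' program):

import numpy as np
from itertools import combinations

def jump_set_vertices(A, I, i, j, tol=1e-9):
    """Vertexes of the polytope J+_ij of the third-order CNN (15).

    Constraints, each written as (row a, rhs b):
      equality:     (A_i - A_j) x = -(I_i - I_j)
      inequalities: -M_i(x) <= 0, M_q(x) <= 0, and the box |x_k| <= 1.
    A vertex is the intersection of the equality plane with two other
    active constraint planes, feasible for all remaining constraints.
    Duplicates may appear; dedupe if needed.
    """
    q = ({0, 1, 2} - {i, j}).pop()
    eq = (A[i] - A[j], -(I[i] - I[j]))     # hyperplane M_i = M_j
    ineqs = [(-A[i], I[i]),                # M_i(x) >= 0
             (A[q], -I[q])]                # M_q(x) <= 0
    for k in range(3):                     # box -1 <= x_k <= 1
        e = np.eye(3)[k]
        ineqs += [(e, 1.0), (-e, 1.0)]
    verts = []
    for (a1, b1), (a2, b2) in combinations(ineqs, 2):
        Msys = np.array([eq[0], a1, a2])
        rhs = np.array([eq[1], b1, b2])
        if abs(np.linalg.det(Msys)) < tol:
            continue                       # planes not independent
        v = np.linalg.solve(Msys, rhs)
        if all(a @ v <= b + tol for a, b in ineqs):
            verts.append(v)
    return verts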

We need the following additional notation. Given a point y ∈ Kⁿ, let ξ(y) ∈ Ξ be such that y ∈ $\Lambda^o_{\xi(y)}$. Moreover, denote by

$$\mathrm{Ad}(\xi(y)) = \{\chi \in \Xi : \chi_i = 0,\ i \in L_{\xi(y)};\ \chi_i \in \{0, \xi_i(y)\},\ i \in S_{\xi(y)}\}.$$

It can be easily verified that y belongs to the closure of a region $\Lambda^o_\chi$ if and only if χ ∈ Ad(ξ(y)).

Property 6. The nonempty positive jump set $J^+_{ij}$ of (15), where i, j ∈ {1, 2, 3}, j ≠ i, is noncrossable from i to j, if for any vertex v ∈ Vij and for any χ ∈ Ad(ξ(v)) we have that

$$G_\chi(Av + I) = 0$$

or

$$(A_i - A_j)\, G_\chi(Av + I) > 0 \tag{16}$$

where $x \mapsto G_\chi(x)$ is the linear operator from R³ to R³ whose components are defined by

$$(G_\chi(x))_i = \begin{cases} x_i & \text{if } \chi_i = 0 \\ 0 & \text{if } \chi_i \in \{-1, 1\} \end{cases}$$

where χ = (χi)i=1,2,3 and x = (xi)i=1,2,3.

Proof. See Appendix B. ∎
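Property 6 lends itself to direct implementation: given the vertexes Vij (e.g. from the enumeration sketched above), one evaluates $G_\chi(Av + I)$ for every admissible χ. In the sketch below (Python), sign patterns in Ξ and the set Ad(·) are encoded as vectors with entries in {−1, 0, 1}; this encoding is our reading of the paper's notation.

import numpy as np
from itertools import product

def xi_of(v, tol=1e-9):
    """Sign pattern xi(v): +/-1 where v_i is saturated, 0 where |v_i| < 1."""
    return np.array([np.sign(vi) if abs(abs(vi) - 1.0) < tol else 0.0
                     for vi in v])

def adjacent_patterns(xi):
    """Ad(xi): chi_i = 0 on unsaturated components, chi_i in {0, xi_i}
    on saturated ones."""
    choices = [(0.0,) if x == 0.0 else (0.0, x) for x in xi]
    return [np.array(c) for c in product(*choices)]

def G(chi, x):
    """Linear operator G_chi: zero out the components where chi_i = +/-1."""
    return np.where(chi == 0.0, x, 0.0)

def noncrossable_test(A, I, i, j, vertices, tol=1e-9):
    """Sufficient test of Property 6: J+_ij is noncrossable from i to j
    if, at every vertex v and every chi in Ad(xi(v)), G_chi(Av + I) is
    zero or (A_i - A_j) G_chi(Av + I) > 0."""
    for v in vertices:
        w = A @ v + I
        for chi in adjacent_patterns(xi_of(v)):
            g = G(chi, w)
            if np.linalg.norm(g) > tol and (A[i] - A[j]) @ g <= tol:
                return False
    return True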

8.2. Classes of convergent third-order competitive CNNs

In this section, we analytically characterize some classes of third-order competitive CNNs (15) that induce a globally consistent decision scheme G+, and are convergent as a consequence of Theorem 4. For definiteness, we suppose that G+ has only the three directed branches

$$G^+ : 2 \to 1, \quad 3 \to 1, \quad 3 \to 2 \tag{17}$$

but analogous results, which are not reported here, can be obtained for other decision schemes.

Property 7. Let A1 ⊂ R⁹ be the set of interconnection parameters Aij, i, j ∈ {1, 2, 3}, satisfying

$$A_{13} < A_{23} \le 0; \quad A_{12} < A_{32} \le 0; \quad A_{21} < A_{31} \le 0; \quad A_{11} \in \mathbb{R} \tag{18}$$

and furthermore

$$A_{22} < A_{12} - |A_{11} - A_{21}| - |A_{13} - A_{23}| - 2 I_M \tag{19}$$

and

$$A_{33} < \min\{A_{13} - |A_{11} - A_{31}| - |A_{12} - A_{32}| - 2 I_M,\ A_{23} - |A_{22} - A_{32}| - |A_{21} - A_{31}| - 2 I_M\} \tag{20}$$

where $I_M = \max_{i=1,2,3} |I_i|$. Then, within A1 the CNN (15) has the globally consistent decision scheme G+ in (17). Moreover, A1 is the union of a finite number of convex polyhedra with nonempty interior in the space R⁹.


Proof. See Appendix C. ∎

Property 7 has the following consequence. Given any nominal CNN (15) whose interconnection parameters belong to the interior of the subset A1 defined in Property 7, the following hold: (a) the nominal CNN has a globally consistent decision scheme and is convergent; (b) global consistency of decisions and convergence are properties that are robust with respect to (small) perturbations of the nominal CNN interconnections.

We can also prove the next properties, which give other convex polyhedral sets with nonempty interior of interconnection parameters for which (N) robustly implements the positive decision scheme G+. The proofs are analogous to that of Property 7 and are omitted for brevity.

Property 8. Let A2 be the set of interconnection parameters Aij, i, j ∈ {1, 2, 3}, satisfying

$$A_{13} < A_{23} \le 0; \quad A_{12} < A_{32} \le 0; \quad A_{21} < A_{31} \le 0; \quad A_{33} \in \mathbb{R}$$

and furthermore

$$A_{22} > A_{32} + |A_{23} - A_{33}| + |A_{21} - A_{31}| + 2 I_M$$

and

$$A_{11} > \max\{A_{21} + |A_{12} - A_{22}| + |A_{13} - A_{23}| + 2 I_M,\ A_{31} + |A_{13} - A_{33}| + |A_{12} - A_{32}| + 2 I_M\}.$$

Then, the same result as in Property 7 holds with A1 replaced by A2.

Property 9. Let A3 ⊂ R⁹ be the set of interconnection parameters Aij, i, j ∈ {1, 2, 3}, satisfying

$$A_{12}, A_{13} < 0; \quad A_{31} \le 0; \quad A_{22} > A_{32} > A_{12}; \quad A_{23} > A_{13}; \quad A_{33} < A_{23}$$

and furthermore

$$A_{11} > A_{31} + |A_{13} - A_{33}| + |A_{32} - A_{12}| + 2 I_M$$

and

$$A_{21} < \min\{A_{11} - |A_{22} - A_{12}| - |A_{13} - A_{23}| - 2 I_M,\ A_{31} - |A_{22} - A_{32}| - |A_{23} - A_{33}| - 2 I_M\}.$$

Then, the same result as in Property 7 holds with A1 replaced by A3.

Property 10. Let A4 ⊂ R⁹ be the set of interconnection parameters Aij, i, j ∈ {1, 2, 3}, satisfying

$$A_{31}, A_{32} \le 0; \quad A_{13} < 0; \quad A_{23} > A_{13}; \quad A_{33} < A_{13}; \quad A_{11} > A_{31}; \quad A_{22} > A_{32}$$

and furthermore

$$A_{21} < A_{31} - |A_{22} - A_{32}| - |A_{23} - A_{33}| - 4 I_M$$

and

$$A_{21} + A_{22} - A_{11} + |A_{13} - A_{23}| + 2 I_M < A_{12} < A_{32} - |A_{11} - A_{31}| - |A_{13} - A_{33}| - 2 I_M.$$

Then, the same result as in Property 7 holds with A1 replaced by A4.

8.3. Simulation results

Consider the nominal symmetric third-order competitive CNN

$$\begin{pmatrix} \dot x_1 \\ \dot x_2 \\ \dot x_3 \end{pmatrix} = -\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 3 & -1 & -0.7 \\ -1 & 1.9 & -0.2 \\ -0.7 & -0.2 & 1.2 \end{pmatrix} \begin{pmatrix} g(x_1) \\ g(x_2) \\ g(x_3) \end{pmatrix}. \tag{21}$$

The nominal matrix

$$A_0 = T_0 - D = \begin{pmatrix} 2 & -1 & -0.7 \\ -1 & 0.9 & -0.2 \\ -0.7 & -0.2 & 0.2 \end{pmatrix}$$

belongs to the interior of the subset A2 defined in Property 8, hence (21) induces the globally consistent decision scheme

$$G^+ : 2 \to 1, \quad 3 \to 1, \quad 3 \to 2$$

and is convergent from Theorem 4.

Then, consider the perturbed CNN with interconnection matrix

$$T = \begin{pmatrix} 3 & -1+\alpha & -0.7+\beta \\ -1+\beta & 1.9 & -0.2+\alpha \\ -0.7+\alpha & -0.2 & 1.2 \end{pmatrix} \tag{22}$$

α, β being perturbation parameters. Correspondingly, the perturbed matrix AP is given by

$$A_P = T - D = \begin{pmatrix} 2 & -1+\alpha & -0.7+\beta \\ -1+\beta & 0.9 & -0.2+\alpha \\ -0.7+\alpha & -0.2 & 0.2 \end{pmatrix}.$$

Fig. 4. Parameter subset where AP ∈ A2 (magenta), subset where the perturbed CNN still induces the scheme G+ (red), subset where G+ is not induced by the perturbed CNN (green), and subset where the same CNN is no longer competitive (blue). Nominal symmetric CNN with α = 0, β = 0 (point n), and three perturbed nonsymmetric CNNs p1 (α = −0.3, β = −0.2), p2 (α = −0.6, β = −0.4) and p3 (α = −0.65, β = −1.2).

Note that if α ≠ 0, or α ≠ β, then the perturbed CNN is nonsymmetric.

We are interested in determining the allowed perturbations α, β that do not destroy the consistent decision scheme G+ induced by the nominal CNN (21). Figure 4 shows in magenta the parameter subset with nonempty interior for which AP ∈ A2. Within this subset, the perturbed CNN still has the consistent decision scheme G+ and is convergent. We have also implemented a numerical program, which exploits the test on vertexes in Property 6, to find the whole set of parameters for which a given decision scheme holds. By this program, we have found that G+ is induced by the perturbed CNN, hence the CNN is convergent, also in the subset shown in red in Fig. 4.⁴ The figure then shows the parameter subset where G+ is not induced by the perturbed CNN (green), and that where the perturbed CNN is no longer competitive (blue).

Figure 5(a) shows the positive jump sets $J^+_{12}$, $J^+_{13}$ and $J^+_{23}$ for the nominal CNN (21), while Figs. 5(b)-5(d) show the positive jump sets for the three specific perturbed CNNs defined by α = −0.3, β = −0.2 (point p1 in Fig. 4), α = −0.6, β = −0.4 (point p2 in Fig. 4), and α = −0.65, β = −1.2 (point p3 in Fig. 4). It is seen that the jump sets undergo quite significant changes in their shape and orientation within the subsets in magenta and red, although the same decision scheme G+ is induced.

In Fig. 6 we show some convergent solutions, corresponding to different initial conditions, for the perturbed nonsymmetric CNN defined by α = −0.6, β = −0.4. The figure also shows for each solution the time-domain evolution of the maximum balance M+(y(t)) = $\max_{i=1,2,3}$ Mi(y(t)). In the first case [Fig. 6(b)], it is seen that y(t) starts in R+ at t = 0 (M+(y(0)) > 0), and in accordance with Property 4 it stays within R+ for all t ≥ 0; indeed we have M+(y(t)) > 0 for all t ≥ 0. It is also seen that the initial winning neuron is i = 2 (y(0) ∈ $R^+_2$), while at tj = 6.1 we have a jump on $J^+_{12}$ from $R^+_2$ to $R^+_1$, and the winning neuron is j = 1 for t > 6.1

⁴ We remark that the sets which have been analytically characterized in Properties 7-10 are only subsets of the parameters for which the decision scheme G+ is induced by the CNN.

Fig. 5. (a) Positive jump sets $J^+_{12}$, $J^+_{13}$, $J^+_{23}$ for the nominal CNN (21) (point n in Fig. 4), and for three perturbed CNNs: (b) α = −0.3, β = −0.2 (p1 in Fig. 4), (c) α = −0.6, β = −0.4 (p2 in Fig. 4), and (d) α = −0.65, β = −1.2 (p3 in Fig. 4).

Fig. 6. Time-domain evolution of x1(t) (red), x2(t) (green), x3(t) (blue), and the maximum competitive balance M+(y(t)), for the perturbed CNN defined by α = −0.6, β = −0.4 and three different choices of initial conditions.

(y(t) ∈ $R^+_1$ for t ≥ 6.1). This corresponds to one of the allowed jumps in the graph G+, i.e. 2 → 1.⁵

⁵ It is also worth remarking that M+(x(t)) does not tend to 0 as t → +∞ (for example, in Fig. 6(b) we have M+(y(t)) ≈ 1.5 for all large t). This is not in contradiction with convergence of x(t) and y(t). Indeed, Fig. 6(a) shows that for all large t we have x1(t) > 1, hence y1(t) = 1 and 0 = $\dot y_1(t)$ ∈ h(1)M+(y(t)) = [0, 1] · M+(y(t)), for all large t.

In the second case [Fig. 6(d)], y(t) starts outside R+ at t = 0, since M+(y(0)) < 0. Then, y(t) quickly enters R+ and it stays within this set thereafter. We can also note that there is a positive jump from the initial winning neuron i = 3 to the eventual winning neuron j = 1, at tj = 5.7. Again, this corresponds to one of the allowed jumps in the graph G+, namely 3 → 1.

Finally, in the third case [Fig. 6(f)], y(t) starts outside R+ at t = 0, it enters R+ and subsequently undergoes a positive jump from the initial winning neuron i = 3 to the eventual winning neuron j = 2, at tj = 6.3. Once more, this corresponds to one of the allowed jumps in the graph G+, namely 3 → 2.
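Trajectories of this kind are straightforward to reproduce. The sketch below (Python with NumPy/SciPy; our code, not the authors' simulator) integrates the perturbed CNN with α = −0.6, β = −0.4, using the standard piecewise-linear CNN output g(x) = (|x + 1| − |x − 1|)/2, and tracks the winning neuron along the solution; the initial condition is an arbitrary illustrative choice.

import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = -0.6, -0.4
T = np.array([[ 3.0, -1.0 + alpha, -0.7 + beta],
              [-1.0 + beta, 1.9, -0.2 + alpha],
              [-0.7 + alpha, -0.2, 1.2]])
A = T - np.eye(3)                  # competitive-balance matrix, d_i = 1
I = np.zeros(3)                    # zero bias, as in (21)-(22)

def g(x):
    """Standard CNN piecewise-linear output nonlinearity."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn(t, x):
    """Right-hand side of the perturbed third-order CNN."""
    return -x + T @ g(x) + I

x0 = np.array([0.5, 1.2, -0.3])    # an arbitrary initial condition
sol = solve_ivp(cnn, (0.0, 12.0), x0, max_step=0.01)

y = g(sol.y)                       # output trajectories
M = A @ y + I[:, None]             # balances M_i(y(t))
winner = np.argmax(M, axis=0)      # index of the winning neuron
print("final state:", sol.y[:, -1])
print("winning neurons visited (0-based):", np.unique(winner))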

The next additional comments are in order.

1. Consistency of the decision scheme and convergence are robust properties for the nominal CNN (21). In addition, the allowed perturbations α, β preserving these properties are reasonably large and compatible with actual tolerances in the implementation of CNNs.

2. As seen before, within the parameter subsets shown in magenta and in red in Fig. 4, the CNN always has the same consistent decision scheme G+ and is convergent. It is of interest to note that within these subsets there are changes in the configuration of the CNN EPs, due to bifurcations that lead to the birth or disappearance of equilibria. For example, the nominal CNN (21) has four asymptotically stable EPs in the saturation subsets identified by the following vectors ξ ∈ Ξ: (1, −1, −1)′, (1, 1, −1)′, (−1, 1, 1)′, (−1, −1, 1)′. Moreover, (21) has five unstable EPs, one at the origin and four in the subsets (−1, 0, 1)′, (0, −1, 1)′, (0, 1, −1)′, (1, 0, −1)′. The perturbed CNNs with α = −0.3, β = −0.2 and α = −0.6, β = −0.4 have the same equilibrium configuration as (21); however, the perturbed CNN with α = −0.65, β = −1.2 has only two stable EPs, in the saturation subsets (−1, 1, 1)′, (1, −1, −1)′, and three unstable EPs, one at the origin and two in (0, −1, 1)′, (0, 1, −1)′. These considerations show that the same decision scheme is compatible with many different equilibrium configurations and confirm, as was already noted by [Grossberg, 1980], that the decisions are defined by geometric structures of motion that exist far from equilibrium.

3. There are open sets of parameters α, β within the subsets in magenta and red in Fig. 4 where, to the authors' knowledge, convergence of the perturbed (nonsymmetric) CNN defined by matrix T in (22) cannot be proved via other already existing methods. Here, we only note that it is not possible for these open sets to apply the Lyapunov method devised in the original paper by [Chua & Yang, 1988]. In fact, the Lyapunov function discovered in that paper is valid for symmetric CNN matrices T, or symmetrizable matrices T, i.e. matrices T such that there exist diagonal and positive definite matrices K1 and K2 such that K1TK2 is symmetric. It is not difficult to show that matrix T as in (22) is symmetrizable if and only if α, β belong to the following curve in the parameter space [Parter & Youngs, 1962]

$$C_s = \{(\alpha, \beta) : -5(\alpha - 1)(\alpha - 0.2)(\alpha - 0.7) = (\beta - 1)(\beta - 0.7)\}$$

(see Fig. 4), which is a one-dimensional manifold with measure zero in the parameter space α, β.

9. Conclusion

The paper has extended to standard competitive CNNs a geometric approach previously developed by Grossberg for analyzing convergence of a different class of competitive dynamical systems. The approach makes it possible to associate a decision scheme with a competitive CNN, and to globally analyze its dynamical behavior and convergence properties, under the hypothesis that the decision scheme is globally consistent. Such an approach does not rely on the existence of a global Lyapunov function, hence it is applicable also in situations where a Lyapunov function is not available, as in the important case where the CNN neuron interconnection matrix is not symmetric.

The paper has analyzed in detail the case of three-dimensional competitive CNNs. A number of classes of third-order nonsymmetric CNNs have been discovered, which have a globally consistent decision scheme and are convergent. In any case, convergence holds in sets of interconnection parameters with nonempty interior, hence it turns out to be a physically robust property. This contrasts with the Lyapunov approach, which guarantees convergence for symmetric neuron interconnection matrices, symmetry being a nongeneric property of the interconnections.

For competitive CNNs with more than three neurons (n ≥ 4), the paper has highlighted interesting new dynamical phenomena with respect to the three-dimensional case, such as the possibility that a globally consistent decision scheme does not imply convergence. This represents a crucial difference with respect to previous results by Grossberg for a different class of competitive systems, where global consistency always implies convergence. Under a stronger notion of global consistency, with respect to that proposed by Grossberg, a general theorem ensuring convergence for competitive CNNs with n ≥ 4 has been proved. Future work will be devoted to a more thorough analysis of competitive CNNs with n ≥ 4, with the aim of characterizing classes of CNNs that enjoy the property of strong global consistency of decisions introduced in this paper, and thus turn out to be convergent.

References

Aubin, J. P. & Cellina, A. [1984] Differential Inclusions (Springer-Verlag, Berlin).

Barreto, G. A., Mota, J. C. M., Souza, L. G. M., Frota, R. A. & Aguayo, L. [2005] "Condition monitoring of 3G cellular networks through competitive neural models," IEEE Trans. Neural Networks 16, 1064-1075.

Chua, L. O. & Yang, L. [1988] "Cellular neural networks: Theory," IEEE Trans. Circuits Syst. 35, 1257-1272.

Chua, L. O. & Roska, T. [1990] "Stability of a class of nonreciprocal neural networks," IEEE Trans. Circuits Syst. 37, 1520-1527.

Di Marco, M., Forti, M. & Tesi, A. [2000] "Bifurcations and oscillatory behavior in a class of competitive cellular neural networks," Int. J. Bifurcation and Chaos 10, 1267-1293.

Di Marco, M., Forti, M., Grazzini, M. & Pancioni, L. [2005] "Fourth-order nearly-symmetric CNNs exhibiting complex dynamics," Int. J. Bifurcation and Chaos 15, 1579-1587.

Forti, M. & Tesi, A. [2001] "A new method to analyze complete stability of PWL cellular neural networks," Int. J. Bifurcation and Chaos 11, 655-676.

Grossberg, S. [1978a] "Competition, decision, and consensus," J. Math. Anal. Appl. 66, 470-493.

Grossberg, S. [1978b] "Decisions, patterns, and oscillations in nonlinear competitive systems with applications to Volterra-Lotka systems," J. Theor. Biol. 73, 101-130.

Grossberg, S. [1980] "Biological competition," Proc. Natl. Acad. Sci. 77, 2338-2342.

Grossberg, S. [1988] "Nonlinear neural networks: Principles, mechanisms, and architectures," Neural Networks 1, 17-61.

Hale, J. & Kocak, H. [1991] Dynamics and Bifurcations (Springer-Verlag, NY).

Parter, S. V. & Youngs, J. W. T. [1962] "The symmetrization of matrices by diagonal matrices," J. Math. Anal. Appl. 4, 102-110.

Royden, H. L. [1988] Real Analysis (MacMillan Publishing Company, NY).

Shi, B. E. & Boahen, K. [2002] "Competitively coupled orientation selective cellular neural networks," IEEE Trans. Circuits Syst.-I 49, 388-394.

Sussmann, H. [1982] "Bounds on the number of switchings of trajectories for piecewise analytic vector fields," J. Diff. Eqs. 43, 399-418.

Thiran, P. [1997] Dynamics and Self-Organization of Locally Coupled Neural Networks (Presses Polytechniques et Universitaires Romandes, Lausanne, Switzerland).

Thiran, P., Setti, G. & Hasler, M. [1998] "An approach to information propagation in 1-D cellular neural networks — Part I: Local diffusion," IEEE Trans. Circuits Syst. I 45, 777-789.

Appendix A

Property 11. Let y(t) be an output solution of (N) such that y(t) ∈ R⋆ for all t ≥ t⋆. Then, for any t0 ≥ t⋆ there exists δ > 0 such that y(t) makes no jump (between subsets $R^+_i$) in the interval (t0, t0 + δ).

Proof. Arguing by contradiction, from Definition 2 we assume that there exists t0 ≥ t⋆ such that for any δ > 0 there is no neuron that wins in the whole time interval (t0, t0 + δ). Let {tk} be a decreasing sequence such that tk → t0. For any k = 0, 1, 2, . . ., there exists wk ∈ {1, 2, . . . , n} such that

$$M_{w_k}(y(t_k)) \ge M_i(y(t_k))$$

for any i = 1, 2, . . . , n. By possibly passing to a subsequence $\{t_{m_k}\}$ we have that $w_{m_k} = w^\star$ for some w⋆ ∈ {1, 2, . . . , n} and so

$$M_{w^\star}(y(t_{m_k})) \ge M_i(y(t_{m_k})) \tag{A.1}$$

for any i = 1, 2, . . . , n and any k = 0, 1, 2, . . . . On the other hand, since w⋆ is not a winning neuron on any time interval (t0, t0 + δ), δ > 0, we have that there exists a decreasing sequence {τk}, with τk → t0, and a sequence {ik} ⊂ {1, 2, . . . , n}\{w⋆} such that

$$M_{w^\star}(y(\tau_k)) < M_{i_k}(y(\tau_k))$$

for any k = 0, 1, 2, . . . . By possibly passing to a subsequence $\{\tau_{n_k}\}$ we have that $i_{n_k} = i^\star$ for some i⋆ ∈ {1, 2, . . . , n}\{w⋆}, and so

$$M_{w^\star}(y(\tau_{n_k})) < M_{i^\star}(y(\tau_{n_k})) \tag{A.2}$$

for any k = 0, 1, 2, . . . .

In conclusion, from (A.1) and (A.2), it turns out that the continuous function $t \mapsto M_{w^\star}(y(t)) - M_{i^\star}(y(t))$ has infinitely many zeros converging to t0. However, by Property 2, in a right neighborhood of t0 this function is a linear combination, not identically equal to zero by (A.2), of solutions of a linear autonomous system of ordinary differential equations. This contradicts the fact that solutions of such a linear system cannot have nonisolated zeros, unless they are identically equal to zero. ∎

Appendix B

Proof of Property 6

Let y(t) be an output solution of (N) such that y(τ) ∈ $J^+_{ij}$, for some τ ≥ 0, and suppose that $J^+_{ij}$ ≠ ∅. We wish to prove that y(t) ∉ $R^+_j \setminus R^+_i$ for all t ∈ [τ, τ + δ), for some δ > 0. This in turn implies that $J^+_{ij}$ is noncrossable from i to j.

We begin by showing that for any χ ∈ Ad(ξ(y(τ))) we have

$$G_\chi(Ay(\tau) + I) = 0 \tag{B.1}$$

or

$$(A_i - A_j)\, G_\chi(Ay(\tau) + I) > 0. \tag{B.2}$$

We have y(τ) ∈ cl($\Lambda^o_\chi$), hence y(τ) belongs to the polytope $J^+_{ij} \cap \mathrm{cl}(\Lambda^o_\chi)$. Let V ≠ ∅, V ⊆ Vij, be the set of vertexes of $J^+_{ij} \cap \mathrm{cl}(\Lambda^o_\chi)$. Since V ⊆ cl($\Lambda^o_\chi$), it follows that χ ∈ Ad(ξ(v)) for any v ∈ V. Furthermore, from the vertex properties of the polytope $J^+_{ij} \cap \mathrm{cl}(\Lambda^o_\chi)$, we obtain $y(\tau) = \sum_{v\in V} \lambda_v v$ for suitable λv ∈ [0, 1], v ∈ V, such that $\sum_{v\in V} \lambda_v = 1$. Then,

$$G_\chi(Ay(\tau) + I) = G_\chi\Big(A\sum_{v\in V}\lambda_v v + I\sum_{v\in V}\lambda_v\Big) = G_\chi\Big(\sum_{v\in V}\lambda_v (Av + I)\Big) = \sum_{v\in V}\lambda_v\, G_\chi(Av + I).$$

From our assumptions, we have that V = V1 ∪ V2, where V1, V2 are defined as follows: V1 = {v ∈ V : Gχ(Av + I) = 0} and V2 = {v ∈ V : (Ai − Aj)Gχ(Av + I) > 0}. Hence

$$G_\chi(Ay(\tau) + I) = \sum_{v\in V_2}\lambda_v\, G_\chi(Av + I).$$

If V2 = ∅, or λv = 0 for any v ∈ V2, we obtain Gχ(Ay(τ) + I) = 0, hence (B.1), (B.2) hold. If instead V2 ≠ ∅, and for some v ∈ V2 we have λv ≠ 0, then we obtain

$$(A_i - A_j)\, G_\chi(Ay(\tau) + I) = (A_i - A_j)\sum_{v\in V_2}\lambda_v\, G_\chi(Av + I) = \sum_{v\in V_2}\lambda_v (A_i - A_j)\, G_\chi(Av + I) > 0$$

and once more (B.1), (B.2) are proved.

To proceed, note that if y(τ) is an EP of (N), it follows that y(t) = y(τ) ∈ $J^+_{ij}$ for any t ≥ τ. Therefore, suppose that y(τ) is not an EP of (N). As a consequence of Property 2, there exist δ > 0 and χ ∈ Ad(ξ(y(τ))) such that y(t) satisfies

$$\dot y(t) = G_\chi(Ay(t) + I)$$

for t ∈ (τ, τ + δ). We have to consider two different cases, namely Gχ(Ay(τ) + I) = 0 or otherwise (Ai − Aj)Gχ(Ay(τ) + I) > 0. In the former case, it follows that y(t) = y(τ) ∈ $J^+_{ij}$ for any t ∈ [τ, τ + δ), hence y(t) ∉ $R^+_j \setminus R^+_i$, t ∈ [τ, τ + δ). In the latter case, the inequality (Ai − Aj)Gχ(Ay(τ) + I) > 0 implies that y(t) belongs to the half-space Θ = {y ∈ Rⁿ : (Ai − Aj)(y − y(τ)) ≥ 0} for all t ∈ [τ, τ + δ1), for some δ1 > 0. This in turn implies that y(t) ∉ $R^+_j \setminus R^+_i$, t ∈ [τ, τ + δ1); in fact for any y ∈ Θ we have

$$M_i(y) - M_j(y) = (M_i(y) - M_j(y)) - (M_i(y(\tau)) - M_j(y(\tau))) = (A_i - A_j)(y - y(\tau)) \ge 0$$

where we have taken into account that Mi(y(τ)) = Mj(y(τ)). This concludes the proof. ∎

Appendix C

Proof of Property 7

We wish to prove by using Property 6 that for interconnection parameters within the set A1, $J^+_{12}$ is not crossable from 1 to 2, $J^+_{13}$ is not crossable from 1 to 3 and $J^+_{23}$ is not crossable from 2 to 3. Equivalently, for any (i, j, q) ∈ N = {(1, 2, 3); (1, 3, 2); (2, 3, 1)}, $J^+_{ij}$ is noncrossable from i to j.

Let v = (v1, v2, v3)′ be a vertex of $J^+_{ij}$. If v belongs to the interior of K³, then Ad(ξ(v)) = {0} and so it suffices to show by Property 6 that

$$G_0(Av + I) = Av + I = M(v) = 0 \tag{C.1}$$

or

$$(A_i - A_j)\, G_0(Av + I) = (A_i - A_j)(Av + I) > 0. \tag{C.2}$$

We have Mi(v) = Mj(v) ≥ 0 and Mq(v) ≤ 0 for any (i, j, q) ∈ N. If Mi(v) = Mq(v) = 0, then (C.1) is satisfied. Therefore, suppose that |Mi(v)| + |Mq(v)| > 0. The left-hand side of (C.2) can be rewritten as (Aii − Aji + Aij − Ajj)|Mi(v)| − (Aiq − Ajq)|Mq(v)|, so that (C.2) is satisfied if

$$\begin{cases} A_{ii} - A_{ji} + A_{ij} - A_{jj} > 0 \\ A_{iq} - A_{jq} < 0 \end{cases}$$

for any (i, j, q) ∈ N.

The inequalities (18) ensure that Aiq − Ajq < 0 for any (i, j, q) ∈ N. Therefore, it remains to show that we have A11 − A21 + A12 − A22 > 0, A11 − A31 + A13 − A33 > 0 and A22 − A32 + A23 − A33 > 0, or equivalently that

$$A_{22} < A_{12} + A_{11} - A_{21}, \qquad A_{33} < \min\{A_{13} + (A_{11} - A_{31});\ A_{23} + (A_{22} - A_{32})\}. \tag{C.3}$$

From (19) we obtain A22 < A12 − |A11 − A21| − |A13 − A23| − 2IM ≤ A12 − |A11 − A21| ≤ A12 + A11 − A21, and so the first inequality in (C.3) is satisfied. Then, a1 = A13 − |A11 − A31| − |A12 − A32| − 2IM ≤ A13 − |A11 − A31| ≤ A13 + (A11 − A31) and a2 = A23 − |A22 − A32| − |A21 − A31| − 2IM ≤ A23 − |A22 − A32| ≤ A23 + (A22 − A32). Then, min{a1, a2} ≤ min{A13 + (A11 − A31); A23 + (A22 − A32)}. From (20) we thus conclude that also the second inequality in (C.3) is satisfied.

Now, suppose that v = (vj)j=1,2,3 belongs to the boundary of K³. We begin by showing that we have |vj| < 1 for any (i, j, q) ∈ N. From (19), (20) we easily obtain

$$A_{ij} - A_{jj} > 0 \tag{C.4}$$

for any (i, j, q) ∈ N. Considering that Mi(v) − Mj(v) = 0, we have

$$v_j = \frac{A_{ii} - A_{ji}}{A_{ij} - A_{jj}}\, v_i + \frac{A_{iq} - A_{jq}}{A_{ij} - A_{jj}}\, v_q + \frac{I_i - I_j}{A_{ij} - A_{jj}}$$

hence

$$|v_j| \le \left|\frac{A_{ii} - A_{ji}}{A_{ij} - A_{jj}}\right| |v_i| + \left|\frac{A_{iq} - A_{jq}}{A_{ij} - A_{jj}}\right| |v_q| + \left|\frac{I_i - I_j}{A_{ij} - A_{jj}}\right| \le \frac{|A_{ii} - A_{ji}| + |A_{iq} - A_{jq}| + 2 I_M}{A_{ij} - A_{jj}}. \tag{C.5}$$

On the other hand, (19) and (20) imply that for any (i, j, q) ∈ N we have

$$A_{jj} < A_{ij} - |A_{ii} - A_{ji}| - |A_{iq} - A_{jq}| - 2 I_M.$$

Using this inequality in (C.5) we conclude that |vj| < 1. From |vj| < 1, we also have χj = 0 for any χ ∈ Ad(ξ(v)).

Let e1 = (1, 0, 0)′ ∈ Ξ, e2 = (0, 1, 0)′ ∈ Ξ and e3 = (0, 0, 1)′ ∈ Ξ. Since G−χ(v) = Gχ(v) and $G_{e_i - e_q}(v) = G_{e_i + e_q}(v)$ for any (i, j, q) ∈ N, we have to prove that (16) is satisfied in the following three cases.

(a) χ = ei. We have vi = 1 and |vq| ≤ 1. If Gχ(Av + I) ≠ 0, then Mj(v) > 0 or Mq(v) < 0. From Aij − Ajj > 0 (see (C.4)) and Aiq − Ajq < 0 (see (18)) we obtain

$$(A_i - A_j)\, G_\chi(Av + I) = (A_{ij} - A_{jj})|M_j(v)| - (A_{iq} - A_{jq})|M_q(v)| = |A_{ij} - A_{jj}||M_j(v)| + |A_{iq} - A_{jq}||M_q(v)| > 0.$$

(b) χ = eq. We have vq = 1 and |vi| ≤ 1, so we can proceed as in case (a).

(c) χ = ei + eq. We have vq = 1 and vi = 1. If Gχ(Av + I) ≠ 0, then Mj(v) > 0 and then from condition Aij − Ajj > 0 (see (C.4)) we obtain

$$(A_i - A_j)\, G_\chi(Av + I) = (A_{ij} - A_{jj})|M_j(v)| > 0.$$

Finally, it can be immediately verified that A1 is the union of a finite number of convex polyhedra with nonempty interior in R⁹. ∎