
IEEE TRANSACTIONS ON CYBERNETICS, VOL. 46, NO. 7, JULY 2016

Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection

Xinghu Wang, Yiguang Hong, and Haibo Ji

Abstract—This paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve the optimal multiagent consensus based on local cost function information and neighboring information and, meanwhile, to reject local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design solves the exact optimization problem while rejecting disturbances.

Index Terms—Distributed optimization, disturbance rejection, internal model (IM), multiagent systems, nonlinear systems.

I. INTRODUCTION

IN RECENT years, a lot of effort has been made to study the coordination problem of multiagent systems, with many significant results, including consensus, formation, and optimization (see [24], [25], [28], [40], and references therein). One of the important problems is the so-called multiagent consensus, which makes all agents achieve a common state (see [22], [24], [25], [32]). Particularly, some interesting results have been obtained for consensus of nonlinear multiagent systems (see [4], [17], [19], [36]). For example, in the cyclic small-gain framework, an output-feedback design was developed to handle the case of a stationary leader in [19], while different consensus control laws were proposed, with the help of output regulation theory, for the case when the leader is modeled by an autonomous system in [33] and [38].

Although multiagent consensus is an essential problem for multiagent systems, additional concerns usually arise in describing practical problems. In practice, optimization is one of the main concerns in the coordination of multiagent systems, in order to make all the agents converge to the optimal state. Therefore, distributed optimization or optimal consensus problems have attracted much research attention recently.

Manuscript received February 16, 2015; revised May 11, 2015; accepted June 26, 2015. Date of publication August 25, 2015; date of current version June 14, 2016. This work was supported in part by the National Natural Science Foundation of China under Grant 61273090, Grant 61333001, and Grant 61503359, in part by the Beijing Natural Science Foundation under Grant 4152057, and in part by the 973 Program under Grant 2014CB845301/2/3. This paper was recommended by Associate Editor Q. Liu.

X. Wang and H. Ji are with the Department of Automation, University of Science and Technology of China, Hefei 230027, China (e-mail: [email protected]; [email protected]).

Y. Hong is with the Key Laboratory of Systems and Control, Institute of Systems Science, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TCYB.2015.2453167

To cooperatively solve the distributed optimization problem, each agent has access only to the local information (e.g., the subgradient) of its own cost function and to local interactions with its neighboring agents in seeking the optimal solution of the global cost function. Thus, the distributed optimization problem is more complicated than the consensus problem without optimization concerns, since the agents must reach a consensus state specified by the global cost function. Over the past decade, some effective discrete-time algorithms have been developed to cope with this problem, such as subgradient algorithms [20], [23] and the alternating direction method of multipliers [35], and continuous-time methods have also been considered to achieve distributed optimization tasks [8], [15], [18], [27], [30]. Many control techniques, such as proportional-integral control, were adopted in the optimization design to ensure that all the agents converge to the same optimal point (see [15], [31]).

On the other hand, various disturbances, arising from either the environment or communication, are ubiquitous in reality. Robustness against disturbances is therefore an important issue in the control of multiagent systems. Because of the presence of disturbances, most existing (optimal) consensus results can only guarantee convergence to a neighborhood of the common point (see [41] for consensus control and [30] for distributed optimization). Note that the so-called internal model (IM) approach has proved to be one of the effective ways in control theory to handle various types of disturbances, including finite superpositions of step, sinusoidal, and ramp signals [10], [13]. In recent years, the IM approach has also been applied to reject external disturbances in uncertain multiagent consensus problems [12], [29], [34]. Regarding distributed optimization, a simple IM-based optimization control was developed in [33] for single-integrator multiagent systems to reject local external disturbances.

In this paper, the distributed optimization problem with rejection of exogenous disturbances is investigated for a class of heterogeneous nonlinear agents with unity relative degree. The external disturbance signals are assumed to be generated by linear autonomous systems, which can describe a finite superposition of step, sinusoidal, and ramp signals. A two-step design scheme is proposed to construct the distributed optimization control: 1) in the first step, the distributed optimization problem is converted to a distributed stabilization problem with the help of IMs and 2) in the second step, a distributed stabilization control is designed to solve the distributed optimization problem.


Moreover, to show the generality of the distributed optimization framework, our results are applied directly to handle two basic consensus problems for nonlinear multiagent systems: the leaderless output average-consensus problem and the leader-following output consensus problem.

The main contribution of this paper can be summarized as follows.

1) In the distributed optimization problem, the agent dynamics are extended to heterogeneous nonlinear systems perturbed by external disturbances, different from the single-integrator (linear) agent dynamics considered in most existing continuous-time optimization designs (see [15], [30], [33]).

2) To tackle the technical challenges resulting from heterogeneous nonlinear agent dynamics and external disturbances, a two-step scheme based on the IM and Lyapunov methods is proposed to guarantee exact optimization. Both semi-global and global distributed optimization results are obtained in different cases.

3) The obtained results can be applied to consensus problems of nonlinear multiagent systems with external disturbances. In other words, our results provide alternative and interesting ways to handle consensus with disturbance rejection, as discussed in [1] and [34].

This paper is organized as follows. We formulate the distributed optimization problem with rejection of external disturbances in Section II. Next, we analyze the problem with disturbances based on the IM in Section III, while we construct the distributed optimization control and prove the optimization results in Section IV. Then, we show how to apply our optimization results to conventional consensus problems of nonlinear agents in Section V, followed by illustrative examples in Section VI. Finally, we provide the conclusion in Section VII.

Notations: Let $\mathbf{1}_N$ be the vector of $N$ ones and $I_n$ be the $n \times n$ identity matrix. For vectors $x_1, \dots, x_m$, $\mathrm{col}(x_1, \dots, x_m) = [x_1^\top, \dots, x_m^\top]^\top$. Denote $\mathbf{B}^x_\rho = \{x \in \mathbb{R}^n : \|x\| \le \rho\}$ for a constant $\rho > 0$, and $\Omega_c(W) = \{x \in \mathbb{R}^n : W(x) \le c\}$ for a constant $c > 0$ and a $C^1$ (continuously differentiable) positive definite and radially unbounded function $W$. $\mathcal{K}_\infty$ is the set of continuous, strictly increasing, and unbounded functions $\alpha : [0, \infty) \to [0, \infty)$ with $\alpha(0) = 0$.

II. PROBLEM FORMULATION

In this section, we provide some preliminaries and then formulate our problem.

We start with concepts from convex analysis. A differentiable function $f: \mathbb{R}^n \to \mathbb{R}$ is strictly convex if $(y - x)^\top(\nabla f(y) - \nabla f(x)) > 0$ for all $x \neq y \in \mathbb{R}^n$, and $f$ is $m$-strongly convex (for a constant $m > 0$) if $(y - x)^\top(\nabla f(y) - \nabla f(x)) \ge m\|y - x\|^2$ for all $x \neq y$ (see [26] for details).

Consider a network of $N$ agents with interaction topology described by an undirected graph $\mathcal{G}$. An undirected graph $\mathcal{G} := \{\mathcal{V}, \mathcal{E}\}$ is defined with $\mathcal{V} := \{1, 2, \dots, N\}$ as the node set and $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ as the edge set (without self-loops) [9]. An edge $(j, i) \in \mathcal{E}$ of the graph means that agents $i$ and $j$ can exchange information with each other. A path of $\mathcal{G}$ is an ordered sequence of distinct nodes in $\mathcal{V}$ such that any consecutive nodes in the sequence correspond to an edge of the graph. The graph is called connected if there exists a path from $i$ to $j$ for any two nodes $i, j \in \mathcal{V}$. $A = [a_{ij}]_{i,j=1,\dots,N}$ is the weighted adjacency matrix with $a_{ij} \ge 0$, and $a_{ij} = a_{ji} > 0$ if $(j, i) \in \mathcal{E}$. The associated Laplacian matrix $L = [l_{ij}]_{i,j=1,\dots,N}$ is defined by $l_{ii} = \sum_{j=1}^{N} a_{ij}$ and $l_{ij} = -a_{ij}$ for $i \neq j$.

In the multiagent system, agent $i$ is endowed with a local cost function $f_i: \mathbb{R} \to \mathbb{R}$ for $i = 1, \dots, N$. The global cost function $f: \mathbb{R} \to \mathbb{R}$ is defined as the sum of the local cost functions, that is

f(x) = \sum_{i=1}^{N} f_i(x), \quad x \in \mathbb{R}.    (1)

As in [15] and [27], we assume that the optimal solution set $X^* = \arg\min_{x \in \mathbb{R}} f(x)$ is nonempty.

The nonlinear uncertain agent dynamics can be expressed in the following form:

\dot{z}_i = g_{i1}(z_i, y_i, w)
\dot{y}_i = g_{i2}(z_i, y_i, w) + q_i(t) + u_i, \quad i = 1, \dots, N    (2)

where $(z_i, y_i) \in \mathbb{R}^n$ is the state of agent $i$, $y_i \in \mathbb{R}$ is its output, $u_i \in \mathbb{R}$ is the control input, $w \in \mathbb{W}$ denotes the constant uncertain parameter in a fixed compact set $\mathbb{W} \subset \mathbb{R}^{n_w}$, and the functions $g_{i1}, g_{i2}$ are sufficiently smooth and vanish at the origin. A nonlinear system in the form of (2) is said to have unity relative degree; such systems were widely studied in [5], [34], and references therein. $q_i(t) := \delta_i(v_i, w)$ is the local actuating disturbance of agent $i$, with $v_i$ generated by the local disturbance source

\dot{v}_i = S_i v_i, \quad v_i(0) \in \mathbb{V}_i    (3)

for a fixed compact set $\mathbb{V}_i \subset \mathbb{R}^{n_{v_i}}$, where $\delta_i(v_i, w)$ is polynomial in $v_i$ with coefficients depending on $w$. Without loss of generality, we assume that, for each $i = 1, \dots, N$, all the eigenvalues of the matrix $S_i$ have non-negative real parts (noting that eigenvalues with negative real parts make the corresponding signal components vanish exponentially, whose analysis becomes quite trivial; see [10, Remark 1.3] for details).

Remark 1: The disturbance source (3) can generate the fundamental sinusoidal/step/ramp type signals. In addition, it can also produce a good approximation of any bounded periodic disturbance signal by summing up a finite number of harmonics in its Fourier series expansion. Hence, it has been widely used to describe typical nontrivial disturbances in the control literature (see [2], [10], [13], [38]).
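As a simple illustration (our own, not taken from the paper), a disturbance of the form $q_i(t) = A\sin(\omega t + \phi) + c$, i.e., a sinusoid superposed on a step, can be generated by (3) with, for instance,

S_i = \begin{bmatrix} 0 & \omega & 0 \\ -\omega & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad v_i(0) = \begin{bmatrix} A\sin\phi \\ A\cos\phi \\ c \end{bmatrix}, \qquad \delta_i(v_i, w) = v_{i1} + v_{i3}

so that $v_{i1}(t) = A\sin(\omega t + \phi)$ and $v_{i3}(t) \equiv c$, while all eigenvalues of $S_i$ ($\pm j\omega$ and $0$) have zero real parts, consistent with the standing assumption on $S_i$. Appending a Jordan block $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ to $S_i$ would add a ramp component.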

Our control goal is to solve the problem of minimizing the cost function $f(x)$ with disturbance rejection by the agents, under a distributed output-feedback control $u_i$ of the form

\dot{\xi}_i = \gamma_{i1}\big(\xi_i, \nabla f_i(y_i), \sum_{j=1}^{N} a_{ij}(y_i - y_j)\big)
u_i = \gamma_{i2}\big(\xi_i, \nabla f_i(y_i), \sum_{j=1}^{N} a_{ij}(y_i - y_j)\big)    (4)

with $\xi_i(0) = 0$, where $\nabla f_i(y_i)$ is the gradient of $f_i$ at the output $y_i$. To be precise, we give the following definition.


Definition 1: The semi-global distributed optimization problem can be solved if, for any sets $\mathbf{B}^z_\rho$ and $\mathbf{B}^y_\rho$ with $z = \mathrm{col}(z_1, \dots, z_N)$, $y = \mathrm{col}(y_1, \dots, y_N)$ and a constant $\rho > 0$, we can design a distributed optimization control in the form of (4) such that, for any $\mathrm{col}(z(0), y(0)) \in \mathbf{B}^z_\rho \times \mathbf{B}^y_\rho$ and $v_i(0) \in \mathbb{V}_i$, the solution of the closed-loop system is well defined and $y_i(t)$, $i = 1, \dots, N$, converge to the same point $y^* \in X^*$.

Moreover, the global distributed optimization problem can be solved if we can design a distributed optimization control (4) such that the solution of the closed-loop system is well defined and $y_i(t)$, $i = 1, \dots, N$, converge to $y^* \in X^*$ for any $\mathrm{col}(z(0), y(0))$ and $v_i(0) \in \mathbb{V}_i$.

Note that, in the semi-global distributed optimization design, the initial conditions of the agents are assumed to lie within a predefined region specified by $\rho$, which can be made arbitrarily large but must be given in advance. In contrast, the global distributed optimization design works for all initial conditions.

Remark 2: Without assigning cost functions to the agent network, the problem becomes an output consensus or synchronization problem with external disturbances and can be solved by introducing an IM design (see [1], [4]). Note that the IMs proposed in the aforementioned literature can only be applied to disturbances consisting of constant and sinusoidal signals, based on an incremental passivity property, while our proposed distributed control can handle unbounded ramp signals as well.

Remark 3: When the agent dynamics reduce to a single integrator without any external disturbances, our problem becomes consistent with those in [15], [30], and [31]. In the presence of disturbances, Wang and Elia [30] studied the robustness issue with "bounded" optimization errors, in order to guarantee that the agents' states enter a small error neighborhood of the optimal point specified by the bounded disturbances. However, we consider exact optimization by completely rejecting a class of deterministic disturbances. In other words, we aim at a novel distributed optimization design for convergence to the exact optimal point, rather than to a small neighborhood of the optimal point.

To proceed further, we introduce the following well-known assumptions for our problem. The first assumption is typical for the optimization of continuous-time multiagent systems [30], [31].

Assumption 1: The undirected graph $\mathcal{G}$ is connected.

Remark 4: Under Assumption 1, we have the following results (see [9] for details). Zero is a simple eigenvalue of the matrix $L$ and $\mathbf{1}_N^\top L = 0$. Moreover, there exists a matrix $Q \in \mathbb{R}^{N \times (N-1)}$ with

\mathbf{1}_N^\top Q = 0, \quad Q^\top Q = I_{N-1}, \quad QQ^\top = I_N - \frac{1}{N}\mathbf{1}_N\mathbf{1}_N^\top

such that the matrix $\bar{L} := Q^\top L Q$ is positive definite.

The next assumption is about the cost functions in the optimization problem.

Assumption 2: All the local cost functions $f_i$, $i = 1, \dots, N$, are differentiable and convex on $\mathbb{R}$, and there exists at least one local cost function that is $m$-strongly convex on $\mathbb{R}$.

Remark 5: Assumption 2 means that there exists an index $i_0$ such that the local cost function $f_{i_0}$ is $m$-strongly convex on $\mathbb{R}$. If so, we can define $B = \mathrm{diag}(b_1, \dots, b_N)$ with

b_i = \begin{cases} 0, & \text{if } i \neq i_0 \\ m, & \text{if } i = i_0 \end{cases}    (5)

and $\hat{L} = L + B$. Then, under Assumption 1, as in the proof of [11, Lemma 3], the matrix $\hat{L}$ is positive definite. On the other hand, it is of interest to note that a twice continuously differentiable function $f_{i_0}$ is $m$-strongly convex if and only if $f_{i_0}''(x) \ge m$ for all $x \in \mathbb{R}$. In light of Assumption 2, the set $X^*$ contains a unique point $y^*$.

The following two assumptions describe the nonlinear agent dynamics; they were also used in the study of conventional nonlinear control problems (see [10], [14]).

Assumption 3: For the point $y^* \in X^*$, there exists a unique $z_i^*$ such that $g_{i1}(z_i^*, y^*, w) = 0$.

With

\bar{z}_i = z_i - z_i^*, \quad e_i = y_i - y^*    (6)

we obtain the translated agent dynamics

\dot{\bar{z}}_i = \bar{g}_{i1}(\bar{z}_i, e_i, w)
\dot{e}_i = \bar{g}_{i2}(\bar{z}_i, e_i, w) + \bar{q}_i(t) + u_i    (7)

where

\bar{g}_{i1}(\bar{z}_i, e_i, w) = g_{i1}(z_i, y_i, w) - g_{i1}(z_i^*, y^*, w)
\bar{g}_{i2}(\bar{z}_i, e_i, w) = g_{i2}(z_i, y_i, w) - g_{i2}(z_i^*, y^*, w)
\bar{q}_i(t) = q_i(t) + g_{i2}(z_i^*, y^*, w).

Assumption 4: There exists a $C^2$ (twice continuously differentiable) function $V_{0i}(\bar{z}_i)$ such that

\underline{\alpha}_{0i}(\|\bar{z}_i\|) \le V_{0i}(\bar{z}_i) \le \bar{\alpha}_{0i}(\|\bar{z}_i\|)
\frac{\partial V_{0i}(\bar{z}_i)}{\partial \bar{z}_i}\,\bar{g}_{i1}(\bar{z}_i, 0, w) \le -\|\bar{z}_i\|^2, \quad \forall w \in \mathbb{W}    (8)

for two class $\mathcal{K}_\infty$ functions $\underline{\alpha}_{0i}(\cdot)$ and $\bar{\alpha}_{0i}(\cdot)$.

Note that Assumption 3 describes the steady-state information necessary for achieving exact optimization; namely, if the designed control solves the distributed optimization problem (in the sense of Definition 1) with $y_i(t)$ converging to $y^*$, then $z_i(t)$ must converge to $z_i^*$ (see [10, Remark 3.10]). Assumption 4 characterizes a minimum-phase property of the agent (2), which ensures the effectiveness of output-feedback control (see [14], [29], [34]).
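As a simple illustration of Assumption 4 (our own, not from the paper), suppose the $\bar{z}_i$-subsystem is linear in $\bar{z}_i$, say $\bar{g}_{i1}(\bar{z}_i, e_i, w) = -a(w)\bar{z}_i + b(w)e_i$ with $a(w) \ge a_0 > 0$ for all $w \in \mathbb{W}$. Then the quadratic function

V_{0i}(\bar{z}_i) = \frac{1}{2a_0}\bar{z}_i^2 \quad\Longrightarrow\quad \frac{\partial V_{0i}}{\partial \bar{z}_i}\,\bar{g}_{i1}(\bar{z}_i, 0, w) = -\frac{a(w)}{a_0}\bar{z}_i^2 \le -\bar{z}_i^2

satisfies (8) with $\underline{\alpha}_{0i}(s) = \bar{\alpha}_{0i}(s) = s^2/(2a_0)$, so Assumption 4 holds. The $z_i$-subsystem of Example 1 in Section VI has exactly this structure.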

III. PROBLEM CONVERSION BASED ON INTERNAL MODEL

Here, we convert the distributed optimization problem into a distributed stabilization problem with an IM design.

Since $\delta_i(v_i, w)$ is polynomial in $v_i$, according to [10, Proposition 6.14], each of $\bar{q}_i(t) = \delta_i(v_i, w) + g_{i2}(z_i^*, y^*, w)$, $i = 1, \dots, N$, has a minimal zeroing polynomial of the form

p_i(\lambda) = \lambda^{s_i} - \ell_{i1} - \ell_{i2}\lambda - \cdots - \ell_{is_i}\lambda^{s_i - 1}    (9)

for some real numbers $\ell_{i1}, \dots, \ell_{is_i}$. Defining $\tau_i(t) := [\tau_{i1}, \dots, \tau_{is_i}]^\top$ with

\tau_{i1} = \bar{q}_i(t), \ \dots, \ \tau_{is_i} = \frac{d^{s_i-1}\bar{q}_i(t)}{dt^{s_i-1}}

it follows that

\dot{\tau}_i(t) = \Phi_i\tau_i(t), \quad \bar{q}_i(t) = \Gamma_i\tau_i(t)    (10)

where

\Phi_i = \begin{bmatrix} 0 & I_{s_i-1} \\ \ell_{i1} & \ell_{i2} \ \cdots \ \ell_{is_i} \end{bmatrix}, \quad \Gamma_i = \begin{bmatrix} 1 & 0_{1 \times (s_i - 1)} \end{bmatrix}.

Since the pair $(\Gamma_i, \Phi_i)$ is observable, let $G_i$ be a matrix such that $M_i := \Phi_i + G_i\Gamma_i$ is Hurwitz. Then, for each agent, an IM can be designed as follows (see [10], [38]):

\dot{\eta}_i = M_i\eta_i + G_iu_i, \quad \eta_i(0) = 0.    (11)
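As a concrete (hypothetical) sketch of this construction, the snippet below builds $\Phi_i$, $\Gamma_i$, and $M_i = \Phi_i + G_i\Gamma_i$ for the Type-I disturbance of Section VI (a sinusoid plus a step, so $s_i = 3$, $\ell_{i1} = \ell_{i3} = 0$, $\ell_{i2} = -\omega_i^2$) and checks numerically that the gain $G_i = -[3\sigma_i, 3\sigma_i^2, \sigma_i^3]^\top$ used in the examples renders $M_i$ Hurwitz; it is a sanity-check aid under these assumptions, not part of the paper's design procedure.

import numpy as np

def build_internal_model(ell, G):
    """Companion-form exosystem (Phi, Gamma) from the zeroing-polynomial
    coefficients ell = [ell_1, ..., ell_s], and M = Phi + G * Gamma."""
    s = len(ell)
    Phi = np.zeros((s, s))
    Phi[:-1, 1:] = np.eye(s - 1)      # upper block [0  I_{s-1}]
    Phi[-1, :] = ell                  # last row [ell_1 ... ell_s]
    Gamma = np.zeros((1, s))
    Gamma[0, 0] = 1.0                 # Gamma = [1 0 ... 0]
    M = Phi + np.outer(G, Gamma)      # M = Phi + G Gamma
    return Phi, Gamma, M

# Type-I disturbance q_i = A sin(w t) + A0: s_i = 3, ell = [0, -w^2, 0]
omega, sigma = np.pi / 4, 2.0
ell = [0.0, -omega**2, 0.0]
G = -np.array([3 * sigma, 3 * sigma**2, sigma**3])
Phi, Gamma, M = build_internal_model(ell, G)
print(np.linalg.eigvals(M))           # all eigenvalues should have negative real parts

The Type-II (sinusoid plus ramp) case can be checked the same way by passing the fourth-order data ell = [0, 0, -omega**2, 0] and the corresponding gain.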

Defining the coordinate transformation

\bar{\eta}_i = \eta_i - \tau_i(t) - G_ie_i, \quad \bar{u}_i = u_i + \Gamma_i\eta_i    (12)

and plugging (11) into (7) yields

\dot{\bar{z}}_i = \bar{g}_{i1}(\bar{z}_i, e_i, w)
\dot{\bar{\eta}}_i = M_i\bar{\eta}_i + \hat{g}_{i2}(\bar{z}_i, e_i, w)
\dot{e}_i = \tilde{g}_{i2}(\bar{z}_i, e_i, w) - \Gamma_i\bar{\eta}_i + \bar{u}_i    (13)

where

\hat{g}_{i2}(\bar{z}_i, e_i, w) = M_iG_ie_i - G_i\bar{g}_{i2}(\bar{z}_i, e_i, w)
\tilde{g}_{i2}(\bar{z}_i, e_i, w) = -\Gamma_iG_ie_i + \bar{g}_{i2}(\bar{z}_i, e_i, w).

Denoting

\bar{z} = \mathrm{col}(\bar{z}_1, \dots, \bar{z}_N), \quad \bar{\eta} = \mathrm{col}(\bar{\eta}_1, \dots, \bar{\eta}_N), \quad e = \mathrm{col}(e_1, \dots, e_N)    (14)

we have the following result.

Lemma 1: Consider the system (13) under Assumption 4. Suppose that, for any sets $\mathbf{B}^{\bar{z}}_\rho, \mathbf{B}^{\bar{\eta}}_\rho, \mathbf{B}^{e}_\rho$, there is a distributed output-feedback control

\dot{\xi}'_i = \gamma'_{i1}\big(\xi'_i, \nabla f_i(y_i), \sum_{j=1}^{N} a_{ij}(y_i - y_j)\big)
\bar{u}_i = \gamma'_{i2}\big(\xi'_i, \nabla f_i(y_i), \sum_{j=1}^{N} a_{ij}(y_i - y_j)\big)    (15)

with $\xi'_i(0) = 0$, such that, for any $(\bar{z}(0), \bar{\eta}(0), e(0)) \in \mathbf{B}^{\bar{z}}_\rho \times \mathbf{B}^{\bar{\eta}}_\rho \times \mathbf{B}^{e}_\rho$, the solution of the closed-loop system composed of (13) and (15) is bounded over $[0, \infty)$ and $e_i(t)$ converges to the origin. Then the semi-global distributed optimization problem given in Definition 1 can be solved.

Moreover, if all the sets $\mathbf{B}^{\bar{z}}_\rho, \mathbf{B}^{\bar{\eta}}_\rho, \mathbf{B}^{e}_\rho$ are the whole spaces, then the global distributed optimization problem given in Definition 1 can also be solved.

Proof: Due to the subadditivity of the norm, we have

\|\bar{z}_i\| \le \|z_i\| + \|z_i^*\|
\|\bar{\eta}_i\| \le \|\eta_i\| + \|\tau_i(t)\| + \|G_i\|(|y_i| + |y^*|)
|e_i| \le |y_i| + |y^*|.

For the fixed $y^*$ and its related $z_i^*$, and for any sets $\mathbf{B}^{z}_\rho, \mathbf{B}^{y}_\rho$, there exists a real number $\rho^* > 0$ such that, for each $(z(0), y(0))$ and $v_i(0)$ subject to

\|z(0)\| \le \rho, \quad \|y(0)\| \le \rho, \quad v_i(0) \in \mathbb{V}_i

it follows that

\|\bar{z}(0)\| \le \rho^*, \quad \|\bar{\eta}(0)\| \le \rho^*, \quad \|e(0)\| \le \rho^*.

On the other hand, under the distributed control (15), for any $(\bar{z}(0), \bar{\eta}(0), e(0)) \in \mathbf{B}^{\bar{z}}_{\rho^*} \times \mathbf{B}^{\bar{\eta}}_{\rho^*} \times \mathbf{B}^{e}_{\rho^*}$, the solution of the closed-loop system composed of (13) and (15) is bounded over $[0, \infty)$ and $e_i(t)$ converges to the origin. In view of these two aspects, together with the relationship

z_i = \bar{z}_i + z_i^*, \quad y_i = e_i + y^*, \quad \eta_i = \bar{\eta}_i + \tau_i(t) + G_ie_i    (16)

the solution of the closed-loop system composed of (2), (11), and (15) is well defined. Moreover, the output $y_i$ converges to the point $y^* \in X^*$. Thus, the semi-global distributed optimization problem given in Definition 1 can be solved by a distributed control composed of (11) and (15).

Similarly, the result for the global case can also be obtained.

IV. MAIN RESULTS

In this section, we provide and verify our IM-based optimization control designs for semi-global and global distributed optimization in the following two respective subsections.

A. Semi-Global Optimization Design

By Lemma 1, we need to construct a corresponding control to achieve semi-global distributed optimization. More specifically, for agent $i$, we construct a distributed control in the form of

\dot{\zeta}_i = \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j), \quad \zeta_i(0) = 0
\bar{u}_i = -\kappa\nabla f_i(y_i) - \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j) - \kappa\zeta_i    (17)

with the parameter $\kappa > 0$ to be determined (in Theorem 1).

Remark 6: Because we have taken $\zeta_i(0) = 0 \in \mathbb{R}$, $i = 1, \dots, N$, from $\sum_{i=1}^{N}\dot{\zeta}_i = 0$ we have the following useful identity:

\sum_{i=1}^{N}\zeta_i(t) = \sum_{i=1}^{N}\zeta_i(0) = 0.    (18)

Let $\zeta_i^* = -\nabla f_i(y^*)$, which satisfies $\sum_{i=1}^{N}\zeta_i^* = 0$. Then, with $\bar{\zeta}_i = \zeta_i - \zeta_i^*$, we obtain the closed-loop system

\dot{\bar{z}}_i = \bar{g}_{i1}(\bar{z}_i, e_i, w)
\dot{\bar{\eta}}_i = M_i\bar{\eta}_i + \hat{g}_{i2}(\bar{z}_i, e_i, w)
\dot{\bar{\zeta}}_i = \kappa\sum_{j=1}^{N} a_{ij}(e_i - e_j)
\dot{e}_i = \tilde{g}_{i2}(\bar{z}_i, e_i, w) - \Gamma_i\bar{\eta}_i - \kappa\psi_i(e_i) - \kappa\sum_{j=1}^{N} a_{ij}(e_i - e_j) - \kappa\bar{\zeta}_i    (19)

where $\psi_i(e_i) = \nabla f_i(y_i) - \nabla f_i(y^*)$. As in [15], it can be found that the system (19) has an equilibrium point at

(\bar{z}_i, \bar{\eta}_i, \bar{\zeta}_i, e_i) = (0, 0, 0, 0), \quad i = 1, \dots, N.    (20)


Clearly, with (14) and $\bar{\zeta} = \mathrm{col}(\bar{\zeta}_1, \dots, \bar{\zeta}_N)$, the system (19) can be expressed as

\dot{\bar{z}} = \bar{g}_1(\bar{z}, e, w)
\dot{\bar{\eta}} = M\bar{\eta} + \hat{g}_2(\bar{z}, e, w)
\dot{\bar{\zeta}} = \kappa L e
\dot{e} = \tilde{g}_2(\bar{z}, e, w) - \Gamma\bar{\eta} - \kappa\psi(e) - \kappa L e - \kappa\bar{\zeta}    (21)

where

\bar{g}_1 = \mathrm{col}(\bar{g}_{11}, \dots, \bar{g}_{N1}), \quad \hat{g}_2 = \mathrm{col}(\hat{g}_{12}, \dots, \hat{g}_{N2})
\tilde{g}_2 = \mathrm{col}(\tilde{g}_{12}, \dots, \tilde{g}_{N2}), \quad \psi = \mathrm{col}(\psi_1, \dots, \psi_N)
M = \mathrm{diag}(M_1, \dots, M_N), \quad \Gamma = \mathrm{diag}(\Gamma_1, \dots, \Gamma_N).

Then, by Lemma 1, if we can prove that, for any sets $\mathbf{B}^{\bar{z}}_\rho, \mathbf{B}^{\bar{\eta}}_\rho, \mathbf{B}^{e}_\rho$, there is a constant $\kappa > 0$ such that the solution of system (19) is bounded for $t \ge 0$ and converges to the origin for $\bar{\zeta}(0) = \mathrm{col}(\nabla f_1(y^*), \dots, \nabla f_N(y^*))$ and $\mathrm{col}(\bar{z}(0), \bar{\eta}(0), e(0)) \in \mathbf{B}^{\bar{z}}_\rho \times \mathbf{B}^{\bar{\eta}}_\rho \times \mathbf{B}^{e}_\rho$, then the semi-global distributed optimization problem is solved in the sense of Definition 1.

Consider the $(\bar{\zeta}, e)$ subsystem and take

\chi = T^\top e, \quad \vartheta = T^\top\bar{\zeta}, \quad T = \Big[\tfrac{1}{\sqrt{N}}\mathbf{1}_N \ \ Q\Big]    (22)

with $Q$ defined in Remark 4 and $\|T\| = 1$. Denote $\chi = \mathrm{col}(\chi_1, \chi_2)$, $\vartheta = \mathrm{col}(\vartheta_1, \vartheta_2)$ with $\chi_1, \vartheta_1 \in \mathbb{R}$ and $\chi_2, \vartheta_2 \in \mathbb{R}^{N-1}$. Then, (21) can be rewritten as $\dot{\vartheta}_1 = 0$ and

\dot{\bar{z}} = \bar{g}_1(\bar{z}, e, w)
\dot{\bar{\eta}} = M\bar{\eta} + \hat{g}_2(\bar{z}, e, w)
\dot{\vartheta}_2 = \kappa\bar{L}\chi_2
\dot{\chi}_1 = \tfrac{1}{\sqrt{N}}\mathbf{1}^\top(\tilde{g}_2(\bar{z}, e, w) - \Gamma\bar{\eta}) - \tfrac{\kappa}{\sqrt{N}}\mathbf{1}^\top\psi(e)
\dot{\chi}_2 = Q^\top(\tilde{g}_2(\bar{z}, e, w) - \Gamma\bar{\eta}) - \kappa Q^\top\psi(e) - \kappa\bar{L}\chi_2 - \kappa\vartheta_2    (23)

where $\bar{L}$ is positive definite by Remark 4. Recalling Remark 6 and $\sum_{i=1}^{N}\nabla f_i(y^*) = 0$, we get $\vartheta_1(t) \equiv 0$ for all $t \ge 0$. Hence, the asymptotic stability of system (23) at the equilibrium point implies that of system (21).
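A small numerical sketch (ours, with an assumed topology) of the decomposition (22): the orthonormal complement $Q$ of $\tfrac{1}{\sqrt{N}}\mathbf{1}_N$ can be obtained from any orthonormal basis of $\mathrm{span}\{\mathbf{1}_N\}^\perp$, after which one can check that $\bar{L} = Q^\top L Q$ is positive definite and that $T$ is orthogonal, so $\|e\| = \|\chi\|$.

import numpy as np
from scipy.linalg import null_space

def consensus_decomposition(L):
    """Return T = [ (1/sqrt(N)) 1_N , Q ] and Lbar = Q^T L Q for a Laplacian L,
    where the columns of Q form an orthonormal basis of span{1_N}^perp."""
    N = L.shape[0]
    v1 = np.ones((N, 1)) / np.sqrt(N)
    Q = null_space(np.ones((1, N)))      # orthonormal, satisfies 1^T Q = 0, Q^T Q = I
    T = np.hstack([v1, Q])               # orthogonal, so ||T|| = 1 and ||e|| = ||T^T e||
    return T, Q, Q.T @ L @ Q

# Assumed topology (the paper's Fig. 1 is not reproduced here): a 5-node path graph, unit weights.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
T, Q, Lbar = consensus_decomposition(L)
print(np.linalg.eigvalsh(Lbar))          # all eigenvalues positive when the graph is connected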

Thus, we are left to show the existence of $\kappa$ ensuring the asymptotic stability of system (23) at the equilibrium point with a certain basin of attraction. Before going further, let us examine the structure of the proposed optimization output-feedback control, which is designed as

\dot{\eta}_i = M_i\eta_i + G_iu_i, \quad \eta_i(0) = 0
\dot{\zeta}_i = \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j), \quad \zeta_i(0) = 0
u_i = -\kappa\nabla f_i(y_i) - \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j) - \kappa\zeta_i - \Gamma_i\eta_i, \quad i = 1, \dots, N    (24)

where $-\Gamma_i\eta_i$ is the IM term that removes the influence of external disturbances; $-\kappa\nabla f_i(y_i)$ is the gradient term that guides the agents toward the optimum; $-\kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j)$ is the consensus term that drives all the agents' outputs to converge to the same point; and $-\kappa\zeta_i$ is an integral term that corrects the error induced by the consensus term.
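To make the structure of (24) concrete, the following sketch (our own illustration, with hypothetical function and variable names) evaluates the controller state derivatives and the input for one agent; grad_fi stands for the local gradient $\nabla f_i$, y_nbrs holds the neighbors' outputs $y_j$ with weights a_ij, and (M_i, G_i, Gamma_i) come from the internal model (11), with G_i and Gamma_i taken as one-dimensional arrays.

import numpy as np

def agent_controller(y_i, y_nbrs, a_ij, eta_i, zeta_i, grad_fi, M_i, G_i, Gamma_i, kappa):
    """One evaluation of the distributed control (24) for a single agent.

    Returns (u_i, d_eta_i, d_zeta_i): the control input and the time
    derivatives of the controller states eta_i and zeta_i.
    """
    # consensus error  sum_j a_ij (y_i - y_j), using only neighboring outputs
    consensus = sum(a * (y_i - y_j) for a, y_j in zip(a_ij, y_nbrs))
    # control input: gradient term + consensus term + integral term + IM term
    u_i = (-kappa * grad_fi(y_i)
           - kappa * consensus
           - kappa * zeta_i
           - (Gamma_i @ eta_i).item())
    # controller-state dynamics (initialized at zero, cf. Remark 7)
    d_eta_i = M_i @ eta_i + G_i * u_i     # internal model (11)
    d_zeta_i = kappa * consensus          # integral state
    return u_i, d_eta_i, d_zeta_i

In a full simulation, these derivatives would be integrated together with the plant (2), e.g., by a fixed-step Euler or Runge-Kutta loop, as sketched in Section VI.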

Remark 7: Since the initial condition of controller (24) can be chosen arbitrarily, we set it to zero for technical simplicity, as in [16, Remark 7.3]. Furthermore, as shown in [15], such a choice of initial condition ensures that the closed-loop system (19) has an equilibrium point at the origin, which greatly benefits the Lyapunov-based convergence analysis of the optimization algorithm.

Remark 8: Conventional output consensus control designs need not contain the gradient term (resulting from the cost function) or the integral term, so the optimization control design is naturally much more complicated than that of output consensus or synchronization problems, which brings additional technical difficulties to the convergence analysis.

We are ready to prove the convergence of the proposed optimization design.

Theorem 1: Consider the system (23). Under Assumption 4, for any real number $\rho > 0$, there exists a real number $\kappa > 0$ such that, for any $\mathrm{col}(\bar{z}(0), \bar{\eta}(0), \vartheta_2(0), \chi(0)) \in \mathbf{B}^{\bar{z}}_\rho \times \mathbf{B}^{\bar{\eta}}_\rho \times \mathbf{B}^{\vartheta_2}_\rho \times \mathbf{B}^{\chi}_\rho$, the solution of system (23) is bounded for $t \ge 0$ and converges to the origin. That means, under Assumptions 1–4, the semi-global distributed optimization problem for the multiagent system (2) with the cost function (1) and the disturbance (3) can be solved by the optimization control (24).

Proof: First of all, let us establish some useful properties of the system (23).

1) Consider the $\bar{z}$ subsystem. By Assumption 4, let

V_0(\bar{z}) = \sum_{i=1}^{N} V_{0i}(\bar{z}_i)

which gives

\underline{\alpha}_0(\|\bar{z}\|) \le V_0(\bar{z}) \le \bar{\alpha}_0(\|\bar{z}\|), \quad \frac{\partial V_0(\bar{z})}{\partial \bar{z}}\,\bar{g}_1(\bar{z}, 0, w) \le -\|\bar{z}\|^2

for some smooth functions $\underline{\alpha}_0(\cdot), \bar{\alpha}_0(\cdot) \in \mathcal{K}_\infty$.

2) Consider the $\bar{\eta}$ subsystem. Since $M_i$ is Hurwitz, define

V_{1i}(\bar{\eta}_i) = \bar{\eta}_i^\top P_{1i}\bar{\eta}_i

where the matrix $P_{1i}$ is positive definite and satisfies $P_{1i}M_i + M_i^\top P_{1i} = -2I$. It can be seen that

\dot{V}_{1i}|_{(23)} = -2\|\bar{\eta}_i\|^2 + 2\bar{\eta}_i^\top P_{1i}\hat{g}_{i2}(\bar{z}_i, e_i, w) \le -\|\bar{\eta}_i\|^2 + \|P_{1i}\|^2\|\hat{g}_{i2}(\bar{z}_i, e_i, w)\|^2.

Notice that, by [10, Lemma 7.8]

\|P_{1i}\|^2\|\hat{g}_{i2}(\bar{z}_i, e_i, w)\|^2 \le p_{i1}(\bar{z}_i)\|\bar{z}_i\|^2 + p_{i2}(e_i)|e_i|^2

for smooth functions $p_{i1}(\cdot), p_{i2}(\cdot) \ge 1$. Thus, taking

V_1(\bar{\eta}) = \sum_{i=1}^{N} V_{1i}(\bar{\eta}_i)

yields

\dot{V}_1|_{(23)} \le -\|\bar{\eta}\|^2 + p_1(\bar{z})\|\bar{z}\|^2 + p_2(e)\|e\|^2

where

p_1(\bar{z}) = \sum_{i=1}^{N} p_{i1}(\bar{z}_i), \quad p_2(e) = \sum_{i=1}^{N} p_{i2}(e_i).


3) Consider the $(\chi, \vartheta_2)$ subsystem. Define the positive definite function

V_2(\chi, \vartheta_2) = \frac{1}{2}\chi^\top\chi + \frac{1}{2}\vartheta_2^\top\bar{L}^{-1}\vartheta_2.

It is straightforward to show that

\dot{V}_2|_{(23)} = -\kappa e^\top\psi(e) - \kappa\chi_2^\top\bar{L}\chi_2 + \chi^\top T^\top(\tilde{g}_2(\bar{z}, e, w) - \Gamma\bar{\eta}).

By Assumption 2, there exists an index $i_0$ such that

e^\top\psi(e) \ge m|e_{i_0}|^2.

Because $\chi_2^\top\bar{L}\chi_2 = e^\top L e$, by Remark 5 and $\|T\| = 1$, there exists a constant $\hat{m} > 0$ such that

-\kappa e^\top\psi(e) - \kappa\chi_2^\top\bar{L}\chi_2 \le -\kappa e^\top\hat{L}e \le -\kappa\hat{m}\|e\|^2 = -\kappa\hat{m}\|\chi\|^2.    (25)

Moreover, by (22) and [10, Lemma 7.8]

\chi^\top T^\top(\tilde{g}_2(\bar{z}, e, w) - \Gamma\bar{\eta}) \le p_3(\bar{z})\|\bar{z}\|^2 + p_4(e)\|e\|^2 + p_0\|\bar{\eta}\|^2

for a constant $p_0 > 0$ and smooth functions $p_3(\cdot), p_4(\cdot) \ge 1$. Therefore

\dot{V}_2|_{(23)} \le -\kappa\hat{m}\|\chi\|^2 + p_3(\bar{z})\|\bar{z}\|^2 + p_4(e)\|e\|^2 + p_0\|\bar{\eta}\|^2.

Next, we show the stability of the system (23) at the equilibrium point. Because $V_0(\bar{z})$, $V_1(\bar{\eta})$, $V_2(\chi, \vartheta_2)$ are positive definite and radially unbounded, for any $\mathbf{B}^{\bar{z}}_\rho, \mathbf{B}^{\bar{\eta}}_\rho, \mathbf{B}^{\vartheta_2}_\rho, \mathbf{B}^{\chi}_\rho$, there exists a constant $\bar{\rho} \ge 1$ such that

\mathbf{B}^{\bar{z}}_\rho \subset \Omega_{\bar{\rho}}(V_0), \quad \mathbf{B}^{\bar{\eta}}_\rho \subset \Omega_{\bar{\rho}}(V_1), \quad \mathbf{B}^{\vartheta_2}_\rho \times \mathbf{B}^{\chi}_\rho \subset \Omega_{\bar{\rho}}(V_2).

Obviously, all the sets $\Omega_{\bar{\rho}}(V_0), \Omega_{\bar{\rho}}(V_1), \Omega_{\bar{\rho}}(V_2)$ are compact. Then, from the $C^2$ property of $V_0(\bar{z})$ and by [10, Lemma 7.8], it can be verified that, for all $\bar{z} \in \Omega_{3\bar{\rho}}(V_0)$

\Big\|\frac{\partial V_0(\bar{z})}{\partial \bar{z}}\Big\| \le c_1\|\bar{z}\|
\|\bar{g}_1(\bar{z}, e, w) - \bar{g}_1(\bar{z}, 0, w)\| \le p_5(e)\|e\|
p_1(\bar{z}) \le c_1, \quad p_3(\bar{z}) \le c_1

for a real number $c_1 > 0$ and a smooth function $p_5(e) \ge 1$. Thus, we have

\dot{V}_0|_{(23)} \le -\frac{1}{2}\|\bar{z}\|^2 + \frac{1}{2}c_1^2p_5^2(e)\|e\|^2, \quad \bar{z} \in \Omega_{3\bar{\rho}}(V_0).

Consider the Lyapunov function candidate

V(\bar{z}, \bar{\eta}, \chi, \vartheta_2) = V_0(\bar{z}) + c_2V_1(\bar{\eta}) + c_2c_3V_2(\chi, \vartheta_2)

with

c_2 = \min\Big\{1, \frac{1}{4c_1(1 + c_3)}\Big\}, \quad c_3 = \min\Big\{1, \frac{1}{2p_0}\Big\}.

Clearly, $V(\bar{z}, \bar{\eta}, \chi, \vartheta_2)$ is positive definite and radially unbounded. It means that the set $\Omega_{3\bar{\rho}}(V)$ is compact and satisfies

\Omega_{3\bar{\rho}}(V) \subset \Omega_{3\bar{\rho}}(V_0) \times \Omega_{3\bar{\rho}'}(V_1) \times \Omega_{3\bar{\rho}''}(V_2)

for the two positive constants $\bar{\rho}' := \bar{\rho}/c_2$ and $\bar{\rho}'' := \bar{\rho}/(c_2c_3)$ and the compact sets $\Omega_{3\bar{\rho}}(V_0), \Omega_{3\bar{\rho}'}(V_1), \Omega_{3\bar{\rho}''}(V_2)$. Hence, due to (22), there exists a real number $c_4 > 0$ such that

\frac{1}{2}c_1^2p_5^2(e) + c_2p_2(e) + c_2c_3p_4(e) \le c_4, \quad \forall \chi \in \Omega_{3\bar{\rho}''}(V_2).

It follows that, on the set $\Omega_{3\bar{\rho}}(V)$

\dot{V}|_{(23)} \le -\frac{1}{2}\|\bar{z}\|^2 - \frac{c_2}{2}\|\bar{\eta}\|^2 - (c_2c_3\kappa\hat{m} - c_4)\|\chi\|^2.

If we choose

\kappa^\star = (c_4 + 1)/(c_2c_3\hat{m})

then, for each $\kappa \ge \kappa^\star$, we have

\dot{V}|_{(23)} \le -\frac{1}{2}\|\bar{z}\|^2 - \frac{c_2}{2}\|\bar{\eta}\|^2 - \|\chi\|^2, \quad \forall\,\mathrm{col}(\bar{z}, \bar{\eta}, \chi, \vartheta_2) \in \Omega_{3\bar{\rho}}(V).

Note that

\mathbf{B}^{\bar{z}}_\rho \times \mathbf{B}^{\bar{\eta}}_\rho \times \mathbf{B}^{\chi}_\rho \times \mathbf{B}^{\vartheta_2}_\rho \subset \Omega_{\bar{\rho}}(V_0) \times \Omega_{\bar{\rho}}(V_1) \times \Omega_{\bar{\rho}}(V_2) \subset \Omega_{3\bar{\rho}}(V).

By [14, Th. 8.4], for any $\mathrm{col}(\bar{z}(0), \bar{\eta}(0), \chi(0), \vartheta_2(0)) \in \mathbf{B}^{\bar{z}}_\rho \times \mathbf{B}^{\bar{\eta}}_\rho \times \mathbf{B}^{\chi}_\rho \times \mathbf{B}^{\vartheta_2}_\rho$, the solution of the system (23) remains in the set $\Omega_{3\bar{\rho}}(V)$ and satisfies

\lim_{t\to\infty}(\|\bar{z}(t)\| + \|\bar{\eta}(t)\| + \|\chi(t)\|) = 0.

On the other hand, from the last equation in (23), $\dot{\chi}_2(t)$ is bounded, i.e., $\chi_2(t)$ is uniformly continuous. By Barbalat's lemma, together with the boundedness of $\chi(t)$, it follows that $\lim_{t\to\infty}\dot{\chi}_2(t) = 0$. By the last equation of (23) again, $\lim_{t\to\infty}\vartheta_2(t) = 0$. Thus, the equilibrium point $(\bar{z}, \bar{\eta}, \chi, \vartheta_2) = (0, 0, 0, 0)$ of the system (23) is uniformly asymptotically stable with a basin of attraction containing $\mathbf{B}^{\bar{z}}_\rho \times \mathbf{B}^{\bar{\eta}}_\rho \times \mathbf{B}^{\chi}_\rho \times \mathbf{B}^{\vartheta_2}_\rho$.

This completes the proof.

Remark 9: From the above analysis, the key step of the two-step scheme is the convergence proof for the system (23), which is composed of several strongly coupled subsystems. On the one hand, due to the presence of the nonlinear terms $\bar{g}_1(\bar{z}, e, w)$ and $\tilde{g}_2(\bar{z}, e, w)$ in (23), the designed distributed control must produce a negative definite term in the variable $e$ in the derivative of the constructed Lyapunov function along the system (23) to dominate the effects of these nonlinear terms. Here, we overcome this difficulty based on the positive definiteness of the matrix $\hat{L}$ defined in Remark 5, and a quadratic negative definite term in the variable $e$ is obtained in (25). On the other hand, since we can only obtain a quadratic negative definite term in $e$ to dominate the general nonlinear functions, we have to prove that the time derivative of the Lyapunov function for the system (23) is negative definite on a compact set that is invariant for the system and contains any given region of initial conditions. To solve this problem, we construct such a compact set by means of the level sets $\Omega_c(V)$ of the Lyapunov function.


B. Global Distributed Optimization Design

In the preceding subsection, we discussed the semi-global version of distributed optimization. Here, we address the global distributed optimization problem under an alternative condition, where the designed global optimization controller must work for all initial conditions.

Because a global result needs a stronger condition than the semi-global one, we introduce another assumption to replace Assumption 4 for the global distributed optimization problem.

Assumption 5: The agent dynamics (2) satisfies the following two conditions.

1) There exists a $C^1$ function $V_{0i}(\bar{z}_i)$ such that

\underline{\alpha}_{0i}(\|\bar{z}_i\|) \le V_{0i}(\bar{z}_i) \le \bar{\alpha}_{0i}(\|\bar{z}_i\|)
\dot{V}_{0i}|_{(7)} \le -\|\bar{z}_i\|^2 + \nu_{i1}|e_i|^2, \quad \forall w \in \mathbb{W}    (26)

for a positive constant $\nu_{i1}$ and two class $\mathcal{K}_\infty$ functions $\underline{\alpha}_{0i}(\cdot)$ and $\bar{\alpha}_{0i}(\cdot)$.

2) There exists a positive constant $\nu_{i2}$ such that

\|\bar{g}_{i2}(\bar{z}_i, e_i, w)\| \le \nu_{i2}\|\bar{z}_i\| + \nu_{i2}|e_i|.    (27)

Remark 10: Compared with the semi-global result, the global result relies on the linear growth condition (27). In fact, this linear growth condition is not very restrictive for the study of global convergence, and it has also been widely used in consensus studies of nonlinear multiagent systems (see [17, Assumption 1] and [36, Assumption 1]).

Then we have the following result.

Theorem 2: Under Assumptions 1–3 and 5, there exists a constant $\kappa > 0$ such that the distributed control

\dot{\eta}_i = M_i\eta_i + G_iu_i, \quad \eta_i(0) = 0
\dot{\zeta}_i = \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j), \quad \zeta_i(0) = 0
u_i = -\kappa\nabla f_i(y_i) - \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j) - \kappa\zeta_i - \Gamma_i\eta_i, \quad i = 1, \dots, N    (28)

solves the global distributed optimization problem for the multiagent system (2) with the cost function (1) and the disturbance (3).

Proof: By Lemma 1, the global optimization problem is converted into the existence of a constant $\kappa > 0$ such that the equilibrium point $(\bar{z}_i, \bar{\eta}_i, \bar{\zeta}_i, e_i) = (0, 0, 0, 0)$, $i = 1, \dots, N$, of the system (23) is globally asymptotically stable. The proof can be completed in a manner similar to that of Theorem 1. Here, we only give a sketch of the proof to save space.

First, define $V_0(\bar{z})$, $V_1(\bar{\eta})$, and $V_2(\chi, \vartheta_2)$ as in the proof of Theorem 1. Under Assumption 5, it can be verified that

\dot{V}_0|_{(23)} \le -\|\bar{z}\|^2 + c_1'\|e\|^2
\dot{V}_1|_{(23)} \le -\|\bar{\eta}\|^2 + c_2'(\|\bar{z}\|^2 + \|e\|^2)
\dot{V}_2|_{(23)} \le -\kappa\hat{m}\|\chi\|^2 + c_3'(\|\bar{z}\|^2 + \|\bar{\eta}\|^2 + \|e\|^2)

where $c_1', c_2', c_3'$ are suitable positive constants.

Then we take

V'(\bar{z}, \bar{\eta}, \chi, \vartheta_2) = V_0(\bar{z}) + \frac{1}{2(1 + c_2')}V_1(\bar{\eta}) + \frac{1}{4(1 + c_2')c_3'}V_2(\chi, \vartheta_2)

which satisfies

\dot{V}'|_{(23)} \le -\frac{1}{2}\|\bar{z}\|^2 - \frac{1}{4(1 + c_2')}\|\bar{\eta}\|^2 - \Big(\frac{\kappa\hat{m}}{4(1 + c_2')c_3'} - \frac{c_2'}{2(1 + c_2')} - \frac{c_3'}{4(1 + c_2')c_3'}\Big)\|\chi\|^2.

Choosing a sufficiently large $\kappa$ gives

\dot{V}'|_{(23)} \le -\frac{1}{2}\|\bar{z}\|^2 - \frac{1}{4(1 + c_2')}\|\bar{\eta}\|^2 - \|\chi\|^2.

By using the invariance principle (see [14, Corollary 4.2]), we can conclude that the equilibrium point of the system (23) is globally asymptotically stable. Thus, the conclusion follows.

V. APPLICATIONS TO CONSENSUS PROBLEM

As we mentioned, distributed optimization can be viewed as an extension of multiagent consensus by additionally considering the optimization issue with cost functions. In this section, we show how the distributed optimization method can be used to study some basic consensus problems.

A. Output Average-Consensus

In multiagent coordination, a typical problem is to reach the average of the initial states of the multiagent system in a distributed manner. This problem is usually referred to as average-consensus and has been extensively studied in several concrete scenarios (see [24], [37], and references therein). Note that the above-mentioned results focused on linear multiagent systems and, to the best of our knowledge, there are very few results on average-consensus control for nonlinear multiagent systems. Here, with the developed distributed optimization method, we study a distributed control design for nonlinear output average-consensus. To be specific, we consider the problem given in the following definition.

Definition 2: The (semi-global) output average-consensus control problem for the nonlinear multiagent system (2) can be solved if, for any sets $\mathbf{B}^z_\rho$ and $\mathbf{B}^y_\rho$ with $z = \mathrm{col}(z_1, \dots, z_N)$, $y = \mathrm{col}(y_1, \dots, y_N)$ and a constant $\rho > 0$, we can design a distributed control such that, for any $\mathrm{col}(z(0), y(0)) \in \mathbf{B}^z_\rho \times \mathbf{B}^y_\rho$ and $v_i(0) \in \mathbb{V}_i$, the solution of the closed-loop system is well defined and each of $y_i(t)$, $i = 1, \dots, N$, converges to $y^\star = (1/N)\sum_{i=1}^{N} y_i(0)$ as $t \to \infty$.

To solve this problem, we introduce the following assumption to replace Assumptions 3 and 4.

Assumption 6: For a given $y^\star \in \mathbb{R}$, there is a unique solution $z_i^\star$ such that $g_{i1}(z_i^\star, y^\star, w) = 0$. Moreover, after taking the coordinate transformation (6), the translated system (7) satisfies Assumption 4.

Applying Theorem 1 to this output average-consensus problem, we have the following result.


Corollary 1: Under Assumptions 1 and 6 with $y^\star = (1/N)\sum_{i=1}^{N} y_i(0)$, there exists a constant $\kappa > 0$ such that the distributed control

\dot{\eta}_i = M_i\eta_i + G_iu_i, \quad \eta_i(0) = 0
\dot{\zeta}_i = \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j), \quad \zeta_i(0) = 0
u_i = -\kappa(y_i - y_i(0)) - \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j) - \kappa\zeta_i - \Gamma_i\eta_i, \quad i = 1, \dots, N    (29)

solves the semi-global output average-consensus problem for the multiagent system (2) with the disturbance (3).

Proof: By letting

f_i(x) = \frac{1}{2}(x - y_i(0))^2    (30)

we obtain the cost function in the form of (1) as

f(x) = \frac{1}{2}\sum_{i=1}^{N}(x - y_i(0))^2    (31)

whose minimization point is given by

y^\star = \frac{1}{N}\sum_{i=1}^{N} y_i(0).    (32)

Clearly, Assumptions 1–4 of Theorem 1 are verified for this specific case. By applying Theorem 1, the distributed control (29) solves the corresponding semi-global distributed optimization problem for the multiagent system (2) with the cost function (31) and the disturbance (3); that is, each of the agents' outputs $y_i$, $i = 1, \dots, N$, converges to $y^\star$ asymptotically. Thus the proof is completed.
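For completeness (a one-line verification, not spelled out in the proof), the minimization point (32) follows from the first-order optimality condition for (31):

\nabla f(x) = \sum_{i=1}^{N}(x - y_i(0)) = Nx - \sum_{i=1}^{N} y_i(0) = 0 \quad\Longleftrightarrow\quad x = \frac{1}{N}\sum_{i=1}^{N} y_i(0) = y^\star.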

A global result can be obtained in the same way by applying Theorem 2. Hence, the present study yields an interesting distributed control solving the nonlinear output average-consensus problem with disturbance rejection. In the distributed control (29), the local initial information $y_i(0)$ is used; notice that using initial conditions to assist consensus control has also been discussed in [7]. Obviously, when the dynamic uncertainty $z_i$ and the nonlinear function $g_{i2}$ vanish, our result is also consistent with the existing average-consensus results for single-integrator multiagent systems (see [24]).

B. Output Consensus

Another interesting consensus problem is output consensus in the leader-following setup. For example, in [19], for a class of nonlinear systems, based on the cyclic small-gain theorem, a distributed control was designed to force the outputs of the controlled agents to enter an arbitrarily small neighborhood of the desired consensus value (denoted by $y_0$) in the presence of bounded external disturbances.

Consider a group of nonlinear agents described by (2). The signal $y_0$ is referred to as the leader, which characterizes the desired output consensus value. The set $\mathcal{V}_L \subset \{1, 2, \dots, N\}$ is the nonempty set of agents that can obtain the information of $y_0$. Here, we will also show how our results can be applied to the output consensus problem given in the following definition.

Definition 3: The (semi-global) output consensus problem for the nonlinear multiagent system (2) can be solved if, for any sets $\mathbf{B}^z_\rho$ and $\mathbf{B}^y_\rho$ with $z = \mathrm{col}(z_1, \dots, z_N)$, $y = \mathrm{col}(y_1, \dots, y_N)$ and a constant $\rho > 0$, we can design a distributed control such that, for any $\mathrm{col}(z(0), y(0)) \in \mathbf{B}^z_\rho \times \mathbf{B}^y_\rho$ and $v_i(0) \in \mathbb{V}_i$, the outputs $y_i$, $i = 1, \dots, N$, converge to the desired consensus value $y_0$.

Suppose

f_i(x) = \begin{cases} \frac{1}{2}(x - y_0)^2, & \text{if } i \in \mathcal{V}_L \\ 0, & \text{if } i \notin \mathcal{V}_L. \end{cases}

Clearly, Assumption 2 is verified for all the local cost functions. Note that, in this specific case, the optimization point is $y^\star = y_0$ from the optimization viewpoint. Then this output consensus problem is formulated as a special case of the considered distributed optimization problem. By Theorem 1, we have the following result.

Corollary 2: Under Assumptions 1 and 6 with $y^\star = y_0$, there exists a constant $\kappa > 0$ such that the distributed control

\dot{\eta}_i = M_i\eta_i + G_iu_i, \quad \eta_i(0) = 0
\dot{\zeta}_i = \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j), \quad \zeta_i(0) = 0
u_i = -\kappa\nabla f_i(y_i) - \kappa\sum_{j=1}^{N} a_{ij}(y_i - y_j) - \kappa\zeta_i - \Gamma_i\eta_i, \quad i = 1, \dots, N    (33)

with $\nabla f_i(y_i) = y_i - y_0$ for $i \in \mathcal{V}_L$ and $\nabla f_i(y_i) = 0$ for $i \notin \mathcal{V}_L$, solves the semi-global output consensus problem for the multiagent system (2) with the disturbance (3).

There are two main differences between our results and those in [19]: 1) we solve the problem by a linear high-gain type distributed control, which is simpler than that given in [19], and 2) we provide an exact consensus design with the aid of IMs, as opposed to a design that forces the agents' outputs into an arbitrarily small neighborhood of the desired consensus value. In fact, our result provides at least another approach to the output consensus problem in the unity relative degree case. Additionally, if the considered systems are linear, then the semi-global results naturally become global results.

VI. EXAMPLES

In this section, we present two illustrative examples for the distributed optimization problem.

Example 1: Consider a five-agent network with the agents described by dynamics of the FitzHugh–Nagumo type (see [21])

\dot{z}_i = \theta_{i1}(y_i - \theta_{i2}z_i)
\dot{y}_i = y_i(y_i - \theta_{i3})(1 - y_i) - z_i + q_i(t) + u_i, \quad i = 1, \dots, 5    (34)

where $(z_i, y_i) \in \mathbb{R} \times \mathbb{R}$ is the system state, $u_i$ is the control input, and $q_i(t)$ is the external disturbance generated by a linear system in the form of (3). $\theta_{ij} := \theta'_{ij} + w_{ij} > 0$, $i = 1, \dots, 5$, $j = 1, 2, 3$, are real parameters with uncertainty $w_{ij}$ affecting the nominal value $\theta'_{ij}$. Denote $w := (w_{11}, w_{12}, w_{13}, \dots, w_{51}, w_{52}, w_{53})$. The network interaction topology, with all edge weights equal to 1, is shown in Fig. 1, which verifies Assumption 1.

Fig. 1. Interaction topology for the network.

Our aim is to design a distributed control to make all agents' outputs converge to the minimum point of the function $f(x) = \sum_{i=1}^{5} f_i(x)$ for $x \in \mathbb{R}$ in the presence of the disturbances $q_i$, where

f_1(x) = (x + 2)^2, \quad f_2(x) = (x - 4)^2
f_3(x) = x^4 + x^2 + 1, \quad f_4(x) = x^2\ln(1 + x^2)
f_5(x) = 5e^{0.2x}.    (35)

Clearly, for $i = 1, 2, 3$, the function $f_i$ is $m$-strongly convex on $\mathbb{R}$ with $m = 2$, and, for $i = 4, 5$, the function $f_i$ is convex. Therefore, Assumption 2 holds. Moreover, for the optimization point $y^* = 0.4032$, there exists $z_i^* = \theta_{i2}^{-1}y^*$ verifying Assumption 3. Also, Assumption 4 can be verified because $\theta_{i1}, \theta_{i2}$ are positive. Thus, all assumptions of Theorem 1 are verified, and hence a distributed optimization controller of the form (24) can be constructed.

For the simulations, two different types of disturbances are discussed as follows.

Type-I: We set $q_i(t) = A_{mi}\sin(\omega_it) + A_{0i}$ with $(A_{01}, \dots, A_{05}) = (3, 2, 1, -1, -2)$, $(A_{m1}, \dots, A_{m5}) = (5, 8, 4, 7, 10)$, and $(\omega_1, \dots, \omega_5) = (\pi/4, \pi/5, \pi/7, 2\pi/7, \pi/3)$. Clearly, $q_i(t)$ is bounded. In this case, the system (10) is given with $s_i = 3$ and $\ell_{i1} = \ell_{i3} = 0$, $\ell_{i2} = -\omega_i^2$. Let $G_i = -[3\sigma_i, 3\sigma_i^2, \sigma_i^3]^\top$ with $\sigma_i = 2$, and let the initial condition lie in the sets $\mathbf{B}^z_\rho = \{z \in \mathbb{R}^5 : \|z\| \le 10\}$ and $\mathbf{B}^y_\rho = \{y \in \mathbb{R}^5 : \|y\| \le 10\}$ with $z = (z_1, \dots, z_5)$ and $y = (y_1, \dots, y_5)$. In (24), we choose $\kappa = 15$ for the problem. The simulation result is shown in Fig. 2 with $(z_1(0), y_1(0), \dots, z_5(0), y_5(0)) = (2, -1, 1, 0, 3, 1, -1, 2, -4, 1)$ and all other initial conditions set to zero. It can be seen that $y_i(t)$ approaches the optimization point $y^* = 0.4032$ as $t \to \infty$ for $i = 1, \dots, 5$. For comparison, the agents' responses without using the IM are also shown in Fig. 2: without the IM, the algorithm can only guarantee a bounded convergence error and does not achieve exact optimization.

Fig. 2. Responses of the agents' outputs for Type-I in Example 1 (top: responses without using IM; bottom: responses using IM).

Type-II: Set $q_i(t) = A_{mi}\sin(\omega_it) + A_{0i}t$ with $(A_{01}, \dots, A_{05}) = (3, 2, 1, -1, -2)$, $(A_{m1}, \dots, A_{m5}) = (5, 8, 4, 7, 10)$, and $(\omega_1, \dots, \omega_5) = (\pi/4, \pi/5, \pi/7, 2\pi/7, \pi/3)$. Clearly, $q_i(t)$ is unbounded due to the ramp signal $A_{0i}t$. In this case, the system (10) is given with $s_i = 4$ and $\ell_{i1} = \ell_{i2} = \ell_{i4} = 0$, $\ell_{i3} = -\omega_i^2$. With $G_i = -[4\sigma_i, 6\sigma_i^2, 4\sigma_i^3, \sigma_i^4]^\top$ for $\sigma_i = 2$ and $\kappa = 15$, the algorithm (24) solves our problem. The simulation result is shown in Fig. 3 with the same initial condition as in Type-I. It can be seen that, even with unbounded disturbances, the output $y_i(t)$ still converges to $y^*$ for $i = 1, \dots, 5$. Similarly, the agents' responses without using the IM are shown in Fig. 3: without the IM, the algorithm fails to achieve the optimization.

Fig. 3. Responses of the agents' outputs for Type-II in Example 1 (top: responses without using IM; bottom: responses using IM).


Fig. 4. Responses of the agents' outputs for Type-I in Example 2 (top: responses without using IM; bottom: responses using IM).

Example 2: Consider a five-agent network with the agents described by dynamics of the Lorenz type (see [6], [38])

\dot{z}_{i1} = \theta_{i1}(y_i - z_{i1})
\dot{z}_{i2} = z_{i1}y_i - \theta_{i2}z_{i2}
\dot{y}_i = \theta_{i3}z_{i1} - y_i - z_{i1}z_{i2} + q_i(t) + u_i, \quad i = 1, \dots, 5    (36)

where $(z_{i1}, z_{i2}, y_i) \in \mathbb{R}^3$ is the system state, $u_i$ is the control input, and $q_i(t)$ is the external disturbance generated by a system in the form of (3). $\theta_{ij} := \theta'_{ij} + w_{ij} > 0$, $i = 1, \dots, 5$, $j = 1, 2, 3$, are real parameters with uncertainty $w_{ij}$ affecting the nominal value $\theta'_{ij}$. Denote $w := (w_{11}, w_{12}, w_{13}, \dots, w_{51}, w_{52}, w_{53})$. The network interaction topology, with all edge weights equal to 1, is again shown in Fig. 1. As in Example 1, our aim is to design a distributed control to make all agents' outputs converge to the minimum point of the function $f(x) = \sum_{i=1}^{5} f_i(x)$ with $f_i$ specified in (35). Clearly, Assumptions 1 and 2 are satisfied. Moreover, for the optimization point $y^* = 0.4032$, there exists $(z_{i1}^*, z_{i2}^*) = (y^*, \theta_{i2}^{-1}y^{*2})$ verifying Assumption 3. Also, since $\theta_{i1}, \theta_{i2}$ are positive, as shown in [39], Assumption 4 can be verified. Thus, all assumptions of Theorem 1 are verified, and hence a distributed optimization controller of the form (24) can be constructed.

Also, two different types of disturbances are considered as follows.

Type-I: We set $q_i(t) = A_{mi}\sin(\omega_it) + A_{0i}$ with $(A_{01}, \dots, A_{05}) = (3, 2, 1, -1, -2)$, $(A_{m1}, \dots, A_{m5}) = (5, 8, 4, 7, 10)$, and $(\omega_1, \dots, \omega_5) = (\pi/2, \pi/4, \pi/5, 2\pi/5, \pi/3)$. Clearly, $q_i(t)$ is bounded. In this case, the system (10) is given with $s_i = 3$ and $\ell_{i1} = \ell_{i3} = 0$, $\ell_{i2} = -\omega_i^2$. Let $G_i = -[3\sigma_i, 3\sigma_i^2, \sigma_i^3]^\top$ with $\sigma_i = 2$, and let the initial condition lie in the sets $\mathbf{B}^z_\rho = \{z \in \mathbb{R}^{10} : \|z\| \le 10\}$ and $\mathbf{B}^y_\rho = \{y \in \mathbb{R}^5 : \|y\| \le 10\}$ with $z = (z_{11}, z_{12}, \dots, z_{51}, z_{52})$ and $y = (y_1, \dots, y_5)$. In (24), we choose $\kappa = 10$ for the problem. The simulation result is shown in Fig. 4 with $(z_{11}(0), z_{12}(0), y_1(0), \dots, z_{51}(0), z_{52}(0), y_5(0)) = (2, 1, 0, -1, 1, 2, 3, -1, -2, -1, 1, 2, 2, 0, 4)$ and all other initial conditions set to zero. It can be seen that $y_i(t)$ approaches the optimization point $y^* = 0.4032$ as $t \to \infty$ for $i = 1, \dots, 5$. As in Example 1, the agents' responses without using the IM are also shown in Fig. 4: without the IM, the algorithm can only guarantee a bounded convergence error and does not achieve exact optimization.

Type-II: Set $q_i(t) = A_{mi}\sin(\omega_it) + A_{0i}t$ with $(A_{01}, \dots, A_{05}) = (3, 2, 1, -1, -2)$, $(A_{m1}, \dots, A_{m5}) = (5, 8, 4, 7, 10)$, and $(\omega_1, \dots, \omega_5) = (\pi/2, \pi/4, \pi/5, 2\pi/5, \pi/3)$. Clearly, $q_i(t)$ is unbounded due to the ramp signal $A_{0i}t$. In this case, the system (10) is given with $s_i = 4$ and $\ell_{i1} = \ell_{i2} = \ell_{i4} = 0$, $\ell_{i3} = -\omega_i^2$. With $G_i = -[4\sigma_i, 6\sigma_i^2, 4\sigma_i^3, \sigma_i^4]^\top$ for $\sigma_i = 2$ and $\kappa = 10$, the algorithm (24) solves our problem. The simulation result is shown in Fig. 5 with the same initial condition as in Type-I. It can be seen that the output $y_i(t)$ still converges to $y^*$ for $i = 1, \dots, 5$. Again, as shown in Fig. 5, without the IM the algorithm fails to achieve the optimization.

Fig. 5. Responses of the agents' outputs for Type-II in Example 2 (top: responses without using IM; bottom: responses using IM).


VII. CONCLUSION

In this paper, the distributed optimization problem was investigated for a class of nonlinear multiagent systems with unity relative degree subject to external disturbances. The problem was solved by a two-step design scheme: the first step constructs an IM that converts the distributed optimization problem into a distributed stabilization problem, and the second step solves the distributed stabilization problem. The obtained results were also shown to be applicable to some basic consensus problems. It is worth mentioning that distributed optimization with more general nonlinear agents and exogenous disturbances remains a challenging and important problem, which deserves further investigation.

REFERENCES

[1] H. Bai and S. Yusef, "Output synchronization of nonlinear systems under input disturbances," in Proc. 21st Int. Symp. Math. Theory Netw. Syst., Groningen, The Netherlands, 2014, pp. 1–8.
[2] Z. Chen and J. Huang, "A general formulation and solvability of the global robust output regulation problem," IEEE Trans. Autom. Control, vol. 50, no. 4, pp. 448–462, Apr. 2005.
[3] Z. Chen and H. T. Zhang, "Analysis of joint connectivity condition for multi-agents with boundary constraints," IEEE Trans. Cybern., vol. 43, no. 2, pp. 437–444, Apr. 2013.
[4] C. De Persis and B. Jayawardhana, "On the internal model principle in the coordination of nonlinear systems," IEEE Trans. Control Netw. Syst., vol. 1, no. 3, pp. 272–282, Sep. 2014.
[5] Z. Ding, "Consensus output regulation of a class of heterogeneous nonlinear systems," IEEE Trans. Autom. Control, vol. 58, no. 10, pp. 2648–2653, Oct. 2013.
[6] Z. Duan and G. Chen, "Global robust stability and synchronization of networks with Lorenz-type nodes," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 56, no. 8, pp. 679–683, Aug. 2009.
[7] M. Fan, Z. Chen, and H. Zhang, "Semi-global consensus of nonlinear second-order multi-agent systems with measurement output feedback," IEEE Trans. Autom. Control, vol. 59, no. 8, pp. 2222–2227, Aug. 2014.
[8] B. Gharesifard and J. Cortés, "Distributed continuous-time convex optimization on weight-balanced digraphs," IEEE Trans. Autom. Control, vol. 59, no. 3, pp. 781–786, Mar. 2014.
[9] C. Godsil and G. Royle, Algebraic Graph Theory. New York, NY, USA: Springer, 2001.
[10] J. Huang, Nonlinear Output Regulation: Theory and Applications. Philadelphia, PA, USA: SIAM, 2004.
[11] Y. Hong, J. Hu, and L. Gao, "Tracking control for multi-agent consensus with an active leader and variable topology," Automatica, vol. 42, no. 7, pp. 1177–1182, Jul. 2006.
[12] Y. Hong, X. Wang, and Z. P. Jiang, "Distributed output regulation of leader-follower multi-agent systems," Int. J. Robust Nonlin. Control, vol. 23, no. 1, pp. 48–66, Jan. 2013.
[13] A. Isidori, L. Marconi, and A. Serrani, Robust Autonomous Guidance: An Internal Model-Based Approach. London, U.K.: Springer, 2003.
[14] H. K. Khalil, Nonlinear Systems, 3rd ed. Upper Saddle River, NJ, USA: Prentice Hall, 2002.
[15] S. Kia, J. Cortés, and S. Martínez, "Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication," Automatica, vol. 55, pp. 254–264, May 2015.
[16] M. Krstic and P. V. Kokotovic, "Adaptive nonlinear design with controller-identifier separation and swapping," IEEE Trans. Autom. Control, vol. 40, no. 3, pp. 426–440, Mar. 1995.
[17] K. Liu, G. Xie, W. Ren, and L. Wang, "Consensus for multi-agent systems with inherent nonlinear dynamics under directed topologies," Syst. Control Lett., vol. 62, no. 2, pp. 152–162, Feb. 2013.
[18] Q. Liu and J. Wang, "A second-order multi-agent network for bound-constrained distributed optimization," IEEE Trans. Autom. Control, to be published.
[19] T. Liu and Z. P. Jiang, "Distributed output-feedback control of nonlinear multi-agent systems," IEEE Trans. Autom. Control, vol. 58, no. 11, pp. 2912–2917, Nov. 2013.
[20] Y. Lou, G. Shi, K. H. Johansson, and Y. Hong, "An approximate projected consensus algorithm for computing intersection of convex sets," IEEE Trans. Autom. Control, vol. 59, no. 7, pp. 1722–1736, Jul. 2014.
[21] G. S. Medvedev and N. Kopell, "Synchronization and transient dynamics in the chains of electrically coupled FitzHugh–Nagumo oscillators," SIAM J. Appl. Math., vol. 61, no. 5, pp. 1762–1801, 2001.
[22] Z. Meng, W. Ren, Y. Cao, and Z. You, "Leaderless and leader-following consensus with communication and input delays under a directed network topology," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 1, pp. 75–88, Feb. 2011.
[23] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Trans. Autom. Control, vol. 54, no. 1, pp. 48–61, Jan. 2009.
[24] R. Olfati-Saber and R. M. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1520–1533, Sep. 2004.
[25] W. Ren and R. W. Beard, Distributed Consensus in Multi-Vehicle Cooperative Control. London, U.K.: Springer, 2008.
[26] R. Rockafellar, Convex Analysis. Princeton, NJ, USA: Princeton Univ. Press, 1972.
[27] G. Shi, K. H. Johansson, and Y. Hong, "Reaching an optimal consensus: Dynamical systems that compute intersections of convex sets," IEEE Trans. Autom. Control, vol. 58, no. 3, pp. 610–622, Mar. 2013.
[28] Y. Su and J. Huang, "Cooperative output regulation of linear networked systems under switching topology," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 3, pp. 864–875, Jun. 2012.
[29] Y. Su and J. Huang, "Cooperative semi-global robust output regulation for a class of nonlinear uncertain multi-agent systems," Automatica, vol. 50, no. 4, pp. 1053–1065, Apr. 2014.
[30] J. Wang and N. Elia, "Control approach to distributed optimization," in Proc. Allerton Conf. Commun. Control Comput., Allerton, IL, USA, 2010, pp. 557–561.
[31] J. Wang and N. Elia, "A control perspective for centralized and distributed convex optimization," in Proc. 50th IEEE Conf. Decis. Control Eur. Control Conf. (CDC-ECC), Orlando, FL, USA, 2011, pp. 3800–3805.
[32] X. Wang, W. Ni, and X. Wang, "Leader-following formation of switching multirobot systems via internal model," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 3, pp. 817–826, Jun. 2012.
[33] X. Wang, P. Yi, and Y. Hong, "Dynamic optimization for multi-agent systems with external disturbances," Control Theory Technol., vol. 12, no. 2, pp. 132–138, May 2014.
[34] X. Wang, D. Xu, and Y. Hong, "Consensus control of nonlinear leader-follower multi-agent systems with actuating disturbances," Syst. Control Lett., vol. 73, pp. 58–66, Nov. 2014.
[35] E. Wei and A. Ozdaglar, "Distributed alternating direction method of multipliers," in Proc. 51st IEEE Conf. Decis. Control, Maui, HI, USA, 2012, pp. 5445–5450.
[36] G. Wen, Z. Duan, G. Chen, and W. Yu, "Consensus tracking of multi-agent systems with Lipschitz-type node dynamics and switching topologies," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 61, no. 2, pp. 499–511, Feb. 2014.
[37] G. Xie and L. Wang, "Consensus control for a class of networks of dynamic agents," Int. J. Robust Nonlin. Control, vol. 17, nos. 10–11, pp. 941–959, Jul. 2007.
[38] D. Xu, Y. Hong, and X. Wang, "Distributed output regulation of nonlinear multi-agent systems via host internal model," IEEE Trans. Autom. Control, vol. 59, no. 10, pp. 2784–2789, Oct. 2014.
[39] D. Xu and J. Huang, "Robust adaptive control of a class of nonlinear systems and its applications," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 57, no. 3, pp. 691–702, Mar. 2010.
[40] D. Yuan, S. Xu, and H. Zhao, "Distributed primal-dual subgradient method for multiagent optimization via consensus algorithms," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 6, pp. 1715–1724, Dec. 2011.
[41] T. Yucelen and M. Egerstedt, "Control of multiagent systems under persistent disturbances," in Proc. Amer. Control Conf., Montreal, QC, Canada, 2012, pp. 5264–5269.


Xinghu Wang received the B.Sc. degree in information and computing science from Shandong University, Weihai, China, in 2007, and the Ph.D. degree in control theory and engineering from the University of Science and Technology of China, Hefei, China, in 2012.

From 2013 to 2015, he was a Post-Doctorate with the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China. He is currently with the Department of Automation, University of Science and Technology of China. His current research interests include nonlinear control and multiagent systems.

Yiguang Hong received the B.Sc. and M.Sc. degrees in mechanics from Peking University, Beijing, China, and the Ph.D. degree in operations research and cybernetics from the Chinese Academy of Sciences (CAS), Beijing.

He is currently a Professor with the Academy of Mathematics and Systems Science, and the Director of the Key Laboratory of Systems and Control, CAS. His current research interests include nonlinear control, multiagent systems, hybrid systems, and software systems.

Prof. Hong was a recipient of the Guang Zhaozhi Award of the Chinese Control Conference, the Young Author Prize of the International Federation of Automatic Control World Congress, the Young Scientist Award of CAS, the Youth Award for Science and Technology of China, and the State Natural Science Prize of China. He is the IEEE Control Systems Society Chapter Activities Chair. He is the Editor-in-Chief of Control Theory and Technology and the Deputy Editor-in-Chief of Acta Automatica Sinica. He serves or has served as an Associate Editor of several journals, such as the IEEE Control Systems Magazine, the IEEE Transactions on Automatic Control, and Nonlinear Analysis: Hybrid Systems.

Haibo Ji was born in Anhui, China, in 1964. He received the B.Eng. degree from Zhejiang University, Hangzhou, China, in 1984, and the Ph.D. degree from Peking University, Beijing, China, in 1990, both in mechanical engineering.

He is currently a Professor with the Department of Automation, University of Science and Technology of China, Hefei, China. His current research interests include nonlinear control and its applications.