
Differential Equations, Vol. 39, No. 11, 2003, pp. 1554–1567. Translated from Differentsial’nye Uravneniya, Vol. 39, No. 11, 2003, pp. 1474–1486. Original Russian Text Copyright © 2003 by Dar’in, Kurzhanskii.

ORDINARY DIFFERENTIAL EQUATIONS

Control Under Indeterminacy and Double Constraints

A. N. Dar’in and A. B. Kurzhanskii
Moscow State University, Moscow, Russia

Received June 10, 2003

The present paper deals with control synthesis for a linear system subjected to indeterminate perturbations. We assume that the control is constrained both geometrically and integrally, while the noise is subjected only to a geometric condition. A similar statement of the problem was considered in the paper [1], but it was assumed there that the game is regular, which essentially implies that the game problem can be reduced to an optimal control problem. In the present paper, we consider the general case.

A doubly constrained control in the absence of indeterminacy was studied in [2, 3] and, in a somewhat different statement (as the problem of damping of a linear system by a control that simultaneously satisfies the minimum amplitude and the minimum energy expenditure conditions), in [4–6]. A doubly constrained system can be treated as a system with a geometric constraint and constrained phase coordinates [7–11].

We solve the problem by combining modified constructions of the Krasovskii extremal aiming [12, 13] and the Pontryagin alternating integral [14–20]: the control synthesis is constructed as a strategy extremal to the problem solvability set, which, in turn, is a limit of integral sums. In the present paper, we use the terminology in [18, 21].

1. STATEMENT OF THE PROBLEM

Consider the system

\[
\dot x(t) = A(t)x(t) + B(t)u + C(t)v, \qquad \dot k(t) = -\|u\|^2_{R(t)}, \qquad t_0 \le t \le t_1. \tag{1.1}
\]

Here x(t) ∈ R^n is the system state vector, k(t) ∈ R^1 is the current control resource, u ∈ R^{n_u} is the control, and v ∈ R^{n_v} is the noise.

Both the control and the noise are subjected to the geometric constraints

\[
u \in P(t), \qquad v \in Q(t), \qquad t_0 \le t \le t_1. \tag{1.2}
\]

Here P(t) and Q(t) are given continuous multimappings with nonempty convex compact values. Furthermore, the control resource cannot be negative; i.e., we have the phase constraint

\[
k(t) \ge 0, \qquad t_0 \le t \le t_1. \tag{1.3}
\]

We assume that the matrix R(t) is positive definite for all t. Then the control resource cannot increase and decreases whenever there is any control activity. As to the geometric constraint on the control, we assume that 0 ∈ P(t), since otherwise the set of admissible controls can be empty.

Without loss of generality, we consider system (1.1) with A(t) ≡ 0 and C(t) ≡ I:

\[
\dot x(t) = B(t)u + v, \qquad \dot k(t) = -\|u\|^2_{R(t)}, \qquad t_0 \le t \le t_1. \tag{1.4}
\]

Indeed, the matrix A(t) disappears after a well-known linear transformation, and the matrix C(t) becomes unnecessary if we replace the set Q(t) by C(t)Q(t). [The matrix B(t) cannot be eliminated in this way, since the control occurs in the second equation in (1.4) without this matrix.]
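To make the dynamics (1.4) and the two constraints concrete, here is a minimal simulation sketch (an editor's illustration rather than part of the paper: it assumes a scalar state, B(t) ≡ 1, a box P(t) = [−p, p], a constant weight R(t) ≡ r, and a bounded noise |v| ≤ q) that integrates the system by the Euler scheme and tracks how the resource k(t) is spent.

```python
import numpy as np

def simulate(x0, k0, u_of, v_of, t0=0.0, t1=1.0, dt=1e-3,
             p=1.0, r=1.0, q=0.2):
    """Euler integration of a scalar version of system (1.4):
        x' = u + v,   k' = -r*u**2,
    with the geometric constraint |u| <= p and the phase constraint k >= 0.
    u_of(t, x, k) and v_of(t) are user-supplied control and noise samples."""
    ts = np.arange(t0, t1, dt)
    x, k = x0, k0
    traj = []
    for t in ts:
        u = np.clip(u_of(t, x, k), -p, p)   # enforce u in P(t)
        if k <= 0.0:                        # resource exhausted: constraint (1.3)
            u = 0.0
        v = np.clip(v_of(t), -q, q)         # noise in Q(t)
        x += dt * (u + v)                   # x' = B(t)u + v with B = 1
        k -= dt * r * u**2                  # k' = -||u||^2_R
        traj.append((t, x, k))
    return np.array(traj)

# Example: a saturated proportional control against a constant disturbance.
out = simulate(x0=1.0, k0=0.5,
               u_of=lambda t, x, k: -2.0 * x,
               v_of=lambda t: 0.1)
print("final state %.3f, remaining resource %.3f" % (out[-1, 1], out[-1, 2]))
```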

The controls can belong to the following two classes.



1. The class U_{OL} of program controls consists of measurable functions that satisfy the geometric constraint (1.2) almost everywhere and ensure the validity of the integral constraint

\[
\int_{t_0}^{t} \|u(\tau)\|^2_{R(\tau)}\, d\tau \le k(t_0), \qquad t_0 \le t \le t_1,
\]

equivalent to the phase constraint (1.3). In general, this class depends on the initial resource k(t_0): the larger this value, the more controls belong to U_{OL} = U_{OL}(k(t_0)).

2. The class U_{CL} of positional strategies consists of multifunctions U(t, x, k) : [t_0, t_1] × R^n × R → conv R^n measurable with respect to t and jointly upper semicontinuous with respect to (x, k). The additional conditions U(t, x, k) ⊆ P(t) and U(t, x, k) = {0} for k < 0 ensure the validity of the geometric constraint (1.2) and the phase constraint (1.3), respectively.

The choice of strategies in the class U_{CL} guarantees the existence of solutions of the differential inclusion¹ [21]
\[
\begin{pmatrix} \dot x(t) \\ \dot k(t) \end{pmatrix} \in \operatorname{conv}\left\{ \begin{pmatrix} B(t)u \\ -\|u\|^2_{R(t)} \end{pmatrix} \,\middle|\, u \in U(t, x, k) \right\} + Q(t) = B(t, U(t, x, k)) + Q(t). \tag{1.5}
\]

[Here and throughout the following, to simplify the notation, we use the same symbol Q(t) for the set {(x, k) ∈ R^{n+1} | x ∈ Q(t), k = 0} in the space R^{n+1}.]

Let M ⊆ R^{n+1} be a given nonempty objective set whose sections² have the following properties:
(1) they are monotone nondecreasing: M(k_1) ⊆ M(k_2) if k_1 ≤ k_2;
(2) M(k) = ∅ for k < 0;
(3) the multimapping M(·) is continuous at all k such that M(k) ≠ ∅;
(4) the sets M(k) are convex and compact.
These conditions imply that M is closed and quasiconvex, entirely lies in the half-space {k ≥ 0}, and extends as k increases. The class of mappings R → conv R^n with properties (1)–(4) will be denoted by M. In some cases, property (4) is replaced by the following more restrictive property:
(4′) M is a convex set.
The corresponding class of mappings will be denoted by M′.

The aim of the present paper is to solve the following problem.

Problem 1. Find the solvability set W[t_0] ⊆ R^{n+1} and a positional control strategy U(t, x, k) belonging to U_{CL} such that all solutions of the differential inclusion issuing from the point (t, x(t), k(t)), t_0 ≤ t ≤ t_1, (x(t), k(t)) ∈ W[t], satisfy the inclusion x(t_1) ∈ M(k(t_1)) at the terminal time.

The solvability set depends on the objective set and the terminal time; whenever this dependence is important, we use the more complete notation W[t] = W(t; t_1, M). The sections of the solvability set will be denoted by W[k, t] = W(k, t; t_1, M(·)).

2. REDUCTION TO A PROBLEM WITHOUT PHASE CONSTRAINTS

We introduce one more class of strategies including U_{CL}.

3. The class U'_{CL} of positional strategies without the integral constraint consists of multifunctions U(t, x, k) : [t_0, t_1] × R^n × R → conv R^n measurable with respect to t and jointly upper semicontinuous with respect to (x, k) whose values satisfy the inclusion U(t, x, k) ⊆ P(t).

¹ The passage to the convex hull is purely technical and does not affect the control capabilities. As was mentioned in [21], the convexity of the right-hand side of the differential inclusion can be replaced by continuity. However, optimal positional strategies are only upper semicontinuous rather than continuous, and hence convexifying the right-hand side is preferable to requiring the strategies to be continuous. Note that, to ensure the existence of solutions, one needs only the measurability of the multimappings P(t) and Q(t) rather than their continuity.

² Let N be a given set in the space R^{n+1} of the variables (x, k). The sections of N are defined as the values of the multimapping N(k) = {x ∈ R^n | (x, k) ∈ N}. Obviously, the set N itself can be uniquely reconstructed from its sections, since it is the graph of the mapping N(·): N = {(x, k) ∈ R^{n+1} | x ∈ N(k)}. The properties of the set N and the mapping N(·) are related as follows: (1) if N is convex, then so are the sections N(k); (2) if the sets N(k) are locally bounded and closed, then the closedness of N is equivalent to the upper semicontinuity of N(·).


Problem 1′ coincides with Problem 1 with the only difference that the controls are taken in the class U'_{CL} and the corresponding solvability set is denoted by W'[t].

Theorem 2.1. The solvability sets W[t] of the problem with the phase constraint and W'[t] of the problem without it coincide.

Proof. The inclusion W[t] ⊆ W'[t] is obvious, since U'_{CL} ⊇ U_{CL}. Let us show that this inclusion is actually an equality.

Let (x(t_0), k(t_0)) ∈ W'[t_0], and let U'(t, x, k) be a control strategy guaranteeing the inclusion X'[t_1] ⊆ M. Here X'[t] is the solution funnel of the differential inclusion (1.5) with control U'. We introduce a control U ∈ U_{CL} as follows:

\[
U(t, x, k) = \begin{cases} U'(t, x, k) & \text{for } k \ge 0, \\ \{0\} & \text{for } k < 0. \end{cases} \tag{2.1}
\]

Let (x′(·), k′(·)) be a solution of the differential inclusion (1.5) with control U ′. Since

\[
(x'(t_1), k'(t_1)) \in M,
\]

we have k'(t_1) ≥ 0 and hence k'(t) ≥ 0, t_0 ≤ t ≤ t_1. The last inequality implies that
\[
\begin{pmatrix} \dot x'(t) \\ \dot k'(t) \end{pmatrix} \in B(t, U'(t, x'(t), k'(t))) + Q(t) = B(t, U(t, x'(t), k'(t))) + Q(t);
\]

i.e., (x'(·), k'(·)) is also a solution of the differential inclusion (1.5) with control U. Therefore, X'[·] ⊆ X[·].

On the other hand, let (x(·), k(·)) be a solution of the inclusion (1.5) with control U. Then it follows from (2.1) that k(t) ≥ 0, and we obtain X[·] ⊆ X'[·] by a similar argument. In turn, this implies that X[t_1] ⊆ M and the point (x(t_0), k(t_0)) lies in the solvability set W[t_0]. The proof of the theorem is complete.

This theorem shows that, to take account of the integral (phase) constraint, it suffices to redefine the strategies in U'_{CL} to be zero for negative values of the resource and that the solvability sets with and without the integral constraint coincide for an appropriate choice of the objective set. This gives one possible solution of Problem 1.
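In computational terms, the redefinition (2.1) is simply a wrapper that switches a strategy off once the resource is exhausted. A minimal sketch (assuming single-valued strategies of the form u(t, x, k), which is an illustrative simplification of the multivalued setting):

```python
def truncate_strategy(u_prime):
    """Given a positional strategy u_prime(t, x, k) from the class U'_CL,
    return the strategy of (2.1): identical for k >= 0 and equal to the
    zero control for k < 0, so that the phase constraint (1.3) holds."""
    def u(t, x, k):
        return u_prime(t, x, k) if k >= 0 else 0.0
    return u

# Usage with the simulator sketched in Section 1:
# u_trunc = truncate_strategy(lambda t, x, k: -2.0 * x)
```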

Problem 1′ is a synthesis problem for a linear system with geometric constraints on the noise and the controls. Such problems were studied in [14–19], and the ellipsoidal calculus [22] can be used efficiently for the construction of the solvability set.

In the given setting, it is more natural to seek controls minimizing the deviation d(x(t_1), M(k(t_1))). First, the sets M(k) are convex; second, this permits one to compute the distance without mixing the variables x and k, which usually have different dimensions. In general, the above-mentioned deviation is larger than the distance d((x(t_1), k(t_1)), M), and the difference can be very substantial.

Known methods (the alternating integral, the evolution equation, the ellipsoidal approximation) can be applied for the construction of the solvability set only if the terminal set is compact, but M is unbounded in the direction of the axis Ok. To sidestep this difficulty, we replace the original objective set by the “truncated” set

\[
M_K = M \cap (R^n \times [0, K]), \qquad M_K(k) = \begin{cases} M(k) & \text{for } k \le K, \\ \varnothing & \text{for } k > K. \end{cases}
\]

Here the number K > 0 can be treated as a reasonable upper bound for k. For example, this can be the fuel tank capacity: it makes no sense to solve the problem for all k if the resource is a priori bounded.

Let W'_K[t] be the solvability set in Problem 1′ with objective set M_K.

Lemma 2.1. The sets W'_K[·] and W[·] and their sections satisfy the following relations:
(1) W'_K[t] ⊂ W[t];
(2) W'_K[k, t] = W[k, t] for k ≤ K;
(3) ⋃_{K>0} W'_K[t] = W[t].


Proof. The first relation follows from the inclusion M_K ⊆ M. The second relation is a consequence of the fact that k(t) can only decrease in the course of time; therefore, W[k, t] depends only on the sections M(γ) of the objective set for γ ≤ k. Finally, the third relation is a straightforward corollary to the second relation. The proof of the lemma is complete.

3. THE ALTERNATING INTEGRAL

3.1. Program Solvability Sets

The solvability set W^+(t; t_1, M) of the maximin type is defined as the set of vectors (x, k) ∈ R^{n+1} such that for each admissible noise v(·) there exists an admissible program control u(·) ∈ U_{OL}(k) such that x(t_1) ∈ M(k(t_1)). Here (x(t_1), k(t_1)) is the endpoint of the trajectory issuing from the point x(t) = x, k(t) = k.

In a similar way, we define the solvability set W^-(t; t_1, M) of the minimax type as the set of vectors for which there exists an admissible control guaranteeing that x(t_1) ∈ M(k(t_1)) for any admissible noise v(·).

Lemma 3.1. The sections of the maximin and minimax solvability sets can be found by the formulas
\[
W^+(k, t; t_1, M(\cdot)) = \Bigl[ \bigcup_{0 \le \gamma \le k} \bigl( M(\gamma) - X_{GI}(t, t_1, k - \gamma) \bigr) \Bigr] - \int_t^{t_1} Q(\tau)\, d\tau, \tag{3.1}
\]
\[
W^-(k, t; t_1, M(\cdot)) = \bigcup_{0 \le \gamma \le k} \Bigl[ \Bigl( M(\gamma) - \int_t^{t_1} Q(\tau)\, d\tau \Bigr) - X_{GI}(t, t_1, k - \gamma) \Bigr]. \tag{3.2}
\]

Here X_{GI} stands for the attainability set from the origin under the double constraint, studied in [2]:
\[
X_{GI}(t, t_1, \Delta k) = \left\{ \int_t^{t_1} B(\tau)u(\tau)\, d\tau \;\middle|\; \int_t^{t_1} \|u(\tau)\|^2_{R(\tau)}\, d\tau \le \Delta k,\ u(\tau) \in P(\tau) \right\}.
\]
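As a concrete illustration (a scalar autonomous example chosen for exposition, not taken from [2]): for ẋ = u with |u| ≤ p, weight R ≡ r, and horizon T = t_1 − t, the set X_{GI}(t, t_1, Δk) is a symmetric interval whose half-width is the smaller of the purely geometric bound pT and the purely integral bound √(Δk·T/r) given by the Cauchy–Schwarz inequality; an extremal constant control attains this value while respecting both constraints.

```python
import math

def xgi_halfwidth(T, dk, p=1.0, r=1.0):
    """Half-width of X_GI(t, t1, dk) for the scalar autonomous system x' = u,
    |u| <= p, with the integral budget  ∫ r u(τ)^2 dτ <= dk  over a horizon T.
    For this autonomous case X_GI = X_G ∩ X_I, i.e. the smaller of
      the geometric bound  p*T  and  the integral bound  sqrt(dk*T/r)."""
    return min(p * T, math.sqrt(dk * T / r))

# The extremal constant control attains this value and respects both constraints.
T, dk, p, r = 0.5, 0.1, 1.0, 1.0
u_const = min(p, math.sqrt(dk / (r * T)))
assert abs(u_const * T - xgi_halfwidth(T, dk, p, r)) < 1e-12
assert r * u_const**2 * T <= dk + 1e-12 and u_const <= p
print(xgi_halfwidth(T, dk, p, r))
```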

The union in (3.1) and (3.2) can be taken over a smaller interval, namely, over [0 ∨ (k − δ), k], where
\[
\delta = \int_t^{t_1} \bigl\| R^{1/2}(\tau) P(\tau) \bigr\|\, d\tau = O(t_1 - t),
\]
since for Δk > δ, the integral constraint is nonbinding, the set X_{GI}(t, t_1, Δk) stops growing, and the set M(γ) does not become larger.

Note that the sections (3.1) and (3.2) are not convex in the general case, since the union of convex sets is not necessarily convex.

3.2. Integral Sums

Let τ_0, . . . , τ_m be an arbitrary partition of the interval [t, t_1]; here t = τ_0, t_1 = τ_m, and σ_i = τ_i − τ_{i−1} > 0. We denote this partition by T.

At the terminal time t_1, we set W^+_T[k, τ_m] = W^-_T[k, τ_m] = M(k), and at each of the preceding time instants, we recursively set
\[
W^+_T[k, \tau_{i-1}] = W^+\bigl( k, \tau_{i-1}; \tau_i, W^+_T[\cdot, \tau_i] \bigr), \qquad
W^-_T[k, \tau_{i-1}] = W^-\bigl( k, \tau_{i-1}; \tau_i, W^-_T[\cdot, \tau_i] \bigr).
\]
The sets
\[
W^+_T[k, \tau_0] = I^+_T(k, t; t_1, M(\cdot)) = I^+_T[k, t], \qquad
W^-_T[k, \tau_0] = I^-_T(k, t; t_1, M(\cdot)) = I^-_T[k, t]
\]
obtained at the last step are referred to as the upper and lower integral sums.


As was mentioned above, the program solvability sets and hence the integral sums are not necessarily convex; therefore, we need the following assumption.

Assumption 3.1. The mappings I^+_T[·, t] and I^-_T[·, t] belong to the class M for every partition T.

Under Assumption 3.1, the integral sums can be treated as mappings M → M, which permits us to state the semigroup property for the upper and lower alternating integrals and the solvability set in what follows. If M is a convex set, i.e., belongs to the class M′, then Assumption 3.1 is necessarily valid and the integral sums also belong to M′.

Suppose that, for some k, there exists a Hausdorff limit lim_{diam T → 0} h(I^+_T[k, t], I^+[k, t]) = 0 of upper integral sums independent of the choice of the sequence of partitions. Then the set I^+[k, t] = I^+(k, t; t_1, M(·)) is referred to as the upper alternating integral (for this value of k). In a similar way, one defines the lower alternating integral I^-[k, t] = I^-(k, t; t_1, M(·)) as the limit of lower integral sums.

Lemma 3.2. (1) The inclusion I^-[k, t] ⊆ W[k, t] ⊆ I^+[k, t] holds for t_0 ≤ t ≤ t_1 and k ≥ 0; (2) the set of mappings of the form M(·) → I^+(·, t; τ, M(·)) has the semigroup property:
\[
I^+(k, t; t_1, M(\cdot)) = I^+\bigl( k, t; \tau, I^+(\cdot, \tau; t_1, M(\cdot)) \bigr), \qquad t_0 \le t \le \tau \le t_1, \quad k \ge 0.
\]
The same is true for the lower alternating integral.

If the upper and lower integrals coincide, then the set I[k, t] = I^+[k, t] = I^-[k, t] is called the alternating integral. Obviously, W[k, t] = I[k, t] in this case.

3.3. The Case of a Convex Objective Set

Let the objective set M be convex. Then the maximin and minimax solvability sets are convex and can be found by the formulas [15, 18]
\[
W^+(t, t_1; M_K) = \Bigl[ M_K - \int_t^{t_1} B(\tau, P(\tau))\, d\tau \Bigr] - \int_t^{t_1} Q(\tau)\, d\tau,
\]
\[
W^-(t, t_1; M_K) = \Bigl[ M_K - \int_t^{t_1} Q(\tau)\, d\tau \Bigr] - \int_t^{t_1} B(\tau, P(\tau))\, d\tau.
\]

As usual, we need additional assumptions to prove the convergence of the integral sums.

Assumption 3.2. There exist continuous positive functions κ(t) and r(t), t_0 ≤ t ≤ t_1, such that I^+_T[κ(τ_i), τ_i] ⊇ B_{r(τ_i)} for any partition T = {τ_0, . . . , τ_m} and any i = 0, . . . , m.

Assumption 3.3. There exist continuous positive functions κ(t) and r(t), t_0 ≤ t ≤ t_1, and a number ε > 0 such that I^-_T[κ(τ_i), τ_i] ⊇ B_{r(τ_i)} for all partitions T = {τ_0, . . . , τ_m} of diameter ≤ ε, i = 0, . . . , m.

We take K > max{κ(t) + r(t) | t_0 ≤ t ≤ t_1}; then the integral sums I^+_T[t] = I^+_T(t, t_1; M_K) and I^-_T(t, t_1; M_K) satisfy the corresponding assumptions on the nonempty interior [19], which ensure the coincidence of the upper and lower alternating integrals. This permits one to state the following assertion.

Theorem 3.1. Let M(·) ∈ M′, and let
\[
k_0^+(t) \equiv \inf\bigl\{ k \mid \forall T\;\ I^+_T(k, t; t_1, M(\cdot)) \ne \varnothing \bigr\}, \qquad
k_0^-(t) \equiv \inf\bigl\{ k \mid \exists T\;\ I^-_T(k, t; t_1, M(\cdot)) \ne \varnothing \bigr\}.
\]
Then the following assertions hold.


(1) The upper alternating integral exists for all k ≥ k_0^+(t) under Assumption 3.2.
(2) The lower alternating integral exists for all k ≥ k_0^-(t) under Assumption 3.3.
(3) If both assumptions are valid, then k_0^+(t) ≡ k_0^-(t) = k_0(t) and, in addition,
(a) if k > k_0(t), then the upper and lower alternating integrals coincide with each other and with the solvability set: I^+[k, t] = I^-[k, t] = W[k, t] for k > k_0(t) and t_0 ≤ t ≤ t_1;
(b) if k < k_0(t), then the upper and lower integrals are empty;
(c) if k = k_0(t), then I^-[k_0(t), t] ⊆ I^+[k_0(t), t] for t_0 ≤ t ≤ t_1.
(4) The upper alternating integral coincides with the solvability set: I^+[k, t] = W[k, t] for k ≥ 0 and t_0 ≤ t ≤ t_1.

Proof. As was mentioned above, Assumptions 3.2 and 3.3 imply the corresponding assumptions that the integral sums I^+_T[t] and I^-_T[t] are nonempty for the problem with geometric constraints. Consequently [19], there exist Hausdorff limits I^+[t] and I^-[t] of these integral sums.

Under a refinement of the partition T, the upper integral sums decrease and the lower integral sums increase with respect to inclusion; thus one can use Lemmas A.1 and A.2 (see Section A below), which imply that there exist limits I^+_T[k, t] → I^+[k, t] and I^-_T[k, t] → I^-[k, t] of the sections and that these limits agree with the sections of the limit sets [here I^±[t](k) stands for the k-section of I^±[t]]:
(1) I^+[k, t] = I^+[t](k) for k ≥ k_0^+(t);
(2) I^-[k, t] = I^-[t](k) for k > k_0^-(t);
(3) I^-[k_0^-(t), t] ⊆ I^-[t](k_0^-(t)).

We have thereby justified the first two assertions. The remaining assertions follow from the fact that I^+[t] = I^-[t] = W[t] under our assumptions. The proof of the theorem is complete.

4. CONTROL SYNTHESIS

4.1. The Cost Function and the Hamilton–Jacobi–Isaacs–Bellman Equation

Let us require that the control should minimize the distance to the section of the objective set at the terminal time. This problem corresponds to the cost function

\[
V(t, x, k) = \inf_{U \in U_{CL}} \; \sup_{z(\cdot) \in Z_U(\cdot)} d\bigl( x(t_1), M(k(t_1)) \bigr), \tag{4.1}
\]

where z(t) = (x(t), k(t)) and Z_U(·) is the ensemble of solutions of the differential inclusion (1.5) with control U. Obviously, the solvability set is related to the cost function by the formulas W[k, t] = {x ∈ R^n | V(t, x, k) ≤ 0} and W[t] = {(x, k) ∈ R^{n+1} | V(t, x, k) ≤ 0}. In what follows, we show that the converse is also valid: the cost function does not exceed the distance to the section of the solvability set.

The cost function (4.1) is a minimax (viscosity) solution of the Hamilton–Jacobi–Bellman–Isaacs equation [23]

\[
\frac{\partial V}{\partial t} + \min_{u \in P(t)} \max_{v \in Q(t)} \left\{ \left\langle \frac{\partial V}{\partial x},\, B(t)u + v \right\rangle - \frac{\partial V}{\partial k}\, \|u\|^2_{R(t)} \right\} = 0, \qquad t_0 \le t \le t_1, \quad k \ge 0, \quad x \in R^n, \tag{4.2}
\]

with the boundary condition

\[
\Bigl( \partial V/\partial t + \max_{v \in Q(t)} \langle \partial V/\partial x,\, v \rangle \Bigr)\Bigr|_{k=0} = 0, \qquad t_0 \le t \le t_1, \quad x \in R^n, \tag{4.3}
\]

and the initial condition

\[
V(t_1, x, k) = d(x, M(k)), \qquad k \ge 0, \quad x \in R^n. \tag{4.4}
\]

The boundary condition can be rewritten in the explicit form

\[
V(t, x, 0) = \max_{v(\cdot):\, v(\tau) \in Q(\tau)} d\Bigl( x + \int_t^{t_1} v(\tau)\, d\tau,\ M(0) \Bigr) \le d\Bigl( x,\ M(0) - \int_t^{t_1} Q(\tau)\, d\tau \Bigr).
\]


The boundary condition arises here in a natural way and hence does not make system (4.2)–(4.4) overdetermined.

Note that, in the general case, the cost function immediately loses smoothness for k = 0 and t < t_1; therefore, taking d²(x, M(k)) as the initial condition would not make it a classical solution of Eq. (4.2). For this reason, we have chosen the initial condition (4.4), which is also more convenient for the construction of numerical schemes.

The following assertion is obvious.

Theorem 4.1. Let M_µ(k) = M(k) + B^n_µ, where B^n_µ is the ball of radius µ in R^n. Then V(t, x, k) = inf{µ ≥ 0 | x ∈ W(k, t; t_1, M_µ(·))}.

In the general case, the strict inclusion M_µ ⊂ (M + B^{n+1}_µ) ∩ {k ≥ 0} is valid under the passage to the space of the variables (x, k). For example, let M = {(x, k) ∈ R^{n+1} | ‖x‖ ≤ k}; then M_µ(0) = B^n_µ and (M + B^{n+1}_µ)(0) = B^n_{√2 µ}. This inclusion shows that our cost function substantially differs from the cost function with the initial condition V(t_1, x, k) = d((x, k), M).
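The value √2 µ can be checked directly: a point (x, 0) belongs to (M + B^{n+1}_µ)(0) iff there exist κ ≥ 0 and y with ‖y‖ ≤ κ and ‖x − y‖² + κ² ≤ µ², and maximizing ‖x‖ over κ gives κ = µ/√2. A short numerical sanity check of this maximization (scalar x, an editor's illustration):

```python
import math
import numpy as np

mu = 0.3
# A point (x, 0) lies in (M + B_mu)(0), with M = {(x, k): |x| <= k}, iff there
# exist kappa >= 0 and |y| <= kappa with (x - y)^2 + kappa^2 <= mu^2.
# For x > 0 the best choice is y = kappa, so the reachable |x| equals
# max over kappa of  kappa + sqrt(mu^2 - kappa^2).
kappas = np.linspace(0.0, mu, 10001)
reach = np.max(kappas + np.sqrt(mu**2 - kappas**2))
print(reach, math.sqrt(2) * mu)   # both approximately 0.4243
```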

If the cost function is known, then the optimal synthesis can be obtained as the minimizer in Eq. (4.2):
\[
U^*(t, x, k) = \operatorname{Arg\,min}_{u \in P(t)} \left\{ \left\langle \frac{\partial V}{\partial x},\, B(t)u \right\rangle - \frac{\partial V}{\partial k}\, \|u\|^2_{R(t)} \right\}.
\]

Since ∂V/∂k ≤ 0, it follows that the function under the sign of minimum is convex; therefore, the set U^*(t, x, k) is convex. Moreover, since the cost function is continuous, it follows that the mapping U^*(t, x, k) is upper semicontinuous with respect to x and k. Consequently, the above-mentioned control guarantees the existence and extendability of solutions of the differential inclusion (1.5).
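When P(t) is a Euclidean ball of radius p and R(t) = rI, the pointwise minimization defining U^*(t, x, k) has a closed form: the minimizer points against c = B(t)ᵀ ∂V/∂x, and its magnitude is clipped both by the ball and by the quadratic penalty coming from −∂V/∂k ≥ 0. A sketch under these simplifying assumptions (the partial derivatives of V are treated as given inputs; computing V itself is the hard part and is not attempted here):

```python
import numpy as np

def hjbi_minimizer(Vx, Vk, B, p=1.0, r=1.0):
    """Pointwise minimizer of <Vx, B u> - Vk * r * ||u||^2 over the ball ||u|| <= p,
    assuming Vk = dV/dk <= 0 (so the objective is convex in u) and R(t) = r*I.
    Vx is the gradient dV/dx; B is the input matrix at the current time."""
    c = B.T @ Vx
    nc = np.linalg.norm(c)
    if nc == 0.0:
        return np.zeros(B.shape[1])      # any admissible u is optimal; take 0
    lam = -Vk * r                        # >= 0 by assumption
    s = p if lam == 0.0 else min(p, nc / (2.0 * lam))
    return -s * c / nc                   # magnitude s along the direction -c

# Example: two-dimensional control, a unit input matrix.
u_star = hjbi_minimizer(Vx=np.array([1.0, -0.5]), Vk=-2.0, B=np.eye(2), p=1.0, r=1.0)
print(u_star)
```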

4.2. The Successive Maximin and Minimax

Let T = {τ_0, . . . , τ_m} be a partition of the interval [t, t_1]. We set

\[
V^+_T(\tau_m, x, k) = V^-_T(\tau_m, x, k) = d(x, M(k))
\]
at the terminal time t_1 = τ_m and
\[
V^+_T(\tau_{i-1}, x, k) = \max_{v[\tau_{i-1}, \tau_i]} \; \min_{u[\tau_{i-1}, \tau_i]} V^+_T(\tau_i, x(\tau_i), k(\tau_i)),
\]
\[
V^-_T(\tau_{i-1}, x, k) = \min_{u[\tau_{i-1}, \tau_i]} \; \max_{v[\tau_{i-1}, \tau_i]} V^-_T(\tau_i, x(\tau_i), k(\tau_i))
\]

at the preceding partition points, where (x(τ_i), k(τ_i)) is the endpoint of the trajectory issuing from the point (x(τ_{i−1}), k(τ_{i−1})) = (x, k), the maximum is taken over noises satisfying the geometric constraint, and the minimum is taken over controls in the class U_{OL}(k).

At the last step, we obtain the functions V^+_T(t, x, k) = V^+_T(τ_0, x, k) and V^-_T(t, x, k). These are the cost functions in the program control problem with correction at the time instants τ_i, the noise on the next interval being either known (V^+) or unknown (V^-) to the control. Obviously, they satisfy the inequality V^+_T(t, x, k) ≤ V(t, x, k) ≤ V^-_T(t, x, k).

The functions defined by the pointwise limits

\[
V^+(t, x, k) = \lim_{\operatorname{diam} T \to 0} V^+_T(t, x, k) = \sup_T V^+_T(t, x, k),
\]
\[
V^-(t, x, k) = \lim_{\operatorname{diam} T \to 0} V^-_T(t, x, k) = \inf_T V^-_T(t, x, k)
\]

will be called the successive maximin and successive minimax, respectively. Under our assumptions, the successive maximin coincides with the successive minimax and the cost function [24]:

\[
V^+(t, x, k) = V(t, x, k) = V^-(t, x, k). \tag{4.5}
\]


Theorem 4.2. Suppose that the upper and lower alternating integrals exist. Then

\[
V(t, x, k) \le d\bigl( x, I^+[k, t] \bigr) \le d\bigl( x, I^-[k, t] \bigr) \tag{4.6}
\]
for values k and t such that I^+[k, t] ≠ ∅.

Proof. Let us show that the successive maximin admits the estimate

\[
V^+(t, x, k) \le d\bigl( x, I^+[k, t] \bigr); \tag{4.7}
\]

then (4.6) will follow from (4.5) and (4.7). In turn, (4.7) follows from the estimate

\[
V^+_T(t, x, k) \le d\bigl( x, I^+_T[k, t] \bigr), \tag{4.8}
\]

which we shall prove by induction. Indeed, inequality (4.8) is valid as an equality at the terminal time τ_m = t_1. We suppose that it holds at time τ_i and show that then it is valid at time τ_{i−1}:

\[
V^+_T(\tau_{i-1}, x, k) = \max_{v[\tau_{i-1}, \tau_i]} \; \min_{u[\tau_{i-1}, \tau_i]} V^+_T(\tau_i, x(\tau_i), k(\tau_i))
= \max_{v[\tau_{i-1}, \tau_i]} \; \min_{0 \le \gamma \le k} \; \min_{\substack{u[\tau_{i-1}, \tau_i] \\ k(\tau_i) = \gamma}} V^+_T(\tau_i, x(\tau_i), \gamma).
\]

This value can only become larger if we take the minimum not over all γ ∈ [0, k] but only over γ such that W^+_T[γ, τ_i] ≠ ∅; the minimum of these γ will be denoted by γ_0. If γ_0 ≤ γ ≤ k, then one can use the estimate (4.8), which has already been proved for t = τ_i; therefore,

\[
\begin{aligned}
V^+_T(\tau_{i-1}, x, k) &\le \max_{v[\tau_{i-1}, \tau_i]} \; \min_{\gamma_0 \le \gamma \le k} \; \min_{\substack{u[\tau_{i-1}, \tau_i] \\ k(\tau_i) = \gamma}} d\Bigl( x + \int_{\tau_{i-1}}^{\tau_i} \bigl( B(\tau)u(\tau) + v(\tau) \bigr)\, d\tau,\ W^+_T[\gamma, \tau_i] \Bigr) \\
&\le d\Bigl( x,\ \bigcup_{\gamma_0 \le \gamma \le k} \bigl( W^+_T[\gamma, \tau_i] - X_{GI}(\tau_{i-1}, \tau_i, k - \gamma) \bigr) - \int_{\tau_{i-1}}^{\tau_i} Q(\tau)\, d\tau \Bigr)
= d\bigl( x, W^+_T[k, \tau_{i-1}] \bigr).
\end{aligned}
\]

Here we have used the formulas min_{p∈P} d(x + p, M) = d(x, M − P), max_{q∈Q} d(x + q, M) ≤ d(x, M − Q), and min_{k_1≤k≤k_2} d(x, M(k)) = d(x, ⋃_{k_1≤k≤k_2} M(k)). The proof of the theorem is complete.

4.3. The Evolution Equation

A multimapping (k, t) → Z[k, t] is said to be weakly invariant if the inclusion
\[
\begin{aligned}
Z[k, t] &\subseteq W^+(k, t; t + \sigma, Z(\cdot, t + \sigma)) \\
&= \bigcup_{0 \le \gamma \le k} \bigl( Z[\gamma, t + \sigma] - X_{GI}(t, t + \sigma, k - \gamma) \bigr) - \int_t^{t+\sigma} Q(\tau)\, d\tau
\end{aligned} \tag{4.9}
\]

holds for t_0 ≤ t < t + σ ≤ t_1. Note that weak invariance coincides with u-stability in the Krasovskii theory.

The solvability set W[k, t] is weakly invariant, since it necessarily lies in the program maximin solvability set. One can readily see that this is the maximal (with respect to inclusion) weakly invariant system of sets satisfying the condition Z[k, t_1] ⊆ M(k).

By replacing the integral of Q(τ) in (4.9) by the set σQ(t) [with an error of at most O(σ²)] and by passing to the limit, we obtain the evolution equation for Z[k, t] in the form

\[
\lim_{\sigma \downarrow 0} \sigma^{-1} h_+\Bigl( Z[k, t] + \sigma Q(t),\ \bigcup_{0 \le \gamma \le k} \bigl( Z[\gamma, t + \sigma] - X_{GI}(t, t + \sigma, k - \gamma) \bigr) \Bigr) = 0. \tag{4.10}
\]


Equation (4.10) can be simplified with regard to the fact that, on small intervals of time σ, the attainability set X_{GI} under the double constraint is close to the intersection of attainability sets under the geometric and integral constraints. To prove this fact, we introduce the following additional assumptions:

(1) n_u = n and 0 ∈ int P(t);
(2) the support function ρ(ℓ | P(t)) and the function R(t) satisfy the Lipschitz condition.
By the first assumption, without loss of generality, one can assume that B(t) ≡ I.

Theorem 4.3. Suppose that the above-mentioned conditions are satisfied at time t. Then
\[
h\bigl( X_{GI}(t, t + \sigma, \delta),\ X_G(t, t + \sigma) \cap X_I(t, t + \sigma, \delta) \bigr) = O(\sigma^2), \tag{4.11}
\]

where X_G and X_I are the attainability sets under the geometric and integral constraints,³ respectively:
\[
X_G(t, t + \sigma) = \int_t^{t+\sigma} P(\tau)\, d\tau, \qquad
X_I(t, t + \sigma, \delta) = E\Bigl( 0,\ \delta \int_t^{t+\sigma} R^{-1}(\tau)\, d\tau \Bigr).
\]

Proof. To prove this assertion, we use the following fact about the attainability set under the double constraint [2]: first, X_{GI} ⊆ X_G ∩ X_I; second, this inclusion becomes an equality for autonomous systems. Let us construct an autonomous system whose attainability set X̄_{GI} = X̄_G ∩ X̄_I is an interior approximation to X_{GI}. To this end, we need the following lemmas, which can be proved by straightforward computations.

Lemma 4.1. Let 0 ∈ int P(t), and let the support function ρ(ℓ | P(t)) at the point t satisfy the Lipschitz condition with constant C_P uniformly with respect to ℓ ∈ B_1. Let
\[
P_\sigma(t) = P(t)\bigl( 1 - \sigma r^{-1} C_P \bigr), \qquad r = \max\{ r \mid B_r \subseteq P(t) \}.
\]
Then the inclusion
\[
P(\tau) \supseteq P_\sigma(t), \qquad t \le \tau \le t + \sigma, \tag{4.12}
\]
and the estimate
\[
h\Bigl( \int_t^{t+\sigma} P(\tau)\, d\tau,\ \sigma P_\sigma(t) \Bigr) = O(\sigma^2) \tag{4.13}
\]
are valid for σ < r C_P^{-1}.

Lemma 4.2. Let R(·) be a matrix function such that R(t) > 0 for all t and the Lipschitz condition
\[
\bigl| \|\ell\|_{R^{-1}(t)} - \|\ell\|_{R^{-1}(\tau)} \bigr| \le C_R |\tau - t|\, \|\ell\| \quad \text{for all } \ell \in R^n
\]
is valid at the point t. Let
\[
R_\sigma^{-1}(t) = R^{-1}(t)\bigl( 1 - C_R \sigma \lambda_{\min}^{-1/2}(R^{-1}(t)) \bigr)^2.
\]
Then the inequality
\[
R_\sigma(t) \ge R(\tau), \qquad t \le \tau \le t + \sigma, \tag{4.14}
\]
and the estimate
\[
h\Bigl( E\Bigl( 0, \int_t^{t+\sigma} R^{-1}(\tau)\, d\tau \Bigr),\ E\bigl( 0, \sigma R_\sigma^{-1}(t) \bigr) \Bigr) = O\bigl( \sigma^{3/2} \bigr) \tag{4.15}
\]
are valid for σ < λ_{\min}^{1/2}(R^{-1}(t)) C_R^{-1}.

³ By E(q, Q) we denote the ellipsoid centered at q and determined by a matrix Q. It is the set whose support function is equal to ρ(ℓ | E(q, Q)) = ⟨ℓ, q⟩ + ‖ℓ‖_Q.


Let us continue the proof of Theorem 4.3. By X̄_{GI} we denote the attainability set of the system
\[
\dot x(\tau) = u(\tau), \qquad \dot k(\tau) = -\|u(\tau)\|^2_{R_\sigma(t)}, \qquad t \le \tau \le t + \sigma,
\]

under the geometric constraint u(τ) ∈ P_σ(t) and the phase condition k(τ) ≥ 0. By (4.12) and (4.14), we have the inclusion X̄_{GI} ⊆ X_{GI} ⊆ X_G ∩ X_I; therefore, from Corollary A.1, we obtain

\[
h(X_{GI}, X_G \cap X_I) \le h\bigl( \bar X_G \cap \bar X_I,\ X_G \cap X_I \bigr) \le \alpha + (\alpha + \beta) r^{-1} \operatorname{diam} \bar X_I, \tag{4.16}
\]
where r = √(δσ) λ_{min}^{1/2}(R_σ^{-1}(t)), diam X̄_I = √(δσ) λ_{max}^{1/2}(R_σ^{-1}(t)), α = h(X_G, X̄_G), and β = h(X_I, X̄_I).

The ratio (diam X̄_I)/r is independent of δ and σ and is bounded for t_0 ≤ t ≤ t_1; we denote its maximum value by C_λ. It follows from the estimates (4.13) and (4.15) that α = O(σ²) and β = O(δ^{1/2} σ^{3/2}). Since for δ ≫ σ, the integral constraint is nonbinding and X_{GI} = X_G (i.e., the assertion of the theorem is valid automatically), it is reasonable to consider only the case in which δ = O(σ), i.e., β = O(σ²). Then relation (4.16) implies the desired estimate (4.11). The proof of the theorem is complete.

By using the estimate (4.11) and Lemma A.6 (see Section A below), we obtain the following assertion.

Corollary 4.1. The equation

\[
\lim_{\sigma \downarrow 0} \sigma^{-1} h_+\Bigl( Z[k, t] + \sigma Q(t),\ \bigcup_{0 \le \gamma \le k} \bigl[ Z[\gamma, t + \sigma] - \sigma P(t) \cap E\bigl( 0, (k - \gamma)\sigma R^{-1}(t) \bigr) \bigr] \Bigr) = 0
\]

is equivalent to the evolution equation (4.10).

Note that, in this case, just as in (3.1) and (3.2), the union is actually computed over the smaller interval [0 ∨ (k − δ), k], where δ is at most O(σ). The size of the set σP(t) ∩ E(0, (k − γ)σR^{-1}(t)) is also estimated as O(σ), in spite of the fact that the ellipsoid has the size O(√σ) for large k − γ.

4.4. Synthesizing Strategies

Theorem 4.4. Let Z[k, t] be a weakly invariant multimapping such that the support function of its values has continuous partial derivatives with respect to the variables t and k at points where Z[k, t] is nonempty. Then the function h(t, x, k) = d(x, Z[k, t]) satisfies the differential inequality

\[
\min_{u \in P(t)} \; \max_{v \in Q(t)} \frac{dh}{dt}(t, x(t), k(t)) \le 0 \tag{4.17}
\]

on its domain.

Proof. We take a sufficiently small σ such that t < t + σ ≤ t_1 and estimate the expression

\[
H(\sigma) = \max_{v[t, t+\sigma]} \; \min_{u[t, t+\sigma]} h\bigl( t + \sigma, x(t + \sigma), k(t + \sigma) \bigr)
\]

as in the proof of Theorem 4.2; then we obtain

\[
H(\sigma) \le d\Bigl( x,\ \Bigl[ \bigcup_{0 \le \gamma \le k} \bigl( Z[\gamma, t + \sigma] - X_{GI}(t, t + \sigma, k - \gamma) \bigr) \Bigr] - \int_t^{t+\sigma} Q(\tau)\, d\tau \Bigr).
\]

This, together with (4.9), implies the inequality H(σ) ≤ d(x, Z[k, t]). By passing to the limit in the last relation, we obtain (4.17). The proof of the theorem is complete.


The strategy U_Z(t, x, k) providing the minimum in (4.17) is called the extremal strategy to the mapping Z[k, t]. It consists of all elements u^* ∈ P(t) satisfying the maximum principle

\[
\langle \ell^0, B(t)u^* \rangle + \|u^*\|^2_{R(t)}\, \rho_k(t, k, \ell^0) = \min_{u \in P(t)} \bigl\{ \langle \ell^0, B(t)u \rangle + \|u\|^2_{R(t)}\, \rho_k(t, k, \ell^0) \bigr\},
\]

where ρ(t, k, ℓ) = ρ(ℓ | Z[k, t]) and ℓ^0 = ℓ^0(t, x, k) is the maximizer in the problem
\[
\langle \ell, x \rangle - \rho(\ell \mid Z[k, t]) \to \max, \qquad \|\ell\| \le 1.
\]

If the mapping Z[k, t] is continuous with respect to k, then its extremal strategy is upper semicontinuous with respect to x and k and hence belongs to the class U_{CL} of positional strategies guaranteeing the existence of solutions of the differential inclusion (1.5).
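A sketch of this extremal rule for an intentionally simple tube (an editor's illustration: the sections Z[k, t] are balls of radius ρ_0 + αk around a known center z_c, P(t) is a ball of radius p, and R(t) = rI, so that ρ(ℓ | Z[k, t]) = ⟨ℓ, z_c⟩ + (ρ_0 + αk)‖ℓ‖ and ρ_k = α‖ℓ‖):

```python
import numpy as np

def extremal_control(x, k, zc, rho0=0.5, alpha=1.0, B=None, p=1.0, r=1.0):
    """Extremal strategy U_Z(t, x, k) for a ball-valued tube
    Z[k, t] = {z : ||z - zc|| <= rho0 + alpha*k}   (an illustrative assumption).
    Step 1: l0 maximizes <l, x> - rho(l | Z[k, t]) over ||l|| <= 1.
    Step 2: u* minimizes <l0, B u> + ||u||_R^2 * rho_k(l0) over the ball ||u|| <= p."""
    B = np.eye(len(x)) if B is None else B
    d = x - zc
    nd = np.linalg.norm(d)
    if nd <= rho0 + alpha * k:           # x already in the section: no aiming needed
        return np.zeros(B.shape[1])
    l0 = d / nd                          # unit maximizer of the distance problem
    c = B.T @ l0
    lam = r * alpha * np.linalg.norm(l0) # rho_k(t, k, l0) times the weight r
    nc = np.linalg.norm(c)
    if nc == 0.0:
        return np.zeros(B.shape[1])
    s = p if lam == 0.0 else min(p, nc / (2.0 * lam))
    return -s * c / nc

print(extremal_control(x=np.array([2.0, 0.0]), k=0.5, zc=np.zeros(2)))
```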

Corollary 4.2. Let (x(t), k(t)) be a solution of the differential inclusion (1.5) with control U_Z. If x(t) ∈ Z[k(t), t], then x(τ) ∈ Z[k(τ), τ] for t ≤ τ ≤ t_1.

By taking Z[k, t] = W[k, t], we obtain the solution of Problem 1′, that is, the strategy U_W. Moreover, since the interior ellipsoidal approximation E^-[t] to the solvability set [19, 21] (computed for Problem 1′ without the phase constraint in the case of a convex objective set) is u-stable, it follows that the strategy U_{E^-} ensures that the trajectory does not leave E^-[t].

A. AUXILIARY ASSERTIONS

The following assertions permit one to pass from the convergence of a sequence of sets in the space R^{n+1} to the convergence of their sections in the space R^n. We assume that a sequence {A_n}_{n=1}^∞ of nonempty convex compact sets is given which has a Hausdorff limit A ∈ conv R^{n+1}. We set k_- = inf{k | ∃(x, k) ∈ A} and k_+ = sup{k | ∃(x, k) ∈ A}. Obviously, we can speak of convergence only for k ∈ [k_-, k_+], since A(k) = ∅ outside this interval.

In the proof, we use the notation Π_k = {(x, κ) | x ∈ R^n, κ = k} and the representation of the sections A(k) in the form A ∩ Π_k. Although the latter set formally belongs to R^{n+1} and the sections belong to R^n, we can identify them; this does not affect the distance and convergence for any fixed k.

Lemma A.1. Let {A_n} be a sequence decreasing by inclusion, i.e., satisfying A_{n+1} ⊆ A_n. Then A_n(k) → A(k) in the Hausdorff metric for all k ∈ [k_-, k_+].

Proof. Since A_{n+1}(k) ⊆ A_n(k), we have
\[
A_n(k) \to \bigcap_{n=1}^{\infty} A_n(k) = \bigcap_{n=1}^{\infty} (A_n \cap \Pi_k) = \Bigl( \bigcap_{n=1}^{\infty} A_n \Bigr) \cap \Pi_k = A \cap \Pi_k = A(k).
\]
The proof of the lemma is complete.

Lemma A.2. Let {A_n} be a sequence increasing by inclusion, i.e., satisfying A_{n+1} ⊇ A_n. Then A_n(k) → A(k) in the Hausdorff metric for all k ∈ (k_-, k_+). The inclusion
\[
A_n(k_\pm) \to A_\pm(k_\pm) \subseteq A(k_\pm)
\]
is valid for the extreme values of k.

Proof. If k ∈ (k_-, k_+), then Π_k ∩ ri A ≠ ∅; consequently, Π_k ∩ ri ⋃_{n=1}^{∞} A_n ≠ ∅. By using Theorem 6.5 in [25], we obtain
\[
\lim A_n(k) = \operatorname{cl} \bigcup_{n=1}^{\infty} A_n(k) = \operatorname{cl} \bigcup_{n=1}^{\infty} (A_n \cap \Pi_k) = \operatorname{cl} \Bigl[ \Bigl( \bigcup_{n=1}^{\infty} A_n \Bigr) \cap \Pi_k \Bigr] = \operatorname{cl} \Bigl( \bigcup_{n=1}^{\infty} A_n \Bigr) \cap \operatorname{cl} \Pi_k = A \cap \Pi_k = A(k).
\]

The proof of the lemma is complete.


Lemma A.3. Let N be a nonempty closed convex set in the space R^{n+1} of the variables (x, k). Then the support function of this set and the support functions of its sections are related by the formulas
\[
\rho\left( \begin{pmatrix} \ell \\ \lambda \end{pmatrix} \,\middle|\, N \right) = \sup_{k \in R} \bigl( \rho(\ell \mid N(k)) + k\lambda \bigr), \tag{A.1}
\]
\[
\rho(\ell \mid N(k)) = \inf_{\lambda \in R} \rho\left( \begin{pmatrix} \ell \\ \lambda \end{pmatrix} \,\middle|\, N - \begin{pmatrix} 0 \\ k \end{pmatrix} \right). \tag{A.2}
\]

Proof. Relation (A.1) follows from the definition of the support function and the fact that the set N consists of the sets of the form N(k) × {k}.

To prove relation (A.2), we represent the left-hand side in the form

\[
\rho(\ell \mid N(k)) = \rho\left( \begin{pmatrix} \ell \\ 0 \end{pmatrix} \,\middle|\, N(k) \times \{k\} \right) = \rho\left( \begin{pmatrix} \ell \\ 0 \end{pmatrix} \,\middle|\, N \cap \Pi_k \right),
\]

where Π_k is the plane {(x, κ) | κ = k}. Then

\[
\rho(\ell \mid N(k)) = \inf_{\substack{\ell_1 + \ell_2 = \ell \\ m_1 + m_2 = 0}} \left[ \rho\left( \begin{pmatrix} \ell_1 \\ m_1 \end{pmatrix} \,\middle|\, N \right) + \rho\left( \begin{pmatrix} \ell_2 \\ m_2 \end{pmatrix} \,\middle|\, \Pi_k \right) \right].
\]

The second term is infinite if ℓ_2 ≠ 0; therefore, the minimum is attained at ℓ_1 = ℓ and ℓ_2 = 0. By setting m = m_1 = −m_2, we obtain (A.2). The proof of the lemma is complete.
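Relation (A.1) is easy to verify numerically on a simple set; the sketch below (an editor's illustration, not part of the proof) uses the unit disk in the (x, k)-plane, whose sections are N(k) = [−√(1 − k²), √(1 − k²)] for |k| ≤ 1, and compares both sides of (A.1) on a grid of values of k.

```python
import numpy as np

# N = unit disk in the (x, k)-plane; its sections are N(k) = [-sqrt(1-k^2), sqrt(1-k^2)].
# (A.1): rho((l, lam) | N) = sup_k [ rho(l | N(k)) + k*lam ].
ks = np.linspace(-1.0, 1.0, 20001)
for l, lam in [(1.0, 0.0), (0.3, -0.7), (-1.2, 2.5)]:
    lhs = np.hypot(l, lam)                               # support function of the unit disk
    rhs = np.max(np.abs(l) * np.sqrt(1.0 - ks**2) + ks * lam)
    assert abs(lhs - rhs) < 1e-3, (l, lam, lhs, rhs)
print("relation (A.1) confirmed on the test directions")
```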

An application of Lemmas A.1–A.3 to the epigraph of a convex function gives results similar to relations for the level sets of this function [25, 26].

Condition 1. Let A_1, A_2, C_1, and C_2 be nonempty convex compact sets such that (1) A_1 ⊆ A_2 and C_1 ⊆ C_2; (2) h(A_1, A_2) ≤ α and h(C_1, C_2) ≤ γ, where α, γ ≥ 0; (3) A_1 ∩ (C_1 − B_r) ≠ ∅ for some r > 0. We wish to estimate the Hausdorff distance between the intersections A_1 ∩ C_1 and A_2 ∩ C_2 via α and γ.

Lemma A.4. If Condition 1 is satisfied, then
\[
h(A_1 \cap C_1, A_1 \cap C_2) \le \gamma r^{-1} \operatorname{diam}(A_1 \cap C_1), \tag{A.3}
\]
where diam A stands for the diameter of the least ball containing A.

Proof. First, we note that C_2 ⊆ C_1 + B_γ by condition (2); we need to show that A_1 ∩ C_2 ⊆ A_1 ∩ C_1 + B_ε, where ε stands for the right-hand side of (A.3).

It follows from condition (3) that there exists a point x such that x ∈ A_1 and x + B_r ⊆ C_1. Let a point y lie in the intersection of A_1 and C_2. If y ∈ C_1, then d(y, A_1 ∩ C_1) = 0; therefore, we assume that y ∉ C_1 and d(y, C_1) ≤ γ.

Since A_1 is convex, we see that it contains the entire segment joining the points x and y. By z we denote the point of intersection of the boundary of C_1 with [x, y]. (Such a point is unique by virtue of Theorem 2.6 in [27].) Then, by the separation theorem for convex sets, there exists a direction ℓ separating the set C_1 and the interval [z, y]. Since ℓ then a fortiori separates [z, y] and the set x + B_r, we have

\[
\langle \ell, z \rangle \ge \rho(\ell \mid x + B_r) \implies \langle \ell, z - x \rangle \ge r\|\ell\|. \tag{A.4}
\]

Since d(y, C_1) ≤ γ, it follows that
\[
\langle \ell, y \rangle - \underbrace{\rho(\ell \mid C_1)}_{= \langle \ell, z \rangle} \le \gamma \|\ell\| \implies 0 \le \langle \ell, y - z \rangle \le \gamma \|\ell\|. \tag{A.5}
\]


By combining inequalities (A.4) and (A.5), we obtain

\[
0 \le \langle \ell, y - z \rangle \le \gamma r^{-1}(r\|\ell\|) \le \gamma r^{-1} \langle \ell, z - x \rangle.
\]

Since the vectors y − z and z − x are codirected, we have

\[
\|y - z\| \le \gamma r^{-1} \|z - x\| \le \gamma r^{-1} \operatorname{diam}(A_1 \cap C_1).
\]

The last inequality is valid since x, z ∈ A_1 ∩ C_1. Therefore, A_1 ∩ C_2 ⊆ A_1 ∩ C_1 + B_ε, which completes the proof of the lemma.

Lemma A.5. If Condition 1 is satisfied, then
\[
h(A_1 \cap C_1, A_2 \cap C_1) \le \alpha\bigl( 1 + r^{-1} \operatorname{diam}(A_1 \cap C_1) \bigr).
\]

Proof. Let x ∈ A_2 ∩ C_1; then d(x, A_1) ≤ α. Let y be the point of A_1 nearest to x. Since ‖x − y‖ ≤ α and x ∈ C_1, we have y ∈ C_1 + B_α. Consequently,
\[
d(x, A_1 \cap C_1) \le \|x - y\| + d(y, A_1 \cap C_1) \le \alpha + \alpha r^{-1} \operatorname{diam}(A_1 \cap C_1) = \alpha\bigl( 1 + r^{-1} \operatorname{diam}(A_1 \cap C_1) \bigr)
\]
by the preceding lemma. The proof of the lemma is complete.

Lemmas A.4 and A.5, together with the triangle inequality, readily imply the following assertion.

Corollary A.1. One has h(A_1 ∩ C_1, A_2 ∩ C_2) ≤ α + (α + γ) r^{-1} diam(A_1 ∩ C_1).

For practical applications of this result, it is convenient to use the inequality
\[
\operatorname{diam}(A_1 \cap C_1) \le \min\{ \operatorname{diam} A_1, \operatorname{diam} C_1 \}.
\]

Therefore, it is unnecessary to compute the number diam(A_1 ∩ C_1); if it is given, then the resulting estimate is more accurate.

Lemma A.6. Let Ξ = {ξ} be a given set of indices, and let A_ξ, C_ξ ∈ conv R^n be given sets for each index ξ ∈ Ξ. Moreover, let h(A_ξ, C_ξ) ≤ ε for all ξ ∈ Ξ. Then h(A, C) ≤ ε, where
\[
A = \operatorname{conv} \bigcup_{\xi \in \Xi} A_\xi, \qquad C = \operatorname{conv} \bigcup_{\xi \in \Xi} C_\xi.
\]

Proof. Since h_+(A_ξ, C_ξ) ≤ ε, we have ρ(ℓ | A_ξ) − ρ(ℓ | C_ξ) ≤ ε‖ℓ‖. Hence it follows that
\[
\begin{aligned}
h_+(A, C) &= \max_{\|\ell\| \le 1} \bigl[ \rho(\ell \mid A) - \rho(\ell \mid C) \bigr]
= \max_{\|\ell\| \le 1} \Bigl[ \sup_{\xi' \in \Xi} \rho(\ell \mid A_{\xi'}) - \sup_{\xi'' \in \Xi} \rho(\ell \mid C_{\xi''}) \Bigr] \\
&= \max_{\|\ell\| \le 1} \sup_{\xi' \in \Xi} \inf_{\xi'' \in \Xi} \bigl[ \rho(\ell \mid A_{\xi'}) - \rho(\ell \mid C_{\xi''}) \bigr]
\underset{\xi'' = \xi'}{\le} \max_{\|\ell\| \le 1} \sup_{\xi' \in \Xi} \bigl[ \rho(\ell \mid A_{\xi'}) - \rho(\ell \mid C_{\xi'}) \bigr] \le \varepsilon.
\end{aligned}
\]

In a similar way, one can obtain an estimate for h_-(A, C). The proof of the lemma is complete.
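Lemma A.6 can also be checked numerically via support functions, since h(A, C) = max_{‖ℓ‖ ≤ 1} |ρ(ℓ | A) − ρ(ℓ | C)| for convex compacta; the sketch below (an editor's illustration with randomly generated planar point sets, each C_ξ a translate of A_ξ by a vector of length ε, so that h(A_ξ, C_ξ) ≤ ε) evaluates the support functions of the convex hulls of the unions on a grid of directions.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05
thetas = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
L = np.column_stack([np.cos(thetas), np.sin(thetas)])    # unit directions

A_all, C_all = [], []
for _ in range(5):                                       # five indices xi
    A_xi = rng.normal(size=(30, 2))
    shift = rng.normal(size=2)
    shift *= eps / np.linalg.norm(shift)                 # so that h(A_xi, C_xi) <= eps
    A_all.append(A_xi)
    C_all.append(A_xi + shift)
A, C = np.vstack(A_all), np.vstack(C_all)                # generators of conv(union ...)

rho_A, rho_C = (L @ A.T).max(axis=1), (L @ C.T).max(axis=1)
h = np.max(np.abs(rho_A - rho_C))                        # Hausdorff distance of the hulls
assert h <= eps + 1e-9
print(h, "<=", eps)
```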

ACKNOWLEDGMENTS

The work was supported in part by the program "Universities of Russia–Basic Research" (project no. UR.3.3.07) and by the Russian Foundation for Basic Research (project no. 03-01-00663).

REFERENCES

1. Ledyaev, Yu.S., Trudy Mat. Inst. im. Steklova Akad. Nauk SSSR, 1985, vol. 167, pp. 207–215.
2. Dar’in, A.N. and Kurzhanskii, A.B., Differents. Uravn., 2001, vol. 37, no. 11, pp. 1476–1484.
3. Dar’in, A.N., Izv. RAN. Teoriya i Sistemy Upravleniya, 2003, vol. 42, no. 4, pp. 515–523.


4. Krasovskii, N.N., Prikl. Mat. Mekh., 1965, vol. 29, no. 2, pp. 218–225.
5. Bondarenko, V.I., Krasovskii, N.N., and Filimonov, Yu.M., Prikl. Mat. Mekh., 1965, vol. 29, no. 5, pp. 828–834.
6. Bondarenko, V.I. and Filimonov, Yu.M., Prikl. Mat. Mekh., 1968, vol. 32, no. 1, pp. 147–153.
7. Kurzhanskii, A.B., Dokl. Akad. Nauk SSSR, 1970, vol. 192, no. 3, pp. 491–494.
8. Kurzhanskii, A.B., Dokl. Akad. Nauk SSSR, 1986, vol. 287, no. 5, pp. 1047–1050.
9. Kurzhanskii, A.B. and Filippova, T.F., Dokl. Akad. Nauk SSSR, 1986, vol. 289, no. 1, pp. 38–41.
10. Kurzhanskii, A.B. and Nikonov, O.I., Dokl. Akad. Nauk SSSR, 1990, vol. 311, no. 4, pp. 788–793.
11. Kurzhanskii, A.B. and Nikonov, O.I., Dokl. Akad. Nauk SSSR, 1993, vol. 333, no. 4, pp. 578–581.
12. Krasovskii, N.N., Igrovye zadachi o vstreche dvizhenii (Game Problems on the Encounter of Motions), Moscow, 1970.
13. Krasovskii, N.N. and Subbotin, A.I., Pozitsionnye differentsial’nye igry (Positional Differential Games), Moscow, 1974.
14. Pontryagin, L.S., Dokl. Akad. Nauk SSSR, 1967, vol. 175, no. 4, pp. 910–912.
15. Pontryagin, L.S., Mat. Sb., 1980, vol. 112(154), no. 3(7), pp. 307–330.
16. Nikol’skii, M.S., Mat. Sb., 1981, vol. 126(158), no. 1(9), pp. 136–144.
17. Nikol’skii, M.S., Mat. Sb., 1985, vol. 128(170), no. 1(9), pp. 35–49.
18. Kurzhanskii, A.B., Trudy Mat. Inst. im. Steklova RAN, 1999, vol. 224, pp. 234–248.
19. Kurzhanskii, A.B. and Mel’nikov, N.B., Mat. Sb., 2000, vol. 191, no. 6, pp. 69–100.
20. Kurzhanski, A.B. and Varaiya, P., SIAM J. on Control, 2002, vol. 41, no. 1, pp. 181–216.
21. Filippov, A.F., Differentsial’nye uravneniya s razryvnoi pravoi chast’yu (Differential Equations with Discontinuous Right-Hand Side), Moscow, 1985.
22. Kurzhanski, A.B. and Valyi, I., Ellipsoidal Calculus for Estimation and Control, SCFA, Boston, 1997.
23. Subbotin, A.I., Minimaksnye neravenstva i uravneniya Gamil’tona–Yakobi (Minimax Inequalities and Hamilton–Jacobi Equations), Moscow, 1991.
24. Fleming, W.H., J. Math. Anal. Appl., 1961, vol. 3, pp. 102–116.
25. Rockafellar, R., Convex Analysis, Princeton: Princeton University, 1970. Translated under the title Vypuklyi analiz, Moscow: Mir, 1973.
26. Rockafellar, R.T., Trans. Amer. Math. Soc., 1966, vol. 123, pp. 46–63.
27. Nikaido, H., Convex Structures and Economic Theory, New York: Academic, 1968. Translated under the title Vypuklye struktury i matematicheskaya ekonomika, Moscow: Mir, 1972.
