

Asynchronous and Deterministic Objects

Denis Caromel, Ludovic Henrio, Bernard Paul Serpette
INRIA Sophia-Antipolis - CNRS - I3S - Univ. Nice Sophia Antipolis,

2004 route des Lucioles – B.P. 93

F-06902 Sophia-Antipolis Cedex

{caromel, henrio, serpette}@sophia.inria.fr

Abstract

This paper aims at providing confluence and determinism properties in concurrent processes, more specifically within the paradigm of object-oriented systems. Such results should allow one to program parallel and distributed applications that behave in a deterministic manner, even if they are distributed over local or wide area networks. For that purpose, an object calculus is proposed. Its key characteristics are asynchronous communications with futures, and sequential execution within each process.

While most previous works exhibit confluence properties only on specific programs – or patterns of programs – a general condition for confluence is presented here. It is further put into practice to show the deterministic behavior of a typical example.

Categories and Subject Descriptors: D.1.3 Concurrent Programming; F.3.2 Semantics of Programming Languages

General Terms: Languages

Keywords: Object calculus, concurrency, distribution, parallelism, object-oriented languages, determinism, futures.

1 Introduction

Confluence properties relieve the programmer from studying the interleaving of concurrent instructions and communications. Very different works have been performed to ensure confluence of calculi, languages, or programs. Linear channels in π-calculus [30, 20], non-interference properties [29], or atomic type systems [8] in shared-memory systems are typical examples. Starting from deterministic calculi, Process Networks [17], or Jones' technique in πoβλ [16] create deterministic concurrency. But none of them concerns a concurrent, imperative object language with asynchronous communications.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. POPL'04, January 14–16, 2004, Venice, Italy. Copyright 2004 ACM 1-58113-729-X/04/0001 ...$5.00

In this paper, we propose a calculus where interference between processes is clearly identifiable, thus simplifying reasoning about concurrent object-oriented programs. Our confluence property has a much more general goal: it identifies the sources of non-determinism and provides a minimal characterization of program behavior. Furthermore, some programs must behave deterministically: one could not imagine a non-deterministic result to a binary or a prime number search, yet only a few works ensure such results.

Seeking determinism for parallel programming, we propose a calculus in which such properties can be verified either dynamically or by static analysis. A first contribution of this work lies in the design of an appropriate concurrent object calculus (ASP, Asynchronous Sequential Processes). From a more practical point of view, we aim at a calculus model that is effective for parallel and distributed computations, both on local and wide area networks. Asynchronous communication is at the root of the calculus (for the sake of decoupling processes and network latency hiding).

In ASP some objects are active; active objects are accessible through global (remote) references. Communications are performed through asynchronous method calls called requests: the calling object sends a method call to an active object but does not wait for the result. Instead, the request sender obtains a future representing the result that will be calculated. The result will be updated when it becomes available. Inside each activity, execution is sequential: only one thread performs instructions.

ASP is based on a purely sequential and classical object calculus: the impς-calculus [1] extended with two parallel constructors: Active and Serve. Active turns a standard object into an active one, executing in parallel and serving requests in the order specified by the Serve operator. Automatic synchronization of processes comes from a data-driven synchronization mechanism called wait-by-necessity [5]: a wait automatically occurs upon a strict operation (like a method call) on a communication result not yet available (a future). The association of systematic asynchronous communications towards processes, wait-by-necessity, and automatic deep-copy of parameters provides a smooth transition from sequential to concurrent computations. An important feature of ASP is that futures can be passed between processes, both as method parameters and as method results.
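As an analogy only (not the calculus itself), the asynchronous-call-with-future discipline can be mimicked in Python with the standard `concurrent.futures` module: a call returns a future immediately, and a strict use of the result blocks until the value is available. The `inc` request body and the single-worker "activity" are our own illustrative choices.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "activity": a single-threaded executor, so requests
# are served one at a time, mimicking one process per activity.
activity = ThreadPoolExecutor(max_workers=1)

def inc(x):
    # A request body: computes the result associated to the future.
    return x + 1

# Asynchronous method call: returns a future immediately;
# the caller does not wait for the result.
fut = activity.submit(inc, 41)

# Strict operation on the result: blocks until the value is
# available, analogous to wait-by-necessity.
assert fut.result() == 42
activity.shutdown()
```

Note that in ASP the blocking is implicit (any strict operation waits), whereas here the programmer calls `result()` explicitly; the analogy only captures the data-flow synchronization.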

The main contributions of this paper are:

• The formal definition of an imperative and asynchronous object calculus with futures (ASP).

• Parallel programming as a smooth extension of sequential objects, mainly due to data-flow synchronizations (wait-by-necessity) and pervasive futures with concurrent out-of-order updates.

• The characterization of sufficient conditions for deterministic behavior in such a highly asynchronous setting.

On the practical side, this work represents the formalization of an existing library that takes into account the practical constraints of asynchrony in wide-area networks; the ASP model is implemented as an open-source Java library (ProActive [7]), allowing parallel and distributed programming.

Section 2 presents the ASP calculus. It starts with a sequential part based on the impς-calculus; then the ASP calculus and its principles are presented; the example of a parallel binary tree illustrates the calculus. Section 3 presents the semantics of ASP, and Section 4 its main properties, including confluence. ASP is compared with other calculi in Section 5.

2 Calculus

2.1 Sequential calculus

The ASP sequential calculus is very similar to the imperative ς-calculus [1, 11]. Note that a few characteristics have been changed between the impς-calculus and the ASP sequential calculus:

• Because arguments passed to active object methods will play a particular role, we added a parameter to every method, as in [23]: in addition to the self argument of methods (noted xj), a parameter can be sent to the method (yj in our syntax).

• We do not include method update in our calculus because we do not find it necessary, and updatable methods can be expressed in our calculus anyway. Method update could nevertheless be included.

• As in [11], in order to simplify the semantics, locations (references to objects in a store) can appear in terms.

a, b ∈ L ::= x                                        variable
           | [li = bi; mj = ς(xj, yj)aj] i∈1..n, j∈1..m   object
           | a.li                                     field access
           | a.mj(b)                                  method call
           | a.li := b                                field update
           | clone(a)                                 superficial copy
           | ι                                        location (not in source)

Note that let x = a in b¹ and the sequence a;b² can easily be expressed in our calculus and will be used in the following. Lambda expressions, and methods with zero and more than one argument, are also easy to encode and will also be used in this paper.

Semantic structures

Let locs(a) be the set of locations occurring in a and fv(a) the set of variables occurring free in a. The source terms (initial expressions) are closed terms (fv(a) = ∅) without any location (locs(a) = ∅). Locations appear when objects are put in the store. The substitution of b by c in a is written a{{b ← c}}. θi will denote substitutions.

Let ≡ be the equality modulo renaming of locations (substitutionof locations by locations) provided the renaming is injective (alpha-conversion of locations).

¹ let x = a in b ≜ [m = ς(z,x)b].m(a)
² a;b ≜ [m = ς(z,x)b].m(a)

The store is a mapping from locations to objects where all fields are reduced:

σ ::= {ι → [li = ιi; mj = ς(xj, yj)aj] i∈1..n, j∈1..m}

Let o ::= [li = ιi; mj = ς(xj, yj)aj] i∈1..n, j∈1..m be a reduced object.

Let dom(σ) be the set of locations defined by σ. Let σ :: σ′ append two stores with disjoint locations. σ + σ′ is defined by:

(σ + σ′)(ι) = σ(ι) if ι ∈ dom(σ), σ′(ι) otherwise
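Modelling stores as Python dicts (an illustration of ours, not part of the calculus), the append σ :: σ′ requires disjoint domains, while σ + σ′ is left-biased exactly as the definition above states:

```python
def append(s1, s2):
    # σ :: σ′ — defined only for stores with disjoint locations.
    assert not set(s1) & set(s2), "stores must have disjoint locations"
    return {**s1, **s2}

def plus(s1, s2):
    # (σ + σ′)(ι) = σ(ι) if ι ∈ dom(σ), σ′(ι) otherwise.
    return {**s2, **s1}  # entries of s1 win on common locations

s  = {"i1": "old"}
s2 = {"i1": "new", "i2": "other"}
assert plus(s, s2) == {"i1": "old", "i2": "other"}
```

The left bias of `plus` is what lets the UPDATE rule below overwrite a single location with {ι → o′} + σ while keeping the rest of the store.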

Like in [11], reduction contexts are expressions with a single hole (•) that specifies the order of reduction. For example, objects are reduced by a left-to-right evaluation of fields. Reduction contexts are defined in Table 1.

ι ∉ dom(σ)
―――――――――――――――――――――――――――――――――――――― (STOREALLOC)
(R[o], σ) →S (R[ι], {ι → o} :: σ)

σ(ι) = [li = ιi; mj = ς(xj, yj)aj] i∈1..n, j∈1..m    k ∈ 1..n
―――――――――――――――――――――――――――――――――――――― (FIELD)
(R[ι.lk], σ) →S (R[ιk], σ)

σ(ι) = [li = ιi; mj = ς(xj, yj)aj] i∈1..n, j∈1..m    k ∈ 1..m
―――――――――――――――――――――――――――――――――――――― (INVOKE)
(R[ι.mk(ι′)], σ) →S (R[ak{{xk ← ι, yk ← ι′}}], σ)

σ(ι) = [li = ιi; mj = ς(xj, yj)aj] i∈1..n, j∈1..m    k ∈ 1..n
o′ = [li = ιi; lk = ι′; lk′ = ιk′; mj = ς(xj, yj)aj] i∈1..k−1, k′∈k+1..n, j∈1..m
―――――――――――――――――――――――――――――――――――――― (UPDATE)
(R[ι.lk := ι′], σ) →S (R[ι], {ι → o′} + σ)

ι′ ∉ dom(σ)
―――――――――――――――――――――――――――――――――――――― (CLONE)
(R[clone(ι)], σ) →S (R[ι′], {ι′ → σ(ι)} :: σ)

R ::= • | R.li | R.mj(b) | ι.mj(R) | R.li := b | ι.li := R | clone(R)
    | [li = ιi; lk = R; lk′ = bk′; mj = ς(xj, yj)aj] i∈1..k−1, k′∈k+1..n, j∈1..m

Table 1. Sequential reduction

We define a small-step substitution-based operational semantics for the ASP sequential calculus (Table 1); it is similar to the one defined in [11]. It defines new object creation (STOREALLOC), field access (FIELD), method invocation (INVOKE), field update (UPDATE) and shallow clone (CLONE).
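To make the store-based rules concrete, here is a minimal Python sketch (our own illustration, with objects reduced to dictionaries mapping field labels to locations) of FIELD, UPDATE and CLONE; INVOKE and method suites are omitted:

```python
import itertools

fresh = (f"loc{n}" for n in itertools.count())  # fresh location supply

def field(store, loc, l):
    # (R[ι.lk], σ) →S (R[ιk], σ): field access returns a location.
    return store[loc][l]

def update(store, loc, l, loc2):
    # (R[ι.lk := ι′], σ) →S (R[ι], {ι → o′} + σ): functional update.
    o2 = {**store[loc], l: loc2}
    return loc, {**store, loc: o2}

def clone(store, loc):
    # (R[clone(ι)], σ) →S (R[ι′], {ι′ → σ(ι)} :: σ): shallow copy.
    loc2 = next(fresh)
    return loc2, {**store, loc2: store[loc]}

store = {"i0": {"val": "i1"}, "i1": {}, "i2": {}}
assert field(store, "i0", "val") == "i1"
_, store = update(store, "i0", "val", "i2")
assert field(store, "i0", "val") == "i2"
c, store = clone(store, "i0")
assert store[c] == store["i0"]
```

The clone is shallow exactly as in the CLONE rule: the fresh location maps to the same object value, so the clone and the original share field targets.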

2.2 Parallel calculus

An active object is an object that can be referenced by distant pointers and can handle distant asynchronous method calls (requests). Informally, an activity is formed by a single active object, some passive (non-active) objects, an execution thread (called a process) and an environment.

When a request is received by an activity it is stored in a request queue. Later on, this request will be served, and when the result has been calculated, it will be stored in a future values list. Pending requests denote requests inside a request queue, current requests are being served, and served requests (requests whose service is finished) have a result value.

[Figure 1. Example of a parallel configuration: two activities α and β; β's store σβ contains the active object, passive objects, the current term, the current request (on method foo), pending requests, future values, and request parameters; futures f, f2 and f3 in α reference, respectively, a computed future value and two pending requests in β. Legend: active object, active object reference, future reference, passive object, current term, local reference, request parameter.]

In ASP, each activity has a single process and a single active object. Processes of different activities execute instructions concurrently, and interact only through requests. When activity α sends a request to activity β, β stores it in its pending request queue and α continues its execution; in α a future will represent the result of this request until it is calculated and returned to α (updated). ASP activities do not share memory. Moreover, synchronization is only due to wait-by-necessity on a future. Indeed, a future reference is not sufficient to perform strict operations on an object (e.g. a field access or a method call). Thus, a strict operation on a future is blocked until the value associated to this future has been updated.

ASP syntax is extended in order to introduce parallelism. The Active operator creates a new activity by activating object a. Serve allows one to specify which requests should be served. ⇑ is used to remember the continuation of the current request while we serve another one; it should not be present in source programs.

a, b ∈ L ::= . . .
           | Active(a, s)   creates an activity; s is either a service method mj, or ø for a FIFO service
           | Serve(M)       specifies requests to serve
           | a ⇑ f, b       a with continuation b (not in source)

where M = m1, . . . , mn is a set of method labels used to specify which requests have to be served.

2.3 Informal semantics

Figure 1 gives a representation of a configuration consisting of two activities. In every activity α, a current term aα represents the current computation. Every activity has its own store σα which contains one active and many passive objects.

An activity consists of a process, a store, several pending requests and calculated replies (results of requests). It is able to handle requests coming from other activities. The store contains a unique active object and passive objects. Every object belongs to only one activity (no shared memory). Passive objects are only referenced by objects belonging to the same activity, but passive objects can reference active objects.

The Active operator (Active(a, mj)) creates a new activity α with the object a at its root. The object a is copied, as well as all its dependencies³ (deep copy), into a new activity. AO(α) acts as a proxy for the active object of activity α. All subsequent calls to methods of a via AO(α) are considered as remote request sendings to the active object of activity α. The second argument to the Active operator is the name of the method⁴ which will be called as soon as the object is activated. If no service method is specified, a FIFO service is performed.

Communications between activities are due to method calls on active objects and returns of corresponding results. A method call on an active object (Active(o).foo()) consists in atomically adding an entry to the pending requests of the callee, and associating a future to the response. In practice, the request sender waits for an acknowledgment before continuing its execution. In Figure 1, futures f2 and f3 denote pointers to not-yet-computed requests while f is a future pointing to a value computed by a request sent to β. Arguments of requests and values of futures are deeply copied³ when they are transmitted between activities. Active objects and futures are transmitted with a reference semantics.

³ To prevent distant references to passive objects.
⁴ With no argument.
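The deep copy of request arguments (as opposed to the reference semantics of futures and active objects) is what rules out shared mutable state between activities. A hedged Python analogy, with an illustrative `Activity` class of our own:

```python
import copy

class Activity:
    # Illustrative only: a pending-request queue whose arguments
    # are deep-copied on arrival, as in request sending.
    def __init__(self):
        self.pending = []

    def send_request(self, method, arg):
        # The argument crosses the activity boundary by deep copy.
        self.pending.append((method, copy.deepcopy(arg)))

beta = Activity()
arg = {"key": 1}
beta.send_request("foo", arg)
arg["key"] = 99              # later mutation by the caller...
_, stored = beta.pending[0]
assert stored == {"key": 1}  # ...does not affect the callee's copy
```

Had the argument been passed by reference, the caller's later write would race with the callee's service of the request; the copy removes that interference.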

The primitive Serve can appear at any point in source code. Its execution stops the activity until a request matching its arguments is found in the request queue. The first matching request is then executed (served).

Futures are generalized references that can be manipulated classically as long as we do not perform strict operations on the object they represent. Futures can be transmitted to other activities and several objects can reference them. But, upon a strict operation (field or method access, field update, clone) on a future, the execution is stopped until the value of the future has been updated (wait-by-necessity). When a request is treated, the corresponding result (future value) becomes available; each activity stores the associations between futures and their computed values.

The moment when the value of a future is returned is not specified in our calculus. From a theoretical point of view, every reference to a future can be replaced by a copy³ of the future value (partial or complete) at any time.

In Figure 1, the pending requests are merged with the future list (indeed, futures correspond to previously executed requests).

2.4 Example

Figure 2 shows an example of a simple parallel binary tree with two methods: add and search. Each node can be turned into an active object. The calculus allows us to express lambda expressions, integers and comparisons (Church integers for example), booleans and conditional expressions, methods with zero or many parameters, and the definition of classes. All these definitions can easily be expressed in ASP and most of them have been previously defined on the ς-calculus.

BT ≜ [new = ς(c,z)[empty = true, lft = [], rgt = [], key = 0, val = [],
        search = ς(s,k)(c.search s k),
        add = ς(s,k,v)(c.add s k v)],
      search = ς(c,z)λs k. if (s.empty) then []
        else if (s.key == k) then s.val
        else if (s.key > k) then s.lft.search(k)
        else s.rgt.search(k),
      add = ς(c,z)λs k v. if (s.empty) then
          (s.rgt := Factory(s); s.lft := Factory(s);
           s.val := v; s.key := k; s.empty := false; s)
        else if (s.key > k) then s.lft.add(k,v)
        else if (s.key < k) then s.rgt.add(k,v)
        else (s.val := v; s)]

Factory(s) ≜ s.new in the sequential case, and Factory(s) ≜ Active(s.new, ø) for the concurrent BT.

Figure 2. Example: a binary tree

add stores a new key at the appropriate place and creates two empty nodes. Note that in the concurrent case, nodes are activated as soon as they are created.

search searches for a key in the tree and returns the value associated with it, or an empty object if the key is not found.

new is the method invoked to create a new node.

We parameterize the example by a factory able to create a sequential (sequential binary tree) or an active (parallel binary tree) node.

In the case of the parallel factory, the term

let tree = (BT.new).add(3,4).add(2,3).add(5,6).add(7,8) in
  [a = tree.search(5), b = tree.search(3)].b := tree.search(7)

creates a new binary tree, puts four values in it in parallel, and searches for two of them in parallel. Then it searches for another value and modifies the field b. It always reduces to [a = 6, b = 8].

Note that as soon as a request is delegated to another node, a new one can be handled. Moreover, when the root of the tree is the only node reachable by more than one activity, the result of concurrent calls is deterministic (cf. Section 4).
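The sequential behavior of the binary tree of Figure 2 can be checked against an ordinary Python transcription (ours, not the ASP term): add stores key/value pairs, search returns the value, or an "empty object" (`None` here) when the key is absent.

```python
class BT:
    # Plain Python transcription of the sequential binary tree.
    def __init__(self):
        self.empty, self.key, self.val = True, 0, None
        self.lft = self.rgt = None

    def add(self, k, v):
        if self.empty:
            self.lft, self.rgt = BT(), BT()  # two fresh empty nodes
            self.key, self.val, self.empty = k, v, False
        elif self.key > k:
            self.lft.add(k, v)
        elif self.key < k:
            self.rgt.add(k, v)
        else:
            self.val = v                     # key present: overwrite
        return self

    def search(self, k):
        if self.empty:
            return None                      # the "empty object"
        if self.key == k:
            return self.val
        return (self.lft if self.key > k else self.rgt).search(k)

tree = BT().add(3, 4).add(2, 3).add(5, 6).add(7, 8)
assert [tree.search(5), tree.search(3), tree.search(7)] == [6, 4, 8]
```

This matches the reduction of the example term: search(5) yields 6 and the later search(7) yields 8, giving [a = 6, b = 8].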

3 Parallel semantics

There are three distinct name spaces: activities (α, β, γ ∈ Act), locations (ι) and futures (fi) (in addition to the field, method and variable identifiers which already appear in the source code and are not created dynamically). Note that locations and future identifiers fi are local to an activity. A future is characterized by its identifier fi, the source activity α and the destination activity β of the corresponding request (fi^{α→β}).

A parallel configuration is a set of activities: P, Q ::= α[a; σ; ι; F; R; f] ‖ β[. . .] ‖ . . . characterized by:

• current term a to be reduced (the process): a = b ⇑ fi^{γ→α}, b′. a contains several terms corresponding to the requests being treated, separated by ⇑. The left part b is the term currently evaluated; the right part fi^{γ→α}, b′ is the continuation: the future and term corresponding to a request that has been stopped before the end of its execution (because of a Serve primitive). Of course, b′ can also contain continuations;

• store σ containing all objects of the activity α;

• active object location ι, the location of the active object of activity α (master object of the activity);

• future values F = {f → ιf}, a list associating, for each served request, the location of its calculated result;

• request queue R = {[mj; ι; fi^{γ→α}]}, a list of pending requests;

• current future f, the future associated with the term currently evaluated.

A request can be seen as the "reification" of a method call (see for example [28]). Each request r ::= [mj; ι; fi^{α→β}] consists of the name of the target method (mj), the location of the argument passed to the request (ι) and the future identifier which will be associated to the response to this request (fi^{α→β}).

A reference to the active object of activity α is denoted by AO(α) and a reference to future fi^{α→β} by fut(fi^{α→β}). Due to distant pointers, the store codomain is extended with generalized references (i.e. future and active object references). Reduced objects become:

o ::= [li = ιi; mj = ς(xj, yj)aj] i∈1..n, j∈1..m | AO(α) | fut(fi^{α→β})

(a, σ) →S (a′, σ′)    →S does not clone a future
―――――――――――――――――――――――――――――――――――――― (LOCAL)
α[a; σ; ι; F; R; f] ‖ P −→ α[a′; σ′; ι; F; R; f] ‖ P

γ fresh activity    ι′ ∉ dom(σ)    σ′ = {ι′ ↦ AO(γ)} :: σ
σγ = copy(ι″, σ)    Service = (if (mj = ø) then FifoService else ι″.mj())
―――――――――――――――――――――――――――――――――――――― (NEWACT)
α[R[Active(ι″, mj)]; σ; ι; F; R; f] ‖ P −→
α[R[ι′]; σ′; ι; F; R; f] ‖ γ[Service; σγ; ι″; ∅; ∅; ∅] ‖ P

σα(ι) = AO(β)    ι″ ∉ dom(σβ)    fi^{α→β} new future    ιf ∉ dom(σα)
σ′β = Copy&Merge(σα, ι′ ; σβ, ι″)    σ′α = {ιf ↦ fut(fi^{α→β})} :: σα
―――――――――――――――――――――――――――――――――――――― (REQUEST)
α[R[ι.mj(ι′)]; σα; ια; Fα; Rα; fα] ‖ β[aβ; σβ; ιβ; Fβ; Rβ; fβ] ‖ P −→
α[R[ιf]; σ′α; ια; Fα; Rα; fα] ‖ β[aβ; σ′β; ιβ; Fβ; Rβ :: [mj; ι″; fi^{α→β}]; fβ] ‖ P

R = R′ :: [mj; ιr; f′] :: R″    mj ∈ M    ∀m ∈ M, m ∉ R′
―――――――――――――――――――――――――――――――――――――― (SERVE)
α[R[Serve(M)]; σ; ι; F; R; f] ‖ P −→ α[ι.mj(ιr) ⇑ f, R[[]]; σ; ι; F; R′ :: R″; f′] ‖ P

ι′ ∉ dom(σ)    F′ = F :: {f ↦ ι′}    σ′ = Copy&Merge(σ, ι ; σ, ι′)
―――――――――――――――――――――――――――――――――――――― (ENDSERVICE)
α[ι ⇑ f′, a; σ; ι; F; R; f] ‖ P −→ α[a; σ′; ι; F′; R; f′] ‖ P

σα(ι) = fut(fi^{γ→β})    Fβ(fi^{γ→β}) = ιf    σ′α = Copy&Merge(σβ, ιf ; σα, ι)
―――――――――――――――――――――――――――――――――――――― (REPLY)
α[aα; σα; ια; Fα; Rα; fα] ‖ β[aβ; σβ; ιβ; Fβ; Rβ; fβ] ‖ P −→
α[aα; σ′α; ια; Fα; Rα; fα] ‖ β[aβ; σβ; ιβ; Fβ; Rβ; fβ] ‖ P

Table 2. Parallel reduction (used or modified values are non-gray)

The function Merge merges two stores (it merges σ and σ′ independently, except for ι which is taken from σ′):

Merge(ι, σ, σ′) = σ′θ + σ   where θ = {{ι′ ← ι″ | ι′ ∈ dom(σ′) ∩ dom(σ)\{ι}, ι″ fresh}}

copy(ι, σ) will designate the deep copy of store σ starting at location ι, that is, the part of store σ that contains the object σ(ι) and, recursively, all (local) objects that it references. The deep copy is the smallest store satisfying the rules of Table 3. The deep copy stops when a generalized reference is encountered; in that case, the new store contains the generalized reference. In Table 3, the first two rules specify which locations should be present in the created store, and the last one means that the codomain is similar in the copied and the original store (copy of the object values). A deep copy can be calculated by marking the source object and, recursively, all objects referenced by marked objects. When a fix-point is reached, the deep copy is the part of the store containing the marked objects.

ι ∈ dom(copy(ι, σ))
ι′ ∈ dom(copy(ι, σ)) ⇒ locs(σ(ι′)) ⊆ dom(copy(ι, σ))
ι′ ∈ dom(copy(ι, σ)) ⇒ copy(ι, σ)(ι′) = σ(ι′)

Table 3. Deep copy
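The marking computation described above can be sketched in Python (our illustration): locations are dict keys, each object value lists the locations it references, and the copy stops at generalized references (modelled here by a predicate of our own, `is_generalized`).

```python
def deep_copy(loc, store, is_generalized=lambda v: False):
    # Smallest sub-store containing loc and, transitively, every
    # location it references; generalized references are kept in
    # the copy but not followed.
    marked, worklist = set(), [loc]
    while worklist:
        l = worklist.pop()
        if l in marked:
            continue
        marked.add(l)
        if not is_generalized(store[l]):
            worklist.extend(store[l])  # locs(σ(ι′))
    return {l: store[l] for l in marked}

store = {"a": ["b"], "b": ["c"], "c": [], "d": []}
assert set(deep_copy("a", store)) == {"a", "b", "c"}  # "d" unreachable
```

The worklist reaches the fix-point exactly when no marked object references an unmarked local location, matching the second rule of Table 3.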

The following operator deeply copies the part of the store σ starting at location ι to the location ι′ of the store σ′; except for ι′, the deep copy is added in a fresh part of the store σ′:

Copy&Merge(σ, ι ; σ′, ι′) ≜ Merge(ι′, σ′, copy(ι, σ){{ι ← ι′}})

Reduction contexts become:

R ::= . . . | Active(R, mj) | R ⇑ f, a

The rules of Table 2 present the formal semantics of ASP (the concatenation of lists is denoted by ::):

LOCAL: inside each activity, a local reduction can occur following the rules of Table 1. Note that the sequential rules FIELD, INVOKE, UPDATE, CLONE⁵ are stuck (wait-by-necessity) when the target location is a generalized reference. Only REQUEST allows one to invoke an active object method, and REPLY may transform a future reference into a reachable object (ending a wait-by-necessity).

NEWACT creates a new activity γ containing the deep copy of the object, with an empty current future, pending requests and future values. A generalized reference to this activity, AO(γ), is stored in the source activity α. Other references to ι in α are unchanged (still pointing to a passive object). mj specifies the service (first method executed); it has no argument. If no service method is specified, a FIFO service is performed. An infinite loop Repeat and the FIFO service are defined below (M is the set of all method labels defined by the activated object):

Repeat(a) ≜ [repeat = ς(x)a; x.repeat()].repeat()
FifoService ≜ Repeat(Serve(M))

REQUEST sends a new request from activity α to activity β (Figure 3). A new future fi^{α→β} is created to represent the result of the request, and a reference to this future is stored in α. A request containing the name of the method, the location of a deep copy of the argument stored in σβ, and the associated future ([mj; ι″; fi^{α→β}]) is added to the pending requests of β (Rβ).

SERVE serves a new request (Figure 4). The current reduction is stopped and stored as a continuation (future f, expression R[[]]), and the oldest (first received) pending request concerning one of the labels specified in M is treated. The activity is stuck until a matching request is found in the pending request queue.

⁵ Cloning a future is considered as a strict operation to ensure determinism.
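The selection performed by SERVE — the oldest pending request whose label is in M, with all earlier non-matching requests kept in the queue — can be sketched as follows (illustrative Python of ours, not the operational rule itself):

```python
def serve(pending, labels):
    # Returns (request, remaining queue) for the oldest pending
    # request whose method label is in `labels`; None if stuck.
    for i, (m, arg, fut) in enumerate(pending):
        if m in labels:
            # R′ :: R″ — earlier non-matching requests are kept.
            return (m, arg, fut), pending[:i] + pending[i+1:]
    return None  # activity stuck until a matching request arrives

queue = [("bar", 1, "f1"), ("foo", 2, "f2"), ("foo", 3, "f3")]
req, rest = serve(queue, {"foo"})
assert req == ("foo", 2, "f2")
assert rest == [("bar", 1, "f1"), ("foo", 3, "f3")]
```

With `labels` set to all method labels of the object, repeatedly calling `serve` corresponds to the FIFO service FifoService ≜ Repeat(Serve(M)).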

[Figure 3. REQUEST: activity α, evaluating a method call ι.mj(ι′) on a reference to the active object of β, deep-copies the argument ι′ to ι″ in β, enqueues a pending request with label mj and future f, and keeps a local reference ιf to the future.]

[Figure 4. SERVE: upon Serve(M), the oldest pending request [mj; ιr; . . .] with mj ∈ M is removed from the queue and becomes the current computation ια.mj(ιr).]

[Figure 5. ENDSERVICE: the result location ι is deep-copied and associated to the current future, and the stored continuation a becomes the current term.]

ENDSERVICE applies when the current request is finished (the currently evaluated term is reduced to a location). It associates the location of the result to the future f. The response is (deep) copied to prevent post-service modification of the value, and the new current term and current future are obtained from the continuation (Figure 5).

REPLY updates a future value (Figure 6). It replaces a reference to a future by its value. Deliberately, it is not specified when this rule should be applied. It is only required that an activity contains a reference to a future, and that another one has calculated the corresponding result. The only constraint on the update of future values is that strict operations (e.g. INVOKE) need the real object value of some of their operands. Such operations may lead to wait-by-necessity, which can only be resolved by the update of the future value. Note that a future fi^{γ→β} can be updated in an activity different from the origin of the request (α ≠ γ) because of the capability to transmit futures (e.g. as method call parameters).

Note that an activity may be stuck either on a wait-by-necessity on a future (upon a strict operation), or on a service on a set of labels with no corresponding request in the request queue, or if it tries to access or modify a field on a reference to an active object.

[Figure 6. REPLY of future f: the references to f are replaced by a copy of the future value computed by the destination activity.]

Initial configuration

An initial configuration consists of a single activity, called the main activity, containing only a current term: µ[a; ∅; ∅; ∅; ∅; ∅]. This activity can only communicate by sending requests or receiving replies.

Note that the syntax of intermediate terms guarantees that there are no shared references in ASP, except future and active object references.

4 Properties and confluence

This section starts with a property about the object topology inside activities (4.2); then it introduces a notion of compatibility between terms (4.3) and an equivalence modulo replies (4.4). Finally, a sufficient condition for confluence between ASP reductions (4.5) and a specification of a set of terms behaving deterministically, DON terms (4.6), are given. A static approximation allows us to define a simple deterministic sub-calculus in 4.7. Detailed proofs of the properties presented in this section can be found in [6].

4.1 Notations and Hypothesis

In the following, αP denotes the activity α of configuration P. We suppose that freshly allocated activities are chosen deterministically: the first activity created by α will have the same identifier for all executions.

We consider that the future identifier fi is the name of the invoked method, indexed by the number of requests that have already been received by β. Thus if the 4th request received by β comes from γ and concerns method foo, its future identifier will be foo4^{γ→β}.
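Under this naming convention, future identifiers can be generated deterministically from a per-destination request counter; a toy sketch of ours (class and formatting are illustrative, not the paper's notation):

```python
class FutureNames:
    # Deterministic future naming: method name indexed by the
    # number of requests already received by the destination.
    def __init__(self):
        self.received = {}  # destination activity -> request count

    def next_id(self, method, src, dst):
        self.received[dst] = self.received.get(dst, 0) + 1
        return f"{method}_{self.received[dst]}^{src}->{dst}"

names = FutureNames()
for _ in range(3):
    names.next_id("bar", "alpha", "beta")
# The 4th request received by beta, sent by gamma on method foo:
assert names.next_id("foo", "gamma", "beta") == "foo_4^gamma->beta"
```

Since the identifier depends only on the arrival order at the destination, two executions delivering requests in the same order name their futures identically, which is what the confluence arguments rely on.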

−→* will denote the transitive closure of −→, and −→T will denote the application of rule T (e.g. LOCAL, REPLY, . . . ).

4.2 Futures and parameters isolation

The following theorem states that the value of each future and each request parameter is situated in an isolated part of the store. Figure 7 illustrates the isolation of a future value (on the left) and of a request parameter (on the right).

[Figure 7. Store Partitioning: the store σ_α contains the active copy, together with isolated deep copies copy(ι_f, σ_α) of future values and copy(ι_r, σ_α) of request parameters.]

THEOREM 1 (STORE PARTITIONING). Let

ActiveStore(α) = copy(ι_α, σ_α) ∪ ⋃_{ι ∈ locs(a_α)} copy(ι, σ_α)

At any stage of computation, each activity satisfies the following invariant:

σ_α ⊇ ActiveStore(α) ⊎ ⨄_{{f ↦ ι_f} ∈ F_α} copy(ι_f, σ_α) ⊎ ⨄_{[l_j; ι_r; f] ∈ R_α} copy(ι_r, σ_α)

where ⊎ is the disjoint union.

This invariant is proved by checking it on each reduction rule. It holds mainly because of the deep copies performed inside REQUEST, ENDSERVICE and REPLY. The part of σ_α that does not belong to the preceding partition may be freely garbage collected.
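The role of these deep copies can be illustrated informally. The sketch below (plain Python, not ASP semantics) shows how copying a request parameter on sending keeps the caller's and callee's stores isolated, as in the partitioning invariant.

```python
import copy

# Sketch: request parameters are deep-copied into the callee's store,
# so caller and callee never share mutable references.
caller_store = {"param": {"data": [1, 2, 3]}}

def send_request(arg):
    # REQUEST performs copy(ι_r, σ_α): the parameter lands in an
    # isolated part of the destination store.
    return copy.deepcopy(arg)

callee_copy = send_request(caller_store["param"])
callee_copy["data"].append(4)          # mutation inside the callee...
print(caller_store["param"]["data"])   # ...does not affect the caller: [1, 2, 3]
```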

4.3 Configuration Compatibility

The principles are the following: two configurations are compatible if the served, current and pending requests of one form a prefix of the same list in the other; moreover, if two requests cannot interfere, that is to say if no Serve(M) can concern both requests, then these requests can be safely exchanged. In ASP, the order in which activities send requests to a given one fully determines the behavior of the program (Theorem 3, RSL-confluence). Note that this means that future updates and the imperative aspects of ASP do not act upon the result of evaluation.

Let FL(α) denote the list of futures corresponding to requests addressed to activity α:

DEFINITION 1 (FUTURES LIST). Let FL(α) be the list of the futures that have been calculated, followed by the current futures (the one in the activity and all those in the continuation of the current expression), followed by the futures corresponding to pending requests. It is depicted by the rectangles of Figure 1.

FL(α) = { f_i^{β→α} | {f_i^{β→α} ↦ ι} ∈ F_α } :: {f_α} :: F(a_α) :: { f_i^{β→α} | [m_j, ι, f_i^{β→α}] ∈ R_α }

where F(a ⇑ f, b) = f :: F(b), and F(a) = ∅ if a is not of the form a′ ⇑ f, b.

DEFINITION 2 (REQUEST SENDER LIST). The request sender list (RSL) is the list of request senders, in the order the requests have been received, indexed by the invoked method. The i-th element of RSL_α is defined by:

(RSL_α)_i = β_f if f_i^{β→α} ∈ FL(α)

The RSL is obtained from the futures associated with served requests, current requests and pending requests.

DEFINITION 3 (RSL COMPARISON ≼). RSLs are ordered by the prefix order on activities:

(α_1)_{f_1} … (α_n)_{f_n} ≼ (α′_1)_{f′_1} … (α′_m)_{f′_m} ⇔ n ≤ m ∧ ∀i ∈ [1..n], α_i = α′_i

DEFINITION 4 (RSL COMPATIBILITY ⋈). Two RSLs are compatible if one is a prefix of the other:

RSL_α ⋈ RSL_β ⇔ RSL_α ⊔ RSL_β exists ⇔ RSL_α ≼ RSL_β ∨ RSL_β ≼ RSL_α
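Definitions 3 and 4 amount to a simple prefix check. A possible sketch (activity names as strings; the future indices are omitted since the comparison ignores them):

```python
# Sketch of Definitions 3-4: RSLs are compared by the prefix order on
# their sender activities.
def rsl_prefix(r1, r2):
    """r1 ≼ r2: the senders of r1 form a prefix of the senders of r2."""
    return len(r1) <= len(r2) and all(a == b for a, b in zip(r1, r2))

def rsl_compatible(r1, r2):
    """Two RSLs are compatible iff one is a prefix of the other."""
    return rsl_prefix(r1, r2) or rsl_prefix(r2, r1)

rsl_p = ["alpha", "beta"]            # senders in reception order
rsl_q = ["alpha", "beta", "gamma"]
print(rsl_compatible(rsl_p, rsl_q))          # True: rsl_p ≼ rsl_q
print(rsl_compatible(["alpha"], ["beta"]))   # False: diverging senders
```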

Let M_α^P be a static approximation of the set of M that can appear in the Serve(M) instructions of α_P. For a given source program P_0 and for each activity α created, we consider that there is a set M_α^{P_0} such that if α is able to perform a Serve(M) then M ∈ M_α^{P_0}.

DEFINITION 5 (POTENTIAL SERVICES). Let P_0 be an initial configuration. M_α^{P_0} is any set verifying:

P_0 →* P ∧ a_α^P = R[Serve(M)] ⇒ M ∈ M_α^{P_0}

Let RSL_α|_M represent the restriction of the RSL to the set of labels M (for example, (α_{f_0} :: β_{f_1} :: γ_{f_2})|_{f_0, f_2} = α_{f_0} :: γ_{f_2}).

Two configurations, deriving from the same source term, are said to be compatible if all the restrictions of their RSLs that can be served are compatible:

DEFINITION 6 (CONFIGURATION COMPATIBILITY: P ⋈ Q). If P_0 is an initial configuration such that P_0 →* P and P_0 →* Q:

P ⋈ Q ⇔ ∀α ∈ P ∩ Q, ∀M ∈ M_α^{P_0}, RSL_α^P|_M ⋈ RSL_α^Q|_M

Following the RSL definition (Definition 2), the configuration compatibility only relies on the arrival order of requests; the future list (FL) order (Definition 1), potentially different on served and current requests, does not matter.

In the general case, Serve operations can be performed while another request is being served; then the relation between RSL order and FL order can only be determined by a precise study. If no Serve operation is performed while another request is being served (only the service method performs Serve operations), then all the restrictions (to potential services) of the RSL and the FL are in the same order. In the FIFO case, the FL order and the RSL order are the same.

4.4 Equivalence modulo replies

Let ≡ denote the equivalence modulo renaming of locations and futures (renaming of activities is not necessary as activities are created deterministically).

Furthermore, ≡ allows one to exchange pending requests that cannot interfere. Indeed, pending requests can be reordered provided the compatibility of RSLs is maintained: requests that cannot interfere (because they cannot be served by the same Serve primitive) can be safely exchanged. Modulo these allowed permutations, equivalent configurations are composed of equivalent pending requests in the same order. More formally, φ is a valid permutation on the request queues of P if P ⋈ φ(P).

Equivalence modulo future replies (P ≡_F Q) is an extension of ≡ authorizing the update of some calculated futures. This amounts to considering a reference to an already calculated future as equivalent to a local reference to the part of the store that is the (deep copy of the) future value, provided the updated part does not overlap with the remainder of the store. Two equivalent definitions of equivalence modulo future replies have been formalized in [6]. Note that this equivalence is decidable.

As explained informally here, two configurations differing only by some future updates are equivalent:

P →REPLY P′ ⇒ P ≡_F P′

More precisely, we have the following sufficient condition for equivalence modulo future replies:

P_1 →REPLY P′ ∧ P_2 →REPLY P′ ⇒ P_1 ≡_F P_2

But this condition is not necessary as it does not deal with mutual references between futures. Indeed, in the case of a cycle of futures one can obtain configurations that will never converge but behave identically (Figure 8).

[Figure 8. Updates in a cycle of futures: two configurations P_1 and P_2, each obtained by a REPLY step updating one of the mutually referencing futures f_1 and f_2 between activities α and β.]

The simplest definition of equivalence consists in following paths from the root objects of equivalent activities. If the same paths can be followed in both configurations then the two configurations are equivalent. Of course, paths are insensitive to the following of calculated future references.

Let T ∈ {LOCAL, NEWACT, REQUEST, SERVE, ENDSERVICE, REPLY} be any parallel reduction. Then let us denote by ⟹ the reduction → preceded by some applications of the REPLY rule.

DEFINITION 7 (REDUCTION WITH FUTURE UPDATES).

⟹T = (→REPLY)* →T if T ≠ REPLY, and ⟹REPLY = (→REPLY)*

THEOREM 2 (EQUIVALENCE AND PARALLEL REDUCTION).

P ⟹T Q ∧ P ≡_F P′ ⇒ ∃Q′, P′ ⟹T Q′ ∧ Q′ ≡_F Q

This important theorem states that if one can apply a reduction rule on a configuration then, after several REPLY steps, a reduction using the same rule can be applied on any equivalent configuration.

Idea of the proof

Theorem 2 is a direct consequence of the following property:

PROPERTY 1.

P →T Q ∧ P ≡_F P′ ⇒ ∃Q′, P′ ⟹T Q′ ∧ Q′ ≡_F Q

Indeed, if a reduction can be made on a configuration then the same one (up to equivalence) can be made on an equivalent configuration. The proof is decomposed in two parts.

First, we may need to apply some REPLY rules to be able to perform the same rule on the two terms: if we cannot apply the same reduction as P →T Q (same rule on the same activities, ...) on P′, we apply →REPLY enough times to be able to apply the reduction on P″ (P′ (→REPLY)* P″, P″ →T Q′). It is straightforward to check that if two configurations are equivalent, the same reduction can be applied on both, except if one of them is stuck.

The second part of the proof consists in verifying that the application of the same reduction rule on equivalent terms leads to equivalent terms. This is done by a long case study (not detailed here). □

4.5 Confluence

Two configurations are said to be confluent if they can be reduced to equivalent configurations.

DEFINITION 8 (CONFLUENCE: P_1 ↓ P_2).

P_1 ↓ P_2 ⇔ ∃R_1, R_2, (P_1 →* R_1 ∧ P_2 →* R_2 ∧ R_1 ≡_F R_2)

The principles of the confluence property can be summarized as follows: the only potential source of non-confluence is the interference of two REQUEST rules on the same destination activity; the order of updates of futures does not have any influence on the reduction of a term. Note that even if this property is natural, it allows a lot of asynchrony and flexibility in futures usage and updates, even in an imperative object calculus. The fact that, for non-FIFO service, the order of requests does not matter if they cannot be involved in the same Serve primitive allows us to extend the preceding principle.

Thus, two compatible configurations obtained from the same term are confluent.

THEOREM 3 (RSL CONFLUENCE).

P →* Q_1 ∧ P →* Q_2 ∧ Q_1 ⋈ Q_2 ⇒ Q_1 ↓ Q_2

Idea of the proof

The key idea is that if two configurations are compatible, then there is a way to perform the missing sendings of requests in the right order. Thus the configurations can be reduced to a common one (modulo future replies equivalence).

Let Q be the set of configurations obtained from P and compatible with Q_1 and Q_2.

The proof of the diamond property on → is a long case study of conflicts between rules. Finally, we obtain:

S →T1 S_1 ∧ S →T2 S_2 ∧ S, S_1, S_2 ∈ Q ⇒ S_1 ≡_F S_2 ∨ ∃S′_1, S′_2, (S_1 →T2 S′_1 ∧ S_2 →T1 S′_2 ∧ S′_1 ≡_F S′_2 ∧ S′_1, S′_2 ∈ Q)

Note that the problem of conflicts between REQUEST rules is solved by the introduction of RSLs and of compatibility between RSLs. Note also that the case where T_1 or T_2 is REPLY is not necessary for the proof of Theorem 3, because in that case Theorem 2 is sufficient to conclude.

Also note that the determinism of previous reductions implies that the prefix order on activities is a sufficient condition for RSL compatibility: the fact that received requests come from the same activities implies that these requests are the same (have the same arguments and invoke the same method).

The transition from the preceding property to the diamond property on ⟹ (Property 2) can be summarized by the diagram in Figure 9. More details on this proof can be found in [6].

PROPERTY 2 (DIAMOND).

P_1 ⟹T1 Q′_1 ∧ P_2 ⟹T2 Q′_2 ∧ Q′_1, Q′_2 ∈ Q ∧ P_1 ≡_F P_2 ⇒ Q′_1 ≡_F Q′_2 ∨ ∃R_1, R_2, (Q′_1 ⟹T2 R_1 ∧ Q′_2 ⟹T1 R_2 ∧ R_1 ≡_F R_2 ∧ R_1, R_2 ∈ Q)

[Figure 9. Diamond property proof: starting from P_1 ≡_F P_2, the ⟹T1 and ⟹T2 reductions are closed, through intermediate configurations and additional →REPLY steps, into configurations R_1 ≡_F R_2.]

Theorem 3 is a classical consequence of Property 2. □

Note that, if we replaced the primitive Serve(M) by a primitive Serve(α) allowing one to serve a request coming from a given activity, then the ASP calculus would be deterministic. Such a calculus would be more similar to process networks, where get operations are performed on a given channel and a channel has only one source process. Furthermore, the order of return of results still would not act upon confluence, and futures would provide powerful implicit channels for results.

4.6 Deterministic Object Networks

The work of Kahn and MacQueen on process networks [18] suggested to us the following properties ensuring the determinism of some programs. In process networks, determinacy is ensured by the facts that channels have only one source, and that destinations read data independently (values are broadcast to all destination processes). Most importantly, the reading of an entry in the buffer is blocking: the order of readings on different channels is fixed for a given program. In ASP, the semantics ensures that Serve operations are blocking primitives. Moreover, ensuring compatibility implies that two activities cannot concurrently send a request on a given method (or set of method labels M that appears in a Serve(M)) of the same activity.

In order to formalize this principle, Deterministic Object Networks (DON) are defined below.

DEFINITION 9 (DON). A configuration P, derived from an initial configuration P_0, is a Deterministic Object Network (DON(P)) if:

P →* Q ⇒ ∀α ∈ Q, ∀M ∈ M_α^{P_0}, ∃¹β ∈ Q, ∃m ∈ M, a_β = R[ι.m(...)] ∧ σ_β(ι) = AO(α)

where ∃¹ means "there is at most one".

A program is a deterministic object network if at any time, for each set of labels M on which α can perform a Serve primitive, only one activity can send a request on the methods of M. Consequently, DON terms always reduce to compatible configurations:

PROPERTY 3 (DON AND COMPATIBILITY).

DON(P) ∧ P →* Q_1 ∧ P →* Q_2 ⇒ Q_1 ⋈ Q_2

The DON definition implies that two activities cannot send requests that can interfere to the same third activity; then DON(P) ensures RSL compatibility between terms obtained from P. Indeed, suppose two requests could interfere in the same Serve(M) inside the activity γ. Then there would be a reduction from P leading to a term where two different activities try to send these requests to γ. This would contradict the DON definition. Thus the set of DON terms is a deterministic sub-calculus of ASP:

THEOREM 4 (DON DETERMINISM).

DON(P) ∧ P →* Q_1 ∧ P →* Q_2 ⇒ Q_1 ↓ Q_2

We have shown here that we can easily identify a sub-calculus (DON terms) of ASP that is deterministic.

4.7 Application: tree topology

In this section we propose a simple static approximation of DON terms which has the advantage of being valid even in the highly interleaving case of FIFO services.

Let us consider the request flow graph, that is to say the graph where nodes are activities and there is an edge between two activities if one activity sends requests to the other (α →R β if α sends requests to β).

If, at every step of the reduction, the request flow graph is a tree, then for each α, RSL_α contains occurrences of at most one activity. Then for all Q and R such that P →* Q ∧ P →* R, we have Q ⋈ R.

As a consequence, we can conclude:

THEOREM 5 (TREE REQUEST FLOW GRAPH). If at any time the request flow graph forms a set of trees then the reduction is deterministic.

This theorem proves the determinism of the binary tree of Figure 2. In general, such a property is useful to prove the deterministic behavior of any tree-like part of a program. Theorem 5 is easy to specify but difficult to ensure. For example, a program that selectively serves different methods coming from different activities will still behave deterministically upon out-of-order receptions between those methods. This is a direct consequence of the DON property, which is not directly related to object topology, and such a program will not verify Theorem 5.
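The condition of Theorem 5 can be checked on an abstract request flow graph. The following sketch (our own helper, not part of the paper's formal development) verifies that the graph is a set of trees, i.e. every activity has at most one requester and no cycle exists:

```python
# Sketch of the Theorem 5 condition: edges (src, dst) mean "src sends
# requests to dst". A forest requires indegree <= 1 and no cycles.
def is_forest(edges, activities):
    parent = {}
    for src, dst in edges:
        if dst in parent:          # two activities request the same one
            return False
        parent[dst] = src
    for node in activities:        # detect cycles by walking up to a root
        seen = set()
        while node in parent:
            if node in seen:
                return False
            seen.add(node)
            node = parent[node]
    return True

# Binary-tree shaped request flow (Figure 2 style): deterministic.
tree_edges = [("client", "root"), ("root", "left"), ("root", "right")]
print(is_forest(tree_edges, ["client", "root", "left", "right"]))         # True

# Two clients requesting the same activity: the condition fails.
print(is_forest([("c1", "srv"), ("c2", "srv")], ["c1", "c2", "srv"]))     # False
```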

FIFO service is, to some extent, the worst case with respect to determinism, as any out-of-order reception of requests will lead to non-determinism. At the opposite, a request service by source activity (i.e. if the service primitive was of the form Serve(α)) would be entirely confluent. Even if the DON definition allows much more flexibility in the general case, it seems difficult to find a more precise property for FIFO services.

4.8 A deterministic example: The binary Tree

The binary tree of Section 2 verifies Theorem 5 and thus behaves deterministically, provided that, at each time, at most one client can add new nodes.

Figure 10 illustrates the evaluation of the term:

let tree = (BT.new).add(3,4).add(2,3).add(5,6).add(7,8) in
[a = tree.search(5), b = tree.search(3)].b := tree.search(7)

This term behaves in a deterministic manner, whatever order of replies occurs.

[Figure 10. Concurrent replies in the binary tree case: the Client's requests flow down the tree of nodes, and (indirect) replies flow back through futures.]

Now consider that the result of a preceding request is used to create a new node (dotted lines in Figure 10):

let tree = (BT.new).add(3,4).add(2,3).add(5,6).add(7,8) in
let Client = [a = tree.search(5), b = tree.search(3)] in
Client.b := tree.search(7); tree.add(1, Client.a)

Then the future update that fills the node indexed by 1 can occur at any time, since we do not need the value associated to this node. Consequently the future update can occur directly from node number 5 to node number 1.
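For illustration only, the example term can be mimicked in Python, with a single-worker thread pool standing in for an activity's sequential service and Python futures for ASP futures (the BT class below is our sketch, not the paper's definition, and the whole tree is flattened into one activity):

```python
from concurrent.futures import ThreadPoolExecutor

class BT:
    """Loose analogue of an active object: asynchronous calls return futures."""
    def __init__(self):
        self.store = {}
        # One worker thread: requests are served sequentially, like an activity.
        self.pool = ThreadPoolExecutor(max_workers=1)

    def add(self, key, value):
        self.pool.submit(self.store.__setitem__, key, value).result()
        return self

    def search(self, key):
        # Asynchronous request: the caller gets a future, not the value.
        return self.pool.submit(self.store.get, key)

tree = BT().add(3, 4).add(2, 3).add(5, 6).add(7, 8)
client = {"a": tree.search(5), "b": tree.search(3)}
client["b"] = tree.search(7)
# Wait-by-necessity only when a value is actually used:
tree.add(1, client["a"].result())
print(client["a"].result(), client["b"].result())  # 6 8
```

Whatever order the futures resolve in, the printed result is the same, which is the point of the determinism theorems.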

5 Related works

The ASP calculus is based on the untyped imperative object calculus of Abadi and Cardelli (the impς-calculus of [1]). ASP local semantics resembles that of [11], but we did not find any concurrent object calculus [12, 15, 24] with a similar way of communication between asynchronous objects (no shared memory, asynchronous communication, futures, ...).

Obliq [4] is a language based on the ς-calculus that expresses both parallelism and mobility. It is based on threads communicating within a shared memory. As in ASP, calling a method on a remote object leads to a remote execution of the method, but this execution is performed by the original thread (or, more precisely, the original thread is blocked). Moreover, for a non-serialized object, many threads can manipulate the same object. In ASP, by contrast, the notion of executing thread is linked to the activity, and thus every object is "serialized", but a remote invocation does not stop the current thread. Finally, in ASP data-driven synchronization is sufficient and no notion of thread is necessary: we have one process for each activity. Øjeblik [23] is a sufficiently expressive subset of Obliq which has a formal semantics. The generalized references for all mutable objects, the presence of threads and the principle of serialization (with mutexes) make the Obliq and Øjeblik languages very different from ASP.

Halstead defined Multilisp [13], a language with a shared memory and futures. But the combination of shared memory and side effects prevents Multilisp from being determinate.

ASP can be rewritten in π-calculus [22], but this would not help us to prove the confluence property directly. Under certain restrictions [20, 30], π-calculus terms can be statically proved to be confluent, and such results could be applicable to some ASP terms. π-calculus terms communicate over channels, so a notion of channel may be introduced in ASP. Let a channel be a pair (destination activity, set of method labels), and suppose every Serve primitive concerns a single label (if several methods can be served by the same primitive, then they must belong to the same channel). A communication over a channel (α, foo) is equivalent to a remote method call on the method foo of the active object of α. If at any time only one activity can send a request on a given channel, then the term verifies the DON property and the program behaves deterministically. In π-calculus, such programs would be considered as using only linearized channels and would lead to the same conclusion. Indeed, a linear channel is a channel on which only one input and one output can be performed. The unicity of destination is ensured by the definition of channels and requests, and the unicity of the source process is ensured by the DON property. Note that in ASP, updates of responses along non-linearized channels can be performed, which makes the ASP confluence property more powerful. Moreover, this definition of channels is more flexible because a channel can contain several method labels, and then one can wait for a request on any subset of the labels belonging to a channel; in other words, we can perform a Serve on a part of a channel without losing determinacy.

Pierce and Turner used PICT (a language derived from π-calculus) to implement object-based programming and channel-based synchronization in [25]. But all languages derived from π-calculus necessitate explicit channel-based synchronization rather than the implicit data-flow synchronization proposed in the current paper, which accounts very much for ASP expressivity.

The join-calculus [9, 10] is a calculus with mobility and distribution. Synchronization in join-calculus is based on filtering patterns over channels. The differences between channel synchronization and data-driven synchronization also make the join-calculus inadequate for expressing ASP principles.

Process networks [17] provide confluent parallel processes but require that the order of service is predefined and that two processes cannot send data on the same channel, which is more restrictive and less concurrent than ASP.

The ASP channel view introduced above can also be compared to process network channels. As in the π-calculus case, ASP channels seem more flexible and our property more general, especially because future updates can occur at any time: return channels do not have to verify any constraint, and Serve can be performed on a part of a "channel".

πoβλ [16] is a concurrent object-oriented language. A sufficient condition is given for increasing the concurrency without losing determinacy; it is based on a program transformation. Under this condition, one can return the result of a method before the end of its execution. Then, the execution of the method continues in parallel with the caller thread. This sufficient condition is expressed by an equivalence between the original and transformed programs. Sangiorgi [27], and Liu and Walker [21], proved the correctness of the transformations on πoβλ described in [16]. In πoβλ, a caller always waits for the method result: synchronous method calls with anticipated results. In ASP, method calls are systematically asynchronous, thus more instructions can be executed in parallel: the futures mechanism allows one to continue the execution in the calling activity without having the result of the remote call. A simple extension to ASP could provide a way to assign a value to a future before the end of the execution of a method. Note that in πoβλ this characteristic is the source of parallelism, whereas in ASP it would simply allow an earlier future update.

Relying on the active object concept, the ASP model is rather close to, and was somehow inspired by, the notion of actors [2, 3]. Both rely on asynchronous communications, but actors are rather functional, while ASP is in an imperative and object-oriented setting. While actors interact by asynchronous message passing, ASP is based on asynchronous method calls, which remain strongly typed and structured, with future-based synchronizations and without explicit continuations. To some extent, the ASP future semantics, with the store partitioning property (isolation between future values, the active store, and the pending requests), accounts for the capacity to achieve confluence and determinism in an imperative setting. More generally, parallel purely functional evaluators are deterministic and have been widely studied [19, 14].

Finally, the importance of tree topology in concurrent computations has already been underlined in [26], but this model is based on a spreadsheet framework and does not ensure determinism.

6 Conclusion

In this paper, we have introduced a parallel object calculus with asynchronous communications. It is based on a sequential calculus à la Abadi-Cardelli, extended with a primitive that creates a new process (activity). Communications in ASP are based on asynchronous request sends and replies. Simple conditions allow one to specify parallel and distributed applications that behave deterministically.

A compatibility condition on the Request Sender List (RSL: the ordered list of activities that have sent requests) characterizes confluence between terms. Then, a set of Deterministic Object Networks (DON) programs which behave deterministically has been identified. While requests can be non-deterministically interleaved in the pending queue, DON programs will always serve them in a deterministic manner. Finally, a simple application to tree-like configurations exhibited a deterministic sub-calculus of ASP.

An interesting property of our calculus is that the order in which the replies to requests occur has no influence on determinism. This provides parallelism, as a process can continue its activity while still expecting the results of several requests. Moreover, on a practical side, this property allows one to perform future updates with separate threads.

Even if parts of this work can be seen as an application of π-calculus linearized channels or process networks, the ASP calculus provides a powerful generalization of these techniques: futures create implicit channels providing both a convenient return of results and data-driven synchronizations. Moreover, the order of replies does not act upon a program's behavior.

No typing of the ASP calculus has been proposed in this paper. Of course, Abadi and Cardelli's typing of objects could be adapted to ASP. But a more promising perspective lies in a type system for provided and required services that could allow us to approximate the potential services and, most importantly, to statically identify a lot of deterministic programs. More generally, a larger checking of the DON condition by applying static analysis techniques could be performed.

Acknowledgments

We thank Andrew Wendelborn and Davide Sangiorgi for commentson earlier versions of this paper.

7 References

[1] M. Abadi and L. Cardelli. A Theory of Objects. Springer-Verlag New York, Inc., 1996.

[2] G. Agha, I. A. Mason, S. Smith, and C. Talcott. Towards a theory of actor computation (extended abstract). In W. R. Cleaveland, editor, CONCUR'92: Proc. of the Third International Conference on Concurrency Theory, pages 565–579. Springer, Berlin, Heidelberg, 1992.

[3] G. Agha, I. A. Mason, S. F. Smith, and C. L. Talcott. A foundation for actor computation. Journal of Functional Programming, 7(1):1–72, 1997.

[4] L. Cardelli. A language with distributed scope. In Conference Record of the 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'95), pages 286–297, San Francisco, January 22–25, 1995. ACM Press.

[5] D. Caromel. Toward a method of object-oriented concurrent programming. Communications of the ACM, 36(9):90–102, Sept. 1993.

[6] D. Caromel, L. Henrio, and B. P. Serpette. Asynchronous sequential processes. Technical report, INRIA Sophia Antipolis, 2003. RR-4753.

[7] D. Caromel, W. Klauser, and J. Vayssiere. Towards seamless computing and metacomputing in Java. Concurrency: Practice and Experience, 10(11–13):1043–1061, 1998. ProActive available at http://www.inria.fr/oasis/proactive.

[8] C. Flanagan and S. Qadeer. A type and effect system for atomicity. In Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation, pages 338–349. ACM Press, 2003.

[9] C. Fournet and G. Gonthier. The reflexive CHAM and the join-calculus. In Conference Record of the 23rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'96), pages 372–385, St. Petersburg, Florida, January 21–24, 1996. ACM Press.

[10] C. Fournet, G. Gonthier, J. Levy, L. Maranget, and D. Remy. A calculus of mobile agents. In U. Montanari and V. Sassone, editors, Proc. 7th Int. Conf. on Concurrency Theory (CONCUR), volume 1119 of Lecture Notes in Computer Science, pages 406–421, Pisa, Italy, Aug. 1996. Springer-Verlag, Berlin.

[11] A. D. Gordon, P. D. Hankin, and S. B. Lassen. Compilation and equivalence of imperative objects. FSTTCS: Foundations of Software Technology and Theoretical Computer Science, 17, 1997.

[12] A. D. Gordon and P. D. Hankin. A concurrent object calculus: Reduction and typing. In Proceedings HLCL'98. Elsevier ENTCS, 1998.

[13] R. H. Halstead, Jr. Multilisp: a language for concurrent symbolic computation. ACM Transactions on Programming Languages and Systems (TOPLAS), 7(4):501–538, 1985.

[14] K. Hammond. Parallel functional programming: An introduction (invited paper). In H. Hong, editor, First International Symposium on Parallel Symbolic Computation (PASCO'94), Linz, Austria, pages 181–193. World Scientific Publishing, 1994.

[15] A. Jeffrey. A distributed object calculus. In ACM SIGPLAN Workshop on Foundations of Object Oriented Languages, 2000.

[16] C. B. Jones and S. Hodges. Non-interference properties of a concurrent object-based language: Proofs based on an operational semantics. In B. Freitag, C. B. Jones, C. Lengauer, and H.-J. Schek, editors, Object-Orientation with Parallelism and Persistence, chapter 1, pages 1–22. Kluwer Academic Publishers, 1996. ISBN 0-7923-9770-3.

[17] G. Kahn. The semantics of a simple language for parallel programming. In J. L. Rosenfeld, editor, Information Processing '74: Proceedings of the IFIP Congress, pages 471–475. North-Holland, New York, NY, 1974.

[18] G. Kahn and D. MacQueen. Coroutines and networks of parallel processes. In B. Gilchrist, editor, Information Processing 77: Proc. IFIP Congress, pages 993–998. North-Holland, 1977.

[19] O. Kaser, S. Pawagi, C. R. Ramakrishnan, I. V. Ramakrishnan, and R. C. Sekar. Fast parallel implementation of lazy languages – the EQUALS experience. In LISP and Functional Programming, pages 335–344, 1992.

[20] N. Kobayashi, B. C. Pierce, and D. N. Turner. Linearity and the pi-calculus. In Proceedings of POPL '96, pages 358–371. ACM, Jan. 1996.

[21] X. Liu and D. Walker. Confluence of processes and systems of objects. In P. D. Mosses, M. Nielsen, and M. I. Schwartzbach, editors, TAPSOFT '95: Theory and Practice of Software Development, 6th International Joint Conference CAAP/FASE, volume 915 of LNCS, pages 217–231. Springer, 1995.

[22] R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, part I/II. Journal of Information and Computation, 100:1–77, Sept. 1992.

[23] U. Nestmann, H. Huttel, J. Kleist, and M. Merro. Aliasing models for mobile objects. Information and Computation, 175(1):3–33, 2002.

[24] O. Nierstrasz. Towards an object calculus. In M. Tokoro, O. Nierstrasz, and P. Wegner, editors, Proceedings of the ECOOP'91 Workshop on Object-Based Concurrent Computing, volume 612 of LNCS, pages 1–20. Springer-Verlag, 1992.

[25] B. C. Pierce and D. N. Turner. Concurrent objects in a process calculus. In T. Ito and A. Yonezawa, editors, Proceedings Theory and Practice of Parallel Programming (TPPP 94), pages 187–215, Sendai, Japan, 1995. Springer LNCS 907.

[26] Y.-r. Choi, A. Garg, S. Rai, J. Misra, and H. Vin. Orchestrating computations on the world-wide web. In B. Monien and R. Feldmann, editors, Euro-Par, volume 2400 of Lecture Notes in Computer Science. Springer, 2002.

[27] D. Sangiorgi. The typed π-calculus at work: A proof of Jones's parallelisation theorem on concurrent objects. Theory and Practice of Object-Oriented Systems, 5(1), 1999. An early version was included in the informal proceedings of FOOL 4, January 1997.

[28] B. C. Smith. Reflection and semantics in Lisp. In Conference Record of the Eleventh Annual ACM Symposium on Principles of Programming Languages, pages 23–35, Salt Lake City, Utah, January 15–18, 1984. ACM SIGACT-SIGPLAN, ACM Press.

[29] G. L. Steele, Jr. Making asynchronous parallelism safe for the world. In POPL '90: Proceedings of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, January 17–19, 1990, San Francisco, CA, pages 218–231, New York, NY, USA, 1990. ACM Press.

[30] M. Steffen and U. Nestmann. Typing confluence. Interner Bericht IMMD7-xx/95, Informatik VII, Universität Erlangen-Nürnberg, 1995.