Formal specification and verification of multimedia systems in open distributed processing



Computer Standards & Interfaces 17 (1995) 413-436

Lynne Blair, Gordon Blair *, Howard Bowman +, Amanda Chetwynd

Distributed Multimedia Research Group, Department of Computing, Lancaster University, Bailrigg, Lancaster LA1 4YR, UK

Abstract

The field of distributed systems is now entering a stage of maturity with work focusing on standards for Open Distributed Processing (ODP). However, it is still important that standardization remains responsive to new technological demands such as the emergence of distributed multimedia computing. This paper focuses on the likely impact of multimedia computing on formal description within ODP. In particular, a framework is proposed for the formal specification and verification of quality of service and more general real-time concerns in distributed multimedia systems. This framework exhibits a separation of concerns between the specification of behaviour and requirements and also between the specification of abstract behaviour and real-time concerns. The usefulness of this framework is demonstrated by the development of an approach based on LOTOS together with a real-time temporal logic, QTL.

Keywords: Formal specification; Open distributed processing; Multimedia; LOTOS; Real-time temporal logics

1. Introduction

The importance of formal description techniques is well recognised by the International Standards Organisation, as captured in the following quotation from draft ODP standards:

"The work of the RM-ODP is based on the use, as far as possible, of FDTs to give it a clear and unambiguous interpretation".

The main aim of ODP standardization is to provide a framework to enable the development of systems which are open and can interconnect. This goal would be jeopardised by misinterpretation of imprecise notations. Furthermore, formal techniques enable rigorous system development to be defined from these standards, e.g., they support a rigorous approach to checking conformance of components.

* The work described in this paper was carried out as part of the Tempo Project at Lancaster University, sponsored by SERC/DTP (Grant GR/G/01362) and funded by BT Labs.
* Corresponding author. Email: [email protected]
+ Now at Computer Laboratory, University of Kent at Canterbury, Kent, UK.

0920-5489/95/$09.50 © 1995 Elsevier Science B.V. All rights reserved. SSDI 0920-5489(95)00016-X


Thus, in order to realise the dual aims of precise standards and rigorous system development, formal methods are certain to play a key role in Open Distributed Processing.

The area of formal methods for ODP has recently attracted a great deal of research and a number of techniques have been proposed, including LOTOS [1], Z [2] and various object-based or object-oriented extensions to these languages [3,4]. With the diverse requirements of ODP (both across the viewpoints and across the range of application domains), it is now becoming clear that no single language will meet the requirements of ODP. Attention is therefore focusing on the development of more general frameworks to enable the formal description of ODP components, thus enabling various languages to co-exist.

This paper addresses the development of a framework for the class of distributed multimedia applications. It should be stressed that this paper does not attempt to address all issues relating to the formal specification of distributed multimedia systems in ODP. Rather, it focuses on the representation of the real-time requirements of continuous media with emphasis on the Computational Viewpoint. It is our belief, however, that the results of the paper could be applied to the more general field of real-time systems [5,6].

2. Multimedia in ODP

2.1. The challenge

It is now recognised that distributed multimedia computing imposes a range of new challenges to ODP. The most fundamental characteristic of multimedia systems is that they incorporate continuous media [7] such as voice, video and animated graphics. The use of such media in distributed systems implies the need for continuous data transfers over relatively long periods of time, e.g., playout of video from a remote surveillance camera. Furthermore, the timeliness of such media transmissions must be maintained as an ongoing commitment for the duration of the continuous media presentation. Continuous media also imposes the following additional requirements on distributed systems:

(i) Quality of service

Quality of service is a central issue in the support of distributed multimedia applications. In such applications, the timeliness of media transmissions must be maintained through quality of service (QoS) parameters. For example, throughput, end-to-end delay (latency) and delay variance (jitter) are important parameters for continuous media communication [8]. Illustrative formal readings of these parameters are sketched after this list.

(ii) Real-time synchronisation

Existing distributed systems provide a range of mechanisms to support synchronisation between processes. However, in distributed multimedia systems, there is an added requirement to provide real-time synchronisation to maintain real-time relationships across different media types. A classic example of real-time synchronisation is maintaining lip synchronisation between an audio and a video channel.
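For concreteness, the parameters named in (i) can be given simple formal readings. The definitions below are ours (not the paper's), writing $s_i$ and $a_i$ for the send and arrival times of the i-th frame:

latency of frame i: $L_i = a_i - s_i$
throughput over an interval of length $T$: (number of frames arriving in that interval) $/ T$
jitter: the variation in $L_i$ over a window, e.g. $\max_i L_i - \min_i L_i$

The QTL formalisations in Section 5 express bounds on exactly these kinds of quantities over LOTOS events.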

The ODP community has recently responded to these challenges by proposing a number of extensions to the draft standards. Most of this work has concentrated on extensions to the Computational Model (and, to a lesser extent, the Engineering Model) in order to accommodate multimedia. While this work has not yet reached maturity, a few key proposals appear to be emerging, including the need for streams (abstractions of an ongoing flow of continuous media), the need for engineering support for QoS negotiation, monitoring and renegotiation, and the need for real-time objects (e.g., the reactive objects of [9]).


These issues, and a number of other new proposals to support multimedia in ODP, are described in more detail in [10]. Since it seems certain that significant changes will be made to ODP to accommodate multimedia, it is important that developments in formal methods reflect these changes.

2.2. Implications for formal methods

From the discussion above, it can be seen that the representation of timing properties is the most fundamental requirement for the specification of distributed multimedia systems. More specifically, it is important to be able to specify and verify the required temporal behaviour of multimedia systems.

A number of researchers have developed formal methods which support the expression of real-time. For example, many timed extensions to LOTOS have been proposed [11,12]. Other notable techniques include: Timed Petri Nets [13], Statecharts [14] and Real-Time Temporal Logics [15]. At an earlier stage in our investigation, we carried out an in-depth study of such approaches [16]. We summarise some of our results below:

(i) Overly prescriptive

No one technique can possibly meet all the requirements for ODP. For example, different viewpoints have different requirements in terms of, for example, the level of abstraction in the formal method. Similarly, some techniques are more suitable for expression of behaviour whilst others are more tailored towards requirements capture.

(ii) Abstraction versus time

It is important in multimedia systems, and in real-time systems in general, to be able to specify and verify real-time properties at an early stage of specification. In particular, this permits timing problems within designs to be picked up at an early stage of the software lifecycle. However, we believe there is a fundamental conflict between embedding real-time in a specification and maintaining a level of abstraction in the language. A detailed analysis of this conflict can be found in [17].

(iii) Embedded time

Most timed notations embed timing assumptions at the level of states, events or transitions. This has two repercussions. Firstly, timing assumptions are entwined with the behaviour and, hence, are not readily accessible or changeable. Secondly, it is awkward (in terms of maintaining a suitable specification style) to make general real-time statements across a particular behaviour. Such statements are required for the specification of certain quality of service statements (e.g., maintaining and controlling throughput or jitter between components).

We, therefore, see a need for a framework which encourages the co-existence of different languages, which avoids timing assumptions being directly embedded in the language, which supports the expression of general real-time properties and which enables abstract specification of behaviour where required. Furthermore, such a framework must enable verification techniques to be employed to retain the rigorous nature of the formal lifecycle.

3. The proposed framework

3.1. A separation of concerns

The principle behind the proposed framework is that of maintaining a separation of concerns. Firstly, we believe a clear separation should be maintained between the specification of the behaviour and the specification of requirements. Their purposes are clearly distinct: one reflects the actual behaviour (the system) whilst the other reflects the desired behaviour (properties required of the system).



Fig. 1. Styles of timing behaviour.

It should be noted that this separation of concerns is typical of many existing dual language specification techniques (for example, Esterel/QL [18], TTMs/RTTL [15] and TPCCS/TPCTL [19]). Importantly, however, we believe that a second separation of concerns can be drawn when time is considered. Time can occur in various forms in a specification, as illustrated in Fig. 1.

The first important distinction is between behavioural time and real-time. Behavioural time refers to any behaviour associated with time passing in an algorithm (e.g., the tick of a clock or the occurrence of a timeout event), but which does not relate this event to real-time. Real-time, in contrast, associates behaviour with real world clocks (e.g., a clock ticks every second or a timeout happens after 250 milliseconds). The second distinction illustrated in Fig. 1 refers back to the first separation of concerns mentioned above: a distinction is drawn between the basic real-time assumptions in a specification and the real-time requirements imposed across a specification.

Real-time assumptions effectively ground behavioural time in real-time (e.g., an assumption may associate a specific time instant with a timeout event). In contrast, real-time requirements generally apply timing constraints across a specification, e.g., a protocol must provide a throughput of 100 kbits/second. Quality of service statements such as bounded jitter, latency and throughput are all typical examples of real-time multimedia requirements. Note, however, that real-time requirements are only a subset of the set of all requirements, i.e. requirements need not refer to real-time.

3.2. Applying this principle

Importantly, no mention has yet been made of specific languages. Instead, a language-independent framework has been provided which could be realised using one or more different languages. This framework is illustrated in Fig. 2. The figure shows a clean separation between the specification of behaviour and the specification of requirements. The former consists of a specification of abstract behaviour together with a set of real-time assumptions which ground the abstract behaviour in real-time. The latter consists of a specification of requirements giving the desired behaviour of the system.

The figure also shows a series of refinements leading to an implementation on an ODP-conformant platform. It is assumed, certainly at later stages of this refinement process, that an object-based specification technique is employed to facilitate a mapping to the ODP Computational Model. Similarly, at this stage of refinement, it is assumed that the real-time requirements will target specific objects. A number of interesting observations can be made regarding the mapping to the ODP Computational Model. For example, the combination of abstract behaviour together with real-time assumptions provides the necessary information to develop computational objects which have the necessary real-time behaviour (this corresponds to the proposed extension to the Computational Model concerning real-time objects). Similarly, the real-time requirements provide the necessary information to verify QoS annotations on these objects (this corresponds to the extension concerning QoS support). The precise mapping to the Computational Model remains a matter for future research.


Fig. 2. The proposed framework.

In general, however, we believe that the concept of maintaining a separation of concerns is orthogonal to the concept of viewpoints and could apply in any of the five viewpoints defined in ODP.

3.3. Verification

Within this framework, there are two important aspects to verification. Firstly, in the vertical plane of Fig. 2, it is necessary to ensure that a refined specification satisfies the previous specification. Established standard verification techniques can be employed in this process for the chosen language and will not be considered further in this paper. Secondly, in the horizontal plane, it is necessary to confirm that the behaviour of the specification meets the requirements. This second aspect of verification is more complicated than in traditional dual language approaches because of the separation between the abstract behaviour and the real-time assumptions. Consequently, a specialised verification process must be developed. We return to this issue in Section 6.

4. An approach based on LOTOS and QTL

The previous section discussed a general framework for the specification and verification of distributed multimedia systems in which care has been taken to avoid prescription of specific languages or verification techniques. In this section, however, we describe one approach based on the LOTOS language and a real-time temporal logic called QTL.

4.1. Specifying abstract behaviour in LOTOS

In our approach, we use the process algebra LOTOS [1] for the specification of abstract behaviour. LOTOS was developed as a language for specifying OSI protocols and is now a recognised ISO standard [20].

We believe there are a number of advantages in using a process algebra such as LOTOS for the representation of abstract behaviour.


Process algebraic techniques generally feature an elegant set of operators for developing concurrent systems. Thus, succinct expressions of communicating concurrent processes can be made. Similarly, the emphasis on non-determinism encourages elegant specification and abstracts away from implementation details. Furthermore, rich and tractable mathematical models of the semantics of process algebra have been developed. In LOTOS, this model is based upon concepts of equivalence through observation of the external behaviour of a specification and is derived from the seminal work of Robin Milner [21]. Finally, due to the standardisation of LOTOS and the application-oriented nature of the language, a large number of support tools have been developed.

It should be stressed that we use standard LOTOS and do not require any semantic alterations to model time. The LOTOS specification, therefore, describes the possible event orders in the system, but does not relate events to real-time. Some advantages of using standard LOTOS are that changes are not required to the standard and existing toolkits can still be used.
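As a small illustration (ours, not taken from the paper), the following standard LOTOS fragment models a sender that retransmits on a timeout. The timeout gate is an ordinary event that fixes only the order of behaviour; any real-time meaning (e.g., that the timeout fires 250 ms after a send) would be stated separately as a real-time assumption.

(* Illustrative only: behavioural time in standard (untimed) LOTOS.        *)
(* timeout is an ordinary event; its duration is not specified here.        *)
process Sender[send, ack, timeout]: noexit :=
  send;
    (ack; Sender[send, ack, timeout]          (* acknowledgement received *)
     []
     timeout; Sender[send, ack, timeout])     (* retransmit after a timeout *)
endproc (* Sender *)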

4.2. Specifying requirements and assumptions in QTL

We feel that a temporal logic offers the most natural means for expressing requirements of distributed systems. It has been demonstrated that such logics enable the abstract expression of requirements and facilitate rigorous reasoning about these requirements. More specifically, for our application domain, a real-time temporal logic is required in order to specify the real-time requirements prevalent in distributed multimedia systems. Note that the added expressiveness of real-time temporal logics over their first-order equivalents (such as the real-time logic RTL [22]) allows us to express a wider range of requirements including liveness properties [23].

We have developed a logic called QTL (for quality of service temporal logic) for the specification of distributed multimedia system requirements. QTL is based upon Koymans' metric temporal logic, MTL [24], which has already been demonstrated to be highly suitable for the expression of a wide range of real-time properties in the area of real-time control ¹. QTL is designed specifically to be compatible with LOTOS and hence LOTOS events may be used as propositions in QTL. Characteristics of the logic are that it is linear time, it uses bounded operators, it employs a discrete time domain, it uses a basic set of operations (addition by constant only) and it incorporates past-tense operators as well as the more usual future-tense operators. QTL has been defined to be suitable for use in conjunction with LOTOS. However, as well as referring to LOTOS events, there is also a need to refer to data variables in QTL formulae. These data variables are necessary in order to store additional numerical information such as the number of occurrences of a particular LOTOS event or, more generally, arbitrary functions over event occurrences and their timings.
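To give a feel for the notation before the examples of Section 5, the following formulae are illustrative only (the events and bounds are invented here; the precise syntax and semantics are those defined in the Appendix):

$\Box(\mathit{request} \rightarrow \Diamond_{\leq 10}\ \mathit{reply})$ — every request is answered within 10 time units;
$\Box(\mathit{reply} \rightarrow \overline{\Diamond}_{\leq 10}\ \mathit{request})$ — every reply is preceded by a request within the last 10 time units;
$\Box(\mathit{tick} \rightarrow x \leq 20)$ — a data variable $x$, counting event occurrences, never exceeds 20 at a tick.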

The formal syntax and semantics of QTL are presented in the Appendix. It is worth noting that timed state sequences are used as the model over which temporal logic formulae are interpreted. These sequences provide the link between the LOTOS specification of abstract behaviour and the LOTOS events referred to in the logic formulae. A full justification for our choice of logic can be found in [25]. It should be noted that, in addition to using QTL to specify requirements, we also use QTL to specify real-time assumptions. As will be seen in Section 6, the use of the logic for both requirements and assumptions simplifies the verification process.

5. Examples of use

The following two examples illustrate how the LOTOS/QTL approach can be used to specify quality of service on streams and real-time synchronisation, respectively.

¹ Note that an earlier version of QTL was based on Ostroff's RTTL [15].


[Figure: a data source sends frames over a channel to a data sink, which plays them.]

Fig. 3. A multimedia stream.

5.1. Quality of service on a multimedia stream

The following example illustrates how the LOTOS/QTL approach can be used to specify quality of service parameters over a continuous multimedia stream.

5.1.1. Informal description

A stream is a multimedia structure which links a data source and a data sink. The source and the sink are assumed to be communicating asynchronously over an appropriate channel (see Fig. 3). Suppose that the stream exhibits the following behaviour (real-time values have been arbitrarily chosen):

- All communication between the data source and the data sink is asynchronous (it is assumed that the queue at the data sink is unbounded).
- The channel over which frames are transmitted is unreliable and may result in the reordering of messages.
- The data source repeatedly transmits data frames every 50 ms (i.e., 20 frames per second).
- Successfully transmitted frames arrive at the data sink between 80 ms and 90 ms after their transmission (channel latency).
- If the number of frames arriving at the data sink is not within 15 to 20 frames per second (channel throughput), then an error should be reported.
- The time taken by the data sink to process a frame is 5 ms (i.e., the time between receiving a frame and being able to play it).

5.1.2. Abstract behaviour

A LOTOS specification of the abstract behaviour of a stream can be derived from the above informal description as shown below.

specification Stream[start, play, error]: noexit
behaviour
  hide source_out, sink_in in
    ((Source[source_out]
      ||| Sink[sink_in, play, error])
     |[source_out, sink_in]|
     Channel[source_out, sink_in])
where

  process Source[source_out]: noexit :=
    source_out; Source[source_out]
  endproc (* Source *)

  process Sink[sink_in, play, error]: noexit :=
    hide tick in
      ((SinkBehaviour[sink_in, play] [> error; stop)
       ||| Clock[tick])
  where
    process SinkBehaviour[sink_in, play]: noexit :=
      sink_in; play; SinkBehaviour[sink_in, play]
    endproc (* SinkBehaviour *)

    process Clock[tick]: noexit :=
      tick; Clock[tick]
    endproc (* Clock *)
  endproc (* Sink *)

  process Channel[source_out, sink_in]: noexit :=
    source_out;
      ((sink_in; stop       (* frame is transmitted successfully *)
        []
        i; stop)            (* frame is lost during transmission *)
       ||| Channel[source_out, sink_in])   (* allow another frame to be sent *)
  endproc (* Channel *)

endspec (* Stream *)

Note that the clock process has been included to allow periodic monitoring to be carried out (see the real-time assumptions below). Note also that, for simplicity, in this specification no sequence numbers (to distinguish different frames) have been specified. Furthermore, this channel specifies a potentially unbounded queue. Whilst this is valid LOTOS, it prevents the generation of a finite state machine; this causes problems with the verification process for the LOTOS/QTL approach. However, an extra process can be added to the specification which places an upper bound on the number of frames a queue may hold at any one time, thus permitting a finite state machine to be generated.
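One possible shape for such a bounding process is sketched below (this sketch is ours; the paper does not give the process). It restricts the stream to at most two frames outstanding between source and sink and would be composed with the existing behaviour by synchronising on source_out and sink_in. For simplicity, a slot is freed only when a frame actually reaches the sink, so frames lost inside the channel are not returned to the window; a fuller version would also need to account for losses (for example, by making the loss event visible to the constraint before hiding it).

(* Illustrative only: at most two frames outstanding between source and sink.   *)
(* Composed as: ( ... existing behaviour ... )                                  *)
(*                |[source_out, sink_in]| Bound2[source_out, sink_in]           *)
process Bound2[source_out, sink_in]: noexit :=      (* two slots free *)
  source_out; Bound1[source_out, sink_in]
endproc (* Bound2 *)

process Bound1[source_out, sink_in]: noexit :=      (* one slot free *)
    source_out; Bound0[source_out, sink_in]
  []
    sink_in; Bound2[source_out, sink_in]
endproc (* Bound1 *)

process Bound0[source_out, sink_in]: noexit :=      (* no slots free *)
  sink_in; Bound1[source_out, sink_in]
endproc (* Bound0 *)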

5.1.3. Real-time assumptions

It can be seen that the abstract specification above contains no timing information. We now show how events can be grounded in real-time when combined with real-time assumptions. These assumptions represent implementation-specific timing, i.e., timing which results from the execution speed of system components. Using QTL, the real-time constraints detailed in the informal description above can be formalised as shown below.

Transmission of frames from the data source. The informal description of the stream behaviour states that the source transmits a frame every 50 ms. This can be expressed in QTL as:

A1. Source rate of transmission

$\phi_{A1}: \Box(\mathit{source\_out} \rightarrow \bigvee_{t=0}^{50} \bigcirc_{=t}(\neg \mathit{source\_out}\ \mathcal{U}_{=50-t}\ \mathit{source\_out}))$


This statement means that whenever a source-out event occurs, no further source-out event will occur until 50 ms have passed, at which time the subsequent source-out event will occur. Note that the right hand side of the implication specifies a choice of times (between 0 and 50 ms after the initial source-out) at which the next state may occur.

Latency of the channel. The second real-time assumption states that the channel imposes a latency delay of between 80 ms and 90 ms (inclusive) on each transmission.

A2. Latency imposed by the channel

$\phi_{A2}: \Box(\mathit{sink\_in} \rightarrow \bigvee_{t=80}^{90} \overline{\Diamond}_{=t}\ \mathit{source\_out})$

This logic statement means that whenever a sink-in event occurs, there is a choice of times for some past state (between times 80 and 90 ms inclusive) at which the corresponding source-out was true. Note that this is not specified the other way round (e.g., in some state after a source-out, a sink-in will occur) since the channel is unreliable and messages may be lost during transmission.

Grounding the tick event in real-time. The next real-time assumption ensures that ticks occur every second:

A3. Ticks occur every second

$\phi_{A3}: \Box(\mathit{tick} \rightarrow \bigvee_{t=0}^{1000} \bigcirc_{=t}(\neg \mathit{tick}\ \mathcal{U}_{=1000-t}\ \mathit{tick}))$

This statement means that whenever a tick event occurs, no further tick will occur until 1000 ms have passed, at which time the subsequent tick event will occur.

It is also necessary to ensure that the clock does actually start at time zero:

$\phi_{A3.b}: \Diamond_{=0}\ \mathit{tick}$

Channel throughput and reporting errors. The abstract behaviour has simply stated that an error may disable the behaviour of the Sink. The informal description of the stream behaviour given above states that there must be between 15 and 20 frames received at the Sink per second or else an error should be reported. Let data variable x denote the number of sink-in events per second, where:

Initial condition: x = 0
Reset condition: x = 0 immediately after event tick
Increment condition: x := x + 1 on event sink_in

The assumption governing the occurrence of an error can now be specified as follows:

A4. Checking if an error should be reported

$\phi_{A4}: \Box((\mathit{tick} \wedge (x < 15 \vee x > 20)) \rightarrow \bigcirc_{=0}\ \mathit{error})$

If there is a tick (marking the passage of a second) and the number of sink-in events has fallen outside the acceptable bounds, then an error should be reported immediately.

The playing of frames by the data sink. The final piece of real-time information from the informal description of the stream example concerns the playing of frames by the data sink. The informal description stated that the data sink took 5 ms to process a frame; this can be expressed as follows:

A5. Time taken for Sink to process a frame

$\phi_{A5}: \Box(\mathit{sink\_in} \rightarrow \Diamond_{=5}\ \mathit{play})$


This means that whenever a sink-in event occurs, a frame is played in some future state which occurs 5 ms later.

5.1.4. Requirements

In our example, we define the following real-time requirements over the behaviour of a stream:

- The end-to-end latency should never exceed 100 ms. Consequently, a frame taking longer than this time can be assumed to have been lost during transmission.
- The end-to-end throughput should remain between 15 and 20 frames per second (this requirement incorporates jitter into the acceptable figures for throughput).

Requirements such as these can be specified in QTL as follows:

End-to-end latency. The QTL specification for maintaining a latency of no more than 100 ms is similar to the real-time assumption A2 presented for the stream above.

R1. The latency imposed by the channel never exceeds 100 ms

$\phi_{R1}: \Box(\mathit{play} \rightarrow \overline{\Diamond}_{\leq 100}\ \mathit{source\_out})$

This formula states that, for every play event, the corresponding source-out event preceded it within 100 ms. As before, this formula only imposes latency on those messages which are successfully transmitted. Note that sequence numbers would be needed to ensure that the source-out event corresponded to the intended sink-in event.

End-to-end throughput/jitter. This requirement stated that an end-to-end throughput of between 15 and 20 frames per second must be maintained over the stream. Suppose that the data variable y is used to count the number of play events (compare with the real-time assumption A4 above):

Initial condition: y = 0
Reset condition: y = 0 immediately after event tick
Increment condition: y := y + 1 on event play

The second requirement can now be specified as follows:

R2. Maintaining a throughput of between 15 and 20 frames per second

$\phi_{R2}: \Box(\mathit{tick} \rightarrow 15 \leq y \leq 20)$

For each tick, the number of play events in the last second must be within the required bounds.

Considering error rates. A further requirement that may be specified for the multimedia stream example is that of end-to-end error rates. Suppose that the error rate should be no more than 10%. From the real-time assumption A1, it is known that 20 frames are sent per second (one every 50 ms). Additionally, the data variable y (declared above) counts the number of frames that are played per second. Therefore, it is possible to state that:

$\Box(\mathit{tick} \rightarrow (20 - y \leq 2))$

or equivalently,

$\Box(\mathit{tick} \rightarrow (y \geq 18))$

However, simply comparing the number of frames played per second against the number of frames sent per second takes no account of the number of frames currently in the channel (i.e., mid-transmission).


Consequently, this method of specifying a bound on error rates does not cater for the possibility of congestion in the network. By measuring the number of frames played and transmitted over a broader interval (e.g., a minute), a truer indication of the error rate can be obtained.
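As an illustration (this example is ours, not the paper's), if a further data variable z were declared to count play events and reset on a coarser clock event, say minute (one occurrence every 60 s), then the 10% bound measured over a one-minute window could be written:

$\Box(\mathit{minute} \rightarrow (z \geq 1080))$

since 20 frames per second gives 1200 frames transmitted per minute, of which at most 120 (10%) may be lost.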

Note that, to place such a requirement on the error rate, it is also necessary to constrain the non-determinism in the abstract behaviour of the channel regarding message loss. Although it would be possible to place a constraint on the number of frames received with respect to the number transmitted, a better method would be to introduce probabilities into the specification technique. This would allow the specification of properties such as the probability of a successful transmission is 90% and the probability of a message being lost is 10%. The inclusion of probabilities is an issue for future work.

A note on requirements. For this example, the requirements that have been considered are similar to a number of the real-time assumptions. This is a direct result of the abstract level at which the behaviour of the stream has been specified. As the specification is refined towards an implementation, the real-time assumptions would reflect the more detailed nature of the specification, for example, the time taken in executing the transport protocol or the delay in accessing the network from the device driver.

5.2. Specifying lip synchronisation

5.2.1. Informal description

The second example is that of the well-known lip-synchronisation problem, that is, ensuring that the presentation of independent data streams (e.g., voice and video) is synchronised. This example was, to the authors' knowledge, first specified in the real-time programming language ESTEREL [9] and has also recently been written in Temporal LOTOS [11].

Suppose that there are two data streams, one carrying video frames and the other carrying voice packets, which are being transmitted to a common data sink from two independent data sources. At the data sink, these two data streams must be played out in a manner such that lip-synchronisation is achieved between the video and the voice. In order to do this, certain real-time constraints which relate (synchronise) the two streams must be maintained. Suppose that the requirements for obtaining lip-synchronisation can be expressed as follows (all numerical figures have been kept consistent with those presented in [9] and [11] to allow comparisons to be drawn).

The first two requirements for achieving lip-synchronisation are that both sound and video data should be presented at regular intervals. These intervals are detailed below. Note that, in keeping with the description of lip-synchronisation given in [9] and [11], no jitter is permitted on the sound presentation; jitter is permitted on the video presentation.

(1) Sound intervals

Sound packets must be presented every 30 milliseconds (ms).

[Timing diagram: presentation times of sound packets s(n), s(n+1), ... on a time axis (ms).]

(2) Video intervals

Video packets should be presented every 40 ms, but a jitter of ±5 ms is permitted (i.e., video packets must be presented between 35 ms and 45 ms after the previous packet).

[Timing diagram: presentation times of video packets v(n), v(n+1), ... on a time axis (ms).]


Fig. 4. The overall structure of the lip synchronisation algorithm.

The third (and final) requirement is that the presentation of the sound and video data must be synchronised, that is, the presentation of one media type must not lag or precede the presentation of the other media type by more than a certain amount. The constraints on the synchronisation between the sound and video presentations are presented below. Note that in this example, no jitter is permitted on the sound presentation. Hence, it is only the jitter on the video data which can cause the presentation to drift out of synchronisation (e.g., if video frames repeatedly arrive late).

(3) Synchronisation of sound and video

(i) The video presentation must not lag the associated sound by more than 150 ms.
(ii) The video presentation must not precede the associated sound by more than 15 ms.

[Timing diagram: sound packet s(n) and its associated video packet v(m) on a time axis (ms).]

If these real-time constraints (requirements) cannot be met or are broken at some stage in the data presentation, then lip-synchronisation has not been achieved. In this example, if lip-synchronisation cannot be achieved or maintained, the presentation should be stopped and an error should be reported.

5.2.2. Abstract behaviour

We now consider the specification of an algorithm to achieve lip-synchronisation. Fig. 4 shows the overall structure of this algorithm. The figure shows two data sources, each of which is connected to the presentation device by a stream across which data can be transmitted (asynchronously). When a sound packet arrives at the presentation device, a message (s-avail) is sent to a sound synchroniser process to signal the availability of that sound packet. When appropriate, the synchronisation process returns a message (s-present) back to the presentation device telling it to present the data. An identical procedure is carried out when video frames arrive at the presentation device (events v-avail and v-present). The figure shows that the controller is split into two sub-processes (sound synchroniser and video synchroniser) which handle the synchronisation of sound and video data respectively. The starting and stopping of a presentation is achieved in this example through the use of two synchronous messages (events s-start and s-stop) transmitted from the controller to the sound source (respectively, v-start and v-stop to the video source). The presentation is only stopped if an error has occurred.


The LOTOS description of the lip-synchronisation algorithm closely follows the overall structure shown in Fig. 4. However, since this example aims to highlight the specification of issues related to real-time synchronisation, the specification of the two data sources (for sound and video) and the transmission of data over streams are omitted (these can be specified in a similar way to the multimedia stream example presented earlier in this paper).

The controller process has the job of maintaining lip-synchronisation between sound and video, controlling when sound packets and video frames should be presented and when errors should be reported. Note, however, that this is specified at an abstract level, using a non-deterministic choice between different possible behaviours. Importantly, no real-time constraints are considered in this component of the specification. The controller process can be broken down into a number of subprocesses as specified below:

process CONTROLLER[s_start, v_start, s_avail, v_avail, s_present, v_present,
                    s_late_error, v_late_error, synch_error, s_stop, v_stop]: exit :=
  (s_start; exit           (* start the sound presentation *)
   ||| v_start; exit)      (* start the video presentation *)
  >>
  (hide s_delay, v_delay in
     (SOUND_SYNCHRONISER[s_avail, s_delay, s_present, s_late_error]
      ||| VIDEO_SYNCHRONISER[v_avail, v_delay, v_present, v_late_error, synch_error])
     |[synch_error, s_late_error, v_late_error]|
     ERROR_HANDLER[synch_error, s_late_error, v_late_error, s_stop, v_stop]
  )
endproc (* CONTROLLER *)

Initially, the controller sends a start message to both the sound source and video source signalling them to start sending data to the presentation device. The sound and video synchroniser processes control the receipt of s-avail and v-avail messages and reply (after a possible delay) with s-present and v-present messages. The synchroniser processes may also report a late error if the avail messages arrive late or, in the case of the video synchroniser, if a synchronisation error occurs. Note that since no jitter is permitted on sound data, only video data can cause the presentation to drift out of synchronisation. The sound and video synchroniser processes can be specified as shown below.

process SOUND_SYNCHRONISER[s_avail, s_delay, s_present, s_late_error]: exit :=
    s_avail;
      (s_present; SOUND_SYNCHRONISER[s_avail, s_delay, s_present, s_late_error]
       []
       s_delay; s_present; SOUND_SYNCHRONISER[s_avail, s_delay, s_present, s_late_error]
      )
  []
    s_late_error; exit
endproc (* SOUND_SYNCHRONISER *)

process VIDEO_SYNCHRONISER[v_avail, v_delay, v_present, v_late_error, synch_error]: exit :=
    v_avail;
      (v_present; VIDEO_SYNCHRONISER[v_avail, v_delay, v_present, v_late_error, synch_error]
       []
       v_delay; v_present; VIDEO_SYNCHRONISER[v_avail, v_delay, v_present, v_late_error, synch_error]
      )
  []
    v_late_error; exit
  []
    synch_error; exit
endproc (* VIDEO_SYNCHRONISER *)

In the specification of the controller, the above two synchroniser processes are placed in parallel with an error handler process. If either synchroniser process raises an error, the error handler process ensures that a stop message is sent to both data sources. This process can be specified as follows:

process ERROR_HANDLER[synch_error, s_late_error, v_late_error, s_stop, v_stop]: exit :=
  (synch_error; exit
   []
   s_late_error; exit
   []
   v_late_error; exit)
  >>
  (s_stop; exit            (* stop sound presentation *)
   ||| v_stop; exit)       (* stop video presentation *)
endproc (* ERROR_HANDLER *)

The abstract behaviour of the lip-synchronisation algorithm is completed by giving a LOTOS description of the overall behaviour:

specification LIPSYNC[s_start, v_start, s_play, v_play, s_late_error, v_late_error,
                      synch_error, s_stop, v_stop]: noexit
behaviour
  hide s_avail, v_avail, s_present, v_present, s_send, v_send, s_rec, v_rec in
    (CONTROLLER[s_start, v_start, s_avail, v_avail, s_present, v_present,
                s_late_error, v_late_error, synch_error, s_stop, v_stop]
     |[s_start, v_start, s_stop, v_stop, s_avail, v_avail, s_present, v_present]|
     (* the specifications of the following processes have not been presented here *)
     (((SOUND_SOURCE[s_start, s_send, s_stop]
        |[s_send]|
        SOUND_STREAM[s_send, s_rec])
       |||
       (VIDEO_SOURCE[v_start, v_send, v_stop]
        |[v_send]|
        VIDEO_STREAM[v_send, v_rec]))
      |[s_rec, v_rec]|
      PRESENTATION_DEVICE[s_rec, v_rec, s_play, v_play, s_avail, v_avail, s_present, v_present])
    )
endspec

5.2.3. Real-time assumptions

In the LOTOS/QTL approach, the real-time assumptions ground the time at which events in the abstract specification occur.


The real-time assumptions for the controller process will be designed such that the lip-synchronisation requirements (presented informally above) hold. More specifically, it is necessary to constrain the behaviour of events in the controller such that sound packets are presented every 30 ms, video frames are presented every 35 to 45 ms and video may lag the sound presentation by a maximum of 150 ms and precede it by a maximum of 15 ms.

Initial conditions. Firstly, consider the presentation of the initial item of data. When this data arrives at the presentation device, a signal is sent to the controller (either s-avail or v-avail) indicating the availability of data. According to the abstract behaviour presented in the previous section, after receiving an avail signal, the controller may either delay or instruct the presentation device to present the data (see the two synchroniser processes). For the very first item of data, a delay should not be permitted and the signal to present the data should be issued immediately, thus ensuring that the presentation actually starts as soon as data is available. This can be specified in QTL as follows (in all formulae it will be assumed that the unit of time is one millisecond):

A1.a Presentation of initial sound packet (if it is available before video)

$\phi_{A1.a}: \Box((\mathit{s\_avail} \wedge \neg \overline{\Diamond}_{>0}(\mathit{s\_avail} \vee \mathit{v\_avail})) \rightarrow \bigcirc_{=0}\ \mathit{s\_present})$

This states that if a sound packet is available now and no sound or video has previously been available (i.e., this is the first available data item) then this packet must be presented immediately. The corresponding assumption, regarding the case when a video frame is available first, can be similarly specified:

A1.b Presentation of initial video frame (if it is available before sound)

$\phi_{A1.b}: \Box((\mathit{v\_avail} \wedge \neg \overline{\Diamond}_{>0}(\mathit{s\_avail} \vee \mathit{v\_avail})) \rightarrow \bigcirc_{=0}\ \mathit{v\_present})$

If this video frame is the first available data item, then present the frame immediately.

Presentation of sound packets. The presentation of subsequent data is governed by the requirements for lip-synchronisation as described above. The presentation of sound packets will be considered first. This can be split into three parts: the case when an s-avail signal is early, when it is on time and when it is late. Firstly, if an s-avail arrives early, the controller must delay until 30 ms have passed since the last s-present before issuing the next s-present.

Note that the case must also be considered where a synch-error or a v-late-error is reported before (or at) the time of the next s-present (for brevity, the occurrence of either of these errors is simply denoted error in A2.a and A2.b below). The occurrence of such an error will cause the presentation to be stopped and thus the next s-present will not occur.

A2.a Sound packet is available early

$\phi_{A2.a}: \Box \bigvee_{t=0}^{29} ((\mathit{s\_avail} \wedge \overline{\Diamond}_{=t}\ \mathit{s\_present}) \rightarrow (\bigcirc_{=0}\ \mathit{s\_delay} \wedge (\Diamond_{=30-t}\ \mathit{s\_present} \vee \Diamond_{\leq 30-t}\ \mathit{error})))$

In every state in which s-avail occurs and in which there has been an s-present in the last 29 ms, the next event (state) must be an s-delay and must occur immediately. Furthermore, in some future state which occurs 30 ms after the previous s-present, there must be another s-present or there must be an error (either synch-error or v-late-error) before or at this time.

The second case to consider is when an s-avail arrives on time (i.e., 30 ms after the previous s-present). In this case, the next s-present should be issued immediately or an error (either synch-error or v-late-error) must occur.


A2.b Sound packet is available on time

$\phi_{A2.b}: \Box((\mathit{s\_avail} \wedge \overline{\Diamond}_{=30}\ \mathit{s\_present}) \rightarrow \bigcirc_{=0}(\mathit{s\_present} \vee \mathit{error}))$

In every state in which s-avail occurs and in which there has been an s-present 30 ms previously, the next event (state) must be an s-present or an error (either synch-error or v-late-error) and it must occur immediately.

The final case to consider is when an s-avail is late. Since no jitter is permitted on the sound presentation, this represents a sound late error.

A2.c Sound packet is late

$\phi_{A2.c}: \Box((\overline{\Diamond}(\mathit{s\_present} \vee \mathit{v\_present}) \wedge \overline{\Box}_{\leq 30}\ \neg \mathit{s\_avail}) \rightarrow \bigcirc_{=0}\ \mathit{s\_late\_error})$

The first part of this statement ensures that the presentation has started (i.e., there has already been at least one s-present or v-present). If this is the case and there has been no s-avail within the last 30 ms, then an s-late-error must occur immediately.

Presentation of video frames. The presentation of video frames is more complex than the presentation of sound packets since jitter is permitted on the video presentation. Consequently, the controller may issue a v-present signal between 35 and 45 ms after the previous v-present.

Initially, consider the early arrival of a video frame. In this case, a delay must be enforced until 35 ms at which time the video frame can be presented. However, as with the sound presentation above, note that it is necessary to consider the case where an error (either a synch-error or an s-late-error) occurs before, or at, the intended presentation time of the video frame, thus causing the presentation to be stopped. For brevity, the occurrence of one of these errors will simply be denoted error in A3.a and A3.b below.

A3.a Video frame is available early

$\phi_{A3.a}: \Box \bigvee_{t=0}^{34} ((\mathit{v\_avail} \wedge \overline{\Diamond}_{=t}\ \mathit{v\_present}) \rightarrow (\bigcirc_{=0}\ \mathit{v\_delay} \wedge (\Diamond_{=35-t}\ \mathit{v\_present} \vee \Diamond_{\leq 35-t}\ \mathit{error})))$

In every state in which a v-avail occurs and in which there has been a v-present in the last 34 ms, the next event (state) must be a v-delay and must occur immediately. There must then either be a v-present signal 35 ms after the previous frame or else an error must occur before or at this time.

The case when a video frame arrives on time (i.e., between 35 ms and 45 ms after the previous frame) will now be considered.

A3.b Video frame is available on time

$\phi_{A3.b}: \Box \bigvee_{t=35}^{44} ((\mathit{v\_avail} \wedge \overline{\Diamond}_{=t}\ \mathit{v\_present}) \rightarrow \bigcirc_{=0}(\mathit{v\_present} \vee \mathit{error}))$

If a v-avail occurs and there has been a v-present in the last 35 to 44 ms, then the frame must be presented immediately in the next state or else an error must occur.

The following formula now considers the case when a video frame arrives 45 ms after the previous one. In this case, the frame must be presented immediately (regardless of the value of the v-drift variable) or an error must occur.

A3.c Video frame is late

$\phi_{A3.c}: \Box((\overline{\Diamond}(\mathit{s\_present} \vee \mathit{v\_present}) \wedge \overline{\Box}_{\leq 45}\ \neg \mathit{v\_avail}) \rightarrow \bigcirc_{=0}\ \mathit{v\_late\_error})$

As with the corresponding formula for sound above, the first part of this statement ensures that the presentation has started. If this is the case and there has been no v-avail within the last 45 ms, then a v _ late _ error must occur immediately.


Out of synchronisation error. The formulae above have considered the presentation of sound and video data, stating when delays should occur and when data should be presented. They have also considered the late arrival of data and the resulting s-late-error or v-late-error events. The final case to be considered is that of the third type of error: an out of synchronisation error (event synch-error). The informal requirements stated that the video presentation should not lag the sound presentation by more than 150 ms and should not precede it by more than 15 ms. It is, therefore, necessary to keep a check on the cumulative drift of the video presentation from the sound presentation at all times. This can be achieved by declaring a data variable (named v-drift) to measure the drift. Note that the simple notation used in the previous example for data variable declarations is not sufficient to define the increment condition of the v-drift variable; a first order logic has thus been used to specify this condition.

Initial condition: v-drift = 0

Reset condition: none

Increment condition:

$\Box \bigvee_{t=35}^{45} ((\mathit{v\_present} \wedge \overline{\Diamond}_{=t}\ \mathit{v\_present}) \rightarrow \exists u.(\mathit{v\_drift} = u \wedge \bigcirc(\mathit{v\_drift} = u + t - 40 \wedge \bigcirc((\mathit{v\_drift} = u + t - 40)\ \mathcal{U}\ \bigcirc \mathit{v\_present}))))$

This states that whenever a v-present occurs and another v-present occurred between 35 ms and 45 ms in the past, then, in the next state, the current value of v-drift (i.e., u) is incremented by t - 40. The variable v-drift then retains this value until the state following the next v-present (when v-drift will again be updated).
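As a worked illustration (the numbers here are ours, derived from the bounds above): each maximally late video frame (t = 45) adds 5 ms to v-drift and each maximally early frame (t = 35) subtracts 5 ms, so

$\underbrace{5 + 5 + \cdots + 5}_{31\ \text{late frames}} = 155 > 150 \qquad \text{and} \qquad \underbrace{(-5) + (-5) + (-5) + (-5)}_{4\ \text{early frames}} = -20 < -15,$

either of which would trigger a synch-error under assumption A4 below.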

Having defined the v-drift variable, it is now possible to specify the formula which constrains the occurrence of an out of synchronisation error.

A4. Video presentation drifts out of synchronisation

$\phi_{A4}: \Box((\mathit{v\_drift} < -15 \vee \mathit{v\_drift} > 150) \rightarrow \bigcirc_{=0}\ \mathit{synch\_error})$

If the value of the v-drift variable exceeds the bounds stated in the informal requirements, then the next event (state) must be a synch-error and must occur immediately.

5.2.4. Requirements

The three requirements for the lip-synchronisation algorithm have been presented informally in Section 5.2.1 above. These will now be formally specified using QTL. Consider the first requirement, which stated that sound packets must be presented every 30 ms. If a sound packet is not presented at this time, then either a synch-error or a v-late-error must have occurred previously or else an s-late-error must occur. This can be expressed in QTL as follows:

R1. Sound packets presented every 30 ms

$\phi_{R1}: \Box(\mathit{s\_present} \rightarrow (\Diamond_{=30}(\mathit{s\_present} \vee \mathit{s\_late\_error}) \vee \Diamond_{\leq 30}(\mathit{synch\_error} \vee \mathit{v\_late\_error})))$

The meaning of this QTL statement follows from the description above. The second requirement, stating that video frames must be presented between 35 and 45 ms after the previous frame, can similarly be expressed using QTL. As above, if no video frame is presented in the required interval and no other error has occurred in this interval, then a v-late-error should occur.

R2. Video frames presented every 35 to 45 ms

$\phi_{R2}: \Box(\mathit{v\_present} \rightarrow (\bigvee_{t=35}^{45} \Diamond_{=t}(\mathit{v\_present} \vee \mathit{v\_late\_error}) \vee \Diamond_{\leq 45}(\mathit{synch\_error} \vee \mathit{s\_late\_error})))$


Finally, the third requirement concerning the synchronisation of sound and video must be considered. This states that the video presentation should not lag the sound presentation by more than 150 ms and should not precede it by more than 15 ms. This must be true in every state or else a synch-error must occur immediately in the following state. This can be specified as follows:

R3. Synchronisation of sound and video presentation

$\phi_{R3}: \Box(-15 \leq \mathit{v\_drift} \leq 150 \vee \bigcirc_{=0}\ \mathit{synch\_error})$

Note that, as in the stream example, there is a close relationship between the real-time requirements and real-time assumptions (e.g., between $\phi_{R3}$ and $\phi_{A4}$). This, again, reflects the level of abstraction of the specification. As behaviour is refined, it can be anticipated that real-time assumptions will become more numerous and specific.

5.3. Analysis

The stream example provides an initial demonstration of the style of specification supported by the LOTOS/QTL approach. Furthermore, this example illustrates that QTL is sufficiently expressive to capture quality of service properties such as throughput, latency and jitter (although problems are identified with the specification of probabilistic properties such as error rates).

The second example demonstrates that a solution to the lip synchronisation problem can be expressed elegantly in the LOTOS/QTL approach. The use of (untimed) LOTOS encourages the abstract description of the algorithm without reference to real-time concerns. QTL also provides an elegant means of expressing requirements over this (abstract) behaviour. It could be argued that the description of real-time assumptions is relatively complex (e.g., compared to the equivalent behaviour in a timed specification language). However, considerable benefits are gained from this separation of concerns. For example, the strategy for real-time synchronisation is readily identifiable in the collected real-time assumptions. Furthermore, this strategy can be readily changed without altering the general behaviour. This aspect of separation of concerns is illustrated in [25] where, firstly, jitter is permitted on sound presentations and, secondly, a more sophisticated strategy for maintaining lip synchronisation is introduced. Both changes were isolated to the real-time assumptions.

6. Verification with LOTOS and QTL

As discussed in Section 3.3, it is necessary to support two styles of verification which we have referred to as vertical and horizontal verification. In vertical verification, it is necessary to check that a refinement satisfies the previous specification. Since we use standard LOTOS, it is possible to use any of the standard techniques already developed for the language, for example, correctness preserving transformations [26] or equivalence checking [1]. Similarly, standard theorem proving techniques can be employed to verify QTL refinements (e.g., checking that there are no contradictions between QTL statements).

In contrast, horizontal verification requires the development of new techniques. We are currently developing a set of techniques for this aspect of verification. An overview of the (horizontal) verification process is shown in Fig. 5.

As can be seen from Fig. 5, the LOTOS specification is initially translated into a finite state machine (FSM). This process can be achieved through the use of a tool such as CAESAR/ALDEBARAN (see [27] and [28]) which generates a finite state machine provided certain conditions are adhered to. For example, recursion is disallowed on the left-hand side of a disable operator and on the left or right-hand side of a parallel operator.

Page 19: Formal specification and verification of multimedia systems in open distributed processing

L. Blair et al. /Computer Standards & Interfaces 17 (I 995) 4I3-436 431

LOTOS spedfkation LOTOS spedfkation

01 1 .

Finite State Machine Finite State Machine

Modified EFSM Modified EFSM

Satisfiability Check Satisfiability Check Model Checking Model Checking

Fig. 5. An overview of the stages in the verification process.

natural numbers), so as to avoid a state machine with an infinite number of states. In stage 2, the finite state machine and the data variable declarations are integrated and an Extended Finite State Machine (EFSM) derived from this information.
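
The paper does not prescribe a concrete representation for the EFSM. Purely as an illustration of what such a structure might look like (the names, the toy transitions and the Python rendering below are our own assumptions, not part of the LOTOS/QTL toolset), stage 2 can be pictured as producing something like:

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

State = str
Env = Dict[str, int]  # valuation of the data variables

@dataclass
class Transition:
    source: State
    event: str                                     # LOTOS gate name, e.g. "sound" or "video"
    target: State
    guard: Optional[Callable[[Env], bool]] = None  # predicate over the data variables
    update: Optional[Callable[[Env], Env]] = None  # assignment updating the data variables

@dataclass
class EFSM:
    initial: State
    variables: Env                                 # initial data variable declarations
    transitions: List[Transition] = field(default_factory=list)

    def enabled(self, state: State, env: Env) -> List[Transition]:
        """Transitions that may fire in `state` under the valuation `env`."""
        return [t for t in self.transitions
                if t.source == state and (t.guard is None or t.guard(env))]

# A toy fragment loosely inspired by the lip-sync example (illustrative only):
efsm = EFSM(
    initial="idle",
    variables={"v_drift": 0},
    transitions=[
        Transition("idle", "sound", "awaiting_video"),
        Transition("awaiting_video", "video", "idle",
                   guard=lambda env: env["v_drift"] <= 150,
                   update=lambda env: {**env, "v_drift": 0}),
        Transition("awaiting_video", "synch_error", "error",
                   guard=lambda env: env["v_drift"] > 150),
    ],
)
print([t.event for t in efsm.enabled("awaiting_video", {"v_drift": 200})])  # ['synch_error']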

It is then important to check the real-time assumptions against this EFSM (stage 3). This stage is necessary since it is possible that the real-time assumptions may affect the set of valid paths through the extended state machine. For example, consider the specification of a protocol which contains a possible path to deadlock. Consider also a set of real-time assumptions for this protocol which actually prevent this path from occurring (perhaps because time constraints imposed on certain events prevent its occurrence). If only the LOTOS specification is considered, asking if deadlock exists in the specification clearly results in the answer yes, deadlock does occur. However, if the timing information contained in the real-time assumptions prevents deadlock and this information is taken into account, then the answer is no, deadlock does not occur. This latter result is expected with the LOTOS/QTL approach. It is, therefore, necessary to remove paths which exist in the EFSM but which are prevented by the real-time assumptions. To achieve this, it is first necessary to check if each real-time assumption is valid with respect to the EFSM; if the assumption is not valid, the EFSM should be modified by removing the invalid behaviour.
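
The paper leaves this pruning step abstract. The sketch below is a much-simplified invention of ours, not the authors' algorithm: timing information that would in practice be extracted from the real-time assumptions is attached directly to transitions as a minimum delay, and a single assumed deadline is used to discard the transition leading to deadlock, after which the deadlock state becomes unreachable.

from typing import List, Set, Tuple

# (source, event, target, earliest_ms): earliest_ms stands in for timing information
# that would really come from the QTL real-time assumptions (our own simplification).
Transition = Tuple[str, str, str, int]

transitions: List[Transition] = [
    ("s0", "request", "s1", 0),
    ("s1", "reply",   "s2", 5),     # normal path
    ("s1", "timeout", "dead", 30),  # path leading to deadlock
]

# Hypothetical assumption: a reply always arrives within 20 ms, so a competing
# transition that cannot fire before 20 ms can never win the race.
REPLY_DEADLINE_MS = 20

def prune(trans: List[Transition]) -> List[Transition]:
    """Drop transitions that the timing assumption prevents from ever firing."""
    return [t for t in trans if not (t[1] == "timeout" and t[3] >= REPLY_DEADLINE_MS)]

def reachable(trans: List[Transition], start: str = "s0") -> Set[str]:
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (src, _evt, tgt, _delay) in trans:
            if src == s and tgt not in seen:
                seen.add(tgt)
                frontier.append(tgt)
    return seen

print("dead" in reachable(transitions))         # True: the untimed EFSM can deadlock
print("dead" in reachable(prune(transitions)))  # False: the timed reading cannot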

In stage 4 of the verification process, a single requirement is considered and checked against the set of real-time assumptions. For example, consider a system with two real-time assumptions which ensure that event b always follows event a in exactly 2 ms (milliseconds) and that event c always follows event b in 5 ms. A requirement stating that event c should follow event a within 10 ms will always be satisfiable with respect to the above real-time assumptions. However, any requirement that c follows a within a time less than 7 ms will not be satisfiable. Therefore, an algorithm must be developed which takes the conjunction of one requirement and the set of real-time assumptions and determines the satisfiability of the resulting logic formula. There have been many techniques developed in the literature to perform such a task, often based on what is known as the tableau method. An excellent overview of the tableau method for propositional temporal logic can be found in [29] and [30] describes a tableau method for MTL.
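
One possible QTL rendering of the example above (the particular formulas are ours, and the reading assumes that the only occurrences of b and c are those induced by the assumptions):

A1: □(a → ◇=2 b)     (b follows a in exactly 2 ms)
A2: □(b → ◇=5 c)     (c follows b in exactly 5 ms)
R:  □(a → ◇≤10 c)    (c follows a within 10 ms)

Under A1 and A2, an occurrence of a at time t yields b at t + 2 and c at t + 2 + 5 = t + 7; since 7 ≤ 10, A1 ∧ A2 ∧ R is satisfiable, whereas tightening the bound in R below 7 ms leaves no timed state sequence (in which a occurs) satisfying the conjunction.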

The final stage (5) in the verification process concerns checking the (modified) EFSM against the requirements. Strong similarities exist between this stage and the stage labelled 3. For this stage, it is simply necessary to check if each requirement is valid with respect to the set of models. This is known as model-checking and many algorithms have been developed which address this problem (for example, [30] describes a model checking algorithm for MTL).

This section has only provided a brief overview of the verification process for the LOTOS/QTL approach. A full description of the process can be found in [25]; in particular, algorithms are described which check the satisfiability of a QTL formula (based on the tableau method) and which perform model checking of QTL formulae.

7. Related work

There has been little work specifically addressing the impact of multimedia on the formal description of distributed systems. Researchers at CNET, however, have developed an approach based on the synchronous language Esterel and have developed a separate logic (QL) to enable the specification of quality of service requirements [18].

A number of researchers have considered the more general impact of real-time on the field of distributed systems. For example, in [26], a method is proposed for the formal design of distributed systems which permits the specification of real-time properties. As in our approach, LOTOS is used to specify the temporal ordering of events in a system. Moreover, the work is particularly relevant since it considers a separation of concerns between abstract behaviour and real-time behaviour. A new language (based on a timestamped LOTOS-like notation) is introduced to specify the real-time behaviour. However, no formal syntax or semantics are provided for this new language.

Other methods have also been developed in which real-time is incorporated into the specification technique and a real-time temporal logic is used to specify requirements (e.g., the logic RTTL is used to specify requirements over timed transition models (TTMs) in [15], whilst the logic TPCTL (timed probabilistic computational tree logic) is used to specify requirements over a timed probabilistic version of CCS [19]).

8. Concluding remarks

The emergence of distributed multimedia computing adds new challenges to the field of formal specification. It is our belief that most existing techniques do not adequately address these new requirements, particularly in terms of representing real-time behaviour. A new framework is therefore introduced based on a separation between the specification of behaviour and requirements and also between the abstract specification and real-time concerns. The major benefits of this approach are: (i) the most appropriate language can be used for each part of the specification, (ii) the level of abstraction in the behavioural specification is not compromised, (iii) real-time assumptions are separated out, allowing alterations to be made and making the behavioural specification portable to other environments, (iv) real-time requirements are clearly identified, and (v) minimal changes are required to existing languages and hence existing tools are not invalidated.

The paper also discussed the use of LOTOS as a language to specify abstract behaviour and a real-time temporal logic (QTL) for the specification of real-time assumptions and requirements. Two examples have been given to illustrate the use of the technique (further examples and case studies can be found in [25]). We believe that specifications written using LOTOS/QTL elegantly capture the required real-time behaviour of distributed multimedia systems. Furthermore, we believe that this approach is sufficiently general for the specification of other real-time applications. The paper has also presented an overview of the verification process developed for the new approach. Future work will concentrate on extending QTL with probabilities and on developing tools to support the verification process. In addition, further work on larger case studies will enable a fuller evaluation of the new approach to be carried out.

Appendix: the QTL language

1. Syntax of QTL

QTL has the following syntax (where φ is an arbitrary QTL formula and η(a) denotes the gate name of the LOTOS event a):

φ ::= false | φ_1 → φ_2 | ○~c φ | φ_1 U~c φ_2 | ⊖~c φ | φ_1 S~c φ_2 | a | π_1 ~ π_2

~ ::= < | = | >

π ::= x | c | x + c     (* addition by constant only *)

where the symbols ○, U, ⊖ and S denote the temporal operators next, until, previous and since respectively (a full description of these (and other) operators can be found in [31]). Furthermore, c ∈ ℕ₀ (the set of natural numbers, including zero), x ∈ Var (the set of data variables), a ∈ Act (the set of LOTOS events) and event a has either the simple form g or the form g!v_1!v_2 ... or g?v_1:nat?v_2:nat ... (or any combination of ! and ?) where each v_i ∈ ℕ₀.

Other propositional, existential and temporal operators can be derived in the usual way. A number of abbreviations can also be adopted, for example, ○≤c φ = ○<c φ ∨ ○=c φ and ○φ = ○≥0 φ. The notation ∀35≤t≤45 ○=t φ is shorthand for ○=35 φ ∨ ○=36 φ ∨ ... ∨ ○=45 φ.
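
The "usual" derivations are not spelled out here; for reference, the standard definitions (our reconstruction, consistent with the primitive operators above) are:

¬φ ≡ φ → false,   true ≡ ¬false,   φ_1 ∨ φ_2 ≡ ¬φ_1 → φ_2,   φ_1 ∧ φ_2 ≡ ¬(¬φ_1 ∨ ¬φ_2),
◇~c φ ≡ true U~c φ   (sometime, within the bound),   □φ ≡ ¬◇¬φ   (always).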

2. Semantics of QTL

Timed state sequences. QTL formulae are interpreted over timed state sequences such that ρ (a timed state sequence) is a pair (σ, τ) where σ is a state sequence and τ is a time sequence (associating a discrete time with each state). State sequences and time sequences are infinite sequences of states and times as defined below (let the function η(a) denote the gate name of the LOTOS event a):

σ = σ_0 σ_1 σ_2 ..., where σ_i ⊆ Prop (the set of QTL propositions) for i ≥ 0 and,

∀i ≥ 0, there exists exactly one x ∈ σ_i s.t. η(x) ∈ Act

τ = τ_0 τ_1 τ_2 ..., where τ_i ∈ ℕ₀, i ≥ 0.

Note that each state (in a state sequence) must include exactly one element which is a LOTOS event (in addition to any other propositions of the form π_1 ~ π_2). The notation σ_i[π] will be used to denote the value of π in state σ_i of the state sequence σ.

Every time sequence must be monotonic, that is, time cannot decrease throughout a sequence. However, time need not progress from one state to another (since LOTOS events are instantaneous, any number of events may occur at the same time). Additionally, time gaps are permitted in time sequences, that is, there can exist a t ∈ ℕ₀ such that ¬∃i ≥ 0. τ_i = t. An important consequence of this is that the first state (event) does not have to occur at time 0 (since there can be no guarantee on the time at which LOTOS events offered to the environment will occur).

Satisfiability. For a timed state sequence ρ = (σ, τ), let (ρ, i) denote (σ_i, τ_i). ρ is said to be a model of some formula φ iff (ρ, 0) satisfies φ (or (ρ, 0) ⊨ φ). The satisfaction relation ⊨ can be defined for QTL as follows (where ρ = (σ, τ)):

(ρ, i) ⊭ false

(ρ, i) ⊨ φ_1 → φ_2   iff   (ρ, i) ⊨ φ_1 implies (ρ, i) ⊨ φ_2

(ρ, i) ⊨ ○~c φ   iff   (ρ, i+1) ⊨ φ and τ_{i+1} − τ_i ~ c

(ρ, i) ⊨ φ_1 U~c φ_2   iff   ∃j ≥ i. (ρ, j) ⊨ φ_2 and ∀k, i ≤ k < j. (ρ, k) ⊨ φ_1, and τ_j − τ_i ~ c

(ρ, i) ⊨ ⊖~c φ   iff   i > 0 and (ρ, i−1) ⊨ φ and τ_i − τ_{i−1} ~ c

(ρ, i) ⊨ φ_1 S~c φ_2   iff   ∃j, 0 ≤ j ≤ i. (ρ, j) ⊨ φ_2 and ∀k, j < k ≤ i. (ρ, k) ⊨ φ_1, and τ_i − τ_j ~ c

(ρ, i) ⊨ a   iff   η(a) ∈ Act and (a ∈ σ_i or (a ∉ σ_i and ∃x ∈ σ_i s.t. a ≡ x and η(x) ∈ Act))

(ρ, i) ⊨ π_1 ~ π_2   iff   σ_i[π_1] ~ σ_i[π_2]

Note that the penultimate rule governs the satisfiability of a proposition which is a LOTOS event. This rule ensures that, for a LOTOS event a, either a ∈ σ_i or (if a ∉ σ_i) there exists a LOTOS event in σ_i which is "equivalent to" a. Such equivalent events are determined by the standard LOTOS synchronisation rules, e.g., g!5 is equivalent to g!3 + 2 (under the usual rules for addition), whilst g!0 is not equivalent to the events g!1 or h!0.
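
As a small worked example of these rules (the sequence is invented for illustration), consider the timed state sequence ρ = (σ, τ) in which sound and video presentations alternate every 2 ms:

σ = {sound} {video} {sound} {video} ...,   τ = 10, 12, 14, 16, ...

Then (ρ, 0) ⊨ ○=2 video, since (ρ, 1) ⊨ video and τ_1 − τ_0 = 12 − 10 = 2; similarly (ρ, 0) ⊨ sound U≤2 video (video holds at j = 1 with τ_1 − τ_0 ≤ 2 and sound holds at k = 0), and (ρ, 1) ⊨ ⊖=2 sound, since 1 > 0, (ρ, 0) ⊨ sound and τ_1 − τ_0 = 2.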

References

[1] T. Bolognesi and E. Brinksma, Introduction to the ISO specification language LOTOS, Computer Networks and ISDN Systems, Vol. 14, No. 1 (Elsevier Science B.V., 1988) pp. 25-59.
[2] A. Diller, Z: An introduction to formal methods (Wiley, NY, 1990).
[3] R.G. Clark, LOTOS design-oriented specifications in the object-based style, Tech. Report TR84. Available from: University of Stirling, Stirling FK9 4LA (April 1992).
[4] E. Cusack and M. Lai, Object Oriented Specification in LOTOS and Z or, My Cat Is Object Oriented!, Workshop on the Foundations of Object Oriented Languages, Noordwijkerhout (May 1990).
[5] J.S. Ostroff, Formal methods for the specification and design of real-time safety critical systems, Systems and Software, Vol. 18, No. 1 (1992) pp. 33-60.
[6] J.A. Stankovic and K. Ramamritham (Eds.), Tutorial: Hard Real-Time Systems (IEEE Computer Society Press, 1988).
[7] D.P. Anderson, S.Y. Tzou, R. Wahbe, R. Govindan and M. Andrews, Support for continuous media in the DASH system, Proc. 10th Int. Conf. on Distributed Computing Systems, Paris (May 1990).
[8] D.B. Hehmann, M.G. Salmony and H.J. Stüttgen, Transport services for multimedia applications on broadband networks, Computer Communications, Vol. 13, No. 4 (1990) pp. 197-203.
[9] J-B. Stefani, L. Hazard and F. Horn, Computational model for distributed multimedia applications based on a synchronous programming language, Computer Communications (Special Issue on FDTs), Vol. 15, No. 2 (March 1992).
[10] G. Coulson, G.S. Blair, J-B. Stefani, F. Horn and L. Hazard, Supporting the real-time requirements of continuous media in open distributed processing, Computer Networks and ISDN Systems (Special Issue on ODP) (Elsevier Science B.V., 1994) to appear.
[11] T. Regan, Multimedia in temporal LOTOS: A lip synchronisation algorithm, Proc. 13th Int. Symposium on Protocol Specification, Testing and Verification (PSTV XIII) (Elsevier Science B.V., North-Holland, 1993).

[12] L. Leonard and G. Leduc, An enhanced version of timed LOTOS and its application to a case study, Proc. 6th Int. Conf. on Formal Description Techniques (FORTE '93), R.L. Tenney, P.D. Amer and M. Umit Uyar (Eds.) (Elsevier Science B.V., North-Holland, IFIP, 1994) pp. 483-498.
[13] B. Walter, Timed petri-nets for modelling and analyzing protocols with real-time characteristics, 3rd IFIP Workshop on Protocol Specification, Testing and Verification, H. Rudin and C.H. West (Eds.) (North-Holland, 1983) pp. 149-159.
[14] D. Harel, H. Lachover, A. Naamad, A. Pnueli, M. Politi, R. Sherman and M. Trachtenbrot, Statemate: A working environment for the development of complex reactive systems, IEEE Transactions on Software Engineering, Vol. 16, No. 4 (1990) pp. 403-414.
[15] J.S. Ostroff, Verification of safety critical systems using TTM/RTTL, REX Workshop: Real-Time: Theory in Practice, J.W. de Bakker, C. Huizing, W.P. de Roever and G. Rozenberg (Eds.) (Springer-Verlag, LNCS 600, June 1991) pp. 573-602.
[16] G. Blair, L. Blair, H. Bowman and A. Chetwynd, Formal support for the specification and construction of distributed multimedia systems (the Tempo Project), Final Project Deliverable, Internal Report MPG-93-23. Available from: Dept of Computing, Lancaster University, Bailrigg, Lancaster, LA1 4YR, UK (December 1993).
[17] H. Bowman, G.S. Blair, L. Blair and A.G. Chetwynd, Time versus abstraction in formal description, Proc. 6th Int. Conf. on Formal Description Techniques (FORTE '93), R.L. Tenney, P.D. Amer and M. Umit Uyar (Eds.) (Elsevier Science B.V., North-Holland, IFIP, 1994) pp. 467-482.
[18] J-B. Stefani, Some computational aspects of QoS in an object based distributed architecture, Proc. 3rd Int. Workshop on Responsive Computer Systems, Lincoln, NH, USA (September 1993).
[19] H. Hansson, Time and probability in formal design of distributed systems, PhD Thesis. Available from: Dept of Computer Systems, Uppsala University, PO Box 502, S-751 20 Uppsala, Sweden (September 1991).
[20] ISO, LOTOS: A formal description technique based on the temporal ordering of observational behaviour, ISO DP 8807, 1988.
[21] R. Milner, Communication and Concurrency (Prentice-Hall, 1989).
[22] F. Jahanian and A.K. Mok, Safety analysis of timing properties in real-time systems, IEEE Transactions on Software Engineering (September 1986) pp. 890-904.
[23] A. Pnueli, The temporal logic of programs, Proc. 18th Annual Symposium on Foundations of Computer Science (1977) pp. 46-57.
[24] R. Koymans, Specifying real-time properties with metric temporal logic, Real-Time Systems, Vol. 2 (1990) pp. 255-299.
[25] L. Blair, Formal specification and verification of distributed multimedia systems, PhD Thesis. Available from: Dept of Computing, Lancaster University, Bailrigg, Lancaster, LA1 4YR, UK (1994).
[26] J. Schot, The role of architectural semantics in the formal approach of distributed systems design, PhD Thesis, University of Twente, ISBN 90-9004877-4 (1992).
[27] H. Garavel, CESAR Reference Manual, Internal Report. Available from: Laboratoire de Genie Informatique, Institut IMAG, Grenoble, France (1990).
[28] J.C. Fernandez and L. Mounier, ALDEBARAN: User's Manual, Internal Report. Available from: Laboratoire de Genie Informatique, Institut IMAG, Grenoble, France (1990).
[29] P. Wolper, The tableau method for temporal logic: An overview, Logique et Analyse, Vol. 28 (1985) pp. 119-136.
[30] R. Alur and T.A. Henzinger, A really temporal logic, Proc. 30th Annual Symposium on Foundations of Computer Science (IEEE Computer Society Press, 1989) pp. 164-169.
[31] Z. Manna and A. Pnueli, The Temporal Logic of Reactive and Concurrent Systems (Springer-Verlag, New York, 1992).

Lynne Blair graduated from Lancaster University in 1990 with a First Class Honours Degree in Computer Science and Mathematics. Following this, she remained at Lancaster, working on a PhD in the area of the formal specification and verification of distributed multimedia systems. Her thesis was submitted in September 1994. Her research interests include formal description techniques, LOTOS and real-time temporal logic.

Gordon Blair is currently a senior lecturer in the Computing Department at Lancaster University and has been actively involved in research in distributed systems for the last eleven years. He completed his PhD on “Distributed Operating Systems Structures for Local Area Network Based Systems” at Strathclyde University in 1983. Since then, he was an SERC Research Fellow at Lancaster before taking up a lectureship post in 1986. He has been responsible for a number of research projects at Lancaster in the areas of distributed systems and multimedia support and has published over a hundred papers in his field. His current research interests include distributed multimedia computing, operating system support for continuous media, the impact of mobility on distributed systems and the use of formal methods in distributed system development.


Amanda Chetwynd studied mathematics at Nottingham University, UK. She completed her PhD in graph theory at the UK's Open University. In 1984 Dr Chetwynd joined the Mathematics Department at Lancaster University as a lecturer and is now a senior lecturer in the department. Her main research interest is in the colouring of graphs. She has recently become interested in the area of formal methods, in particular the formal specification of timing properties associated with distributed systems.