

Computer Communications 19 (1996) 264-275

Behaviour of TCP in the European ATM Pilot

Olivier Bonaventure (a), Espen Klovning (b), André Danthine (a)

(a) Institut d'Electricité Montefiore, B-28, Université de Liège, B-4000 Liège, Belgium. (b) Telenor Research and Development, P.O. Box 83, N-2007 Kjeller, Norway.

Abstract

We present and discuss detailed measurements of TCP in the European ATM Pilot. In this wide area ATM network, the traffic contract is enforced by a Usage Parameter Control (UPC) mechanism. We show that when the ATM level traffic was compliant, TCP achieved a good utilization of the link. However, when the ATM level traffic was only almost compliant, the TCP throughput almost collapsed.

Keywords: Asynchronous Transfer Mode; TCP; Usage Parameter Control; Measurements

1. Introduction

Asynchronous Transfer Mode (ATM), selected in 1988 by the ITU as the basic technology for the B-ISDN [1], immediately raised a lot of interest in the data communications world, and soon appeared as a promising technology for the LAN environment. The extraordinary success of the ATM Forum, created at the end of 1991, is a clear indication of this trend.

The question of the integration of TCP/IP and the ATM technology has already attracted a lot of interest, and experiments have been done with various goals and equipment. Some experiments [2] have concentrated their attention on the end-to-end performance of two workstations equipped with ATM adapters and connected through a first generation ATM switch. Other researchers have analysed, using simulations, situations where several TCP connections are multiplexed on a single link through an output buffered switch [3]. Not surprisingly, the lack of access and admission control creates situations of congestion that are difficult for TCP to handle.

Several wide area ATM networks are currently being deployed in the world [4,5]. To support real-time traffic as well as multimedia-based communications and legacy data communications, some of these networks implement the admission control and policing mechanisms specified by the ITU [6] and the ATM Forum [7]. The European ATM Pilot [4] is an example of such a network.

In this paper, we will present and discuss detailed measurements related to the performance of TCP in the European ATM Pilot. We mainly study how TCP is able to adapt to an ATM-level traffic contract enforced by a UPC mechanism inside the network.

2. Measurement environment

Our measurement environment consisted of ATM LANs interconnected via a wide area ATM network (the European ATM Pilot).

2.1. The European ATM Pilot

The European ATM Pilot [4] is one of the first public wide area ATM networks deployed in the world. This network has been built as a collaboration among 16 Public Network Operators from 15 countries, and covers most of western Europe. Each PNO participating in the Pilot network has installed at least one ATM switch and established links with its adjacent countries. Wide area ATM switches from different vendors are used inside the network. These ATM switches are entirely managed by the PNOs, and as simple users of the network we had no control over them. Most of the links on the European ATM Pilot are 34Mbit/s E-3 ATM links [8], but some links have already been upgraded to 155 Mbit/s SDH links [9].

The measurements presented in this paper were done between the testbeds of the University of Liège (Belgium), Electricité de France (Paris, France) and Telenor Research and Development (Kjeller, Norway).


These testbeds were connected to their national ATM network with links provided, respectively, by Belgacom, France Telecom and Telenor. We also used transit links provided by Deutsche Telekom (Germany) and Telia (Sweden).

The European ATM Pilot provides an ATM Virtual Path (VP) bearer service. The VPs are not permanent. They are activated by the PNOs upon prior request from the users, usually for short periods of time (several hours). When a user requests the activation of a VP, it has to specify its bandwidth requirements precisely. A VP may only be activated if the network has enough resources in the switches and the transit links to support it. If the network does not have enough resources, the activation of the VP is refused.

The traffic sent by a user is always monitored by a UPC mechanism inside the network. This UPC will discard the cells that do not comply with the negotiated traffic contract. Tagging the non-compliant cells is not currently an option in the European ATM Pilot. This is in line with the current UNI specification [7], where every cell flow is subject to a policing of the peak cell rate of the aggregated traffic of the high and low priority cell flows. This UPC is probably one of the main characteristics of the European ATM Pilot compared to other wide area ATM pilot networks.

2.2. The ATM LANs

The three ATM LANs we used were very similar. They were all built around an ASX-200 ATM switch from Fore Systems Inc. The ASX-200 is a 2.5 Gbit/s bus-based, non-blocking, output buffered switch equipped with an internal SPARC 2 based controller. Each output port contains a buffer which can hold 256 cells. The UPC mechanism available in the ASX-200 was disabled during the measurements. The testbeds located in Liège and Paris were connected with an E-3 link to their national ATM Pilot, while the testbed located in Oslo used an OC-3 link.

The workstations used in the three testbeds were SPARC 10 clones from Axil (model 311/5.x) running SunOS 4.1.3, and equipped with an SBA-200 ATM adapter from Fore. The testbed located in Liège used SDH 155 Mbit/s SBA-200 ATM adapters, while the other testbeds used Taxi 100 Mbit/s and Taxi 140 Mbit/s ATM adapters. These variants of the SBA-200 offer similar performance. Measurements done in the local area have shown that the bottleneck on the SBA-200 is not the physical rate of the ATM link, but mainly the transfers between the workstation memory and the ATM adapter [2]. The maximum TCP throughputs measured by ttcp [10] with these SBA-200 ATM adapters on our workstations are similar. For all the measurements presented in this paper, the sender was always located in Liège, and thus used a 155 Mbit/s SDH ATM adapter; the other testbeds used Taxi 100 Mbit/s and Taxi 140 Mbit/s ATM adapters, respectively.

Fig. 1. Sending side of our environment for the first campaign of measurements: TCP/IP/AAL5 host with a 155 Mbit/s SDH link, ASX-200 switch (256 cell output buffer) and a 34 Mbit/s E-3 (PDH) link to the European ATM Pilot and its UPC.

2.3. Characteristics of a compliant traffic

In a wide area network such as the European ATM Pilot, where the traffic contract is enforced by a UPC mechanism, it is essential for the user to generate a compliant traffic. If the traffic is not completely compliant, the UPC will discard the non-compliant cells, and this may lead to a high cell loss rate.

In our environment (Fig. 1), cells can be discarded in two places. The local ATM switch will lose cells if there is an overflow in its output buffer (256 cells), and the UPC in the ATM Pilot will discard cells that do not comply with the traffic contract.

The maximum burst size sent at the 155 Mbit/s line rate which does not cause an overflow in the output buffer on the 34 Mbit/s port of the ASX-200 is given by Eq. (1), assuming that the output buffer is empty before the transmission of the burst. That is, if Ti is the cell time on the input link of the switch, To the cell time on the output link of the switch, and L the size of the output buffer, measured in cells, the maximum burst size (in cells), neglecting the physical layer framing, is:

maximum_burst = L / (1 - (Ti / To))    (1)

In our sending environment (Ti = 2.726 µs, To = 12.50 µs, L = 256 cells), the maximum burst size is 327 cells. Such a burst has to be followed by an idle time of 256 × 12.50 = 3200 µs (i.e. 1174 cell times at the 155 Mbit/s line rate) to empty the output buffer of the switch.

However, a burst absorbed by the ASX-200 must still be compliant with the traffic contract enforced by the UPC mechanism. If Ti is the cell time at the input line of the UPC, and T_UPC and τ_UPC are, respectively, the cell period and the cell delay variation enforced by the UPC, the maximum burst size declared as compliant by the UPC, assuming that the UPC is in the idle state upon the arrival of the first cell, is:

maximum_burst = 1 + (τ_UPC / (T_UPC - Ti))    (2)


For example, let us consider that the traffic contract enforced by the UPC mechanism is 61,000 cells/s (one cell every 16.39 µs), with an allowed Cell Delay Variation (CDV) of 101 µs ([T = 16.39 µs, τ = 101 µs] with the notations of the Virtual Scheduling Algorithm [6,7]). In our environment (Ti = 12.50 µs, T_UPC = 16.39 µs, τ_UPC = 101 µs), the maximum burst size declared as compliant by the UPC is only 26 cells. This burst has to be followed by an idle time of at least 104 µs to let the UPC return to the idle state. Thus, the most bursty traffic allowed by the [T = 16.39 µs, τ = 101 µs] UPC is bursts of 26 cells separated by an idle time of 104 µs (i.e. 38 cell times at the 155 Mbit/s line rate).

As shown by Eq. (2) the maximum burst size declared as compliant by the UPC depends upon both the cell rate of the traffic contract and the cell rate at the input of the UPC.
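As a quick check, the sketch below (Python; the variable names are ours) evaluates Eqs. (1) and (2) with the values of our sending environment and reproduces the 327 cell and 26 cell limits quoted above.

```python
import math

# Eq. (1): largest burst the 256 cell output buffer of the ASX-200 can absorb
# when cells arrive at the 155 Mbit/s rate and leave on the 34 Mbit/s port.
T_in = 2.726    # cell time on the input link of the switch [us]
T_out = 12.50   # cell time on the 34 Mbit/s output link [us]
L = 256         # output buffer size [cells]

max_burst_switch = L / (1 - T_in / T_out)
drain_time = L * T_out                        # time to empty a full buffer [us]
print(math.floor(max_burst_switch))           # 327 cells
print(drain_time, round(drain_time / T_in))   # 3200 us, ~1174 cell times

# Eq. (2): largest burst declared compliant by the UPC, starting from the idle state.
T_upc = 16.39    # cell period enforced by the UPC [us]
tau_upc = 101.0  # cell delay variation tolerance [us]
T_i = 12.50      # cell time at the input of the UPC (E-3 link) [us]

max_burst_upc = 1 + tau_upc / (T_upc - T_i)
print(math.floor(max_burst_upc))              # 26 cells
```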

2.4. The ATM adapters

For our measurements, we used SBA-200 ATM adapters from Fore Systems Inc. The spacing was done with release 2.2.9 of the ATM driver and a modified download firmware for the i960 processor on the SBA-200 supplied by Fore. Our experience is that this spacing works reasonably well for a wide range of bit rates, although the granularity is high. The main drawback is that this spacing works on a per port basis, and thus it is impossible to use it to shape several SVCs at different rates.

To have a better understanding of the behaviour of the spacing implemented on the SBA-200 adapters, we performed the following experiment. One workstation, equipped with a Taxi 140 Mbit/s SBA-200, sent a train of 8 KByte (i.e. 172 cells) long UDP packets through an ASX-200 ATM switch to an Alcatel ATGA ATM tester equipped with a 155 Mbit/s SDH interface. The ATGA measured the cell interarrival time (IAT), i.e. the difference, measured in cell times, between two successive assigned cells, for the ATM cells of the UDP packets. The requested throughput at the ATM level was one cell every 17.35 µs (i.e. one cell every 6.4 cell slots on the 155 Mbit/s SDH interface of the ATGA).

Fig. 2 shows that release 2.2.9 produces an almost regular traffic. The only significant variation in the IAT is an increase of 7-8 cell times every 21 or 22 cells [11]. This figure also shows that the IAT between two cells of consecutive packets jumps to 25 cell times, and thus the interpacket time can be estimated as approximately 20 cell slots or 52 µs.

Fig. 2. Spacing with release 2.2.9 of the Fore driver (cell interarrival time, in cell slots, versus cell number).

According to the documentation, this modified firmware is able to insert an idle time of n × 0.492 µs between two consecutive assigned cells transmitted on the output link. However, detailed measurements of the spacing [11] have shown that, on average, this idle time was much closer to n × 0.473 µs. We will use this measured value throughout this paper. The value n, and thus the average rate of the output link, can be changed during run-time.
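For reference, the inter-cell times that this firmware can produce follow from the 0.473 µs granularity. The sketch below is our own reconstruction, assuming that the inserted idle time is simply added to the 2.726 µs cell slot of the 155 Mbit/s link; the values it prints are consistent with the spacings used in the rest of the paper.

```python
CELL_SLOT = 2.726   # cell time on the 155 Mbit/s link [us]
STEP = 0.473        # measured granularity of the inserted idle time [us]

def spacing(n: int) -> float:
    """Inter-cell time obtained when the firmware inserts n idle steps."""
    return CELL_SLOT + n * STEP

# n = 18 .. 29 covers the spacings used in our measurements
for n in range(29, 17, -1):
    print(n, round(spacing(n), 2))
# 29 -> 16.44, 28 -> 15.97, 27 -> 15.50, 26 -> 15.02, ..., 18 -> 11.24
```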

3. First campaign of measurements

For this first campaign of measurements, we used the testbeds located in Liège and Paris. The distance between these two testbeds is approximately 400 km, and the round-trip time measured by ping is 8 ms.

For these measurements, we requested a 58,762 cells per second (i.e. one cell every 17.02 µs) VP from the European ATM Pilot. This VP was monitored by a UPC inside the network, and we were told later that this UPC was set to enforce a traffic contract of 61,000 cells per second (i.e. one cell every 16.39 µs) with an allowed CDV of 101 µs. Each TCP measurement consisted of the memory-to-memory transfer of 8 MByte between the sender and the receiver with ttcp [10]. The workstations, the local ATM switches, and our VP in the ATM Pilot were exclusively used for the measurements. Thus, there were no contentions due to the sharing of the VP by several TCP connections.

The results showed that the TCP throughput is very sensitive to segment losses. In the following sections, we will present results from individual measurements which are representative of the problems we encountered dur- ing our measurements.

3.1. Measurements without spacing

A common belief within the data communication community, based on TCP's excellent track record, is that TCP can adapt to any kind of bandwidth. However, during congestion and in case of traffic contract violations in ATM networks, the network will discard ATM cells, but not complete TCP segments. This may result in a flow of ATM cells but no flow of correct TCP segments, and thus no real data transfer.


We tried to perform some measurements without enabling the spacing on our ATM adapters, and it was impossible to transfer any data with TCP. The TCP connection was easily established (the SYN segments and the corresponding ACKs are contained in a single ATM cell). However, it was impossible to perform any data transfer. This problem was caused by the default Maximum Transmission Unit (MTU) size of 9188 bytes selected for IP over ATM [12]. When spacing is not enabled, the ATM adapters send the AAL-PDUs containing TCP segments as bursts of 192 cells, at the line rate. While a single burst of 192 cells can be absorbed by the output buffer of the local ATM switch, the resulting burst on the 34 Mbit/s link is not compliant, and at least one cell from each TCP segment containing data is discarded by the UPC of the ATM Pilot. Thus, with the default MTU size for IP over ATM, ATM level spacing was really necessary, even with TCP, in our environment.
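The burst lengths quoted here and in the remainder of the paper follow directly from the AAL5 segmentation of an MTU-sized IP packet. The sketch below is our own reconstruction, assuming an 8 byte LLC/SNAP header (as in classical IP over ATM) in addition to the 8 byte AAL5 trailer.

```python
import math

LLC_SNAP = 8       # LLC/SNAP header [bytes] (assumed encapsulation)
AAL5_TRAILER = 8   # AAL5 trailer [bytes]
CELL_PAYLOAD = 48  # ATM cell payload [bytes]

def cells_per_packet(ip_packet_size: int) -> int:
    """Number of ATM cells needed to carry one IP packet in one AAL5 PDU."""
    aal5_pdu = ip_packet_size + LLC_SNAP + AAL5_TRAILER   # before padding
    return math.ceil(aal5_pdu / CELL_PAYLOAD)

for mtu in (9188, 4832, 1500, 552):
    print(mtu, cells_per_packet(mtu))
# 9188 -> 192, 4832 -> 101, 1500 -> 32, 552 -> 12 cells
```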

3.2. Measurements with compliant traffic

With our SBA-200, the closest possible value for the spacing, while still remaining larger than the peak cell rate interval of the traffic contract, was one cell every 16.44 µs. As shown in Fig. 3, the highest throughput is obtained with the largest window size. Measurements with the spacing set to a value higher than one cell every 16.44 µs gave similar results. The throughput variations shown in Fig. 3 are caused by limited segment loss (the segment loss rate was lower than 0.2%) and the corresponding retransmissions. The reason for these segment losses is not known. As each TCP segment contains 192 cells, and assuming that a single cell is lost in each lost segment, this corresponds to a cell loss rate of approximately 7 × 10^-6. We notice that the losses mainly occur when the window size is large.

Fig. 3. TCP throughput with a compliant ATM traffic (throughput versus TSDU size, for 48 KByte, 32 KByte and 16 KByte windows).

Fig. 4. TCP throughput with spacing set to one cell every 15.50 µs (throughput versus TSDU size, for 48 KByte, 32 KByte and 16 KByte windows).

Fig. 5. Segment loss rate with spacing set to one cell every 15.50 µs (loss rate versus TSDU size, for 48 KByte, 32 KByte and 16 KByte windows).

3.3. Measurements with almost compliant traffic

To see what happens when the ATM level traffic is not completely compliant, we performed the same throughput measurements with a cell rate of one cell every 15.97 µs, which is the closest non-compliant cell rate achievable with our SBA-200 ATM adapters. With this cell rate, the results were similar to the results presented in Fig. 3.

When we tried the next lower value for the spacing (one cell every 15.50 µs), TCP almost collapsed. This is shown in Fig. 4. The most affected traffic is the traffic sent with a 48 KByte window. Fig. 4 shows that with this window size, the throughput drops below 1 Mbit/s (compared to 19 Mbit/s when the spacing was set to one cell every 16.44 µs). This collapse of the TCP throughput is accompanied by a high segment loss rate, as shown in Fig. 5.



4. Discussion of the first campaign of measurements

Before explaining the results of our first campaign of measurements in detail, we first briefly review the main mechanisms used by TCP. We focus on the mechanism actually used by the SunOS implementation. A more detailed presentation of TCP may be found in Ref. [13].

4.1. Data flow

To achieve a high utilization of the links, TCP always tries to use the largest possible segments to transfer the user data [14]. If the source and the destination are in the same subnet, the Maximum Segment Size (MSS) used by TCP is given by the difference between the MTU size of the ATM subnet and the TCP + IP headers (40 bytes). In our environment, TCP computed an MSS of 9148 bytes. If the source and the destination are not in the same subnet, TCP uses the default value for the MSS, which in SunOS 4.1.3 is set to 512 bytes.

TCP uses a window-based flow control, and thus the data flow is constrained by the available window and the acknowledgements sent by the receiver. The TCP data flow is further constrained by the slow start and the congestion avoidance mechanisms [15]. The maximum window size used by standard TCP is 65,535 bytes. Extensions have been proposed to allow TCP to use a much larger window, but these extensions are not implemented in SunOS 4.1.3. SunOS 4.1.3 further constrains the maximum window size to 52,428 bytes.
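The MSS values and the resulting number of MSS-sized segments per window can be tabulated as follows (our own sketch; the 48 KByte and 16 KByte windows used later are assumed to mean 48 × 1024 and 16 × 1024 bytes).

```python
TCP_IP_HEADERS = 40  # TCP + IP headers without options [bytes]

def mss(mtu: int) -> int:
    return mtu - TCP_IP_HEADERS

for mtu in (9188, 4832, 1500, 552):
    m = mss(mtu)
    # number of full MSS-sized segments that fit in a 48 KByte and a 16 KByte window
    print(mtu, m, (48 * 1024) // m, (16 * 1024) // m)
# 9188 -> MSS 9148: 5 segments in 48 KByte, 1 in 16 KByte
# 552  -> MSS 512: 96 segments in 48 KByte, 32 in 16 KByte
```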

4.2. Acknowledgement policy

The acknowledgements used by TCP are cumulative. When a TCP entity receives a data segment, it may send a corresponding acknowledgement immediately or delay it. In most implementations, an acknowledgement will be sent immediately after the reception of a data segment when either of the following conditions holds [16]:

(a) the window will slide by at least a fraction (e.g. 35% or 50%) of the maximum window (i.e. the socket receive buffer);

(b) the highest announced window edge (sequence number) will slide by at least twice the MSS.

The first condition is only true when small windows are used. Over wide area ATM, TCP should use large windows, and thus only the second condition will usually hold. If the acknowledgement is not sent immediately, it is delayed until either a new (data or acknowledgement) segment is sent on the connection, or the delayed acknowledgement timer expires [17]. In most implementations, this timer is set to 200 ms.
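A simplified model of this acknowledgement test is sketched below (our own code, not the SunOS implementation; the 35% fraction and the 200 ms timer are the values quoted above).

```python
def ack_immediately(bytes_acked: int, recv_buffer: int, mss: int,
                    window_fraction: float = 0.35) -> bool:
    """Simplified model of the delayed acknowledgement test.

    bytes_acked: amount by which the acknowledgement would slide the window.
    Condition (a): slide by at least a fraction of the receive buffer.
    Condition (b): slide by at least two maximum sized segments.
    """
    return (bytes_acked >= window_fraction * recv_buffer or
            bytes_acked >= 2 * mss)

# With a 48 KByte receive buffer and a 9148 byte MSS, a single in-order
# segment does not trigger an immediate acknowledgement:
print(ack_immediately(9148, 48 * 1024, 9148))       # False
print(ack_immediately(2 * 9148, 48 * 1024, 9148))   # True: every second segment
# Otherwise the acknowledgement is delayed for up to 200 ms (not modelled here).
```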


4.3. Retransmission policies

In most TCP implementations, there are two complementary retransmission policies. The first one is the classical timer-based retransmission. TCP maintains a single retransmission timer for each connection, and when this timer expires, TCP assumes that all the unacknowledged segments have been lost. The congestion window is reduced to MSS bytes, TCP performs slow start and the unacknowledged segments are retransmitted. The second one relies on duplicate acknowledgements. When a TCP entity receives an out of order segment, it may send an acknowledgement immediately. If d (usually 3) duplicate acknowledgements are received for the same segment, TCP assumes that the requested segment has been lost. The congestion window is reduced by 50%, TCP enters into the congestion avoidance phase, the requested segment is retransmitted, and TCP continues the data transfer as if only this segment was lost [18].
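The duplicate acknowledgement policy described above can be summarised by the following sketch (a simplified model, not the SunOS code; d is the duplicate acknowledgement threshold).

```python
class FastRetransmitModel:
    """Simplified model of the two retransmission policies described above."""

    def __init__(self, mss: int, cwnd: int, dup_threshold: int = 3):
        self.mss = mss
        self.cwnd = cwnd
        self.dup_threshold = dup_threshold
        self.dup_acks = 0
        self.last_ack = None

    def on_ack(self, ack_seq: int):
        if ack_seq == self.last_ack:
            self.dup_acks += 1
            if self.dup_acks == self.dup_threshold:
                # assume the requested segment was lost: halve the congestion
                # window, retransmit it and enter congestion avoidance
                self.cwnd = max(self.mss, self.cwnd // 2)
                return f"fast retransmit of segment starting at {ack_seq}"
        else:
            self.last_ack = ack_seq
            self.dup_acks = 0
        return None

    def on_retransmission_timeout(self):
        # all unacknowledged segments are assumed lost: slow start from one MSS
        self.cwnd = self.mss
        return "timeout retransmission, cwnd reset to one MSS"

m = FastRetransmitModel(mss=9148, cwnd=5 * 9148)
for ack in (9148, 18296, 18296, 18296, 18296):   # three duplicates of ack 18296
    event = m.on_ack(ack)
    if event:
        print(event, "cwnd =", m.cwnd)
```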

4.4. Behaviour of TCP with a compliant ATM traffic

During our measurements with compliant ATM level traffic, the segment loss rate was lower than 0.2%. However, even with such a low segment loss rate we still measured variations in the TCP throughput. To explain these variations, we gathered packet traces during the measurements. The traces showed that, in SunOS, the lost segments are only retransmitted after the expiration of the retransmission timer. On average, the cost of a single retransmission is 750 ms. This behaviour was caused by a peculiarity of SunOS. While SunOS will retransmit a segment after three duplicate acknowledgements have been received, it does not send an immediate acknowledgement when it receives an out of order segment, and thus the fast retransmit algorithm is only partially implemented in SunOS.

Even with a correct implementation, the fast retransmit algorithm will not necessarily be sufficient to avoid the expiration of the retransmission timer. For random and infrequent single segment losses, this algorithm will only work well if the window contains at least (d + 2) MSS sized segments. If several segments are lost, the window must be even larger.

If a single segment is lost, the worst case for the fast retransmit algorithm occurs when several segments are sent after the connection has been idle for some time. The first segment does not advance the window by at least 2 × MSS bytes, and thus the receiver does not send an immediate acknowledgement. If we assume that the second segment is lost, the third segment is out of order, and will therefore trigger an immediate acknowledgement. However, as TCP does not distinguish between a standard acknowledgement and an acknowledgement for out-of-order segments, this acknowledgement acknowledges the first segment. At this time, there are already two unacknowledged segments, and d new segments are necessary to generate the d duplicate acknowledgements. Thus, the window must contain at least (d + 2) MSS sized segments. With the default value of 9148 bytes for the MSS with TCP over ATM, and a default value of 3 for d, the window should contain at least 45,740 bytes. It should be noted that, during the data transfer, the slow start and congestion avoidance mechanisms used by TCP may reduce the usable window to values lower than the maximum window size requested at connection establishment. Thus, even if the window is set to (d + 2) MSS sized segments when the connection is established, there is no guarantee that the fast retransmit algorithm will always avoid the expiration of the retransmission timer when random losses occur.
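The resulting window requirement can be expressed as a one-line check (our own helper; d = 3 and MSS = 9148 bytes as in our configuration).

```python
def min_window_for_fast_retransmit(mss: int, d: int = 3) -> int:
    """Smallest window (in bytes) that lets the fast retransmit algorithm
    generate d duplicate acknowledgements in the worst case described above."""
    return (d + 2) * mss

print(min_window_for_fast_retransmit(9148))   # 45740 bytes with the 9188 byte MTU
print(min_window_for_fast_retransmit(512))    # 2560 bytes with a 512 byte MSS
```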

4.5. Behaviour of TCP with almost compliant ATM traffic

Our measurements have shown that when TCP is used with the spacing set to one cell every 15.50 µs (while the peak cell rate enforced by the UPC was one cell every 16.39 µs), the loss rate increases quickly with the window size. During these measurements, the TCP throughput achieved with a large window was lower than the throughput achieved with a small window.

To explain this unexpected behaviour we looked at the packet traces gathered during the measurements, and found that when a segment was lost with a 48 KByte window, it was always the third segment in a burst of at least three segments. However, in a few three segment bursts, the third segment was not lost. From the packet traces, it seemed that bursts of two TCP segments were compliant, while bursts of three TCP segments were not compliant.

This explains why the segment loss rate with a 16 KByte window is still less than 0.25%, as this window prevents TCP from sending bursts of three segments. The difference between the small and the large TSDUs (i.e. the application buffers which are sent with the send() system call) with a 32 KByte window can also be explained by looking at the segments actually sent by TCP. When the TSDUs are smaller than 7 KBytes, TCP may send bursts of three segments after a retransmission, and these bursts are declared as non-compliant by the UPC mechanism. When the TSDUs are larger than 7 KBytes, TCP does not send bursts larger than two segments, and thus the ATM level traffic is still declared as compliant by the UPC mechanism. This difference in the behaviour of TCP with large and small TSDUs is due to the interactions between the memory management in the socket layer and the transmission of TCP segments [2]. An important point to note is that when the ATM level traffic is not completely compliant, a large window may give a much lower throughput than a smaller window.

Fig. 6. First 8 seconds of a trace (48 KByte window) with the spacing set to one cell every 15.50 µs.

The packet traces gathered during the measurements with a 48 KByte window reveal that the TCP connection is idle for most of the time. Fig. 6 shows the first 8 seconds of the packet trace corresponding to an 8 MByte transfer with an almost compliant ATM traffic and a 48 KByte window. This figure shows that the TCP connection is idle for most of the time and has an almost periodical behaviour. Eight segments are transmitted very quickly (in four round trip times), but the sixth segment is lost, and TCP has to wait for the expiration of the retransmission timer (2 ticks, i.e. almost one second) to retransmit the lost segment. The larger bursts which occurred around the first and the eighth second contained a few three segment bursts that were declared as compliant by the UPC mechanism.

Even other BSD-based TCP implementations experience similar problems. They correspond to a standard behaviour of the slow start mechanism. With respect to burst loss, 4.4BSD [19] behaves like SunOS. If the UPC only accepts bursts as long as two segments, 4.4BSD will behave as follows. After a loss, TCP will enter the slow start phase. During this phase, TCP increases its window exponentially, but as soon as the congestion window reaches 3 × MSS bytes, a burst of three segments is sent, and the third segment is lost. On average, 4.4BSD will send and acknowledge seven segments in a period corresponding to four round trip times and two ticks (i.e. the minimum value of the retransmission timer), and one segment out of seven has to be retransmitted. This is because TCP is always in the slow start phase, and does not enter the congestion avoidance mode. If the UPC is changed so that the largest allowed burst contains one segment, 4.4BSD will transmit on average three segments within a period corresponding to two round trip times and two ticks, and half the segments have to be retransmitted. If the non-compliance of the ATM traffic is such that even a single TCP segment is non-compliant, the data transfer is impossible as TCP always tries to use MSS sized segments.

An important point to note from the measurements with an almost compliant traffic is that TCP performs badly when the network limits the maximum burst size to three segments or less.

5. Second campaign of measurements

Our first campaign of measurements has shown that a small variation in the peak cell rate of the ATM level traffic can cause a lot of problems for TCP. To further investigate the behaviour of TCP, we performed an additional set of measurements. First, we wanted to have better control over the measurement environment, and especially over the parameters of the UPC. For this, we modified the testbed of the University of Liège. A spacer-controller [20] built by CNET was inserted between the output of the Fore ASX-200 ATM switch and the E-3 link to the Belgian ATM Pilot (Fig. 7). The spacer-controller is able to perform both traffic shaping and traffic policing. The main advantage of the spacer-controller over a standalone UPC, such as the one contained in the Fore ASX-200, is that the cells declared as compliant by the UPC part of the spacer-controller are reshaped at the output of the UPC before being transmitted on the output link. Thus, the ATM cell flow at the output of the spacer-controller has a very low associated CDV, even if its UPC allows a large CDV.

Fig. 7. Sending side of our environment for the second campaign of measurements: TCP/IP/AAL5 host, ASX-200 switch (256 cell output buffer), CNET spacer-controller (UPC1: [T = 16.39 µs, τ = 187.5 µs]; UPC2: [T = 15.70 µs, τ = ?]) and E-3 link to the European ATM Pilot.

Unfortunately, when we received the spacer-controller, Electricité de France was no longer connected to the European ATM Pilot. Instead, we used the testbed of Telenor Research and Development as the receiver for the additional measurements. The round-trip time, measured by ping, between the University of Liège and Telenor Research and Development was 31 ms.

For the additional measurements, we requested a 63,679 cells per second (i.e. one cell every 15.70 µs) VP through the ATM Pilot. We used the spacer-controller to enforce a traffic contract of 61,000 cells/second (i.e. one cell every 16.39 µs) with an allowed CDV of 187.5 µs. As the spacer-controller is also a traffic shaper, we were sure that the cell flow sent into the European ATM Pilot was compliant with the traffic contract enforced in the network. Thus, if we ignore the rare random cell losses, the only source of cell losses during the additional measurements was the UPC of the spacer-controller.

During the additional measurements, we varied the MTU size (and thus the maximum size of the packets transmitted by TCP) and the spacing of the ATM adapters. We performed our measurements with the MTU set to 9188 bytes (the default value for IP over ATM), 4832 bytes (the default value for IP over FDDI), 1500 bytes (the default value for IP over Ethernet) and 552 bytes (which forced TCP to compute an MSS of 512 bytes, the default MSS when the destination is not in the local subnet). The figures presented below show the results of individual measurements which are representative of the problems we encountered during our second campaign of measurements.

5.1. Measurements with 48 KByte window

The measurements with a 48 KByte window are presented in Fig. 8. They show that variations of the spacing have a lower impact on TCP when it uses small packets than when it uses large packets.

Fig. 8. Measurements with a 48 KByte window: TCP throughput versus cell spacing (16.44 to 11.71 µs per cell) for MTU sizes of 9188, 4832, 1500 and 552 bytes.

With an MTU size of 9188 bytes, the data transfer was almost impossible with the spacing set to one cell every 15.50 µs. The packet traces revealed that the longest compliant burst contained only one segment.

With an MTU size of 4832 bytes, there is a moderate throughput drop when the spacing is set to one cell every 15.50 µs. This throughput drop is caused by a few cells being discarded by the UPC. When the spacing is set to one cell every 15.02 µs, the data transfer is almost impossible and the packet traces reveal that the longest compliant bursts contained only one segment.

With an MTU size of 1500 bytes, the throughput drop occurs when the spacing is set to one cell every 14.08 µs. The packet traces revealed that the losses mainly occurred when TCP sent bursts of 8 or 9 segments. When the spacing was set to one cell every 13.60 µs, the losses mainly occurred when TCP sent bursts of 4 or 5 segments, and the throughput dropped to 250 kbit/s. We did not perform measurements with the spacing set to a value lower than one cell every 13.60 µs.

With an MTU size of 552 bytes, the UPC starts to discard cells and the throughput drops when the spacing is set to one cell every 12.66 µs. The throughput drop is 'smoother' with a small MTU size than with a large MTU size. With the spacing set to one cell every 11.24 µs, the achieved throughput was approximately 400 kbit/s. The packet traces revealed that, with this spacing, bursts of eight segments were not compliant with the traffic contract.

5.2. Measurements with 16 KByte window

The measurements corresponding to a window size of 16 KByte are presented in Fig. 9. With this window size, the throughput drops occur when the spacing is set to one cell every 15.02 µs with MTU sizes of 9188 and 4832 bytes. With this spacing, the data transfer was almost impossible with an MTU size of 9188 bytes, while it was very slow (less than 100 kbit/s) with an MTU size of 4832 bytes.

With an MTU size of 1500 bytes, the throughput dropped to 250 kbit/s when the spacing was set to one cell every 13.60 µs. With an MTU size of 552 bytes, the throughput drops occur when the spacing is set to one cell every 12.18 µs.

Fig. 9. Measurements with a 16 KByte window: TCP throughput versus cell spacing for MTU sizes of 9188, 4832, 1500 and 552 bytes.

6. Discussion of the second campaign of measurements

6.1. The maximum throughput

During the second campaign of measurements, TCP only achieved a maximum throughput of 11 Mbit/s, while it achieved 19 Mbit/s during the first campaign. This difference is, of course, due to the larger value of the round-trip time during the second campaign.

Figs. 8 and 9 show that, when the cell spacing of the ATM adapter is larger than one cell every 15.02 µs, a large MTU does not always result in the highest throughput. This can be explained by the Nagle [14] and the Silly Window Syndrome [17] avoidance algorithms. Due to these algorithms, TCP will not send a segment unless it contains MSS bytes or at least half the maximum window size advertised by the receiver. This means that the usable window is actually reduced to an integer number of MSS-sized segments.

Another limitation is due to the delayed acknowledgement strategy. This strategy forces TCP to acknowledge immediately only every second segment. For example, with a maximum window corresponding to five MSS sized segments (as with the 9188 bytes MTU and a window of 48 KByte), when TCP sends five segments, the second and the fourth segments will be acknowledged immediately. Thus, within the first round-trip time, TCP has sent five segments (segments 1 to 5) but only four segments have been acknowledged (segments 1 to 4). During the second round-trip time, TCP will thus only send four segments, and the sixth and eighth segments will be immediately acknowledged. During the second round-trip time, TCP has sent four segments (segments 6 to 9) and received acknowledgements corresponding to four segments (segments 5 to 8), and so on.

It should, however, be noted that these reductions of the usable window do not occur when the window size is sufficiently larger than the bandwidth delay product.
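This behaviour can be illustrated with a toy simulation (our own model: a fixed window expressed in segments, a receiver that acknowledges every second segment immediately, and a delayed acknowledgement that is only sent when a further segment arrives; the 200 ms timer is not modelled).

```python
def segments_per_rtt(window_segments: int, rounds: int = 4):
    """Toy model of the window usage with delayed acknowledgements."""
    sent_total = 0    # segments sent so far
    acked_total = 0   # highest segment covered by an ACK received at the sender
    pending = 0       # segments received but whose acknowledgement is delayed
    history = []
    for _ in range(rounds):
        can_send = window_segments - (sent_total - acked_total)
        sent_total += can_send
        received = pending + can_send
        acks = received // 2          # one immediate ACK per two received segments
        acked_total += 2 * acks
        pending = received - 2 * acks
        history.append(can_send)
    return history

print(segments_per_rtt(5))   # [5, 4, 4, 4]: only four segments per RTT after the first
```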

6.2. The throughput drops

The throughput drops can be explained by looking closely at the behaviour of the SBA-200 ATM adapter. This ATM adapter performs the CRC calculation for AAL5, but contains a limited amount of memory and uses DMA to retrieve the AAL-SDUs from the main memory of the workstation. The traffic shaping is done on a per AAL-PDU basis. Each AAL-PDU is sent at the requested peak cell rate and there is an inter-PDU time between two adjacent AAL-PDUs. This behaviour is captured by the packet train model shown in Fig. 10. The inter-PDU time is probably caused by the preparation of the DMA operation for the next AAL-SDU by the ATM driver. We were not able to measure this inter-PDU time accurately with TCP, but we have measured an inter-PDU time of 52 µs with 8 KByte long UDP packets. As this inter-PDU time is caused by the SBA-200 ATM adapter, and not by the processing in the higher layer protocols, we can assume that the same value also holds for TCP segments. With 552 bytes long UDP packets, this inter-PDU time is reduced to 41 µs. It should be noted that other ATM adapters may behave differently.

Fig. 10. Packet train model: packets of cells sent one cell every R µs, separated by an inter-PDU time I.

From the specification of the Virtual Scheduling Algorithm [6,7], it can be shown that such a sequence of p packets separated by an inter-PDU time I, with each packet containing c cells sent with the spacing set to one cell every R µs, will be compliant with the traffic contract [T = T_UPC, τ = CDV] if, considering that the UPC is in the idle state upon the arrival of the first cell:

for all k = 0, ..., (p - 1) and all j = 0, ..., (c - 1):

    (k × c + j) × T_UPC ≤ k × (c × R + I) + j × R + CDV    (3)

This packet train model can be used together with Eq. (3) to calculate the length of the longest burst which is compliant with the [T = 16.39 µs, τ = 187.5 µs] traffic contract enforced by the spacer-controller. In Table 1, we have used an inter-PDU time (I) of 52 µs for the MTU sizes of 1500 bytes (32 ATM cells), 4832 bytes (101 ATM cells) and 9188 bytes (192 ATM cells), and an inter-PDU time of 41 µs with an MTU of 552 bytes (12 ATM cells). In Table 1, we also take the buffering of the ASX-200 into account when the spacing is set to values lower than one cell every 12.50 µs (i.e. the cell time on the E-3 interface). In this table, the value 0 means that even a single MTU-sized packet is not compliant.

Table 1
Length of the longest compliant burst (in MTU sized packets)

R [µs per cell]   MTU 552   MTU 1500   MTU 4832   MTU 9188
15.97             ∞         ∞          ∞          4
15.50             ∞         ∞          3          1
15.02             ∞         ∞          1          0
14.55             ∞         20         1          0
14.08             ∞         6          0          0
13.60             ∞         3          0          0
13.13             ∞         2          0          0
12.65             53        2          0          0
12.18             17        1          0          0
11.71             10        1          0          0
11.23             8         1          0          0
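The entries of Table 1 can be approximated by evaluating Eq. (3) directly, as in the sketch below (our own code; it does not model the additional ASX-200 buffering that Table 1 takes into account for spacings below one cell every 12.50 µs, and small deviations in the measured inter-PDU time shift the 552 byte column by a few packets).

```python
def longest_compliant_burst(c, R, I, T_upc=16.39, cdv=187.5, max_packets=1000):
    """Largest number p of back-to-back packets (c cells each, cells spaced R us,
    packets separated by an extra inter-PDU time I us) accepted by a UPC
    [T = T_upc, tau = cdv] that starts in the idle state (Eq. (3))."""
    def compliant(p):
        for k in range(p):
            for j in range(c):
                arrival = k * (c * R + I) + j * R
                if (k * c + j) * T_upc > arrival + cdv:
                    return False
        return True

    p = 0
    while p < max_packets and compliant(p + 1):
        p += 1
    return p   # reaching max_packets means effectively unlimited

# MTU 9188 bytes -> 192 cells per packet, inter-PDU time ~52 us
print(longest_compliant_burst(192, 15.97, 52))  # 4 packets
print(longest_compliant_burst(192, 15.50, 52))  # 1 packet
print(longest_compliant_burst(101, 15.50, 52))  # 3 packets
print(longest_compliant_burst(32, 14.08, 52))   # 6 packets
```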

Table 1 can be used to explain the throughput drops with an MTU size of 9188 bytes. When the spacing is set to one cell every 15.97 µs, the largest compliant burst contains four segments. TCP is not affected by this limitation, because TCP did not send bursts containing more than three segments. With the spacing set to one cell every 15.50 µs and a 48 KByte window, the model agrees with the measurements. With a 16 KByte window, due to interactions between the socket layer and the memory management [16], TCP sent one 4096 bytes segment followed by one 8192 bytes segment every round-trip time. This burst of two segments was compliant with the traffic contract.

The measurements with an MTU size of 4832 bytes can also be explained by Table 1. It should be noted that, due to the inter-PDU time of 52 µs with our SBA-200 ATM adapters, the traffic sent with the spacing set to one cell every 15.97 µs was compliant with the traffic contract enforced by the spacer-controller.

The measurements with an MTU size of 1500 bytes almost agree with Table 1. With the spacing set to one cell every 14.08 µs and a 48 KByte window, the packet traces revealed a maximum burst length of eight segments, while the model predicts a maximum burst size of six segments. This may be explained by the fact that the actual transmission of ATM cells is not as regular as the model, especially for long bursts. When an acknowledgement is received while segments are being transmitted, it interrupts the normal transmission of ATM cells, and thus causes a small idle time. When the spacing is set to one cell every 13.60 µs, the model agrees with the measurements, as the throughput drops heavily.

With an MTU size of 552 bytes and a 48 KByte window, the measurements indicated that the UPC started to discard cells when the spacing was set to one cell every 12.65 µs. This explains the throughput drop with this spacing, as a 48 KByte window permits bursts of up to 96 segments with an MTU size of 552 bytes, while the model predicts a maximum burst length of 53 packets. With a 16 KByte window, the throughput dropped when the spacing was set to one cell every 12.18 µs. This is also coherent with the model, as a 16 KByte window permits bursts of up to 32 segments with an MTU size of 552 bytes, while the model predicts a maximum burst length of 17 packets.

With an MTU size of 552 bytes, the throughput drop is 'smoother' than with a large MTU size. However, the cost of this 'smoother' degradation when the traffic is not entirely compliant is a high CPU utilisation. With our workstations, the highest achievable throughput with an MTU size of 552 bytes is almost 15 Mbit/s, even in the local area, while the same workstations are able to sustain a 70 Mbit/s throughput with an MTU size of 9188 bytes. Thus, using a small MTU is not necessarily a wise choice.

7. Possible improvements to the TCP implementations

Two simple improvements to the current TCP implementations could slightly improve the behaviour of TCP in our environment. The first improvement concerns the implementation of the timers. In most BSD derived implementations of TCP, including SunOS, the round trip time measurement has a granularity of 500 ms (1 tick), and the retransmission timer has a minimum value of 2 ticks. These default values were chosen to avoid overloading the 1 MIPS workstations that were available when TCP was included in 4.x BSD. At that time, the networks were also much slower than today, and such a high granularity for the round trip time measurements and the retransmission timers did not cause problems. Now the networks are much faster, and even on the Internet, round trip times lower than 500 ms are very common, and it would be very useful to have a much lower granularity for the round trip time measurements and the retransmission timer. In BSD derived implementations, this may be done by changing the value of the PR_SLOWHZ constant in the kernel.

The second improvement concerns the retransmission algorithm. The cumulative acknowledgements used by TCP are not the best answer to random losses; selective acknowledgements would probably be better. However, this would require changes to the TCP specification. A possible change in the fast retransmit algorithm is the value of the duplicate acknowledgement threshold. The default value of 3 for this threshold was chosen to avoid unnecessary retransmissions when the network may reorder packets without losing them. In an ATM network, the cells are always delivered in sequence, and thus reordering does not occur. In ATM networks, the optimal value for the duplicate acknowledgement threshold would thus be 1. This value is sufficient to avoid the expiration of the retransmission timers when the largest compliant bursts contain one or two segments. However, this will not prevent the TCP throughput from dropping to a few segments per round trip time. In other networks such as the Internet, where reordering may occur, especially when the route changes, setting the duplicate acknowledgement threshold to 1 is probably too aggressive. The duplicate acknowledgement threshold might thus be made selectable on a per-connection basis (e.g. via the setsockopt function).

Recently, several major modifications to TCP have been proposed under the acronym TCP Vegas [21]. The main advantage of TCP Vegas over other proposed modifications is that they are restricted to the sending side of the implementation and they do not require any modification to the receiving side or the TCP specification. These modifications were done with the Internet environment in mind, but it is interesting to see how they could have influenced TCP in our environment. The main modifications concern the retransmission scheme and the congestion control algorithm. The retransmission scheme proposed in TCP Vegas no longer relies on the high granularity timers used by most TCP implementations. In TCP Vegas, the sender measures the round-trip time for each transmitted segment with the high-resolution clock of the workstation. When a duplicate ACK is received, TCP Vegas will compare the measured round-trip time with the retransmission timer, and will retransmit the packet if the round-trip time is already larger than the value of the retransmission timer. This modified retransmission scheme should work well in our environment.

The second modification introduced in TCP Vegas concerns the congestion avoidance mechanism. In contrast with TCP, Vegas does not rely exclusively on packet losses as an indication of congestion in the network. Instead, it evaluates the current throughput once per round-trip time, and adjusts the congestion window accordingly. The expected throughput is the ratio between the window size and the lowest value of the round-trip time measured during the connection. If the current throughput is lower than the expected throughput by a sufficient amount, the congestion window is decreased. If the current throughput is higher than the expected throughput by a sufficient amount, the congestion window is increased. Between these two limits, the congestion window is kept unchanged. It is difficult to estimate how this scheme would work in our environment as it relies on an increase of the round-trip time prior to congestion. When the packet losses are caused by a UPC mechanism, they are not preceded by an increase of the round-trip time.
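A simplified rendering of this window adjustment is sketched below (our own code, loosely following Ref. [21]; the alpha and beta thresholds, expressed in segments, are illustrative values that are not discussed above).

```python
def vegas_adjust(cwnd, base_rtt, current_rtt, mss, alpha=1, beta=3):
    """Once per round-trip time, compare the expected and actual throughput
    and adjust the congestion window (simplified Vegas-style rule).

    base_rtt: lowest round-trip time measured on the connection [s]
    current_rtt: round-trip time measured during the last window [s]
    alpha, beta: thresholds in segments (illustrative values)."""
    expected = cwnd / base_rtt          # throughput if there were no queueing
    actual = cwnd / current_rtt         # throughput actually achieved
    diff_segments = (expected - actual) * base_rtt / mss

    if diff_segments < alpha:           # throughput at (or close to) the expected value
        return cwnd + mss               # grow the window
    if diff_segments > beta:            # throughput clearly below the expected value
        return cwnd - mss               # shrink the window
    return cwnd                         # otherwise keep the window unchanged

# With UPC-induced losses the round-trip time does not grow before a loss,
# so the difference stays small and Vegas keeps increasing its window:
print(vegas_adjust(10 * 9148, 0.031, 0.031, 9148))   # window grows by one MSS
```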

8. Related work

The interactions between TCP and ATM have been studied in the research community during the last few years. Most of the published material dealing with TCP over ATM is based either on performance measurements or on simulations. The performance measurements have mainly been done in a LAN environment [2,22]. They have mainly studied the factors limiting the performance of TCP over ATM in a LAN environment, where TCP performance is good. However, there are some situations where TCP behaves badly.

It has been shown [16] that TCP may enter temporary deadlock situations if the socket buffers (i.e. TCP window size) of the sender and the receiver are different. The probability of these deadlock situations is much higher for local ATM networks than for Ethernet and FDDI due to the large MTU used in ATM networks.

Nearly all the simulation studies have concentrated on the congestion problem. These simulations evaluate how TCP’s congestion control scheme reacts to congestion where several TCP connections try to share a single bottleneck link (and switch output buffer). It has been shown [3] that, in a LAN environment, TCP does not react efficiently to congestion unless the bottleneck ATM switch uses large buffers and is modified to discard whole packets instead of single cells when congestion occurs. Other strategies, simulated in a WAN environment, are discussed in Ref. [23].

The local ATM networks provide a best effort service, and there are no traffic contracts. When TCP is used in wide area ATM networks, these issues must be dealt with. Preliminary measurements showing the benefits of traffic shaping in a wide area ATM network where the traffic contract is enforced by a UPC mechanism have been presented [24].

The measurements presented in this paper are very different from the earlier simulations [3,23]. In our measurements, we study how a single TCP connection can adapt its behaviour to a traffic contract enforced by a UPC mechanism. Resources are reserved inside the ATM network according to the traffic contract, and thus our TCP connection does not suffer from congestion due to other ATM flows inside the network. We learned subsequently that the first simulations showing that TCP's throughput degrades when a Variable Bit Rate (VBR) traffic contract is enforced by a UPC mechanism in an ATM network were presented by P.P. Mishra [25]. Basically, these simulations show that TCP has difficulties in adapting to an ATM network where a maximum burst size is enforced by a UPC mechanism. They are completely in line with our measurements.

9. Conclusions

In this paper, we have presented a detailed evaluation of the behaviour of TCP in a wide area ATM network, where the traffic contract is enforced by a UPC mechanism. Our main conclusion is that in such a network, it is very important to generate compliant ATM traffic.

Our measurements have shown that if the ATM traffic is compliant, TCP behaves correctly, and it is usually able to achieve a high utilization of the link. However, TCP relies too much on the retransmission timer, and the high granularity of this timer in most TCP implementations causes a lot of idle time when random losses occur.

When the ATM traffic is only almost compliant, we have shown that the UPC of the ATM network will actually limit the length of the maximum burst. Our measurements have shown that TCP has difficulties in adapting to a network which enforces a maximum burst size. When the maximum burst size is less than four segments, TCP behaves very badly. When the maximum burst size is larger than three segments, but still smaller than the maximum window size, TCP behaves better, but the utilization of the link is still low.

While our measurements were done in a CBR environment, they also apply to a VBR environment. In a VBR environment, TCP will not behave correctly unless the maximum burst size is set to at least four MSS sized segments, but a safer solution would be to set it to the maximum window size used by the TCP connection, at least when the window size is smaller than the bandwidth delay product.

Acknowledgements

We would like to thank Jocelyne Lemagnen of EDF and Olivier Danthine of INS for their help with the first campaign of measurements. We would also like to thank Pierre Boyer of CNET for the loan of a prototype spacer-controller. Finally, we would like to thank the anonymous reviewers for their comments, which have greatly improved the quality of the paper.

This work was partially supported by the European Commission within the RACE 2060 CIO project and by a contract between the University of Liege and Belgacom.

References

[1] M. De Prycker, Asynchronous Transfer Mode: Solution for Broadband ISDN, Second Edition, Ellis Horwood, 1995.

[2] K. Moldeklev, E. Klovning and O. Kure, TCP/IP behaviour in a high-speed local ATM network environment, in Proceedings of the 19th Conference on Local Computer Networks, Minneapolis, USA, 1994, pp. 176-185.

[3] A. Romanov and S. Floyd, Dynamics of TCP traffic over ATM networks, in Proceedings of ACM SIGCOMM 94, ACM Computer Communication Review, 24(4) (October 1994) 79-88.

[4] M. Parker, P. Robinson, R. Wade, D. Alley, P. Adam, B. Le Moine-Py and P.B. Saint-Hilaire, The European ATM Pilot, in Proceedings of the 15th International Switching Symposium (ISS'95), Berlin, Vol. 1, April 23-28, 1995, pp. 146-150.

[5] L. Grovenstein, C. Pittman, J. Simpson and D. Spears, NCIH services, architecture, and implementation, IEEE Network Magazine (November/December 1994) 18-22.

[6] ITU-T, Traffic Control and Congestion Control in B-ISDN, ITU-T Rec. I.371, 1993.

[7] ATM Forum, ATM User-Network Interface Specification, Version 3.0, Prentice-Hall, NJ, 1993.

[8] ITU-T, Physical/Electrical Characteristics of Hierarchical Digital Interfaces, ITU-T Rec. G.703.

[9] ITU-T, Synchronous Digital Hierarchy Bit Rates, ITU-T Rec. G.707.

[10] T. Slattery and M. Muss, TTCP.C, available as ftp://ftp.brl.mil/pub/ttcp.c. (The version we used is available as ftp://spa.montefiore.ulg.ac.be/pub/soft/ttcp2.c.)

[11] E. Klovning and O. Kure, Host adapter spacing in SBA-200, Telenor Research (scientific document N17/95), 1995.

[12] R. Atkinson, Default IP MTU for use over ATM AAL5, Internet RFC 1626, 1994.

[13] R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, USA, 1994.

[14] J. Nagle, Congestion control in TCP/IP internetworks, Internet RFC 896, 1984.

[15] V. Jacobson, Congestion avoidance and control, in Proceedings of ACM SIGCOMM 88, ACM Computer Communication Review, 18(4) (1988) 314-329.

[16] K. Moldeklev and P. Gunningberg, Deadlock situations in TCP over ATM, in G. Neufeld and M. Ito, eds., Protocols for High Speed Networks, Chapman & Hall, UK, 1995.

[17] D. Clark, Window and Acknowledgement Strategy in TCP, Internet RFC 813, 1982.

[18] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, message sent to the end2end-interest mailing list, April 30, 1990.

[19] G. Wright and R. Stevens, TCP/IP Illustrated, Volume 2: The Implementation, Addison-Wesley, USA, 1995.

[20] P. Boyer, M. Servel and F. Guillemin, The SPACER-CONTROLLER: an efficient UPC/NPC for ATM networks, in Proceedings of the 14th International Switching Symposium, Yokohama, Japan, 1992.

[21] L. Brakmo, S. O'Malley and L. Peterson, TCP Vegas: new techniques for congestion detection and avoidance, in Proceedings of ACM SIGCOMM 94, ACM Computer Communication Review, 24(4) (October 1994) 24-35.

[22] T. Luckenbach, R. Ruppelt and F. Schulz, Performance experiments within local ATM networks, in EFOC&N'94, Twelfth Annual Conference on European Fibre Optic Communication and Networks, Heidelberg, Germany, June 22-24, 1994.

[23] M. Perloff and K. Reiss, Improvements to TCP performance in high-speed ATM networks, Comm. ACM, 38(2) (February 1995) 90-100.

[24] O. Danthine and P. Boyer, Benefits of a spacer/controller in an ATM WAN: preliminary traffic measurements, in EXPLOIT Traffic Workshop, Basel, September 1994.

[25] J. Sterbenz, H. Schulzrinne and J. Touch, Report and discussion on the IEEE ComSoc TCGN gigabit networking workshop 1995, IEEE Network Magazine (July-August 1995) 9-21.