Paper E

Bob Melander and Mats Björkman. Trace-Driven Network Path Emulation. Technical Report 2002-037, Department of Information Technology, Uppsala University, Sweden, November 2002.



  • Trace-Driven Network Path Emulation¦

Bob Melander ([email protected])    Mats Björkman ([email protected])

Computer Systems, Dept. of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala, Sweden.

    Abstract

This paper reports on on-going work where a trace-driven approach to network path emulation is investigated. Time stamped probe packets are sent along a network path whereby a probe packet trace can be generated. It basically contains the send times and the one-way delays/loss indications of the probe packets. Inside the emulator, the probe packet trace is used by a loss model and a delay model. These determine if a packet should be dropped or what the delay of the packet should be. Three loss models and three delay models are evaluated. For non-responsive UDP-based flows, the trace-driven loss and delay models that are found to perform best are those that determine loss and delay based on loss rates and delay distribution parameters calculated across the probe packet trace using a small gliding window. For adaptive TCP flows, none of the evaluated trace-driven models performs well. Instead, the Bernoulli loss model and an independent average delay model perform best.

    1 Introduction

Before a new network protocol or application is deployed and taken into operation, its behavior and performance should be evaluated to ensure that, for instance, it will not be malicious to the network. A very common way of performing the evaluation is by simulations. The benefits are (at least) two-fold: the experiments are reproducible and can be fully controlled.

To be able to reproduce behavior is important since that makes it possible to subject alternative designs to identical network conditions whereby comparisons can be made. It is also useful in debugging an implementation since the conditions that trigger the incorrect behavior can be recreated whenever needed. To have a controllable environment is also important because it is then possible to study performance and effectiveness over a range of operating conditions (e.g., queue capacity in routers and end-points, amount and type of competing traffic, delay variations etc.).

However, there are difficulties associated with simulations. A key problem is how to set up realistic simulation scenarios. Decisions have to be made regarding

¦ This work is partially supported by the SITI/Ericsson CONNECTED project.


what network topology to use as well as what values to assign parameters of the components included in the simulations (e.g. buffer capacities in routers, intensities of generated traffic etc.). Another more practical problem is that event-driven simulators (which are the most common network simulators, e.g. ns [20] and OPNET [21]) are memory and CPU time intensive. Large network topologies where substantial volumes of network traffic are generated are therefore hard to simulate.

Figure 1: A network path modeled as a set of queueing elements with service rates µi and constant delay factors Di. Probe packets are sent from the source node S to the destination node D. The Xs correspond to cross traffic on the path.

Trace-driven network emulation can be regarded as an evaluation framework that provides the reproducible behavior of simulations while simplifying the process of generating realistic network scenarios. The whole idea is illustrated by Figures 1 and 2. A host probes a network path by sending probe packets to a receiving host at the other end of the path. A probe packet contains its send time and a sequence number. The receiving host records the arrival time of the probe packets. Using those arrival times and the information in the probe packets, a probe packet trace with one-way delays1 and loss indications can be generated. That trace serves as input to the emulator where it is used to estimate parameters of a model of the network path and/or to control the emulation.

In this work, the emulation is done entirely in the event-driven simulator ns so the emulator operates on simulated traffic. This is in contrast to, and should not be confused with, traditional emulators such as NISTNet [18], Dummynet [25], NetShaper [11] and others [1, 12, 14]. These emulators are inserted on the path (typically a single link) between two physical hosts and thus operate on real traffic. Another important difference is that these traditional emulators are not trace-driven, i.e. a probe packet trace is not used to control the emulation. Trace-driven emulation of a wireless LAN has, however, been investigated in [19]. The emulator there operates on real traffic and the network path only has one hop, the wireless LAN. The work presented in this paper is inspired by the ideas in [19].

A benefit of doing trace-driven network emulation in an event-driven simulator is that simulation time and memory usage can be reduced. This is because

1 The time for a packet to travel from its source host to its destination host.


Figure 2: Emulation of a network path. The emulator replaces the network path and mimics its behavior. A trace file obtained by probing the network path serves as input to the emulator and is used to estimate model parameters and/or to control the emulation.

the number of events (e.g. when a packet is enqueued, dequeued or dropped) will be fewer compared to when the network is not emulated in the simulation. These kinds of emulation-enabled simulations are also studied in [26]. As such, that work is very similar to the one presented here, not least since the framework described in Section 3 is also used there. Basically, [26] differs in that loss and delay are modeled using a continuous-time hidden Markov model (CTHMM). Their evaluation of the CTHMM suggests that it works well for both non-responsive UDP flows and adaptive TCP flows.

The rest of the paper is organized as follows. The network path models are described in Section 2, followed by a description of the general emulation framework in Section 3. Different probing schemes are discussed and studied in Section 4.2. Three loss models and three delay models are then evaluated with respect to non-responsive flows in Sections 4.3 and 4.4. Two of the loss and delay models are found to perform well. The performance of the models is also studied with respect to TCP in Section 4.5. For these adaptive flows, the studied trace-driven models do not perform well. Section 5, finally, summarizes our conclusions.

    2 Network Path Models

In this work, only the loss and delay characteristics of a network path are modeled. There are of course other aspects, such as packet reordering and packet payload corruption, that should be considered in order to provide a more complete emulation of a network path. However, it is probably not difficult to extend the emulation framework used here to also model such phenomena.

Roughly, the loss and delay models presented below can be divided into two types: those that only use the packet trace off-line to estimate certain


model parameters and those that use the packet trace during the emulation (but possibly also off-line for parameter estimation). We will refer to the latter category as the trace-driven models. The Bernoulli loss model belongs to the first category and loss models 1 and 2 belong to the second category, i.e. they are trace-driven. Of the delay models, the independent average delay model belongs to the first category whereas delay models 1 and 2 are trace-driven and belong to the second category.

    In the description of the models, the following notation is used:

• 1 ≤ i ≤ N, where N is the length of the packet trace.

• Four data series ti, si, di, and li describe the packet trace:

  – ti, send time of the i-th packet. tj ≥ ti when j > i.

  – si, size of the i-th packet.

  – di, one-way delay2 of the i-th packet. di ≥ 0. If packet i is lost, di is arbitrarily taken to be zero. The di data series is an observation of a sequence of random variables {Di}, i = 1, . . . , N. Each delay model makes assumptions about the statistical properties of those variables.

  – li, loss indication for the i-th packet:

    li = 0 if packet i is delivered to the destination, and li = 1 if packet i is lost.

    The li data series is an observation of a sequence of random variables {Li}, i = 1, . . . , N. Again, assumptions about the statistical properties of these variables vary among the loss models.

• ltot, total number of lost packets: ltot = Σ_{i=1}^{N} li.

    2.1 Loss Models

    Model 1

The first loss model is depicted in Figure 3a. The Os and Xs show the send times of the probe packets. In this case, the packets are sent periodically with a fixed spacing. An O indicates that the packet reached its destination whereas an X indicates that the packet was lost.

In this model, whether or not a packet is dropped is determined directly from the packet trace in the following way: Suppose that a packet arrives at the emulated path at some (emulation/simulation) time t. That packet is dropped with probability Pt, the value at t of the linear interpolation of the loss indications of the probe packets sent immediately before and after time t. The randomization is done by generating a random number r from a uniform distribution with extreme values 0 and 1. If, and only if, r < Pt, the decision is to drop the packet.

    2The total time for a packet to travel from its source to its destination.


Figure 3: Two simple models where packets are lost in accordance with a loss rate observed in the trace file. a) Loss rate calculated from the linear interpolation of the loss indications of the nearest surrounding probe packets. b) Loss rate calculated over a time window of width W.

In the figure, a packet arriving at (emulation/simulation) time t1 will be dropped with probability 1 since the probe packets sent at times ta and tb were both lost. Similarly, the packet arriving at time t2 will be dropped with the probability Pt2, which is the circled point lying on the line interpolated from the loss indications of the probe packets sent at times tc and td.

    Model 2

This model, depicted in Figure 3b, is a slight variation of loss model 1 in that a window of width W is introduced. For every probe send time in the packet trace, a loss rate is calculated over that window. For example, if the width is 7, then each probe send time will be associated with a loss rate that is calculated using the three preceding and the three succeeding loss indications in addition to the loss indication of the send time itself. The loss rate is calculated as the number of lost probe packets divided by the number of sent probe packets. It is also possible to attach different weights to the losses in the window whereby a “weighted” loss rate is calculated. That makes it possible to give recent losses more importance than distant losses. It should be noted that this loss model is quite similar to the loss model used in [19]. The difference is that weights are not used there and each calculated loss rate is used in a time interval that equals the width of the window.
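The windowed loss rate can be sketched as follows (illustrative Python with hypothetical names; the optional weight argument corresponds to the weighted variant just described):

```python
def windowed_loss_rate(loss_flags, i, width, weight=None):
    """Loss rate around probe i over a centred window of `width` probes.
    `weight` maps an offset from the centre to a weight; None means
    unweighted, i.e. lost probes divided by sent probes in the window."""
    half = width // 2
    num = den = 0.0
    for k in range(-half, half + 1):
        j = i + k
        if 0 <= j < len(loss_flags):   # window truncated at trace edges
            w = weight(k) if weight else 1.0
            num += w * loss_flags[j]
            den += w
    return num / den
```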

    During the emulation, this model operates similarly to model 1. A packet


that arrives at the emulated path at (emulation/simulation) time t is dropped with the probability P associated with the nearest (in time) surrounding probe send time. Again, the randomization is done by generating a random number r from a uniform distribution with extreme values 0 and 1. If, and only if, r < P, the decision is to drop the packet. In the figure, a packet arriving at time t1 during the emulation will be dropped with the probability Pb that is associated with the send time tb. This is the loss probability of all packets arriving during the time span marked by the dotted line.

    Bernoulli Model

The Bernoulli loss model considers the random variables {Li}, i = 1, . . . , N, as independent and identically distributed (IID). This means that the probability of Li being 1, Pi, is independent of Lj for all j ≠ i and that the probability Pi is the same for all i, i.e. Pi = P. Thus, the Bernoulli model only has one state, as depicted in Figure 4.

Figure 4: The Bernoulli loss model. There is only one state and packets are lost with probability P and not lost with probability 1 − P.

P is estimated from the packet trace as

P̃ = (Σ_{i=1}^{N} li) / N = ltot / N    (1)

i.e. the number of lost probe packets divided by the total number of probe packets that were sent.

The Bernoulli model is used as follows during the emulation. When requested to make a drop decision, a random number r is generated from a uniform distribution with extreme values 0 and 1. If r < P, the decision is to drop the packet. Otherwise the decision is not to drop the packet.
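The estimation (Equation 1) and the drop decision amount to the following illustrative Python sketch (names are ours):

```python
import random

def estimate_bernoulli_p(loss_flags):
    """Equation 1: P~ = l_tot / N."""
    return sum(loss_flags) / len(loss_flags)

def bernoulli_says_drop(p, rng=random.random):
    """Drop if a uniform random number r in [0, 1) satisfies r < P."""
    return rng() < p
```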

    2.2 Delay Models

The one-way delay di of packet i that traverses a network path with M hops is given by

di = Σ_{j=1}^{M} [ dp(j) + dt(j) + dr(j) + dq,i(j) ]    (2)


where dp(j) and dt(j) are the propagation and transmission delays, respectively, of link j, dr(j) is the processing (e.g. lookup and forwarding) delay in router j, and dq,i(j) is the delay that arises when there is queueing in router j. The transmission delay is proportional to the packet size si, i.e. dt(j) = (1/L(j)) · si where L(j) is the link bandwidth of link j. The remaining delay components can be considered independent of packet size. The queueing delay varies over time (hence the extra i in the subscripted index) and is therefore dynamic, while the other delays can be considered static with respect to time.3

By regrouping the delay components, Equation 2 can be rewritten, now acknowledging di's dependence on si, as

di(si) = Σ_{j=1}^{M} (dp(j) + dr(j)) + Σ_{j=1}^{M} dt(j) + Σ_{j=1}^{M} dq,i(j)
       = Σ_{j=1}^{M} (dp(j) + dr(j)) + [ Σ_{j=1}^{M} 1/L(j) ] · si + Σ_{j=1}^{M} dq,i(j)    (3)
       = dS + β · si + Σ_{j=1}^{M} dq,i(j)
       = dS + dP(si) + dQ(i)

where β = Σ_{j=1}^{M} 1/L(j), dS = Σ_{j=1}^{M} (dp(j) + dr(j)) is the static component, dP(si) = β · si is the component that is proportionally dependent on the packet size si, and dQ(i) = Σ_{j=1}^{M} dq,i(j) is the dynamic component.

In all delay models, the non-dynamic delay components dS and dP are modeled separately from dQ. More specifically, a new packet size independent time series d′i with mean zero is created as illustrated in Figure 5. First, dS and β are estimated. This is done by fitting a straight line so that it lies under the smallest di for each packet size in the trace. The details of how the estimation is done can be found in Appendix A. Given those estimates (d̃S and β̃), d′i is calculated as

d′i = di(si) − (d̃S + β̃ · si) − d̄    (4)

where d̄ is the mean value

d̄ = mean( di(si) − (d̃S + β̃ · si) )    (5)

calculated over all i.
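Given the estimates d̃S and β̃ (obtained by the line-fitting procedure of Appendix A, which is not reproduced here), Equations 4 and 5 amount to the following illustrative Python sketch:

```python
from statistics import mean

def remove_static_components(sizes, delays, d_s, beta):
    """Build the zero-mean, packet-size-independent series d'_i
    (Equations 4 and 5) from the estimates d_s and beta."""
    residuals = [d - (d_s + beta * s) for s, d in zip(sizes, delays)]
    d_bar = mean(residuals)                    # Equation 5
    d_prime = [r - d_bar for r in residuals]   # Equation 4
    return d_prime, d_bar
```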

3 The processing delay actually depends on the packet size and can also vary slightly over time. However, it is usually so small that we consider it independent of packet size and constant.


Figure 5: Removal of static delay components.

It should be clear that d′i only reflects the dynamic component dQ. The d′i time series is used to determine the parameters (and possibly also to control the operation) of the models that will be described below. These models consequently model dQ, and the delay value they determine when operating is denoted ddynamic. The overall delay, d, that a packet experiences as it traverses the emulated path is therefore

d = ddynamic + d̃S + β̃ · s + d̄    (6)

where s is the size of the packet for which the delay value is returned.

When the packet trace is obtained from measurements in a real network, clock offset and skew are two potential problems. They arise when the clocks at the sender and the receiver are not synchronized and run at different frequencies. Several methods for removing clock skew from a time series of measured delays have been proposed [16, 23]. Any of these methods can be applied to the di time series should there be signs of it being biased due to clock skew4. Naturally, that should be done before d′i is created from di(si).

    Model 1

    This delay model is in spirit very similar to loss model 1 since it infers packetdelays directly from the packet trace using linear interpolation. It is illustrated

4 The skew compensation is preferably calculated using the di delays for one specific packet size. That compensation can then be applied to all di values.


in Figure 6a. The solid arrows show the delay experienced by the probe packets (in this case sent periodically).5 An X indicates that the probe packet was lost.

In this model, the delay of a packet arriving at the emulated path at (emulation/simulation) time t is the value at t of the linear interpolation (shown as the dashed lines in the figure) of the delays of the probe packets sent immediately before and after time t. This is the case at time t1 in the figure. If one of those probe packets was lost, the delay value is simply the delay value of the probe packet that was not lost (see t2 and t4 in the figure). If both probe packets were lost, the delay equals the delay of the last non-dropped packet (see t3 in the figure).

Figure 6: Two simple models where packet delay is determined directly from the trace file. a) Delay calculated from the linear interpolation of the delays of the nearest surrounding probe packets. b) Mean and variance of the delay are calculated over a time window of width W. The delay is a random number from a statistical distribution with the calculated mean and variance.

    Model 2

Similarly to loss model 2, this delay model (illustrated in Figure 6b) uses a window of width W. For every send time in the packet trace, the mean and standard deviation of the delay are calculated over that window. It is possible to attach weights to the delay values in the window whereby a weighted average is calculated.

5 Hence, the di values are shown and not the d′i values that are actually used in the interpolation. However, that is not crucial to this description of the model.


During the emulation, this model operates as follows: The delay of a packet that arrives at the emulated path at (emulation/simulation) time t is a random number drawn from a normal distribution with the parameters associated with the nearest (in time) surrounding probe packet send time. In the figure, the packet arriving at time t1 during the emulation will experience a random delay based on the parameters associated with the send time ta. These parameters are used throughout the time span marked by the dashed line.

    Independent Average Delay Model

This model is essentially equivalent to delay model 2 with an infinitely large window and all weights set to one. Hence, the random delay variables {Di}, i = 1, . . . , N, are considered independent and identically distributed. As illustrated in Figure 7, this model can be thought of as having one state. That state contains the mean and standard deviation parameters of the normal distribution (which is assumed by the model). These parameters are estimated by calculating the mean and standard deviation of all delays in the probe packet trace.

Figure 7: The independent average delay model. Packet delay is a random number drawn from a statistical distribution with mean and variance calculated from all observed delays in the packet trace.

Whenever the model is requested to produce a delay value during the emulation, a random number is generated from a normal distribution with the estimated parameters. That random number is used as the delay value.
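In sketch form (illustrative Python; the fitted series would be the zero-mean d′i values):

```python
import random
from statistics import mean, stdev

def fit_independent_average(d_prime):
    """Estimate the normal-distribution parameters from all delays."""
    return mean(d_prime), stdev(d_prime)

def draw_delay(mu, sigma, rng=random.gauss):
    """One dynamic-delay sample per request during the emulation."""
    return rng(mu, sigma)
```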

    3 Implementation and Simulator Issues

The emulated path functionality is implemented as a new queue class, EmulPathQ, in ns. The encapsulation is such that it should not be difficult to modify it so that it can be used in a traditional emulator such as NISTNet. For emulation/simulation times that are not covered by the packet trace, EmulPathQ acts as a normal FIFO queue with infinite buffer capacity, zero service time and zero propagation delay. This means that the emulated path is effectively short-circuited during time spans not covered by the packet trace.

Figure 8 gives a conceptual view of the EmulPathQ class. It holds pointers to the loss and delay model objects, EPLossModel and EPDelayModel, respectively. Every loss model is derived from the EPLossModel class. The most important


Figure 8: Conceptual view of the three primary components of the network path emulation implementation in ns: the EmulPathQ class, the EPLossModel class, and the EPDelayModel class.

method of that class is modelSaysDrop which, given the arrival time of a packet and other packet information (e.g. size), returns 1 if the packet, according to the loss model, should be dropped and 0 otherwise. All delay models are derived from EPDelayModel and the most important method of that class is modelDeterminedDelay. Given similar input parameters as modelSaysDrop, it returns the delay a packet should experience according to the delay model. By having this class hierarchy it is very easy to implement new loss and delay models.

The most important method in EmulPathQ is recv. It is invoked every time a packet arrives at the emulated path. First it calls the modelSaysDrop method of the EPLossModel-derived loss model. If dictated so by the loss model, the packet is dropped. Otherwise the modelDeterminedDelay method of the delay model derived from the EPDelayModel class is called to determine the delay the packet should experience. Another important method in the EmulPathQ class is parseTraceFile. It reads and parses a trace file and feeds that information to the loss and delay models.
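The dispatch in recv can be summarized as follows. This is a minimal Python analogue of the C++ class, under our own naming; the actual ns implementation differs in detail:

```python
class EmulPathQ:
    """Minimal analogue of the recv dispatch: every arriving packet is
    either dropped by the loss model or delayed by the delay model."""

    def __init__(self, loss_model, delay_model):
        self.lm = loss_model    # must provide model_says_drop(t, size)
        self.dm = delay_model   # must provide model_determined_delay(t, size)

    def recv(self, t, size):
        if self.lm.model_says_drop(t, size):
            return None         # packet dropped
        return self.dm.model_determined_delay(t, size)
```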

A trace file has a very simple text format (see Table 1). Each row corresponds to a probe packet and there are three columns on every row. These columns contain (starting with the left-most column): the send time, the size (in bytes), and the delay (set to -1 to indicate loss), respectively, of the probe packet. In addition, a trace file also contains information that specifies the loss and delay


Table 1: The layout of a trace file.

Send time   Packet size   Delay
7.093172    566           0.141119
7.096224    566           0.139094
7.107514    566           -1
7.107557    566           0.138417
...         ...           ...

models that should be used and provides configuration parameters (should there be any) for those models.
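The three-column rows can be parsed as in the following illustrative Python sketch (parseTraceFile in ns also reads the model-specification lines, which are simply skipped here):

```python
def parse_trace_lines(lines):
    """Parse 'send_time size delay' rows; delay == -1 marks a loss."""
    send_times, sizes, delays, losses = [], [], [], []
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # e.g. header or model-configuration lines
        t, s, d = float(parts[0]), int(parts[1]), float(parts[2])
        send_times.append(t)
        sizes.append(s)
        losses.append(d < 0)
        delays.append(d if d >= 0 else 0.0)  # lost packet: delay taken as 0
    return send_times, sizes, delays, losses
```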

To specify an emulated path in an ns simulation is straightforward and very similar to specifying an ordinary simplex link. Suppose that $n1 and $n2 are two nodes. A 10 Mbps simplex link with propagation delay 30 ms is specified as:

    $ns simplex-link $n1 $n2 10Mb 30ms DropTail.

    and an emulated path between the same two nodes is specified as:

    $ns emulated-path $n1 $n2 tracefile.

    where tracefile is the trace file to be used.

    4 Experiments

In this section we describe and discuss the experiments that we have performed to evaluate the models described in Section 2 and to study different probing schemes. The following notation will be used:

• Tp is the average time interval between the sending of two consecutive probe packets.6

• T is the average inter-packet time of the evaluation packets (the packets in the flows that traverse the emulated path).

• ∆t is the inter-packet time of two consecutive evaluation packets. ∆t ∈ Exp(T) means that ∆t is a random variable from an exponential distribution with mean T.

All experiments are simulation based (using ns) and the simulation time has been 600 seconds. For loss and delay models 2, weighted averages are calculated.

6 We will refer to the time between the sending of two consecutive packets as the inter-packet time.


The weights are 1/|i|, where i = 1 for the probe packet around which the window is centered. For the remaining probe packets covered by the window, i grows by one with distance from the center probe packet, so i = ±2, ±3, . . . etc.
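The weighting just described corresponds to the following (illustrative Python; offset 0 is the window centre):

```python
def weight(offset):
    """1/|i| weights: i = 1 at the window centre, and |i| grows by one
    per probe away from the centre (i = +/-2, +/-3, ...)."""
    return 1.0 / (abs(offset) + 1)
```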

    4.1 Simulation Setup

The network topology we have used in the ns simulations is shown in Figure 9. The network path that is probed consists of seven routers. Each router serves packets first-come-first-served (FCFS), has a queue capacity of 100 packets and drops packets at the tail of the queue whenever the queue becomes full (so-called tail-dropping). The probe source host is connected to router R0 and the probe destination host is connected to router R6. The probe flow is either UDP based (with a strictly periodic spacing or a uniform, exponential, or Pareto distributed spacing) or TCP Reno based (an infinite FTP transfer).

Figure 9: The topology used in the non-emulating ns simulations. Shaded circles marked with Ri (i = 0, 1, . . . , 6) represent routers. The shaded hexagons represent the probe source and destination hosts. Circles marked with H, F, and U represent HTTP, FTP, and UDP (Pareto on-off) cross traffic hosts, respectively.

The cross traffic is a combination of HTTP flows, persistent TCP flows (infinite FTP transfers) and on-off UDP flows (with Pareto distributed on and off times7). The TCP variant that is used is TCP Reno. The HTTP traffic follows the default empirical data provided by ns. All HTTP servers (ten in total) are connected to router R0 whereas the HTTP clients (of which there are also ten) are connected to router R6. The propagation delay between a TCP client/server and its corresponding router is uniformly distributed in the interval [10, 20] ms.

7 The shape parameter equals 1.5 and the average on and off times are 500 ms.


Table 2: The number of UDP (Pareto on-off) and FTP cross traffic clients and servers attached to the routers Ri (i = 0, 1, . . . , 6).

↓Src \ Dst→   R0    R1    R2    R3    R4    R5    R6    Σ→
R0            -/-   4/3   -/2   -/-   -/1   -/-   -/3   4/9
R1            -/-   -/-   4/2   -/2   -/1   -/1   -/-   4/6
R2            -/-   -/-   -/-   4/3   -/2   -/1   -/-   4/6
R3            -/-   -/-   -/-   -/-   4/2   -/4   -/-   4/6
R4            -/-   -/-   -/-   -/-   -/-   4/3   -/3   4/6
R5            -/-   -/-   -/-   -/-   -/-   -/-   4/6   4/6
R6            -/-   -/-   -/-   -/-   -/-   -/-   -/-   -/-
Σ↓            -/-   4/3   4/4   4/5   4/6   4/9   4/12  24/39

Table 2 shows how the UDP and FTP flows traverse the routers. A cell [Ri, Rj] contains two values, nu/nf. These are the number of UDP flows (nu) and FTP flows (nf) flowing from clients attached to router Ri to servers attached to router Rj. The last row (column) contains the total number of servers (clients) attached to every router. In the R0 row, it can be seen that there are four UDP flows and three FTP flows traversing the R0 → R1 link. The same row also shows that there are two FTP flows traversing the links R0 → R1 → R2, one FTP flow traversing the links R0 → R1 → R2 → R3 → R4 and three FTP flows traversing the links R0 → R1 → · · · → R6. The ninth and last column (Σ→) shows the total number of UDP and FTP clients, four and nine, respectively. The cells in the ninth row (Σ↓) show that there are, for instance, four UDP and six FTP servers connected to router R4.

For the evaluation of the different models, the topology illustrated by Figure 10 has been used. The forward path from the probe source host to the probe destination host is the emulated path. The reverse path is a simplex link with link bandwidth

1 / (1/10 + 1/34 + 1/20 + 1/15 + 1/34 + 1/15 + 1/20 + 1/10) ≈ 2 Mbps

and propagation delay 2 + 7 + 12 + 15 + 12 + 10 + 5 + 2 = 65 ms. That is, the link bandwidth and the propagation delay of the simplex link equal those of the return path in the full topology (see Figure 9). There is no cross traffic on the simplex link.
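The arithmetic above can be checked directly (illustrative Python; the link values are read off the expression above):

```python
# Bandwidths (Mbps) and propagation delays (ms) of the eight links on
# the return path, as they appear in the calculation above.
links_mbps = [10, 34, 20, 15, 34, 15, 20, 10]
bandwidth_mbps = 1 / sum(1 / l for l in links_mbps)   # ~ 2 Mbps
prop_delays_ms = [2, 7, 12, 15, 12, 10, 5, 2]
total_delay_ms = sum(prop_delays_ms)                  # 65 ms
```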

The savings in simulation time and amount of trace information generated by the simulator are substantial when the network path is emulated. As an example we provide some numbers for simulations performed on a Linux-based Pentium III 800 MHz machine equipped with 256 MB of RAM. The simulations of the topology in Figure 9, i.e. without emulation, typically last for 33 ± 2 minutes and generate approximately 1.5 GB of trace information. When the network path is instead emulated, the simulation time drops to 20 ± 2 seconds and the amount of generated trace information is about 4.5 MB.


Figure 10: The topology used in the emulating ns simulations. The shaded hexagons represent the probe source and destination hosts.

    4.2 Probing Schemes

Since the probe packet trace is central to the emulation, a natural starting point is to study how packet traces resulting from different probing schemes differ from each other. There have been several probing based studies of packet loss and delay in the Internet (e.g. [4, 6, 10, 15, 22, 27]). However, these studies have not been performed from the viewpoint of trace-driven network emulation and most have not focused on the probing aspects per se. Sampling strategies (which is essentially what we refer to as probing schemes) are discussed, primarily in general terms, in the RFCs [2, 3, 24] and in [7, 9]. This section complements the RFCs by presenting simulation results and by studying in some detail the TCP probing scheme.

Ideally, the probe packets should merely sense the network state8 without affecting the network or the existing traffic. That is, the probe traffic should not be so aggressive that it significantly alters the behavior of the cross traffic. It is obviously impossible to avoid affecting the cross traffic completely since the probe packets will indeed traverse the network path and thus also affect it. The question is then how strong the impact of the probe packets is given a certain probing scheme.

    Probing Schemes vs. Loss Rates

Table 3 shows the loss rates of the probe flow and the aggregated cross traffic for different probing schemes. The probes are either sent periodically with a fixed inter-packet time Tp or with an inter-packet time that is a random variable from a uniform, exponential or a Pareto distribution with mean Tp. The network path is also probed with an infinite TCP Reno source. The average inter-packet time for the TCP probe flow is approximately 40 ms. The loss rates are presented for each hop of the network path where losses occur (columns 2 through 5) as

8By which we essentially mean queue levels, link capacities, and link propagation delays. These factors translate into the observable delays and losses of the probe packet flow.

  • 18 Paper E: Trace-Driven Network Path Emulation

    well as totally (although only for the probe flow9).
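The schemes compared in Table 3 differ only in the process that generates the inter-packet times. As a concrete illustration, probe schedules with mean gap Tp could be generated as follows (a sketch of our own; the function names and the Pareto shape parameter are assumptions, since the paper does not state them):

```python
import random

def inter_packet_times(scheme, tp, n, alpha=1.5):
    """Generate n inter-packet times with mean tp seconds.
    alpha is an assumed Pareto shape; the paper implies an
    infinite-variance Pareto, i.e. 1 < alpha <= 2."""
    if scheme == "periodic":
        return [tp] * n
    if scheme == "uniform":        # uniform on [0, 2*tp] has mean tp
        return [random.uniform(0.0, 2.0 * tp) for _ in range(n)]
    if scheme == "exponential":
        return [random.expovariate(1.0 / tp) for _ in range(n)]
    if scheme == "pareto":         # scale xm chosen so the mean is tp
        xm = tp * (alpha - 1.0) / alpha
        return [xm / (1.0 - random.random()) ** (1.0 / alpha) for _ in range(n)]
    raise ValueError("unknown scheme: " + scheme)

def send_times(gaps):
    """Probe send times are the cumulative sums of the gaps."""
    t, out = 0.0, []
    for g in gaps:
        t += g
        out.append(t)
    return out
```

With shape 1 < alpha ≤ 2 the Pareto gaps have a finite mean but infinite variance, which matches the resolution-variance discussion later in this section.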

Table 3: Loss rates for the probe flow and the cross traffic, specified per hop where losses occur and totally. The loss rates are given in percent as m ± c, where m is the mean value and c is the confidence interval at the 95% level. For each probing scheme, 16 simulations, each with different random seeds, have been performed.

Type       Tp     Hop 2 [%]   Hop 3 [%]   Hop 5 [%]   Hop 6 [%]   Total [%]

Periodic   0.02s  0.32±0.04   0.60±0.04   1.73±0.08   0.31±0.04   2.91±0.09
X-traffic         0.38±0.04   0.76±0.04   2.04±0.06   0.38±0.04   -
Periodic   0.04s  0.32±0.03   0.61±0.04   1.71±0.07   0.35±0.04   2.93±0.11
X-traffic         0.37±0.03   0.72±0.03   1.99±0.06   0.39±0.04   -
Periodic   0.08s  0.26±0.03   0.60±0.06   1.73±0.09   0.32±0.04   2.85±0.12
X-traffic         0.33±0.03   0.73±0.02   1.94±0.06   0.34±0.03   -
Uniform    0.02s  0.33±0.03   0.57±0.04   1.74±0.05   0.34±0.03   2.92±0.08
X-traffic         0.41±0.03   0.74±0.03   2.06±0.05   0.40±0.03   -
Uniform    0.04s  0.29±0.05   0.63±0.05   1.78±0.08   0.31±0.05   2.94±0.12
X-traffic         0.35±0.04   0.75±0.04   2.02±0.05   0.39±0.04   -
Uniform    0.08s  0.31±0.06   0.59±0.07   1.75±0.09   0.30±0.04   2.88±0.10
X-traffic         0.35±0.06   0.71±0.04   2.01±0.05   0.36±0.03   -
Expon.     0.02s  0.35±0.04   0.63±0.03   1.84±0.07   0.34±0.03   3.09±0.08
X-traffic         0.40±0.04   0.71±0.03   2.03±0.05   0.38±0.04   -
Expon.     0.04s  0.32±0.04   0.64±0.04   1.77±0.08   0.35±0.06   3.02±0.10
X-traffic         0.36±0.03   0.72±0.04   2.02±0.06   0.38±0.04   -
Expon.     0.08s  0.32±0.04   0.68±0.07   1.81±0.11   0.32±0.04   3.06±0.12
X-traffic         0.34±0.03   0.71±0.03   1.96±0.04   0.36±0.04   -
Pareto     0.02s  0.41±0.07   0.69±0.05   1.82±0.11   0.37±0.06   3.21±0.10
X-traffic         0.39±0.05   0.73±0.05   2.02±0.07   0.39±0.05   -
Pareto     0.04s  0.36±0.06   0.62±0.06   1.75±0.08   0.35±0.04   3.02±0.11
X-traffic         0.40±0.05   0.74±0.04   2.03±0.06   0.39±0.05   -
Pareto     0.08s  0.33±0.05   0.62±0.04   1.78±0.11   0.35±0.05   3.02±0.12
X-traffic         0.34±0.03   0.71±0.03   1.95±0.07   0.39±0.04   -
TCP        0.04s  0.45±0.07   0.96±0.11   2.62±0.15   0.45±0.05   4.34±0.13
X-traffic         0.36±0.04   0.71±0.03   2.01±0.05   0.37±0.04   -

Intuitively, one would expect the over-all probe packet loss rate to increase as Tp is decreased, i.e. as the probe rate increases. As can be seen from the table, this is not necessarily the case. For instance, the loss rates for the periodic and uniform probe flows with Tp = 20 ms are lower than the loss rates for the same probing schemes with Tp = 40 ms. However, the confidence intervals for the different Tp values for each probing scheme overlap to a large degree (see Figure 11). Furthermore, the per-hop loss rates for the cross traffic do not show

9The information we collected from the ns simulations unfortunately made it impossible to calculate the over-all loss rates for the aggregated cross traffic.

  • 4 Experiments 19

a particularly strong pattern of increasing with smaller Tp values. This can be taken as an indication that the probe traffic (with these Tp values and in this scenario) is only a small fraction of the total network traffic. The impact of the probe traffic is therefore limited, which is manifested in the relatively fixed loss rates for the different Tp values.

From the table it can be seen that the exponential, the Pareto and, in particular, the TCP probe flows tend to have a slightly higher loss rate than the other probe flows. The reason may be that the exponential, Pareto, and TCP (in particular during slow-start) probe flows have a larger variance in Tp than the periodic and uniform probe flows. When routers drop packets at the tail of the queue in overflow situations (as in the simulations), it is likely that bursty flows will suffer more (in terms of losses) than less bursty flows (with the same average probe rate).

Figure 11: Loss rates with confidence intervals at the 95% level for the different probe flows. This plot corresponds to the values in the last column of Table 3.

Probing Schemes vs. Loss Bursts

The loss rate only tells what fraction of the probe packets are lost. Another way of describing losses is in terms of loss burstiness. A loss burst of length k is defined as (exactly) k consecutive packet losses. That is, the packets immediately before and after the k lost packets are not lost. Figure 12 shows the loss burst distributions for periodic and exponential probe flows for different Tp values. It can be seen that the loss bursts become longer as Tp is decreased.


The loss bursts of the exponential probe flow are also longer than those of the periodic probe flow for a given Tp.
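The burst statistics behind Figure 12 follow directly from the definition above. A minimal sketch (our own code, not the authors') counts runs of consecutive losses in a loss indicator sequence:

```python
from collections import Counter

def loss_burst_lengths(lost):
    """Count loss bursts: maps burst length k to the number of runs
    of exactly k consecutive lost packets. `lost` is a sequence of
    booleans, True meaning the packet was lost."""
    bursts = Counter()
    run = 0
    for is_lost in lost:
        if is_lost:
            run += 1
        elif run:
            bursts[run] += 1
            run = 0
    if run:                  # a burst ending at the end of the trace
        bursts[run] += 1
    return bursts

def burst_cdf(bursts, k):
    """Fraction of bursts with length at most k (cf. Figure 12)."""
    total = sum(bursts.values())
    return sum(c for length, c in bursts.items() if length <= k) / total
```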

Figure 12: Loss burst length distributions for UDP probe flows with fixed and exponentially distributed inter-packet times, Tp = 20 ms, 40 ms, and 80 ms. Note that the y axis starts at 0.7.

Neither of these observations is surprising, since the routers schedule packets FCFS and do tail-dropping. This means that as Tp decreases, the risk increases that the full queue buffer in the congested router will not drain between the arrivals of several consecutive probe packets. In the exponential probe flow, there will be probe packets with an inter-packet time that is less than Tp. This is why the exponential probe flow will suffer from longer loss bursts than the periodic probe flow.

Figure 13 shows the distribution of loss bursts for periodic, uniform, exponential, Pareto, and TCP probe flows where Tp = 40 ms. It can be seen that for the periodic probe flow, more than 90% of the losses are single losses. For the remaining probe flows, starting with the uniform and ending with the TCP probe flow, the loss burst distributions tend towards longer and longer loss bursts. For the TCP probe flow, only 73% of the loss bursts have length 1. Again, this can be explained similarly as above, i.e. in terms of inter-packet time variances (or burstiness) and how the congested routers drop packets (tail-dropping).

Probing Schemes vs. Biases

The TCP probing scheme is different from UDP-based probing schemes in that TCP is adaptive (primarily on the basis of packet loss, but packet delay is also


Figure 13: Loss burst length distributions for UDP probe flows with fixed, uniformly, exponentially, and Pareto distributed inter-packet times and for a TCP Reno probe flow. For all flows, Tp ≈ 40 ms. Note that the y axis starts at 0.7.

important). When the TCP probe sender detects congestion10 it will reduce its send rate. Hence, when congestion is detected, fewer probe packets will be sent. On the flip side, as long as no congestion is detected, the TCP probe rate will increase, i.e. more and more probe packets will be sent.

The adaptivity of TCP raises the question of whether there will be biases (with respect to when probe packets are sent and when there is congestion) in the probe packet trace when the TCP probing scheme is used. For instance, if the TCP probing scheme tends to send probe packets when there is congestion on the probed network path, it will experience more losses (i.e. a higher loss rate) than it otherwise would.

We have investigated whether the TCP probing scheme is biased with respect to loss and one-way delay. The cross traffic losses have been observed in periods when the TCP sender is sending probe packets and in periods when the TCP sender is not sending probe packets. In addition, a periodic probe flow with Tp = 20 ms has been sent in parallel with the TCP probe flow (traversing exactly the same path albeit with different source and destination hosts). The one-way delays of the packets in this periodic flow have also been observed, again in periods when the TCP sender is sending probe packets and in periods

10That happens when the TCP sender discovers that one or several acknowledgements are missing. A missing acknowledgement is due either to loss of the acknowledgement itself or of the packet that was to be acknowledged.


when it is not sending probe packets. The observations are summarized in Table 4. Exactly how the values in this table have been calculated is described in Appendix B.
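The conditioning itself is simple to sketch. Assuming timestamped observations and a list of TCP probe send times (our own reconstruction; the authoritative procedure is in Appendix B of the paper, which is not part of this excerpt):

```python
from collections import defaultdict

def bin_stats(obs_times, obs_values, probe_times, bin_len=0.04):
    """Mean of obs_values split by whether a (TCP) probe packet was
    sent in the same fixed-length bin. Returns (mean_without,
    mean_with). Our own reconstruction, not the authors' code."""
    per_bin = defaultdict(list)
    for t, v in zip(obs_times, obs_values):
        per_bin[int(t // bin_len)].append(v)
    probe_bins = {int(t // bin_len) for t in probe_times}
    without, within = [], []
    for b, vals in per_bin.items():
        (within if b in probe_bins else without).extend(vals)
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(without), mean(within)
```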

Table 4: Cross traffic losses and one-way delay conditioned on probe packet send and non-send instants. Each bin is 40 ms long.

                      Per bin                                          Per packet
               Without packet     With packet       Overall
Type    Tp     Mean     Std       Mean     Std      Mean     Std       Mean     Std

CROSS TRAFFIC LOSSES [# packets]
Exp.    0.02s  3.66     9.06      3.64     8.68     3.64     8.73      3.65     8.60
Par.    0.02s  3.79     9.20      3.66     8.60     3.71     8.85      3.63     8.59
TCP     0.04s  3.28     7.87      2.95     7.05     3.19     7.67      2.90     6.95

PERIODIC 0.02s DELAYS [s]
Exp.    0.02s  0.120    0.029     0.120    0.029    0.120    0.029     0.120    0.029
Par.    0.02s  0.120    0.028     0.122    0.030    0.121    0.030     0.122    0.030
TCP     0.02s  0.123    0.030     0.125    0.030    0.126    0.031     0.124    0.030

The six middle columns show the means and standard deviations of the cross traffic losses (the CROSS TRAFFIC LOSSES rows) and of the one-way delays of the periodic probe flow (the PERIODIC 0.02s DELAYS rows) per bin. Each bin is a fixed time interval of 40 ms. The "binning" is done since the TCP probe packets are not sent with a fixed inter-packet time. The "Without packet" and "With packet" columns show values conditioned on whether TCP probe packets are sent in the bins or not. The "Overall" columns show values calculated over all bins. The last two columns show the means and standard deviations of the cross traffic losses and of the one-way delays for the periodic probe flow per TCP probe packet. For comparison, the table also contains values for the non-adaptive exponential and Pareto probing schemes (the Exp. and Par. rows).

As can be seen from the table, the average number of dropped cross traffic packets is slightly higher for bins where TCP probe packets are sent compared to bins where TCP probe packets are not sent. The former value is also higher than the overall average cross traffic losses, whereas the latter value is lower than the overall average cross traffic losses. The same goes for the standard deviation. However, these patterns can also be observed for the non-adaptive probe flows. The table also shows that the average one-way delay of packets in the periodic flow is slightly higher for bins where TCP packets are sent compared to bins where TCP packets are not sent. Furthermore, both delay values are slightly less than the average delay calculated over all bins. Again, we essentially find the same pattern for the non-adaptive probe flows.

In all, the TCP probe flow does not stand out in any particular way in Table 4 compared to the non-adaptive exponential and Pareto probe flows. From that, our conclusion is that TCP does not appear to be biased in terms of cross traffic


    losses and one-way delay.

Probing Schemes vs. Resolution Distribution

Another aspect of a probing scheme that is of importance to trace-driven emulation is the resolution of the probe flow and how that resolution is distributed over time. By resolution we mean how frequently the path is probed, i.e. how frequently probe packets are sent. Intuitively, it is desirable to have a high resolution since that means that the state of the network path will be known often. But that means that the path has to be probed at a high rate. As pointed out in the beginning of Section 4.2, if the probe rate is too high, the probe flow might affect the behavior of the cross traffic.

For the periodic probing scheme, the distribution of the resolution is fixed and equal over the entire probing session (since a fixed inter-packet time is used). For the other non-adaptive probing schemes, given a certain Tp, the resolution will vary in accordance with the statistical properties of the process that generates the inter-packet times. This means that with the uniform probing scheme, the variance in resolution is Tp²/3, whereas with the exponential and Pareto probing schemes, the resolution variance is Tp² and ∞, respectively.

Figure 14: Time between consecutive probe packets as a function of simulation time. The upper graph is for a probe flow with exponentially distributed inter-packet times with mean Tp = 40 ms. The lower graph is for a TCP Reno probe flow, also with mean inter-packet time Tp ≈ 40 ms.

The consequence of a larger variance is that the periods when no probe packets are sent


will be longer. During those times, the state of the network path will not be observed. It is reasonable to suspect that those silent periods will reduce the accuracy of the trace-driven emulation models.

Figure 14 shows the time that passes between the sending of consecutive probe packets for the exponential probing scheme (with Tp = 40 ms) and for the TCP probing scheme (also with Tp ≈ 40 ms). It can be seen that the resolution for the TCP probe flow can be very coarse at times, e.g. there is an almost 2.5 second gap 442 seconds into the simulation. At most times, however, the resolution is much higher (i.e. the gaps are smaller). The non-adaptive exponential probe flow shows a very different distribution of resolution over time. There are gaps, but none of them is longer than 0.35 seconds. Although not shown in the figure, the Pareto probing scheme with its infinite variance also has a very coarse-grained resolution at times.
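The resolution-variance figures quoted above are the standard variances of the gap distributions and can be checked empirically (assuming, as before, that the uniform scheme draws gaps from [0, 2Tp], which a mean of Tp requires for an interval starting at 0):

```python
import random

def gap_variance(gaps):
    """Population variance of a list of inter-packet gaps."""
    m = sum(gaps) / len(gaps)
    return sum((g - m) ** 2 for g in gaps) / len(gaps)

random.seed(1)
tp, n = 0.04, 200_000
uniform_gaps = [random.uniform(0.0, 2.0 * tp) for _ in range(n)]
expo_gaps = [random.expovariate(1.0 / tp) for _ in range(n)]

# Uniform on [0, 2*Tp]: variance (2*Tp)^2 / 12 = Tp^2 / 3.
# Exponential with mean Tp: variance Tp^2.
# The Pareto case is omitted here: with infinite variance the
# sample variance does not converge as n grows.
```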

To summarize this section, when the probe flow is only a tiny fraction of the overall traffic on the probe path, the different probing schemes do not seem to result in drastically different observed loss rates. The exception, to some degree, is the TCP probe flow, which showed a slightly higher (by about 1.3 percentage points) loss rate than the other probing schemes, even when varying Tp. From a loss burst perspective, the probing schemes differ more significantly, as more variance in Tp tends to push the loss burst length distribution towards longer bursts. The TCP probing scheme also stands out in this respect by having the longest loss bursts. Our study does not indicate that the TCP probing scheme has a bias towards probing the network path at times of congestion or times of non-congestion. The resolution of the TCP probing scheme is highly variable, though. At certain times (especially during slow start), probe packets are sent with a very small inter-packet time, whereas at other (silent) times no probe packets are sent at all. From the perspective of trace-driven emulation, long silent periods are undesirable since the state of the network will not be observed then.

    4.3 Loss Model Evaluations and Comparisons

To study the performance of the different loss and delay models we use trace files obtained by probing the network path periodically with Tp = 20 ms, 40 ms, and 80 ms. The reason is that with a fixed inter-packet time (instead of inter-packet times from some statistical distribution, e.g. uniform), it is easier to study what the effects of different Tp values are on the models. However, as noted in [24], with periodic probing there is a risk of synchronization with cross traffic variations. We have no indications that our simulations suffer from this.

The evaluation flows (i.e. the flows that traverse the emulated path11) have exponentially distributed inter-packet times with mean T. The mean value T has been varied among different experiments. In the interest of space, the graphs and tables are shown for T = 20 ms.12 UDP-based evaluation flows are used

11Hence, there is one evaluation flow for each loss/delay model combination.

12These graphs and tables are representative for all T values that have been used.


since that makes it possible to study the loss and delay models independently. For each combination of Tp and T, there is a validation flow (also with exponentially distributed inter-packet times with mean T) that traverses the probed network path in parallel with the probe flow. The "goodness" of a particular model is determined by how well the evaluation flow of that model matches the corresponding validation flow. For the loss models, the metrics used for the comparisons are loss rate, loss burst distribution, and loss autocorrelation function (ACF). The loss rate records the loss ratio of the packets, the loss burst distribution reveals to what extent losses are consecutive, and the loss autocorrelation function captures temporal dependencies of the losses.
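For reference, the temporal-dependence metric can be computed from a loss indicator sequence as follows. This is the standard biased sample ACF estimator; the paper's exact procedure is given in its Appendix C (not part of this excerpt) and may differ in detail:

```python
def loss_acf(lost, max_lag):
    """Sample autocorrelation of a 0/1 loss indicator sequence at
    lags 0..max_lag (in packets). Standard biased estimator; the
    paper's exact procedure (its Appendix C) may differ."""
    x = [1.0 if l else 0.0 for l in lost]
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    if var == 0.0:                    # all-lost or all-received trace
        return [1.0] + [0.0] * max_lag
    acf = []
    for lag in range(max_lag + 1):
        cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n
        acf.append(cov / var)
    return acf

def loss_rate(lost):
    """Fraction of packets lost."""
    return sum(1 for l in lost if l) / len(lost)
```

For a periodic trace, a lag of k packets corresponds to a time lag of roughly k·Tp, which is how the lag axes of the ACF figures below can be read.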

Table 5: Relative errors in loss rates for evaluation flows with exponentially distributed inter-packet times, T = 20 ms. The packet trace used by the loss models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms, 40 ms, and 80 ms.

T      Tp     Loss          Loss model 2 [%]              Bernoulli
              model 1 [%]   W = 3    W = 7    W = 15      model [%]

0.02s  0.02s   2.21          3.36    -2.42    -2.21        0.23
0.02s  0.04s   2.34          0.89    -1.98     2.70        3.17
0.02s  0.08s  -5.53         -7.71    -6.67    -9.26       -6.78

Table 5 shows the relative errors in loss rate of the evaluation flows when the different loss models are used. Several simulations have been run for each model (with the parameters shown in the table), all with different seeds for the randomization. The values in the table are from one of those simulations, but they are consistent with the outcomes of all simulations. For all models, the error in loss rate tends to increase as Tp is increased.13 Loss model 2 with window widths 3 and 7 is sometimes an exception, since (as shown in the table) the error is smaller for Tp = 40 ms than for Tp = 20 ms. For Tp = 80 ms, all models perform significantly worse than with the smaller Tp values. Somewhat surprisingly, for the Bernoulli model the error in loss rate decreases significantly as Tp is increased. We have no good explanation for this. The loss rate of the periodic probe flow changes very little for the Tp values used (see the last column of Table 3). That would suggest that the loss rates of the evaluation flows for the Bernoulli model with the same Tp values should not vary much either, but they do, as shown in the table.

Figures 15 through 17 show the loss burst distributions for the evaluation flows and for the validation flow. It can be seen that of all models, loss model 1 has the strongest tendency to emphasize long bursts. That is expected since for this model, the probability of loss will be 1 in time intervals between two consecutive probe packet losses. This is also the reason why the loss burst distribution is pushed towards longer bursts as Tp is increased. For loss model 2, a small window will push the loss burst distribution towards longer bursts, whereas a large window will emphasize shorter bursts. This can also be foreseen, since the smaller the window, the more loss model 2 will behave like loss model 1. Similarly, the larger the window, the more loss model 2 will behave like the Bernoulli model (which essentially has a window width that equals the length of the probe packet trace). With respect to loss burst distribution, the graphs show that loss model 2 with a window of width 7 performs best.

13When we talk about increases or decreases in Tp, that should be read "in relation to T".

Figure 15: Loss burst length distribution for flows with ∆t ∈ Exp(20 ms). The packet trace used by the loss models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms.
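The loss models themselves are defined earlier in the paper (not reproduced in this excerpt). To make the window intuition concrete, here is our own sketch of a gliding-window loss decision in the spirit of loss model 2; the function names, the centring of the window, and the nearest-probe lookup are our assumptions:

```python
import bisect
import random

def window_loss_prob(probe_times, probe_lost, t, w):
    """Estimated loss probability at emulation time t: the loss rate
    over a window of w probe packets centred on the probe sent
    closest to t. This is our reading of 'loss model 2'; the paper's
    exact definition may differ."""
    i = bisect.bisect_left(probe_times, t)
    if i == len(probe_times):
        i -= 1
    elif i > 0 and t - probe_times[i - 1] <= probe_times[i] - t:
        i -= 1                      # the left neighbour is closer
    lo = max(0, i - w // 2)
    hi = min(len(probe_lost), lo + w)
    window = probe_lost[lo:hi]
    return sum(window) / len(window)

def drop_packet(probe_times, probe_lost, t, w, rng=random):
    """Bernoulli draw against the windowed loss probability. With w
    equal to the whole trace length, this degenerates into the
    Bernoulli loss model."""
    return rng.random() < window_loss_prob(probe_times, probe_lost, t, w)
```

Shrinking w towards 1 makes the decision track individual probe losses (the loss model 1 end of the spectrum), which matches the smooth transition between the models described above.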

The loss autocorrelation functions for Tp = 20 ms and 80 ms are shown in Figures 18 and 19. Appendix C shows how the ACF has been calculated. To make the ACF graphs more readable, the lag starts at 30 ms instead of 0 ms. As expected, the Bernoulli loss model does not capture temporal dependencies (since it assumes that the Li variables are IID). In all of the simulations that have been performed, loss model 1 performs best with respect to loss ACF. However, it tends to produce correlations that are stronger than those in the non-emulation. This becomes especially acute when Tp grows, as illustrated by Figure 19. For loss model 2, a smaller window is preferable to a larger one since the window averaging has a smearing effect on the ACF. For large Tp values, using a large window in loss model 2 tends to result in a slowly decaying loss ACF.

    In summary, for the non-adaptive flows used in these simulations, loss model 2


Figure 16: Loss burst length distribution for flows with ∆t ∈ Exp(20 ms). The packet trace used by the loss models is generated from a periodic probe flow with fixed inter-packet time, Tp = 40 ms.

with a relatively small window seems to perform best. However, it is important that Tp is not too large relative to T, as that will significantly reduce the accuracy of the loss model. A simple rule of thumb could be to keep Tp ≤ T.

    4.4 Delay Model Evaluations and Comparisons

We now focus on the delay models. The evaluation is done similarly to the evaluation of the loss models. Hence, the trace files are obtained by probing the network path periodically with Tp = 20 ms, 40 ms, and 80 ms. The evaluation flows have exponentially distributed inter-packet times with mean T. To save space, the graphs and tables are presented for T = 20 ms. The metrics used for the evaluation are average delay, delay distribution, and delay autocorrelation function (ACF).

Table 6 shows the relative error in average delay of the evaluation flows when the different delay models are used. The fact that the error when the independent average delay model is used is so large (≈ 10%) is a strong indication that the Di variables are not normally distributed. Some measurement studies in the real Internet have found that delays have gamma-like distributions [5, 17]. In all, delay model 1 gives the smallest errors, and the error is particularly small when Tp ≤ T (although no graphs are shown for the case Tp < T). For delay model 2, the error increases as the window grows. This is not surprising since


Figure 17: Loss burst length distribution for flows with ∆t ∈ Exp(20 ms). The packet trace used by the loss models is generated from a periodic probe flow with fixed inter-packet time, Tp = 80 ms.

Table 6: Relative errors in average delay for evaluation flows with exponentially distributed inter-packet times, T = 20 ms. The packet trace used by the delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms, 40 ms, and 80 ms.

T      Tp     Delay          Delay model 2 [%]             Ind. avg. delay
              model 1 [%]    W = 3    W = 7    W = 15      model [%]

0.02s  0.02s  -0.096          0.51     1.29     2.36       10.06
0.02s  0.04s  -0.39           0.99     2.18     4.16       10.45
0.02s  0.08s  -0.25           2.15     4.12     6.60       10.29

with a larger window, the variance in the delays in the probe packet trace is likely to increase. That, in combination with the fact that delay model 2 also generates delays from a normal distribution, makes it behave more and more like the independent average delay model as the window grows. Another observation we made is that the error tends to increase for all models as Tp is increased.
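To illustrate the mechanism, here is our own sketch of a window-based delay draw in the spirit of delay model 2; the function name and the window placement are assumptions, since the model definition is in a part of the paper not reproduced here:

```python
import random

def window_delay(probe_delays, i, w, rng=random):
    """Delay for a packet mapped to probe index i: a draw from a
    normal distribution whose mean and standard deviation are
    estimated over a window of w probe delays around i. Our reading
    of 'delay model 2'; the paper's exact definition may differ."""
    lo = max(0, i - w // 2)
    hi = min(len(probe_delays), lo + w)
    window = probe_delays[lo:hi]
    m = sum(window) / len(window)
    var = sum((d - m) ** 2 for d in window) / len(window)
    d = rng.gauss(m, var ** 0.5)
    return max(d, 0.0)          # a one-way delay cannot be negative
```

As the window grows to cover the whole trace, each draw uses the global mean and variance, i.e. the sketch degenerates into the independent average delay model, which is the trend visible in Table 6.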

The discussion above is illustrated well by Figures 20 and 21. They show the delay distributions when Tp = 20 ms and 80 ms, respectively. As the window in delay model 2 increases, the delay distributions for the corresponding evaluation flows tend towards the delay distribution of the flow corresponding to the independent average delay model. Also evident from the graphs is that the delay distributions for the evaluation flows deviate more from that of the validation flow when Tp = 80 ms than when Tp = 20 ms.

Figure 18: Loss ACF for flows with ∆t ∈ Exp(20 ms). The packet trace used by the loss models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms.

The delay autocorrelation functions for Tp = 20 ms and 80 ms are shown in Figures 22 and 23. Analogously to the Bernoulli loss model, the independent average delay model treats the Di variables as IID. This delay model should therefore not capture any temporal dependencies in delay. The two figures show that this is indeed the case. When Tp = T, the performances of delay models 1 and 2 are roughly equal, as illustrated by the first delay ACF graph. Although not shown, this also turned out to be the case when Tp < T. Not surprisingly, the smearing effect that large windows have in loss model 2 is also apparent in delay model 2, but to a lesser degree.

The conclusions for the delay models are quite similar to those for the loss models. For the non-adaptive flows used in these simulations, delay model 1 and delay model 2 with a relatively small window seem to perform best. Again, it is important that Tp is not too large relative to T, as that will significantly reduce the accuracy of the delay model. Therefore, the simple rule of thumb from Section 4.3 to keep Tp ≤ T seems reasonable here too.


Figure 19: Loss ACF for flows with ∆t ∈ Exp(20 ms). The packet trace used by the loss models is generated from a periodic probe flow with fixed inter-packet time, Tp = 80 ms.

    4.5 TCP Throughput

In this section, the evaluation and validation flows are persistent TCP flows (infinite FTP transfers) instead of UDP-based flows with exponentially distributed inter-packet times with mean T. In this case, the loss model and the delay model cannot be studied independently, since they both affect the behavior of a TCP flow. Because of their similarities, loss model 1 is used together with delay model 1, and loss model 2 is used together with delay model 2 (the same window width is used for both models). Finally, the Bernoulli loss model is used together with the independent average delay model.

For these adaptive TCP flows, none of the combinations of loss and delay models 1 and 2 seems to perform well. Instead, the Bernoulli/independent average delay model combination does a very good job. Figures 24 through 26 show the throughput distribution for the evaluation flows and the validation flows. In all of the graphs, the throughput distributions for the evaluation flows corresponding to the Bernoulli/independent average delay model follow those of the validation flows quite closely. The performance of the loss/delay model 1 combination is the worst. However, the performance of the loss/delay model 2 combination is not much better, regardless of the window sizes that are used. As Tp grows, loss and delay model combinations 1 and 2 deteriorate seriously. That does not happen with the Bernoulli/independent average delay model.

    In terms of the throughput ACF (see Figures 27 and 28), neither loss/delay


Figure 20: Delay distribution for flows with ∆t ∈ Exp(20 ms). The packet trace used by the delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms.

model combination stands out in a conclusive way. What can be seen is that the throughput ACFs corresponding to the Bernoulli/independent average delay model combination tend to follow the throughput ACFs for the validation flows. Thus, the temporal independence in delay and loss of those models is in some sense overshadowed by temporal dependencies introduced by the adaptation of TCP. Another observation is, as in the earlier sections, that a smaller Tp is preferable to a larger one.

Currently, we do not have a good explanation of why the performances of the trace-driven loss and delay model combinations 1 and 2 are so bad. The errors in loss rate of the evaluation flows corresponding to the emulations where loss models 1 and 2 are used are roughly the same as those corresponding to the emulations where the Bernoulli loss model is used. Furthermore, the errors in delay are smaller for the evaluation flows corresponding to the emulations where delay models 1 and 2 are used, compared to the errors for the evaluation flows corresponding to the emulations where the independent average delay model is used. A reason could be that the patterns of how the packets are lost (e.g. loss bursts etc.) differ in a way that results in the observed performance. The investigation of this is on-going work and a conclusive answer cannot be given here.


Figure 21: Delay distribution for flows with ∆t ∈ Exp(20 ms). The packet trace used by the delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 80 ms.

    5 Conclusions

We have explored a trace-driven approach to emulation of multiple-hop network paths. As part of that exploration, the effect that different probing schemes have on the resulting probe packet trace has been studied. Our observations suggest that for moderate probe rates (when the probe traffic is only a small fraction of the overall traffic), the probe loss rate is essentially the same regardless of how the probe packets are sent. Furthermore, when the variance in inter-packet time of the probe packets increases, longer loss bursts appear in the probe packet trace. Probing by TCP does not seem to introduce biases (in terms of loss and delay) in the trace that is obtained. However, due to the adaptation mechanism in TCP, there can be lengthy time intervals (on the order of seconds) when no probe packets are sent. From the standpoint of trace-driven emulation, in particular with the models studied here, that is undesirable since the state of the network path will then be unknown.

    Of the six loss and delay models that have been evaluated, loss and delay models 2 with a small window seem to perform best for non-responsive flows. A large window tends to have a smearing effect on the ACF and also emphasizes short bursts. For all trace-driven models that have been studied, the probe rate, or equivalently, the time interval between the send instants of the probe packets, is important. It should not be significantly larger than the inter-packet times of


    [Plot: delay autocorrelation versus lag [s]. Curves: Non-emulation; Delay model 1; Delay model 2 with Win = 3, 7, 15; Indep. delay dist. model.]

    Figure 22: Delay ACF for flows with ∆t ∈ Exp(20 ms). The packet trace used by the delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms.

    the flows that will traverse the emulated path. If it is, the accuracy of the models will be low. A simple rule of thumb is to keep the probe rate less than or equal to the rate of the flow that will traverse the emulated path.

    Disappointingly, none of the trace-driven loss and delay models is found to perform well for adaptive TCP flows. Instead, the Bernoulli loss model in combination with the independent average delay model performed very well in this case, both in terms of average throughput and temporal dependence in throughput. Investigating the reasons for the poor performance of the trace-driven models in the TCP case is on-going work.
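    The combination that performs best for TCP flows admits a very compact per-packet decision. The sketch below is our own minimal Python rendering, not the emulator's actual code; the function name and the use of the raw probe-trace delays as the empirical delay distribution are assumptions made for illustration.

    ```python
    import random

    def emulate_packet(p_loss, trace_delays, rng=random):
        """Bernoulli loss model combined with an independent delay model:
        each packet is dropped independently with probability p_loss;
        a surviving packet gets a delay drawn independently from the
        empirical delay distribution of the probe packet trace."""
        if rng.random() < p_loss:
            return None                    # packet is dropped
        return rng.choice(trace_delays)    # one-way delay in seconds
    ```

    Because each packet is treated independently, this pair of models cannot reproduce loss bursts or delay correlation, which is consistent with its use here as the simple baseline.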


    [Plot: delay autocorrelation versus lag [s]. Curves: Non-emulation; Delay model 1; Delay model 2 with Win = 3, 7, 15; Indep. delay dist. model.]

    Figure 23: Delay ACF for flows with ∆t ∈ Exp(20 ms). The packet trace used by the delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 80 ms.

    [Plot: throughput distribution P[R < r] versus throughput r [Mbps]. Curves: Non-emulation; Loss & delay models 1; Loss & delay models 2 with Win = 3, 7, 15; Bernoulli & ind. delay dist. model.]

    Figure 24: Throughput distribution for TCP Reno flows. The packet trace used by the loss and delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms.


    [Plot: throughput distribution P[R < r] versus throughput r [Mbps]. Curves: Non-emulation; Loss & delay models 1; Loss & delay models 2 with Win = 3, 7, 15; Bernoulli & ind. delay dist. model.]

    Figure 25: Throughput distribution for TCP Reno flows. The packet trace used by the loss and delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 40 ms.

    [Plot: throughput distribution P[R < r] versus throughput r [Mbps]. Curves: Non-emulation; Loss & delay models 1; Loss & delay models 2 with Win = 3, 7, 15; Bernoulli & ind. delay dist. model.]

    Figure 26: Throughput distribution for TCP Reno flows. The packet trace used by the loss and delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 80 ms.


    [Plot: throughput autocorrelation versus lag [s]. Curves: Non-emulation; Loss & delay models 1; Loss & delay models 2 with Win = 3, 7, 15; Bernoulli & ind. delay dist. model.]

    Figure 27: Throughput autocorrelation for TCP Reno flows. The packet trace used by the loss and delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 20 ms.

    [Plot: throughput autocorrelation versus lag [s]. Curves: Non-emulation; Loss & delay models 1; Loss & delay models 2 with Win = 3, 7, 15; Bernoulli & ind. delay dist. model.]

    Figure 28: Throughput ACF for TCP Reno flows. The packet trace used by the loss and delay models is generated from a periodic probe flow with fixed inter-packet time, Tp = 40 ms.


    Appendix

    A Static Delay Parameters Algorithm

    The following algorithm is used to estimate dS and β in the delay models. An assumption of the algorithm is that there is an s(imin) such that the minimum observed delays for all packet sizes larger than s(imin) exceed dmin(s(imin)).¹⁴ Using a shorter notation, the assumption is that dmin(s(j)) > dmin(s(imin)) ∀ j > imin. In Figure 29, this assumption holds starting from the second smallest packet size s(2), i.e. imin = 2.

    [Diagram: one-way delays di(s) plotted against packet size s for sizes s(1), s(2), s(3), . . . , s(n); the min-filtered curve dmin(s) = mini di(s) and the fitted line dS + βs are indicated, with dmin(s(1)) marked.]

    Figure 29: Estimation of dS and β given a delay data series di(s) with varied packet sizes s.

    The assumption we make is not an unrealistic one. Since the transmission delay increases with packet size, there should be a trend of increasing one-way delays as the packet size is increased. That trend can of course be obscured somewhat by noise caused by queueing. However, if the number of samples (i.e., probe packets) is large, the chance that a few packets of each packet size will traverse the network path without significant queueing should be high enough for the min-filtering to reveal the trend. A similar assumption is made in some methods that try to measure the hop-by-hop link capacities of a network path [8, 13].

    The basic idea of the algorithm is to search, starting from packet size s(imin+1), for the packet size s(j) (with j > imin) that gives the straight line through the points (s(imin), dmin(s(imin))) and (s(j), dmin(s(j))) the smallest positive slope. By minimizing the slope, the straight line is guaranteed to lie on or under all delay points. Once the slope (β) has been estimated, the estimate of the static delay component dS (the intercept of the straight line) is easily calculated.

    ¹⁴We use superscript indices enclosed in parentheses to number the different packet sizes (s(j) > s(i) if j > i). Also, we omit the subscript indices that number the packets since they are of no relevance here.


    Algorithm 1 Estimate intercept dS and slope β

    Define: M = number of packet sizes.

    /* Find the packet size that has the smallest minimum delay. */
    /* Start searching from the smallest packet size. */
    imin = M
    for i = 1 to M − 1 do
        if dmin(s(i)) < dmin(s(imin)) then
            imin = i
        end if
    end for
    /* Find the smallest β̃ > 0, starting from packet size s(imin+1). */
    β̃ = 0
    for i = imin + 1 to M do
        β̃cand = (dmin(s(i)) − dmin(s(imin))) / (s(i) − s(imin))
        if β̃cand > 0 and (β̃cand < β̃ or β̃ == 0) then
            β̃ = β̃cand
        end if
    end for
    d̃S = dmin(s(imin)) − β̃ s(imin)

    B Calculation of Values in Table 4

    During the time interval of each bin (40 ms), a number of TCP probe packets have been sent (possibly zero). During the same intervals, there have also been a number of cross traffic losses (possibly none), and periodic probe packet delays (again, possibly none).

    To describe how the values in columns 3 through 6 of Table 4 have been calculated, we take the value 3.28 in column 3 of row 7 as one example. This is the average number of cross traffic losses when no TCP probe packets are sent. It is calculated as (Σ_{j∈ζ} nx(j)) / |ζ|, where ζ is the set of bins containing zero TCP probe packets, nx(j) is the number of cross traffic losses in bin j, and |ζ| is the number of bins in the set ζ. Another example is the value 0.125 s in column 5 of row 11. It is the average delay of the periodic probe packets in bins where TCP packets have been sent. It is calculated as (Σ_{j∈Ω} dp(j)) / |Ω|, where Ω is the set of bins containing at least one TCP probe packet, dp(j) is the average delay of the periodic probe packets in bin j, and |Ω| is the number of bins in the set Ω.

    The value 2.90 in column 9 of row 7 is the average number of cross traffic losses per TCP probe packet. It is calculated as (Σ_{j∈Ω} nTCP(j) nx(j)) / NTCP, where NTCP = Σ_{j∈Ω} nTCP(j) and nTCP(j) is the number of TCP probe packets in bin j. The other factors have the same meaning as above.

    Since a TCP sender will detect congestion some time after the congestion has occurred (namely, the time it takes for the acknowledgements from the receiver


    to reach the TCP sender), the calculations above have also been performed with different offsets. More specifically, the TCP probe packets have been sorted into separate bins from the cross traffic and periodic probe packet bins. During the calculations described above, the bins have been given an offset (a multiple of the bin time 40 ms) with respect to each other. It has thereby been possible to condition cross traffic losses and periodic probe packet delays on earlier (and also later) TCP probe packet send instants. However, no significant difference in the calculated values has been observed. Table 4 therefore presents values for offset 0.
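    The conditional averages above reduce to selecting bins by their TCP probe count and averaging the per-bin values. The following Python sketch is our own illustration of the column 3 style calculation; the function name and the toy per-bin counts are invented for the example.

    ```python
    def avg_over_bins(values, mask):
        """Average of per-bin values over the bins selected by mask."""
        selected = [v for v, m in zip(values, mask) if m]
        return sum(selected) / len(selected)

    # Toy per-bin data (40 ms bins): TCP probe packets and cross traffic losses.
    n_tcp = [0, 2, 0, 1, 0]
    n_x   = [3, 5, 2, 4, 1]

    # zeta: bins with zero TCP probe packets; Omega: bins with at least one.
    avg_no_tcp   = avg_over_bins(n_x, [n == 0 for n in n_tcp])  # (3+2+1)/3
    avg_with_tcp = avg_over_bins(n_x, [n > 0 for n in n_tcp])   # (5+4)/2
    ```

    Shifting one of the lists before building the mask gives the offset variant described above, where losses are conditioned on earlier or later TCP send instants.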

    C Loss/Delay ACF Calculation

    The autocorrelation function, acfx, for a data series x1, x2, . . . , xN is given by

        acfx(k) = [Σ_{i=1}^{N−k} (xi − mx)(xi+k − mx)] / [Σ_{i=1}^{N} (xi − mx)²]    (7)

    where k = 1, 2, . . . , N is the lag and mx is the mean value of the xi values.

    When calculating the ACFs for the loss (li) and delay (di) data series we want to express the lag in terms of time and not packets. The problem is that when packets are not sent strictly periodically, the time between consecutive li and di values will vary. That effectively makes it impossible to express the lag in terms of time.

    To circumvent this problem, the total time covered by the data series¹⁵ is divided into bins of equal size. Using these bins, two new time series, l′j and d′j, are created as follows: Suppose that bin j covers the time interval [tjmin, tjmax]. Then, l′j = Σ_{∀i: tjmin ≤ ti ≤ tjmax} li.
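    The rebinning step and Equation (7) can be combined in a few lines. The following Python sketch is our own illustration (the function name is assumed); it sums the per-packet values into fixed-size time bins, as in the l′j series, and then evaluates the ACF so that lag k corresponds to k times the bin size.

    ```python
    def binned_acf(times, values, bin_size, max_lag):
        """ACF with the lag expressed in time: rebin the irregularly
        spaced series (times[i], values[i]) into bins of width bin_size
        (summing values per bin), then apply the usual ACF estimator."""
        t0 = min(times)
        n_bins = int((max(times) - t0) / bin_size) + 1
        binned = [0.0] * n_bins
        for t, v in zip(times, values):
            binned[int((t - t0) / bin_size)] += v
        m = sum(binned) / n_bins
        denom = sum((b - m) ** 2 for b in binned)
        acf = []
        for k in range(1, max_lag + 1):
            num = sum((binned[i] - m) * (binned[i + k] - m)
                      for i in range(n_bins - k))
            acf.append(num / denom)
        return acf  # acf[k-1] is the correlation at lag k * bin_size
    ```

    For a strictly periodic series with one packet per bin, the rebinning is a no-op and the result coincides with the packet-lag ACF of Equation (7).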


    References

    [1] Mark Allman, Adam Caldwell, and Shawn Ostermann. ONE: The Ohio network emulator. Technical Report TR-19972, School of Electrical Engineering and Computer Science, Ohio University, August 1997.

    [2] Guy Almes, Sunil Kalidindi, and Matt Zekauskas. A one-way packet delay metric for IPPM. RFC 2679, September 1999. http://www.rfc-editor.org/rfc/rfc2679.txt.

    [3] Guy Almes, Sunil Kalidindi, and Matt Zekauskas. A one-way packet loss metric for IPPM. RFC 2680, September 1999. http://www.rfc-editor.org/rfc/rfc2680.txt.

    [4] Michael S. Borella, Debbie Swider, Suleyman Uludag, and Gregory B. Brewster. Internet packet loss: Measurement and implications for end-to-end QoS. In Proceedings of International Conference on Parallel Processing, Minneapolis, MN, USA, August 1998.

    [5] C. J. Bovy, H. T. Mertodimedjo, G. Hooghiemstra, H. Uijterwaal, and P. Van Mieghem. Analysis of end-to-end delay measurements in Internet. In Passive and Active Measurement (PAM) Workshop 2002 Proceedings, pages 25–27, Fort Collins, CO, USA, March 2002.

    [6] Kimberly C. Claffy, Hans-Werner Braun, and George C. Polyzos. Measurement considerations for assessing unidirectional latencies. Journal of Internetworking: Research and Experience, 4(3):121–132, 1993.

    [7] Kimberly C. Claffy, George C. Polyzos, and Hans-Werner Braun. Application of sampling methodologies to network traffic characterization. In Proceedings of ACM SIGCOMM, pages 194–203, 1993.

    [8] Allen B. Downey. Using pathchar to estimate Internet link characteristics. In Proceedings of ACM SIGCOMM, pages 241–250, Cambridge, MA, USA, August 1999.

    [9] Nick Duffield, Carsten Lund, and Mikkel Thorup. Properties and prediction of flow statistics from sampled packet streams. In Proceedings of ACM


    SIGCOMM Internet Measurement Workshop, Marseille, France, November 2002.

    [10] Aiguo Fei, Guangyu Pei, Roy Liu, and Lixia Zhang. Measurement on delay and hop-count of the Internet. In Proceedings of IEEE GLOBECOM, Sydney, Australia, November 1998.

    [11] Daniel Herrscher and Kurt Rothermel. A dynamic network scenario emulation tool. In Proceedings of the 11th International Conference on Computer Communications and Networks (ICCCN 2002), Miami, FL, USA, October 2002.

    [12] David B. Ingham and Graham D. Parrington. Delayline: A wide-area network emulation tool. Computing Systems, 7(3):313–332, 1994.

    [13] Van Jacobson. Pathchar - a tool to infer characteristics of Internet paths. Presented at the Mathematical Sciences Research Institute (MSRI), April 1997. Slides available from ftp://ftp.ee.lbl.gov/pathchar/.

    [14] Markku Kojo, Andrei Gurtov, Jukka Manner, Pasi Sarolahti, Timo Alanko, and Kimmo Raatikainen. Seawind: a wireless network emulator. In Proceedings of 11th GI/ITG Conference on Measuring, Modelling and Evaluation of Computer and Communication Systems, Aachen, Germany, September 2001.

    [15] Sue B. Moon, Jim Kurose, Paul Skelly, and Don Towsley. Correlation of packet delay and loss in the Internet. Technical Report 98-11, Department of Computer Science, University of Massachusetts, Amherst, MA, USA, January 1998.

    [16] Sue B. Moon, Paul Skelly, and Don Towsley. Estimation and removal of clock skew from network delay measurements. In Proceedings of IEEE INFOCOM, New York, NY, USA, March 1999.

    [17] A. Mukherjee. On the dynamics and significance of low frequency components of Internet load. Journal of Internetworking: Research and Experience, 5(4), December 1994.

    [18] NISTnet network emulation package. NIST Internetworking Technology Group, June 2000. http://www.antd.nist.gov/itg/nistnet/.

    [19] Brian D. Noble, M. Satyanarayanan, Giao T. Nguyen, and Randy H. Katz. Trace-based mobile network emulation. In Proceedings of ACM SIGCOMM, pages 51–61, Cannes, France, September 1997.

    [20] Network simulator - ns2, 2002. http://www.isi.edu/nsnam/ns/.

    [21] Opnet modeler. OPNET Technologies, 2001. http://www.mil3.com/products/modeler/home.html.


    [22] Vern Paxson. End-to-end Internet packet dynamics. In Proceedings of ACM SIGCOMM, pages 139–152, Cannes, France, September 1997.

    [23] Vern Paxson. Measurements and Analysis of End-to-End Internet Packet Dynamics. PhD thesis, University of California, Berkeley, CA, USA, April 1997.

    [24] Vern Paxson, Guy Almes, and Jamshid Mahdavi. Framework for IP performance metrics. RFC 2330, May 1998. http://www.rfc-editor.org/rfc/rfc2330.txt.

    [25] Luigi Rizzo. Dummynet: A simple approach to the evaluation of network protocols. ACM SIGCOMM Computer Communication Review, 27(1):31–41, 1997.

    [26] Wei Wei, Bing Wang, and Don Towsley. Continuous-time hidden Markov models for network performance evaluation. In Proceedings of IFIP WG 7.3 International Symposium on Computer Performance Modeling, Measurement and Evaluation, Rome, Italy, September 2002.

    [27] Maya Yajnik, Sue B. Moon, Jim Kurose, and Don Towsley. Measurement and modeling of the temporal dependence in packet loss. In Proceedings of IEEE INFOCOM, pages 345–352, New York, NY, USA, March 1999.