

Equation-Based Congestion Control for Unicast Applications

Sally Floyd, Mark Handley

AT&T Center for Internet Research at ICSI (ACIRI)

Jitendra Padhye
University of Massachusetts at Amherst

Jörg Widmer
International Computer Science Institute (ICSI)

ABSTRACT

This paper proposes a mechanism for equation-based congestion control for unicast traffic. Most best-effort traffic in the current Internet is well-served by the dominant transport protocol, TCP. However, traffic such as best-effort unicast streaming multimedia could find use for a TCP-friendly congestion control mechanism that refrains from reducing the sending rate in half in response to a single packet drop. With our mechanism, the sender explicitly adjusts its sending rate as a function of the measured rate of loss events, where a loss event consists of one or more packets dropped within a single round-trip time. We use both simulations and experiments over the Internet to explore performance.

We consider equation-based congestion control a promising avenue of development for congestion control of multicast traffic, and so an additional motivation for this work is to lay a sound basis for the further development of multicast congestion control.

1. INTRODUCTION

TCP is the dominant transport protocol in the Internet, and the current stability of the Internet depends on its end-to-end congestion control, which uses an Additive Increase Multiplicative Decrease (AIMD) algorithm. For TCP, the ‘sending rate’ is controlled by a congestion window which is halved for every window of data containing a packet drop, and increased by roughly one packet per window of data otherwise.

End-to-end congestion control of best-effort traffic is required to avoid the congestion collapse of the global Internet [3]. While TCP congestion control is appropriate for applications such as bulk data transfer, some real-time applications (that is, where the data is being played out in real-time) find halving the sending rate in response to a single congestion indication to be unnecessarily severe, as it can noticeably reduce the user-perceived quality [22]. TCP’s abrupt changes in the sending rate have been a significant impediment to the deployment of TCP’s end-to-end congestion control by emerging applications such as streaming multimedia. In our judgement, equation-based congestion control is a viable mechanism to provide relatively smooth congestion control for such traffic.

Footnote: This material is based upon work supported by AT&T, and by the National Science Foundation under grants NCR-9508274, ANI-9805185 and CDA-9502639. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of AT&T or the National Science Foundation.

Equation-based congestion control was proposed informally in [12]. Whereas AIMD congestion control backs off in response to a single congestion indication, equation-based congestion control uses a control equation that explicitly gives the maximum acceptable sending rate as a function of the recent loss event rate. The sender adapts its sending rate, guided by this control equation, in response to feedback from the receiver. For traffic that competes in the best-effort Internet with TCP, the appropriate control equation for equation-based congestion control is the TCP response function characterizing the steady-state sending rate of TCP as a function of the round-trip time and steady-state loss event rate.

Although there has been significant previous research on equation-based and other congestion control mechanisms [8, 18, 17, 22, 15, 21], we are still rather far from having deployable congestion control mechanisms for best-effort streaming multimedia. Section 3 presents the TCP-Friendly Rate Control (TFRC) proposal for equation-based congestion control for unicast traffic. In Section 5 we provide a comparative discussion of TFRC and previously proposed mechanisms. The benefit of TFRC, relative to TCP, is a more smoothly-changing sending rate. The corresponding cost of TFRC is a more moderate response to transient changes in congestion, including a slower response to a sudden increase in the available bandwidth.

One of our goals in this paper is to present a proposal for equation-based congestion control that lays the foundation for the near-term experimental deployment of congestion control for unicast streaming multimedia. Section 4 presents results from extensive simulations and experiments with the TFRC protocol, showing that equation-based congestion control using the TCP response function competes fairly with TCP. Both the simulator code and the real-world implementation are publicly available. We believe that TFRC and related forms of equation-based congestion control can play a significant role in the Internet.

For most unicast flows that want to transfer data reliably and as quickly as possible, the best choice is simply to use TCP directly. However, equation-based congestion control is more appropriate for applications that need to maintain a slowly-changing sending rate, while still being responsive to network congestion over longer time periods (seconds, as opposed to fractions of a second). It is our belief that TFRC is sufficiently mature for a wider experimental deployment, testing, and evaluation.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SIGCOMM '00, Stockholm, Sweden.
Copyright 2000 ACM 1-58113-224-7/00/0008...$5.00.

A second goal of this work is to contribute to the development and evaluation of equation-based congestion control. We address a number of key concerns in the design of equation-based congestion control that have not been sufficiently addressed in previous research, including responsiveness to persistent congestion, avoidance of unnecessary oscillations, avoidance of the introduction of unnecessary noise, and robustness over a wide range of timescales.

The algorithm for calculating the loss event rate is a key design issue in equation-based congestion control, determining the tradeoffs between responsiveness to changes in congestion and the avoidance of oscillations or unnecessarily abrupt shifts in the sending rate. Section 3 addresses these tradeoffs and describes the fundamental components of the TFRC algorithms that reconcile them.

Equation-based congestion control for multicast traffic has been an active area of research for several years [20]. A third goal of this work is to build a solid basis for the further development of congestion control for multicast traffic. In a large multicast group, there will usually be at least one receiver that has experienced a recent packet loss. If the congestion control mechanisms require that the sender reduces its sending rate in response to each loss, as in TCP, then there is little potential for the construction of scalable multicast congestion control. As we describe in Section 6, many of the mechanisms in TFRC are directly applicable to multicast congestion control.

2. FOUNDATIONS OF EQUATION-BASED CONGESTION CONTROL

The basic decision in designing equation-based congestion control is to choose the underlying control equation. An application using congestion control that was significantly more aggressive than TCP could cause starvation for TCP traffic if both types of traffic were competing in a congested FIFO queue [3]. From [2], a TCP-compatible flow is defined as a flow that, in steady-state, uses no more bandwidth than a conformant TCP running under comparable conditions. For best-effort traffic competing with TCP in the current Internet, in order to be TCP-compatible, the correct choice for the control equation is the TCP response function describing the steady-state sending rate of TCP.

From [14], one formulation of the TCP response function is the following:

    T = \frac{s}{R\sqrt{2p/3} + t_{RTO}\left(3\sqrt{3p/8}\right)p\left(1 + 32p^2\right)}    (1)

This gives an upper bound on the sending rate T in bytes/sec, as a function of the packet size s, round-trip time R, steady-state loss event rate p, and the TCP retransmit timeout value t_RTO.
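Equation (1) is straightforward to evaluate numerically; a minimal sketch in Python (the parameter names mirror the paper's s, R, p, and t_RTO; the example values are illustrative only):

```python
from math import sqrt

def tcp_friendly_rate(s, r, p, t_rto):
    """Equation (1): upper bound on the TCP-compatible sending rate
    in bytes/sec, for packet size s (bytes), round-trip time r (sec),
    steady-state loss event rate p, and retransmit timeout t_rto (sec)."""
    denom = r * sqrt(2.0 * p / 3.0) \
          + t_rto * (3.0 * sqrt(3.0 * p / 8.0)) * p * (1.0 + 32.0 * p * p)
    return s / denom

# Example: 1000-byte packets, 100 ms RTT, 1% loss event rate, t_RTO = 4R
rate = tcp_friendly_rate(1000, 0.1, 0.01, 0.4)
```

As expected from the response function, a higher loss event rate yields a lower allowed sending rate.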

An application wishing to send less than the TCP-compatible sending rate (e.g., because of limited demand) would still be characterized as TCP-compatible. However, if a significantly less aggressive response function were used, then the less aggressive traffic could encounter starvation when competing with TCP traffic in a FIFO queue. In practice, when two types of traffic compete in a FIFO queue, acceptable performance for both types of traffic only results if the two traffic types have similar response functions.

Some classes of traffic might not compete with TCP in a FIFO queue, but could instead be isolated from TCP traffic by some method (e.g., with per-flow scheduling, or in a separate differentiated services class from TCP traffic). In such traffic classes, applications using equation-based congestion control would not necessarily be restricted to the TCP response function for the underlying control equation. Issues about the merits or shortcomings of various control equations for equation-based congestion control are an active research area that we do not address further in this paper.

2.1 Viable congestion control does not require TCP

This paper proposes deployment of a congestion control algorithm that does not halve its sending rate in response to a single congestion indication. Given that the stability of the current Internet rests on AIMD congestion control mechanisms in general, and on TCP in particular, a proposal for non-AIMD congestion control requires justification in terms of its suitability for the global Internet. We discuss two separate justifications, one practical and the other theoretical.

A practical justification is that the principal threat to the stability of end-to-end congestion control in the Internet comes not from flows using alternate forms of TCP-compatible congestion control, but from flows that do not use any end-to-end congestion control at all. For much current traffic, the alternatives have been between TCP, with its reduction of the sending rate in half in response to a single packet drop, and no congestion control at all. We believe that the development of congestion control mechanisms with smoother changes in the sending rate will increase incentives for applications to use end-to-end congestion control, thus contributing to the overall stability of the Internet.

A more theoretical justification is that preserving the stability of the Internet does not require that flows reduce their sending rate by half in response to a single congestion indication. In particular, the prevention of congestion collapse simply requires that flows use some form of end-to-end congestion control to avoid a high sending rate in the presence of a high packet drop rate. Similarly, as we will show in this paper, preserving some form of “fairness” against competing TCP traffic also does not require such a drastic reaction to a single congestion indication.

For flows desiring smoother changes in the sending rate, alternatives to TCP include AIMD congestion control mechanisms that do not use a decrease-by-half reduction in response to congestion. In DECbit, which was also based on AIMD, flows reduced their sending rate to 7/8 of the old value in response to a packet drop [11]. Similarly, in Van Jacobson’s 1992 revision of his 1988 paper on Congestion Avoidance and Control [9], the main justification for a decrease term of 1/2 instead of 7/8, in Appendix D of the revised version of the paper, is that the performance penalty for a decrease term of 1/2 is small. A relative evaluation of AIMD and equation-based congestion control in [4] explores the benefits of equation-based congestion control.

3. THE TCP-FRIENDLY RATE CONTROL (TFRC) PROTOCOL

The primary goal of equation-based congestion control is not to aggressively find and use available bandwidth, but to maintain a relatively steady sending rate while still being responsive to congestion. To accomplish this, equation-based congestion control makes the tradeoff of refraining from aggressively seeking out available bandwidth in the manner of TCP. Thus, several of the design principles of equation-based congestion control can be seen in contrast to the behavior of TCP.

- Do not aggressively seek out available bandwidth. That is, increase the sending rate slowly in response to a decrease in the loss event rate.
- Do not halve the sending rate in response to a single loss event. However, do halve the sending rate in response to several successive loss events.

Additional design goals for equation-based congestion control for unicast traffic include:

- The receiver should report feedback to the sender at least once per round-trip time if it has received packets in that interval.
- If the sender has not received feedback after several round-trip times, then the sender should reduce its sending rate, and ultimately stop sending altogether.

3.1 Protocol Overview

Applying the TCP response function (Equation (1)) as the control equation for congestion control requires that the parameters R and p are determined. The loss event rate, p, must be calculated at the receiver, while the round-trip time, R, could be measured at either the sender or the receiver. (The other two values needed by the TCP response equation are the flow’s packet size, s, and the retransmit timeout value, t_RTO, which can be estimated from R.) The receiver sends either the parameter p or the calculated value of the allowed sending rate, T, back to the sender. The sender then increases or decreases its transmission rate based on its calculation of T.

For multicast, it makes sense for the receiver to determine the relevant parameters and calculate the allowed sending rate. However, for unicast the functionality could be split in a number of ways. In our proposal, the receiver only calculates p, and feeds this back to the sender.

3.1.1 Sender functionality

In order to use the control equation, the sender determines the values for the round-trip time R and retransmit timeout value t_RTO.

The sender and receiver together use sequence numbers for measuring the round-trip time. Every time the receiver sends feedback, it echoes the sequence number from the most recent data packet, along with the time since that packet was received. In this way the sender measures the round-trip time through the network. The sender then smoothes the measured round-trip time using an exponentially weighted moving average. This weight determines the responsiveness of the transmission rate to changes in round-trip time.

The sender could derive the retransmit timeout value t_RTO using the usual TCP algorithm:

    t_RTO = SRTT + 4 · RTT_var,

where RTT_var is the variance of RTT and SRTT is the round-trip time estimate. However, in practice t_RTO only critically affects the allowed sending rate when the packet loss rate is very high. Different TCPs use drastically different clock granularities to calculate retransmit timeout values, so it is not clear that equation-based congestion control can accurately model a typical TCP. Unlike TCP, TFRC does not use this value to determine whether it is safe to retransmit, and so the consequences of inaccuracy are less serious. In practice the simple empirical heuristic of t_RTO = 4R works reasonably well to provide fairness with TCP.
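The sender's RTT bookkeeping described above can be sketched as follows (the EWMA weight of 0.1 is illustrative, not prescribed here; the class name is ours):

```python
class RttEstimator:
    """Smooth measured RTT samples with an exponentially weighted
    moving average, and derive t_RTO with the simple empirical
    heuristic t_RTO = 4R rather than TCP's SRTT + 4 * RTT_var."""

    def __init__(self, weight=0.1):
        self.weight = weight   # weight given to the most recent sample
        self.r = None          # smoothed round-trip time R

    def update(self, sample):
        # First sample initializes R; later samples are averaged in.
        if self.r is None:
            self.r = sample
        else:
            self.r = (1.0 - self.weight) * self.r + self.weight * sample
        return self.r

    def t_rto(self):
        return 4.0 * self.r
```

A smaller weight makes the transmission rate less responsive to changes in round-trip time, a tradeoff Section 3.1.3 returns to.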

The sender obtains the loss event rate p in feedback messages from the receiver at least once per round-trip time.

Every time a feedback message is received, the sender calculates a new value for the allowed sending rate T using the response function from Equation (1). If the actual sending rate T_actual is less than T, the sender may increase its sending rate. If T_actual is greater than T, the sender decreases the sending rate to T.
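The feedback handling just described can be sketched as follows (the helper recomputes Equation (1); the rate-increase policy is deliberately left abstract, since the equation only caps the rate):

```python
from math import sqrt

def tcp_friendly_rate(s, r, p, t_rto):
    # Equation (1): allowed sending rate T in bytes/sec.
    return s / (r * sqrt(2.0 * p / 3.0)
                + t_rto * 3.0 * sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p))

def on_feedback(t_actual, s, r, p, t_rto):
    """On each feedback message, recompute the allowed rate T.
    If the current rate exceeds T, drop to T immediately; otherwise
    the sender may increase (how quickly is a separate policy choice).
    Returns (new sending rate, allowed rate T)."""
    t = tcp_friendly_rate(s, r, p, t_rto)
    return min(t_actual, t), t
```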

3.1.2 Receiver functionality

The receiver provides feedback to allow the sender to measure the round-trip time (RTT). The receiver also calculates the loss event rate p, and feeds this back to the sender. The calculation of the loss event rate is one of the critical parts of TFRC, and the part that has been through the largest amount of evaluation and design iteration. There is a clear trade-off between measuring the loss event rate over a short period of time and responding rapidly to changes in the available bandwidth, versus measuring over a longer period of time and getting a signal that is much less noisy.

The method of calculating the loss event rate has been the subject of much discussion and testing, and over that process several guidelines have emerged:

1. The estimated loss rate should measure the loss event rate rather than the packet loss rate, where a loss event can consist of several packets lost within a round-trip time. This is discussed in more detail in Section 3.2.1.

2. The estimated loss event rate should track relatively smoothly in an environment with a stable steady-state loss event rate.

3. The estimated loss event rate should respond strongly to loss events in several successive round-trip times.

4. The estimated loss event rate should increase only in response to a new loss event.

5. Let a loss interval be defined as the number of packets between loss events. The estimated loss event rate should decrease only in response to a new loss interval that is longer than the previously-calculated average, or a sufficiently-long interval since the last loss event.

Obvious methods we looked at include the Dynamic History Window method, the EWMA Loss Interval method, and the Average Loss Interval method, which is the method we chose.

- The Dynamic History Window method uses a history window of packets, with the window length determined by the current transmission rate.
- The EWMA Loss Interval method uses an exponentially-weighted moving average of the number of packets between loss events.
- The Average Loss Interval method computes a weighted average of the loss rate over the last n loss intervals, with equal weights on each of the most recent n/2 intervals.

The Dynamic History Window method suffers from the effect that even with a perfectly periodic loss pattern, loss events entering and leaving the window cause changes to the measured loss rate, and hence add unnecessary noise to the loss signal. In particular, the Dynamic History Window method does not satisfy properties (2), (3), (4), and (5) above. The EWMA Loss Interval method performs better than the Dynamic History Window method. However, it is difficult to choose an EWMA weight that responds sufficiently promptly to loss events in several successive round-trip times, and at the same time does not over-emphasize the most recent loss interval. The Average Loss Interval method satisfies properties (1)-(5) above, while giving equal weights to the most recent loss intervals.

We have comparedthe performanceof the AverageLossIntervalmethodwith theEWMA andDynamicHistory Window methods.In thesesimulationscenarioswe set up the parametersfor eachmethodsothattheresponseto anincreasein packet lossis similarlyfast. Under thesecircumstancesit is clear that the AverageLossInterval methodresultsin smootherthroughput[13].

Figure 1: Weighted intervals between loss used to calculate loss probability. (The figure plots packet arrivals and losses against sequence number and time, showing the interval since the most recent loss, intervals 1 through n, and the corresponding weights and weighted intervals.)

The use of a weighted average by the Average Loss Interval method reduces sudden changes in the calculated rate that could result from unrepresentative loss intervals leaving the set of loss intervals used to calculate the loss rate. The average loss interval \hat{s}_{(1,n)} is calculated as a weighted average of the last n intervals as follows:

    \hat{s}_{(1,n)} = \frac{\sum_{i=1}^{n} w_i s_i}{\sum_{i=1}^{n} w_i},

for weights w_i:

    w_i = 1, \quad 1 \le i \le n/2,

and

    w_i = 1 - \frac{i - n/2}{n/2 + 1}, \quad n/2 < i \le n.

For n = 8, this gives weights of 1, 1, 1, 1, 0.8, 0.6, 0.4, and 0.2 for w_1 through w_8, respectively.

The sensitivity to noise of the calculated loss rate depends on the value of n. In practice a value of n = 8, with the most recent four samples equally weighted, appears to be a lower bound that still achieves a reasonable balance between resilience to noise and responding quickly to real changes in network conditions. Section 4.4 describes experiments that validate the value of n = 8. However, we have not carefully investigated alternatives for the relative values of the weights.

Any method for calculating the loss event rate over a number of loss intervals requires a mechanism to deal with the interval since the most recent loss event, as this interval is not necessarily a reflection of the underlying loss event rate. Let s_i be the number of packets in the i-th most recent loss interval, and let the most recent interval s_0 be defined as the interval containing the packets that have arrived since the last loss. When a loss occurs, the loss interval that has been s_0 now becomes s_1, all of the following loss intervals are correspondingly shifted down one, and the new loss interval s_0 is empty. As s_0 is not terminated by a loss, it is different from the other loss intervals. It is important to ignore s_0 in calculating the average loss interval unless s_0 is large enough that including it would increase the average. This allows the calculated loss interval to track smoothly in an environment with a stable loss event rate.

To determine whether to include s_0, the interval since the most recent loss, the Average Loss Interval method also calculates \hat{s}_{(0,n-1)}:

    \hat{s}_{(0,n-1)} = \frac{\sum_{i=0}^{n-1} w_{i+1} s_i}{\sum_{i=1}^{n} w_i}.

The final average loss interval \hat{s} is \max(\hat{s}_{(1,n)}, \hat{s}_{(0,n-1)}), and the reported loss event rate is 1/\hat{s}.

Because the Average Loss Interval method averages over a number of loss intervals, rather than over a number of packet arrivals, this method with the given fixed weights responds reasonably rapidly to a sudden increase in congestion, but is slow to respond to a sudden decrease in the loss rate represented by a large interval since the last loss event. To allow a more timely response to a sustained decrease in congestion, we deploy history discounting with the Average Loss Interval method, to allow the TFRC receiver to adapt the weights in the weighted average in the special case of a particularly long interval since the last dropped packet, to smoothly discount the weights given to older loss intervals. This allows a more timely response to a sudden absence of congestion. History discounting is described in detail in [5], and is only invoked by TFRC after the most recent loss interval s_0 is greater than twice the average loss interval. We have not yet explored the possibility of allowing more general adaptive weights in the weighted average.
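The Average Loss Interval computation, including the max() rule that lets the open interval s_0 only raise the average, can be sketched in Python (n = 8 as in the text; history discounting is omitted, and the function names are ours):

```python
def loss_interval_weights(n=8):
    # w_i = 1 for the most recent n/2 intervals, then falls off
    # linearly; for n = 8 this yields 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2.
    half = n // 2
    return [1.0 if i <= half else 1.0 - (i - half) / (half + 1.0)
            for i in range(1, n + 1)]

def average_loss_interval(intervals, s0):
    """intervals: [s_1, ..., s_n], most recent *closed* loss interval
    first; s0: packets received since the last loss.  Returns
    max(s_hat(1,n), s_hat(0,n-1)); the reported loss event rate is
    1 / average_loss_interval(...)."""
    w = loss_interval_weights(len(intervals))
    denom = sum(w)
    s_1n = sum(wi * si for wi, si in zip(w, intervals)) / denom
    # Shifted average: s0 takes weight w_1, s_1 takes w_2, and so on.
    s_0n1 = sum(wi * si for wi, si in zip(w, [s0] + intervals[:-1])) / denom
    return max(s_1n, s_0n1)
```

With eight closed intervals of 100 packets and an empty s_0, the average stays at 100; a long open interval (say 1000 packets since the last loss) raises it, which is exactly the behavior the max() rule is meant to give.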

Figure 2: Illustration of the Average Loss Interval method with idealized periodic loss. (Three panels over time 3-16 s: the current loss interval s_0 and the estimated loss interval; the estimated loss rate and the square root of the estimated loss rate; and the transmission rate in KBytes/s.)

Figure 2 shows a simulation using the full Average Loss Interval method for calculating the loss event rate at the receiver. The link loss rate is 1% before time 6, then 10% until time 9, and finally 0.5% until the end of the run. Because the losses in this simulation are perfectly periodic, the scenario is not realistic; it was chosen to illustrate the underlying properties of the Average Loss Interval method.

For the top graph, the solid line shows the number of packets in the most recent loss interval, as calculated by the receiver once per round-trip time before sending a status report. The smoother dashed line shows the receiver’s estimate of the average loss interval. The middle graph shows the receiver’s estimated loss event rate p, which is simply the inverse of the average loss interval, along with \sqrt{p}. The bottom graph shows the sender’s transmission rate which is calculated from p.

Several things are noticeable from these graphs. Before t = 6, the loss rate is constant and the Average Loss Interval method gives a completely stable measure of the loss rate. When the loss rate increases, the transmission rate is rapidly reduced. Finally, when the loss rate decreases, the transmission rate increases in a smooth manner, with no step increases even when older (10 packet) loss intervals are excluded from the history.

3.1.3 Improving stability

One of the goals of the TFRC protocol is to avoid the characteristic oscillations in the sending rate that result from TCP’s AIMD congestion control mechanisms. In controlling oscillations, a key issue in the TFRC protocol concerns the TCP response function’s specification of the allowed sending rate as inversely proportional to the measured RTT. A relatively prompt response to changes in the measured round-trip time is helpful to prevent flows from overshooting the available bandwidth after an uncongested period. On the other hand, an over-prompt response to changes in the measured round-trip time can result in unnecessary oscillations. The response to changes in round-trip times is of particular concern in environments with Drop-Tail queue management and small-scale statistical multiplexing, where the round-trip time can vary significantly as a function of changes in a single flow’s sending rate.

Figure 3: Oscillations of a TFRC flow over Dummynet, EWMA weight 0.05 for calculating the RTT. (Send rate in KByte/s against time, for buffer sizes 2, 8, 32, and 64.)

Figure 4: TFRC flow over Dummynet: oscillations prevented. (Send rate in KByte/s against time, for buffer sizes 2, 8, 32, and 64.)

If the value of the EWMA weight for calculating the average RTT is set to a small value such as 0.1 (meaning that 10% of the weight is on the most recent RTT sample) then TFRC does not react strongly to increases in RTT. In this case, we tend to see oscillations when a small number of TFRC flows share a high-bandwidth link with Drop-Tail queuing; the TFRC flows overshoot the link bandwidth and then experience loss over several RTTs. The result is that they back off together by a significant amount, and then all start to increase their rate together. This is shown for a single flow in Figure 3 as we increase the buffer size in Dummynet [19]. Although not disastrous, the resulting oscillation is undesirable for applications and can reduce network utilization. This is similar in some respects to the global oscillation of TCP congestion control cycles.

If the EWMA weight is set to a high value such as 0.5, then TFRC reduces its sending rate strongly in response to an increase in RTT, giving a delay-based congestion avoidance behavior. However, because the sender’s response is delayed and the sending rate is directly proportional to 1/R, it is possible for short-term oscillations to occur, particularly with small-scale statistical multiplexing at Drop-Tail queues. While undesirable, the oscillations from large EWMA weights tend to be less of a problem than the oscillations with smaller values of the EWMA weight.

What we desire is a middle ground, where we gain some short-term delay-based congestion avoidance, but in a form that has less gain than simply making the rate inversely proportional to the most recent RTT measurement. To accomplish this, we use a small value for the EWMA weight in calculating the average round-trip time R in Equation (1), and apply the increase or decrease functions as before, but then set the interpacket spacing as follows:

    t_interpacket = s * sqrt(R_0) / (T * M)    (2)

where s is the packet size, T is the transmit rate given by the control equation, R_0 is the most recent RTT sample, and M is the average of the square roots of the RTTs, calculated using an exponentially weighted moving average with the same time constant we use to calculate the mean RTT. (The use of the square-root function in Equation (2) is not necessarily optimal; it is likely that other sublinear functions would serve as well.) With this modification of the interpacket spacing, we gain the benefits of short-term delay-based congestion avoidance, but with a lower feedback loop gain, so that oscillations in RTT damp themselves out, as shown in Figure 4. The experiments in Figure 3 did not use this adjustment to the interpacket spacing, unlike the experiments in Figure 4.

3.1.4 Slow start

TFRC's initial rate-based slow-start procedure should be similar to the window-based slow-start procedure followed by TCP, where the sender roughly doubles its sending rate each round-trip time. However, TCP's ACK-clock mechanism provides a limit on the overshoot during slow start. No more than two outgoing packets can be generated for each acknowledged data packet, so TCP cannot send at more than twice the bottleneck link bandwidth.

A rate-based protocol does not have this natural self-limiting property, and so a slow-start algorithm that doubles its sending rate every measured RTT can overshoot the bottleneck link bandwidth by significantly more than a factor of two. A simple mechanism to limit this overshoot is for the receiver to feed back the rate that packets arrived at the receiver during the last measured RTT. If loss occurs, slow start is terminated, but if loss doesn't occur the sender sets its rate to:

    T_actual,i+1 = min( 2 * T_actual,i , 2 * T_received,i )
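The slow-start update above amounts to a one-line rule. A minimal sketch, assuming `current_rate` and `received_rate` (the receiver-reported arrival rate over the last RTT) are in the same units; the function name is illustrative:

```python
def slowstart_rate(current_rate, received_rate):
    """One slow-start step: double the sending rate each RTT, but never
    exceed twice the rate the receiver actually saw, mimicking the limit
    that TCP's ACK clock provides for free."""
    return min(2 * current_rate, 2 * received_rate)
```

When the path is uncongested the receiver reports roughly the sent rate, so the rate doubles; once the sender exceeds the bottleneck, the receiver's report caps the overshoot at about twice the bottleneck bandwidth.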


This limits the slow-start overshoot to be no worse than TCP's overshoot on slow start.

When a loss occurs causing slow start to terminate, there is no appropriate loss history from which to calculate the loss fraction for subsequent RTTs. The interval until the first loss is not very meaningful, as the rate changes rapidly during this time. The solution is to assume that the correct initial data rate is half of the rate when the loss occurred; the factor of one-half results from the delay inherent in the feedback loop. We then calculate the expected loss interval that would be required to produce this data rate, and use this synthetic loss interval to seed the history mechanism. Real loss-interval data then replaces this synthetic value as it becomes available.
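Seeding the history can be sketched by inverting a response function. This illustration uses the simple TCP response function T = 1.22*s/(R*sqrt(p)) from [12] with loss event rate p = 1/N, purely for clarity; the protocol itself uses Equation (1), and the function name is assumed:

```python
def synthetic_loss_interval(rate_at_loss, s, rtt):
    """Return the loss interval N (in packets) that would produce half the
    rate observed when slow start ended, used to seed the loss history."""
    target_rate = rate_at_loss / 2.0          # halve: feedback-loop delay
    # Invert T = 1.22 * s * sqrt(N) / R to solve for N.
    return (target_rate * rtt / (1.22 * s)) ** 2
```

The synthetic interval is only a starting point: as real loss intervals accumulate, they displace this value in the history.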

3.2 Discussion of Protocol Features

3.2.1 Loss fraction vs. loss event fraction

The obvious way to measure loss is as a loss fraction, calculated by dividing the number of packets that were lost by the number of packets transmitted. However, this does not accurately model the way TCP responds to loss. Different variants of TCP cope differently when multiple packets are lost from a window; Tahoe, NewReno, and Sack TCP implementations generally halve the congestion window once in response to several losses in a window, while Reno TCP typically reduces the congestion window twice in response to multiple losses in a window of data.

Because we are trying to emulate the best behavior of a conformant TCP implementation, we measure loss as a loss event fraction. Thus we explicitly ignore losses within a round-trip time that follow an initial loss, and model a transport protocol that reduces its window at most once for congestion notifications in one window of data. This closely models the mechanism used by most TCP variants.
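The collapsing of packet losses into loss events can be sketched as follows; this is a hypothetical helper, assuming loss timestamps are available at the receiver:

```python
def loss_events(loss_times, rtt):
    """Collapse packet-loss timestamps into loss events: a loss starts a new
    event only if it falls outside the RTT window opened by the previous
    event's first loss, so each event counts at most once per RTT."""
    events = []
    for t in sorted(loss_times):
        if not events or t >= events[-1] + rtt:
            events.append(t)
    return events
```

A burst of drops from a single queue overflow thus registers as one event, which is exactly why the loss event fraction tracks TCP's once-per-window reduction better than the raw loss fraction.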

In [5] we explore the difference between the loss-event fraction and the regular loss fraction in the presence of random packet loss. We show that for a stable steady-state packet loss rate, and a flow sending within a factor of two of the rate allowed by the TCP response function, the difference between the loss-event fraction and the loss fraction is at most 10%.

Where routers use RED queue management, multiple packet drops in a window of data are not very common, but with Drop-Tail queue management it is common for multiple packets to be lost when the queue overflows. This can result in a significant difference between the loss fraction and the loss event fraction of a flow, and it is this difference that requires us to use the loss event fraction so as to better model TCP's behavior under these circumstances.

A transient period of severe congestion can also result in multiple packets dropped from a window of data for a number of round-trip times, again resulting in a significant difference between the loss fraction and the loss event fraction during that transient period. In such cases TFRC will react more slowly using the loss event fraction, because the loss event fraction is significantly smaller than the loss fraction. However, this difference between the loss fraction and the loss event fraction diminishes if the congestion persists, as TFRC's rate decreases rapidly towards one packet per RTT.

3.2.2 Increasing the transmission rate

One issue to resolve is how to increase the sending rate when the rate given by the control equation is greater than the current sending rate. As the loss rate is not independent of the transmission rate, to avoid oscillatory behavior it might be necessary to provide damping, perhaps in the form of restricting the increase to be small relative to the sending rate during the period that it takes for the effect of the change to show up in feedback that reaches the sender.

In practice, the calculation of the loss event rate provides sufficient damping, and there is little need to explicitly bound the increase in the transmission rate. As shown in Appendix A.1, given a fixed RTT and no history discounting, TFRC's increase in the transmission rate is limited to about 0.14 packets per RTT every RTT. After an extended absence of congestion, history discounting begins, and TFRC begins to increase its sending rate by up to 0.22 packets per round-trip time.

An increase in transmission rate due to a decrease in measured loss can only result from the inclusion of new packets in the most recent loss interval at the receiver. If N is the number of packets in the TFRC flow's average loss interval, and w is the fraction of the weight on the most recent loss interval, then the transmission rate cannot increase by more than delta_T packets/RTT every RTT, where:

    delta_T = 1.2 * ( sqrt( N + 1.2*w*sqrt(N) ) - sqrt(N) )

The derivation is given in Appendix A.1, assuming the simpler TCP response function from [12] for the control equation. This behavior has been confirmed in simulations with TFRC, and has also been numerically modeled for the TCP response function in Equation (1), giving similar results with low loss rates and giving lower increase rates in high loss-rate environments.
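To see that this bound works out to roughly 0.14 packets/RTT, the expression can be evaluated numerically. This is an illustrative check, assuming the most recent loss interval carries roughly w = 0.2 of the total weight; the bound turns out to be nearly independent of N:

```python
import math

def max_increase(N, w):
    """Upper bound on TFRC's rate increase, in packets/RTT per RTT,
    using the simple response function T = 1.2*sqrt(N) packets/RTT:
    over one RTT the newest interval grows by about T packets, raising
    the weighted average interval by roughly 1.2*w*sqrt(N)."""
    return 1.2 * (math.sqrt(N + 1.2 * w * math.sqrt(N)) - math.sqrt(N))
```

For N anywhere between 16 and 100 packets, the bound stays close to 0.14 packets/RTT, consistent with the figure quoted in the text; a first-order expansion gives delta_T ~= 0.72*w regardless of N.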

3.2.3 Response to persistent congestion

In order to be smoother than TCP, TFRC cannot reduce its sending rate as drastically as TCP in response to a single packet loss, and instead responds to the average loss rate. The result of this is that in the presence of persistent congestion, TFRC reacts more slowly than TCP. Simulations in Appendix A.2 and analysis in [5] indicate that TFRC requires from four to eight round-trip times to halve its sending rate in response to persistent congestion. However, as we noted above, TFRC's milder response to congestion is balanced by a considerably milder increase in the sending rate than that of TCP, of about 0.14 packets per round-trip time.

3.2.4 Response to quiescent senders

Like TCP, TFRC's mechanism for estimating network conditions is predicated on the assumption that the sender is sending data at the full rate permitted by congestion control. If a sender is application-limited rather than network-limited, these estimates may no longer reflect the actual network conditions. Thus, when sufficient data becomes available again, the protocol may send it at a rate that is much too high for the network to handle, leading to high loss rates.

A remedy for this scenario for TCP is proposed in [7]. TFRC is well behaved with an application-limited sender, because a sender is never allowed to send data at more than twice the rate at which the receiver has received data in the previous round-trip time. Therefore, a sender that has been sending below its permissible rate cannot more than double its sending rate.

If the sender stops sending data completely, the receiver will no longer send feedback reports. When this happens, the sender halves its permitted sending rate every two round-trip times, preventing a large burst of data being sent when data again becomes available. We are investigating the option of reducing less aggressively after a quiescent period, and of using slow start to more quickly recover the old sending rate.
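The halving schedule for a quiescent sender can be sketched directly; the function name and the discrete-halving formulation are illustrative assumptions of this sketch:

```python
def permitted_rate_after_silence(initial_rate, rtt, silence):
    """Permitted sending rate after `silence` seconds without feedback:
    the rate is halved once for every two round-trip times of silence."""
    halvings = int(silence // (2 * rtt))
    return initial_rate / (2 ** halvings)
```

After, say, ten RTTs of silence the permitted rate has dropped by a factor of 32, so a sender resuming from a long idle period cannot burst at its old rate.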

4. EXPERIMENTAL EVALUATION

We have tested TFRC extensively across the public Internet, in the Dummynet network emulator [19], and in the ns network simulator. These results give us confidence that TFRC is remarkably fair when competing with TCP traffic, that situations where it performs very badly are rare, and that it behaves well across a very wide range of network conditions. In the next section, we present a summary of ns simulation results, and in Section 4.3 we look at behavior of the TFRC implementation over Dummynet and the Internet.

4.1 Simulation Results

Figure 5: TCP flow sending rate while co-existing with TFRC. (Two panels, Drop-Tail and RED queuing: normalized TCP throughput vs. link rate in Mb/s and number of flows.)

To demonstrate that it is feasible to widely deploy TFRC we need to demonstrate that TFRC co-exists acceptably well when sharing congested bottlenecks of many kinds with TCP traffic of different flavors. We also need to demonstrate that it behaves well in isolation, and that it performs acceptably over a wide range of network conditions. There is only space here for a summary of our findings, but we refer the interested reader to [13, 5] for more detailed results and simulation details, and to the code in the ns simulator [6].

Figure 5 illustrates the fairness of TFRC when competing with TCP Sack traffic in both Drop-Tail and RED queues. In these simulations n TCP and n TFRC flows share a common bottleneck; we vary the number of flows and the bottleneck bandwidth, and scale the queue size with the bandwidth. The graph shows the mean TCP throughput over the last 60 seconds of simulation, normalized so that a value of one would be a fair share of the link bandwidth. The network utilization is always greater than 90% and often greater than 99%, so almost all of the remaining bandwidth is used by the TFRC flows. These figures illustrate that TFRC and TCP co-exist fairly across a wide range of network conditions, and that TCP throughput is similar to what it would be if the competing traffic was TCP instead of TFRC.

The graphs do show that there are some cases (typically where the mean TCP window is very small) where TCP suffers. This appears to be because TCP is more bursty than TFRC. An open question that we have not yet investigated includes short- and medium-term fairness with TCP in an environment with abrupt changes in the level of congestion.

Figure 6: TCP competing with TFRC, with RED. (Normalized throughput and loss rate in percent vs. number of TCP flows and number of TFRC flows; 15 Mb/s RED queue.)

Figure 7: Coefficient of variation of throughput between flows. (CoV vs. loss rate, for TCP and TFRC.)

Although the mean throughput of the two protocols is rather similar, the variance can be quite high. This is illustrated in Figure 6, which shows the 15 Mb/s data points from Figure 5. Each column represents the results of a single simulation, and each data point is the normalized mean throughput of a single flow. The variance of throughput between flows depends on the loss rate experienced. Figure 7 shows this by graphing the coefficient of variation (CoV, the standard deviation divided by the mean) between flows against the loss rate in simulations with 32 TCP and 32 TFRC flows as we scale the link bandwidth and buffering. In this case, we take the mean throughput of each individual flow over 15 seconds, and calculate the CoV of these means. The results of ten simulation runs for each set of parameters are shown. The conclusion is that on medium timescales and typical network loss rates (less than about 9%), the inter-flow fairness of individual TFRC flows is better than that of TCP flows. However, in heavily overloaded network conditions, although the mean TFRC throughput is similar to TCP, TFRC flows show a greater variance between their throughput than TCP flows do.

We have also looked at Tahoe and Reno TCP implementations and at different values for TCP's timer granularity. Although Sack TCP


Figure 8: TFRC and TCP flows from Figure 5. (Throughput in KB per 0.15 s vs. time in seconds, for four TFRC and four TCP flows; top panel RED queue, bottom panel Drop-Tail queue; 32 flows, 15 Mb/s link; dropped packets marked.)

with relatively low timer granularity does better against TFRC than the alternatives, the performance of Tahoe and Reno TCP is still quite respectable.

Figure 8 shows the throughput for eight of the flows (four TCP, four TFRC) from Figure 5, for the simulations with a 15 Mb/s bottleneck and 32 flows in total. The graphs depict each flow's throughput on the congested link during the second half of the 30-second simulation, where the throughput is averaged over 0.15 s intervals; slightly more than a typical round-trip time for this simulation. In addition, a 0.15 s interval seems to be a plausible candidate for a minimum interval over which bandwidth variations would begin to be noticeable to multimedia users.

Figure 8 clearly shows the main benefit of equation-based congestion control over TCP-style congestion control for unicast streaming media, which is the relative smoothness in the sending rate. A comparison of the RED and Drop-Tail simulations in Figure 8 also shows how the reduced queuing delay and reduced round-trip times imposed by RED require a higher loss rate to keep the flows in check.

4.1.1 Performance at various timescales

We are primarily interested in two measures of performance of the TFRC protocol. First, we wish to compare the average send rates of a TCP flow and a TFRC flow experiencing similar network conditions. Second, we would like to compare the smoothness and variability of these send rates. Ideally, we would like for a TFRC flow to achieve the same average send rate as that of a TCP flow, and yet have less variability. The timescale at which the send rates are measured affects the values of these measures.

We define the send rate R_{f,delta}(t) of a given data flow f using s-byte packets at time t, measured at a timescale delta:

    R_{f,delta}(t) = s * (packets sent by f between t and t + delta) / delta    (3)

We characterize the send rate of the flow between time t_0 and t_1, where t_1 = t_0 + n*delta, by the time series {R_{f,delta}(t_0 + i*delta)}, i = 0, ..., n. The coefficient of variation (CoV) of this time series is the standard deviation divided by the mean, and can be used as a measure of variability [10] of the sending rate of an individual flow at timescale delta. For flows with the same average sending rate of one, the coefficient of variation would simply be the standard deviation of the time series; a lower value implies a flow with a range of sending rates more closely clustered around the average.

To compare the send rates of two flows A and B at a given timescale delta, we define the equivalence e_{A,B,delta}(t) at time t:

    e_{A,B,delta}(t) = min( R_{A,delta}(t) / R_{B,delta}(t) , R_{B,delta}(t) / R_{A,delta}(t) ),    (4)
        defined when R_{A,delta}(t) > 0 or R_{B,delta}(t) > 0

Taking the minimum of the two ratios ensures that the resulting value remains between 0 and 1. Note that the equivalence of two flows at a given time is defined only when at least one of the two flows has a non-zero send rate. The equivalence of two flows between time t_0 and t_1 can be characterized by the time series {e_{A,B,delta}(t_0 + i*delta)}, i = 0, ..., n. The average value of the defined elements of this time series is called the equivalence ratio of the two flows at timescale delta. The closer it is to 1, the more "equivalent" the two flows are. We choose to take the average instead of the median to capture the impact of any outliers in the equivalence time series. We can compute the equivalence ratio between a TCP flow and a TFRC flow, between two TCP flows, or between two TFRC flows. Ideally, the ratio would be very close to 1 over a broad range of timescales between two flows of the same type experiencing the same network conditions.
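The equivalence ratio of Equation (4) over two measured rate time series can be computed as in this illustrative sketch (the function name is assumed):

```python
def equivalence_ratio(rates_a, rates_b):
    """Average of min(a/b, b/a) over all sample points where at least one
    flow has a non-zero send rate; values lie between 0 and 1."""
    vals = []
    for a, b in zip(rates_a, rates_b):
        if a > 0 or b > 0:              # defined when at least one is non-zero
            if a == 0 or b == 0:
                vals.append(0.0)        # one silent flow gives equivalence 0
            else:
                vals.append(min(a / b, b / a))
    return sum(vals) / len(vals)        # average, not median, to keep outliers
```

Intervals where neither flow sends are skipped entirely, which is why sparse sending at high loss rates pulls the ratio down on short timescales, as discussed in Section 4.1.3.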

In [4] we also investigate the smoothness of TCP and TFRC by considering the change in the sending rate from one interval of length delta to the next. We show that TFRC is considerably smoother than TCP over small and moderate timescales.

4.1.2 Performance with long-duration background traffic

For measuring the steady-state performance of the TFRC protocol, we consider the simple, well-known single-bottleneck (or "dumbbell") simulation scenario. The access links are sufficiently provisioned to ensure that any packet drops or delays due to congestion occur only at the bottleneck link.

We considered many simulation parameters, but illustrate here a scenario with 16 SACK TCP and 16 TFRC flows, with a bottleneck bandwidth of 15 Mbps and a RED queue. To plot the graphs, we monitor the performance of one flow belonging to each protocol. The graphs are the result of averaging 14 such runs, and the 90% confidence intervals are shown. The loss rate observed at the bottleneck router was about 0.1%. Figure 7 has shown that for these low loss rates, TCP shows a greater variance in mean throughput than does TFRC.

Figure 9 shows the equivalence ratios of TCP and TFRC as a function of the timescale of measurement. Curves are shown for the mean equivalence ratio between pairs of TCP flows, between pairs


Figure 9: TCP and TFRC equivalence. (Equivalence ratio vs. timescale for throughput measurement in seconds; curves for TFRC vs TFRC, TCP vs TCP, and TFRC vs TCP.)

Figure 10: Coefficient of variation of TCP and TFRC. (CoV vs. timescale for throughput measurement in seconds.)

of TFRC flows, and between pairs of flows of different types. The equivalence ratio of TCP and TFRC is between 0.6 and 0.8 over a broad range of timescales. The measures for TFRC pairs and TCP pairs show that the TFRC flows are "equivalent" to each other on a broader range of timescales than the TCP flows.

Figure 10 shows that the send rate of TFRC is less variable than that of TCP over a broad range of timescales. Both this and the better TFRC equivalence ratio are due to the fact that TFRC responds only to the aggregate loss rate, and not to individual loss events.

From these graphs, we conclude that in this low-loss environment dominated by long-duration flows, the TFRC transmission rate is comparable to that of TCP, and is less variable than an equivalent TCP flow across almost any timescale that might be important to an application.

4.1.3 Performance with ON-OFF flows as background traffic

In this simulation scenario, we model the effects of competing web-like traffic with very small TCP connections and some UDP flows. Figures 11-13 present results from simulations with background traffic provided by ON/OFF UDP flows with ON and OFF times drawn from a heavy-tailed distribution. The mean ON time is one second and the mean OFF time is two seconds, with each source sending at 500 Kbps during an ON time. The number of simultaneous connections is varied between 50 and 150, and the simulation is run for 5000 seconds. There are two monitored connections: a long-duration TCP connection and a long-duration TFRC connection. We measure the send rates on several different timescales. The results shown in Figures 12 and 13 are averages of ten runs.

These simulations produce a wide range of loss rates, as shown in Figure 11. From the results in Figure 12, we can see that at low loss rates the equivalence ratio of TFRC and TCP connections is between 0.7 and 0.8 over a broad range of timescales, which is similar to the steady-state case. At higher loss rates the equivalence ratio is low on all but the longest timescales because packets are sent rarely. Any interval with only one flow sending a packet gives a value of zero in the equivalence time series, while intervals with neither flow sending a packet are not counted. This tends to result in a lower equivalence ratio. However, on long timescales, even at 40% loss (150 ON/OFF sources), the equivalence ratio is still 0.4, meaning that one flow gets about 40% more than its fair share and one flow gets 40% less. Thus TFRC is seen to be comparable to TCP over a wide range of loss rates even when the background traffic is very variable.

Figure 13 shows that the send rate of TFRC is less variable than the send rate of TCP, especially when the loss rate is high. Note that the CoV for both flows is much higher compared to the values in Figure 10 at comparable timescales. This is due to the high loss rates and the variable nature of background traffic in these simulations.

Figure 11: Loss rate at the bottleneck router, with ON-OFF background traffic. (Mean loss rate in percent vs. number of ON/OFF sources.)

Figure 12: TCP equivalence with TFRC, with ON-OFF background traffic. (Equivalence ratio vs. measurement timescale in seconds, for 60, 100, 130, and 150 ON/OFF sources.)

Figure 13: Coefficient of variation of TFRC (left) and TCP (right), with ON-OFF background traffic. (CoV vs. measurement timescale in seconds, for 60, 100, 130, and 150 ON/OFF sources.)

4.2 Effects of TFRC on Queue Dynamics

Because TFRC increases its sending rate more slowly than TCP, and responds more mildly to a loss event, it is reasonable to expect queue dynamics will be slightly different. However, because TFRC's slow-start procedure and long-term response to congestion are both similar to those of TCP, we expect some correspondence as well between the queueing dynamics imposed by TFRC and by TCP.

Figure 14: 40 long-lived TCP (top) and TFRC (bottom) flows, with Drop-Tail queue management. (Queue size in packets vs. time in seconds.)

Figure 14 shows 40 long-lived flows, with start times spaced out over the first 20 seconds. The congested link is 15 Mbps, and round-trip times are roughly 45 ms. 20% of the link bandwidth is used by short-lived, "background" TCP traffic, and there is a small amount of reverse-path traffic as well. Figure 14 shows the queue size at the congested link. In the top graph the long-lived flows are TCP, and in the bottom graph they are TFRC. Both simulations have 99% link utilization; the packet drop rate at the link is 4.9% for the TCP simulations, and 3.5% for the TFRC simulations. As Figure 14 shows, the TFRC traffic does not have a negative impact on queue dynamics in this case.

We have run similar simulations with RED queue management, with different levels of statistical multiplexing, with a mix of TFRC and TCP traffic, and with different levels of background traffic and reverse-path traffic, and have compared link utilization, queue occupancy, and packet drop rates [5, Appendix B]. While we have not done an exhaustive investigation, particularly at smaller timescales and at lower levels of link utilization, we do not see a negative impact on queue dynamics from TFRC traffic. In particular, in simulations using RED queue management we see little difference in the queue dynamics imposed by TFRC and by TCP.

An open question includes the investigation of queue dynamics with traffic loads dominated by short TCP connections, and the duration of persistent congestion in queues given TFRC's longer time before halving the sending rate. As Appendix A.2 shows, TFRC takes roughly five round-trip times of persistent congestion to halve its sending rate. This does not necessarily imply that TFRC's response to congestion, for a TFRC flow with round-trip time R, is as disruptive to other traffic as that of a TCP flow with a round-trip time 5R, five times larger. The TCP flow with a round-trip time of 5R seconds sends at an unreduced rate for the entire 5R seconds following a loss, while the TFRC flow reduces its sending rate, although somewhat mildly, after only R seconds.

4.3 Implementation Results

We have implemented the TFRC algorithm, and conducted many experiments to explore the performance of TFRC in the Internet. Our tests include two different transcontinental links, and sites connected by a microwave link, T1 link, OC3 link, cable modem, and dial-up modem. In addition, conditions unavailable to us over the Internet were tested against real TCP implementations in Dummynet. Full details of the experiments are available in [23].

Figure 15: Three TCP flows and one TFRC flow over the Internet. (Throughput in KByte/s vs. time in seconds; UCL -> ACIRI, 3 x TCP, 1 x TFRC.)

To summarize all the results, TFRC is generally fair to TCP traffic across the wide range of network types and conditions we examined. Figure 15 shows a typical experiment with three TCP flows and one TFRC flow running concurrently from London to Berkeley, with the bandwidth measured over one-second intervals. In this case, the transmission rate of the TFRC flow is slightly lower, on average, than that of the TCP flows. At the same time, the transmission rate of the TFRC flow is smooth, with a low variance; in contrast, the bandwidth used by each TCP flow varies strongly even over relatively short time periods, as shown in Figure 17. Comparing this with Figure 13 shows that, in the Internet, both TFRC and TCP perform very similarly to the lightly loaded (50 sources) "ON/OFF" simulation environment, which had less than 1% loss. The loss rate in these Internet experiments ranges from 0.1% to 5%. Figure 16 shows that fairness is also rather similar in the real world, despite the Internet tests being performed with less optimal TCP stacks than the Sack TCP in the simulations.

Figure 16: TCP equivalence with TFRC over different Internet paths. (Equivalence ratio vs. measurement timescale in seconds; paths: UCL, Mannheim, UMASS (Linux), UMASS (Solaris), Nokia Boston.)

Figure 17: Coefficient of variation of TFRC (left) and TCP (right) over different Internet paths. (CoV vs. measurement timescale in seconds.)

We found only a few conditions where TFRC was less fair to TCP or less well behaved:

- In conditions where the network is overloaded so that flows achieve close to one packet per RTT, it is possible for TFRC to get significantly more than its fair share of bandwidth.
- Some TCP variants we tested against exhibited undesirable behavior that can only be described as "buggy".
- With an earlier version of the TFRC protocol we experienced what appears to be a real-world example of a phase effect over the T1 link from Nokia when the link was heavily loaded. This is discussed further in [5].

The first condition is interesting because in simulations we do not normally see this problem. This issue occurs because at low bandwidths caused by high levels of congestion, TCP becomes more sensitive to loss due to the effect of retransmission timeouts. The TCP throughput equation models the effect of retransmission timeouts moderately well, but the t_RTO (TCP retransmission timeout) parameter in the equation cannot be chosen accurately. The FreeBSD TCP used for our experiments has a 500 ms clock granularity, which makes it rather conservative under high-loss conditions, but not all TCPs are so conservative. Our TFRC implementation is tuned to compete fairly with a more aggressive SACK TCP with low clock granularity, and so it is to be expected that it out-competes an older, more conservative TCP. Similarly unfair conditions are also likely to occur when different TCP variants compete under these conditions.

The effects of buggy TCP implementations can be seen in experiments from UMass to California, which gave very different fairness depending on whether the TCP sender was running Solaris 2.7 or Linux. The Solaris machine has a very aggressive TCP retransmission timeout, and appears to frequently retransmit unnecessarily, which hurts its performance [16]. Figure 16 shows the results for both Solaris and Linux machines at UMass; the Linux machine gives good equivalence results whereas Solaris does more poorly. That this is a TCP defect is more obvious in the CoV plot (Figure 17), where the Solaris TFRC trace appears normal, but the Solaris TCP trace is abnormally variable.

We also ran simulations and experiments to look for synchronization of the sending rates of TFRC flows (i.e., to look for parallels to the synchronized rate decreases among TCP flows when packets are dropped from multiple TCP flows at the same time). We found synchronization of TFRC flows only in a very small number of experiments with very low loss rates. When the loss rate increases, small differences in the experienced loss patterns cause the flows to desynchronize. This is discussed briefly in Section 6.3 of [23].

4.4 Testing the Loss Predictor

As described in Section 3.1.2, the TFRC receiver uses eight inter-loss intervals to calculate the loss event rate, with the oldest four intervals having decreasing weights. One measure of the effectiveness of this estimation of the past loss event rate is to look at its ability to predict the immediate future loss rate when tested across a wide range of real networks. Figure 18 shows the average predictor error and the average of the standard deviation of the predictor error for different history sizes (measured in loss intervals) and for constant weighting (left) of all the loss intervals versus TFRC's mechanism for decreasing the weights of older intervals (right). The figure is an average across a large set of Internet experiments including a wide range of network conditions.
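The weighted loss-interval average behind the loss event rate estimate can be sketched as follows. The eight-interval scheme with decreasing weights on the oldest four intervals follows the description here, but the specific weight values (1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2) and the function name are assumptions of this sketch:

```python
def loss_event_rate(intervals):
    """Estimate the loss event rate as the inverse of a weighted average
    of recent loss intervals; intervals[0] is the most recent interval,
    measured in packets between loss events."""
    weights = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]
    n = min(len(intervals), len(weights))
    avg = sum(w * x for w, x in zip(weights, intervals[:n])) / sum(weights[:n])
    return 1.0 / avg
```

Because the newest intervals carry full weight while the oldest fade out gradually, the estimate responds to sustained changes in congestion without reacting sharply to any single loss event.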

Prediction accuracy is not the only criterion for choosing a loss estimation mechanism, as stable steady-state throughput and quick

Figure 18: Prediction quality of TFRC loss estimation. (Average loss prediction error and its standard deviation vs. history size in loss intervals, for constant weights (left) and decreasing weights (right).)

reaction to changes in steady-state are perhaps equally important. However, these figures provide experimental confirmation that the choices made in Section 3.1.2 are reasonable.

5. SUMMARY OF RELATED WORK

The unreliable, unicast congestion control mechanisms closest to TCP maintain a congestion window which is used directly [8] or indirectly [18] to control the transmission of new packets. In [8] the sender uses TCP's congestion control mechanisms directly, and therefore its congestion control behavior should be similar to that of TCP. In the TEAR protocol (TCP Emulation at the Receivers) from [18], which can be used for either unicast or multicast sessions, the receiver emulates the congestion window modifications of a TCP sender, but then makes a translation from a window-based to a rate-based congestion control mechanism. The receiver maintains an exponentially weighted moving average of the congestion window, and divides this by the estimated round-trip time to obtain a TCP-friendly sending rate.

A class of unicast congestion control mechanisms one step removed from those of TCP are rate-based mechanisms using AIMD. The Rate Adaptation Protocol (RAP) [17] uses an AIMD rate control scheme based on regular acknowledgments sent by the receiver, which the sender uses to detect lost packets and estimate the RTT. RAP uses the ratio of long-term and short-term averages of the RTT to fine-tune the sending rate on a per-packet basis. This translation from a window-based to a rate-based approach also includes a mechanism for the sender to stop sending in the absence of feedback from the receiver. Pure AIMD protocols like RAP do not account for the impact of retransmission timeouts, and hence we believe that TFRC will coexist better with TCP in the regime where the impact of timeouts is significant. An AIMD protocol proposed in [21] uses RTP reports from the receiver to estimate the loss rate and round-trip times.

Bansal and Balakrishnan in [1] consider binomial congestion control algorithms, where a binomial algorithm uses a decrease in response to a loss event that is proportional to a power k of the current window, and otherwise uses an increase that is inversely proportional to a power l of the current window. AIMD congestion control is a special case of binomial congestion control with k = 1 and l = 0. [1] considers several binomial congestion control algorithms that are TCP-compatible and that avoid TCP's drastic reduction of the congestion window in response to a loss event.

Equation-based congestion control [12] is probably the class of unicast, TCP-compatible congestion control mechanisms most removed from the AIMD mechanisms of TCP. In [22] the authors describe a simple equation-based congestion control mechanism for unicast, unreliable video traffic. The receiver measures the RTT and the loss rate over a fixed multiple of the RTT. The sender then uses this information, along with the version of the TCP response function from [12], to control the sending rate and the output rate of the associated MPEG encoder. The main focus of [22] is not the congestion control mechanism itself, but the coupling between congestion control and error-resilient scalable video compression.

The TCP-Friendly Rate Control Protocol (TFRCP) [15] uses an equation-based congestion control mechanism for unicast traffic where the receiver acknowledges each packet. At fixed time intervals, the sender computes the loss rate observed during the previous interval and updates the sending rate using the TCP response function described in [14]. Since the protocol adjusts its send rate only at fixed time intervals, the transient response of the protocol is poor at lower time scales. In addition, computing the loss rate at fixed time intervals makes the protocol vulnerable to changes in RTT and sending rate. [13] compares the performance of TFRC and TFRCP, and finds that TFRC gives better performance over a wide range of timescales.

TCP-friendly mechanisms for multicast congestion control are discussed briefly in [5].

6. ISSUES FOR MULTICAST CONGESTION CONTROL

Many aspects of TFRC are suitable to form a basis for sender-based multicast congestion control. In particular, the mechanisms used by a receiver to estimate the loss event rate and by the sender to adjust the sending rate should be directly applicable to multicast. However, a number of clear differences exist for multicast that require design changes and further evaluation.

Firstly, there is a need to limit feedback to the multicast sender to prevent response implosion. This requires either hierarchical aggregation of feedback or a mechanism that suppresses feedback except from the receivers calculating the lowest transmission rate. Both of these add some delay to the feedback loop that may affect protocol dynamics.

Depending on the feedback mechanism, TFRC's slow-start mechanism may be problematic for multicast, as it requires timely feedback to safely terminate slow-start.

Finally, in the absence of synchronized clocks, it can be difficult for multicast receivers to determine their round-trip time to the sender in a rapid and scalable manner.

Addressing these issues will typically result in multicast congestion control schemes needing to be a little more conservative than unicast congestion control to ensure safe operation.

7. CONCLUSION AND OPEN ISSUES

In this paper we have outlined a proposal for equation-based unicast congestion control for unreliable, rate-adaptive applications. We have evaluated the protocol extensively in simulations and in experiments, and have made both the ns implementation and the real-world implementation publicly available [6]. We would like to encourage others to experiment with and evaluate the TFRC congestion control mechanisms, and to propose appropriate modifications.

While the current implementation of TFRC gives robust behavior in a wide range of environments, we certainly do not claim that this is the optimal set of mechanisms for unicast, equation-based congestion control. Active areas for further work include the mechanisms for the receiver's update of the loss event rate after a long period with no losses, and the sender's adjustment of the sending rate in response to short-term changes in the round-trip time. We assume that, as with TCP's congestion control mechanisms, equation-based congestion control mechanisms will continue to evolve based both on further research and on real-world experiences. As an example, we are interested in the potential of equation-based congestion control in an environment with Explicit Congestion Notification (ECN). Similarly, our current simulations and experiments have been with a one-way transfer of data, and we plan to explore duplex TFRC traffic in the future.

We have run extensive simulations and experiments, reported in this paper and in [5], [4], [13], and [23], comparing the performance of TFRC with that of standard TCP, with TCP with different parameters for AIMD's additive increase and multiplicative decrease, and with other proposals for unicast equation-based congestion control. In our results to date, TFRC compares very favorably with other congestion control mechanisms for applications that would prefer a smoother sending rate than that of TCP. There have also been proposals for increase/decrease congestion control mechanisms that reduce the sending rate in response to each loss event, but that do not use AIMD; we would like to compare TFRC with these congestion control mechanisms as well. We believe that the emergence of congestion control mechanisms for relatively-smooth congestion control for unicast traffic can play a key role in preventing the degradation of end-to-end congestion control in the public Internet, by providing a viable alternative for unicast multimedia flows that would otherwise be tempted to avoid end-to-end congestion control altogether.

Our view is that equation-based congestion control is also of considerable potential importance apart from its role in unicast congestion control. Equation-based congestion control can provide the foundation for scalable congestion control for multicast protocols. In particular, because AIMD and related increase/decrease congestion control mechanisms require that the sender decrease its sending rate in response to each loss event, these congestion control families do not provide promising building blocks for scalable multicast congestion control. Our hope is that, in contributing to a more solid understanding of equation-based congestion control for unicast traffic, the paper contributes to a more solid development of multicast congestion control as well.

8. ACKNOWLEDGEMENTS

We would like to acknowledge feedback and discussions on equation-based congestion control with a wide range of people, including the members of the Reliable Multicast Research Group, the Reliable Multicast Transport Working Group, and the End-to-End Research Group, along with Dan Tan and Avideh Zakhor. We also thank Hari Balakrishnan, Jim Kurose, Brian Levin, Dan Rubenstein, Scott Shenker, and the anonymous referees for specific feedback on the paper. Sally Floyd and Mark Handley would like to thank ACIRI, and Jitendra Padhye would like to thank the Networks Research Group at UMass, for providing supportive environments for pursuing this work.


9. REFERENCES

[1] D. Bansal and H. Balakrishnan. TCP-Friendly Congestion Control for Real-time Streaming Applications, May 2000. MIT Technical Report MIT-LCS-TR-806.
[2] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang. Recommendations on Queue Management and Congestion Avoidance in the Internet. RFC 2309, Informational, Apr. 1998.
[3] S. Floyd and K. Fall. Promoting the Use of End-to-end Congestion Control in the Internet. IEEE/ACM Transactions on Networking, Aug. 1999.
[4] S. Floyd, M. Handley, and J. Padhye. A Comparison of Equation-based and AIMD Congestion Control, May 2000. URL http://www.aciri.org/tfrc/.
[5] S. Floyd, M. Handley, J. Padhye, and J. Widmer. Equation-based Congestion Control for Unicast Applications: the Extended Version, March 2000. ICSI Technical Report TR-00-03, URL http://www.aciri.org/tfrc/.
[6] S. Floyd, M. Handley, J. Padhye, and J. Widmer. TFRC, Equation-based Congestion Control for Unicast Applications: Simulation Scripts and Experimental Code, February 2000. URL http://www.aciri.org/tfrc/.
[7] M. Handley, J. Padhye, and S. Floyd. TCP Congestion Window Validation, Sep. 1999. UMass CMPSCI Technical Report 99-77.
[8] S. Jacobs and A. Eleftheriadis. Providing Video Services over Networks without Quality of Service Guarantees. In World Wide Web Consortium Workshop on Real-Time Multimedia and the Web, 1996.
[9] V. Jacobson. Congestion Avoidance and Control. SIGCOMM Symposium on Communications Architectures and Protocols, pages 314-329, 1988. An updated version is available via ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z.
[10] R. Jain. The Art of Computer Systems Performance Analysis. John Wiley and Sons, 1991.
[11] R. Jain, K. Ramakrishnan, and D. Chiu. Congestion Avoidance in Computer Networks with a Connectionless Network Layer. Tech. Rep. DEC-TR-506, Digital Equipment Corporation, August 1987.
[12] J. Mahdavi and S. Floyd. TCP-friendly Unicast Rate-based Flow Control. Note sent to end2end-interest mailing list, Jan. 1997.
[13] J. Padhye. Model-based Approach to TCP-friendly Congestion Control. Ph.D. thesis, University of Massachusetts at Amherst, Mar. 2000.
[14] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP Throughput: A Simple Model and its Empirical Validation. SIGCOMM Symposium on Communications Architectures and Protocols, Aug. 1998.
[15] J. Padhye, J. Kurose, D. Towsley, and R. Koodli. A Model Based TCP-Friendly Rate Control Protocol. In Proceedings of NOSSDAV'99, 1999.
[16] V. Paxson. Automated Packet Trace Analysis of TCP Implementations. In Proceedings of SIGCOMM'97, 1997.
[17] R. Rejaie, M. Handley, and D. Estrin. An End-to-end Rate-based Congestion Control Mechanism for Realtime Streams in the Internet. In Proceedings of INFOCOM 99, 1999.
[18] I. Rhee, V. Ozdemir, and Y. Yi. TEAR: TCP Emulation at Receivers - Flow Control for Multimedia Streaming, Apr. 2000. NCSU Technical Report.
[19] L. Rizzo. Dummynet and Forward Error Correction. In Proc. Freenix 98, 1998.
[20] Reliable Multicast Research Group. URL http://www.east.isi.edu/RMRG/.
[21] D. Sisalem and H. Schulzrinne. The Loss-Delay Based Adjustment Algorithm: A TCP-Friendly Adaption Scheme. In Proceedings of NOSSDAV'98, 1998.
[22] D. Tan and A. Zakhor. Real-time Internet Video Using Error Resilient Scalable Compression and TCP-friendly Transport Protocol. IEEE Transactions on Multimedia, May 1999.
[23] J. Widmer. Equation-based Congestion Control, Feb. 2000. Diploma Thesis, URL http://www.aciri.org/tfrc/.

APPENDIX
A. ANALYSIS OF TFRC

A.1 Upper Bound on the Increase Rate

In this section we show that, given a fixed round-trip time and in the absence of history discounting, the TFRC mechanism increases its sending rate by at most 0.14 packets/RTT. History discounting is a component of the full Average Loss Interval method that is invoked after the most recent loss interval is greater than twice the average loss interval, to smoothly discount the weight given to older loss intervals. In this section we also show that with fixed round-trip times and the invocation of history discounting, the TFRC mechanism increases its sending rate by at most 0.22 packets/RTT.

For simplicity of analysis, in this section we assume that TFRC uses the deterministic version of the TCP response function [3] as the control equation, as follows:

T = 1.22 / (R √p)

This gives the sending rate T in packets/sec as a function of the round-trip time R and the loss event rate p. Thus, the allowed sending rate is at most 1.22/√p packets/RTT.
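The simplified response function above can be evaluated directly. The following sketch (function names are ours; the constant 1.22 is the one used in the text) illustrates both forms of the rate:

```python
import math

def allowed_rate_pps(R, p):
    """Sending rate T in packets/sec from the simplified TCP response
    function T = 1.22 / (R * sqrt(p)), for round-trip time R in seconds
    and loss event rate p."""
    return 1.22 / (R * math.sqrt(p))

def allowed_rate_per_rtt(p):
    """The same rate expressed in packets per RTT: 1.22 / sqrt(p)."""
    return 1.22 / math.sqrt(p)

# Example: R = 0.1 s and a 1% loss event rate give 122 packets/sec,
# i.e. 12.2 packets per round-trip time.
```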

To explore the maximum increase rate for a TFRC flow with a fixed round-trip time, consider the case of a single TFRC flow with a round-trip time of R seconds, on a path with no competing traffic. Let N be the TFRC flow's average loss interval in packets, as calculated at the receiver. The reported loss event rate is 1/N, and the allowed sending rate is 1.22√N pkts/RTT.

After a round-trip time with no packet drops, the receiver has received 1.22√N additional packets, and the most recent loss interval increases by 1.22√N packets. Let the most recent loss interval be weighted by weight w in calculating the average loss interval, for 0 ≤ w ≤ 1 (with the weights expressed in normalized form so that the sum of the weights is one). For our TFRC implementation in the normal case, when history discounting is not invoked, w = 1/6. The calculated average loss interval increases from N to at most N + 1.22w√N packets. The allowed sending rate increases from 1.22√N to at most 1.22√(N + 1.22w√N) packets/RTT.

Therefore, given a fixed round-trip time, the sending rate increases by at most δ packets/RTT, for

1.22√(N + 1.22w√N) = 1.22√N + δ.

This gives the following solution for δ:

δ = 1.22 (√(N + 1.22w√N) − √N).   (5)

Solving this numerically for w = 1/6, as in TFRC without history discounting, this gives δ ≈ 0.12 for N ≥ 1. Thus, given a fixed round-trip time, and without history discounting, the sending rate increases by at most 0.12 packets/RTT.

This analysis assumes TFRC uses the simple TCP control equation [3], but we have also numerically modeled the increase behavior using Equation (1). Due to slightly different constants in the equation, the upper bound now becomes 0.14 packets/RTT. With the simple equation the usual increase is close to the upper bound; with Equation (1) this is still the case for flows where the loss rate is less than about 5%, but at higher loss rates the increase rate is significantly lower than this upper bound. When history discounting is invoked, given TFRC's minimum discount factor of 0.5, the relative weight for the most recent interval can be increased up to w = 0.29; this gives δ ≈ 0.22, giving an increase in the sending rate of at most 0.22 packets/RTT in that case.

As this section has shown, the increase rate at the TFRC sender is controlled by the mechanism for calculating the loss event rate at the TFRC receiver. If the average loss rate were calculated simply as the most recent loss interval, this would mean a weight w of 1, resulting in δ ≈ 0.7. Thus, even if all the weight were put on the most recent interval, TFRC would increase its sending rate by less than one packet/RTT, given a fixed measurement for the round-trip time.
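The numerical claims above are straightforward to reproduce. The sketch below (our own code, using the constant 1.22 from the text) evaluates Equation (5) over a range of loss intervals for both weightings:

```python
import math

C = 1.22  # constant of the simplified TCP response function

def delta(N, w):
    """Increase in allowed rate (pkts/RTT) after one loss-free RTT,
    from Equation (5): delta = C*(sqrt(N + C*w*sqrt(N)) - sqrt(N))."""
    return C * (math.sqrt(N + C * w * math.sqrt(N)) - math.sqrt(N))

# Without history discounting the newest interval has normalized weight
# w = 1/6; delta stays near 0.12 pkts/RTT, approaching C**2/12 for large N.
d_normal = max(delta(N, 1 / 6) for N in [1, 10, 100, 10**4, 10**6])

# With all weight on the newest interval (w = 1), delta approaches
# C**2/2, about 0.74 -- still under one packet per RTT.
d_worst = max(delta(N, 1.0) for N in [1, 10, 100, 10**4, 10**6])
```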

Figure 19: A TFRC flow with an end to congestion at time 10.0. (Graph: allowed sending rate in packets per RTT vs. time.)

To informally verify the analysis above, we have run simulations exploring the increase in the sending rate for the actual TFRC protocol. Figure 19 shows a TFRC flow with every 100th packet being dropped, from a simulation in the ns simulator. Then, after time 10.0, no more packets are dropped. Figure 19 shows the sending rate in packets per RTT; this simulation uses 1000-byte packets. As Figure 19 shows, the TFRC flow does not begin to increase its rate until time 10.75; at this time the current loss interval exceeds the average loss interval of 100 packets. Figure 19 shows that, starting at time 10.75, the sender increases its sending rate by 0.12 packets each RTT. Starting at time 11.5, the TFRC receiver invokes history discounting, in response to the detected discontinuity in the level of congestion, and the TFRC sender slowly changes its rate of increase, increasing its rate by up to 0.29 packets per RTT. The simulation in Figure 19 informally confirms the analysis in this section.

A.2 The Lower Bound on TFRC's Response Time for Persistent Congestion

This section uses both simulations and analysis to explore TFRC's response time for responding to persistent congestion. We consider the following question: for conditions with the slowest response to congestion, how many round-trip times of persistent congestion are required before TFRC congestion control halves the sending rate? [5, Appendix A.2] shows that, given a model with fixed round-trip times and a control equation with the sending rate proportional to 1/√p, at least five round-trip times of persistent congestion are required before TFRC halves the sending rate.

Figure 20: A TFRC flow with persistent congestion at time 10. (Graph: allowed sending rate in KBps vs. time.)

Simulations show that this lower bound of five round-trip times is close to the actual number of round-trip times of persistent congestion required for the sender to halve its sending rate. To informally verify this lower bound, which applies only to the simplified model described above with equal loss intervals before the onset of persistent congestion, we have run simulations exploring the decrease in the sending rate for the actual TFRC protocol. This is illustrated in the simulation shown in Figure 20, which consists of a single TFRC flow. From time 0 until time 10, every 100th packet is dropped, and from time 10 on, every other packet is dropped. Figure 20 shows the TFRC flow's allowed sending rate as calculated at the sender every round-trip time, with a mark each round-trip time, when the sender receives a new report from the receiver and calculates a new sending rate. As Figure 20 shows, when persistent congestion begins at time 10, it takes five round-trip times for the sending rate of the TFRC flow to be halved.
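A rough, self-contained model (not the full TFRC implementation) gives a feel for why the halving time is several RTTs. We assume, as illustrative simplifications, the eight loss-interval weights 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2 from the Average Loss Interval method, a rate of 1.22√N̄ pkts/RTT, and that each RTT of persistent congestion contributes one new loss interval equal to the packets sent in that RTT:

```python
import math

WEIGHTS = [1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2]  # loss-interval weights (sum 6)

def avg_interval(intervals):
    """Weighted average loss interval; intervals[0] is the most recent."""
    return sum(w * n for w, n in zip(WEIGHTS, intervals)) / sum(WEIGHTS)

def rtts_to_halve(n0=100.0):
    """RTTs of persistent congestion until the allowed rate, 1.22*sqrt(Nbar)
    pkts/RTT, drops to half its pre-congestion value, in a toy model where
    each congested RTT adds one loss interval equal to that RTT's rate."""
    intervals = [n0] * 8
    rate0 = 1.22 * math.sqrt(avg_interval(intervals))
    rate, rtts = rate0, 0
    while rate > rate0 / 2:
        intervals = [rate] + intervals[:-1]   # one new loss event this RTT
        rate = 1.22 * math.sqrt(avg_interval(intervals))
        rtts += 1
    return rtts
```

For the default pre-congestion interval the result of this toy model falls within the four-to-eight RTT range that Figure 21 reports for the actual protocol.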

Figure 21: Number of round-trip times to halve the sending rate. (Graph: round-trip times vs. initial packet drop rate.)

Figure 21 plots the number of round-trip times of persistent congestion before the TFRC sender cuts its sending rate in half, using the same scenario as in Figure 20 with a range of values for the initial packet drop rate. For the TFRC simulations in Figure 21, the number of round-trip times required to halve the sending rate ranges from four to eight round-trip times. For higher packet drop rates, the TFRC sender's control equation is nonlinear in 1/√p, so it is not surprising that the lower bound of five round-trip times does not always apply.

We leave an upper bound on the number of round-trip times required to halve the sending rate as an open question. One possible scenario to investigate would be a scenario with a very large loss interval just before the onset of persistent congestion.
