Optical Packet Contention Resolution Through Edge Smoothing Into Decomposed Subflows

Zheng Lu and David K. Hunter

Abstract—Motivated by the difficulty of implementing optical packet switches having large optical buffers, we propose a new, simple, and effective way of applying electronic traffic smoothing at the edge of an optical packet switched network to reduce core optical buffering requirements. A single class of service is assumed. Incoming traffic that is destined for a particular egress edge router is smoothed at each ingress edge router by a buffer scheduling algorithm and decomposed into several constant bit rate subflows, each with a uniform packet interarrival time. Each such subflow emerging from a core switch also retains these traffic characteristics. The rates of these subflows obey a simple mathematical relationship, and it is shown that, with the appropriate queuing strategy, this substantially reduces the core buffering requirements. Indeed, it is shown through analysis and simulation that scalability is improved over existing approaches, and computation is simplified. The proposal in this paper requires much less core buffering than one without smoothing, while compared with other smoothing proposals, it is more scalable both to large networks and a large number of smoothed flows. A condition on the fiber delay line buffer depth requirements is also derived for no packet loss to take place in any of the scenarios modeled in this paper. Furthermore, it is confirmed through simulation that packet jitter is not generated in the network core.

Index Terms—Computer network performance; Optical communication; Packet switching; Scheduling.

I. INTRODUCTION

Previous publications [1–3] show that electronic traffic smoothing at the edge of an optical packet switched (OPS) network can reduce packet contention in the core and hence reduce optical buffering requirements there; this principle is outlined in Fig. 1.
In this paper, a simpler and more easily implemented traffic smoothing and contention resolution scheme is proposed. It greatly reduces core packet switch contention through a scheme known as smoothed flow decomposition (SFD), which builds upon a previously proposed concept [1], called general traffic smoothing (GTS) in this paper.

Indeed, contention resolution has been a fundamental problem since optical packet switching [4,5] was first proposed, because there is no optical RAM analogous to that used in the electronic domain [6,7]. Three types of contention resolution are commonly proposed other than edge smoothing: fiber delay line (FDL) buffering in the time domain, deflection routing in the space domain, and wavelength conversion in the wavelength domain. Combining several of these techniques to resolve contention has been studied [8–10]; however, by also employing traffic smoothing at the edge of the optical network, the core contention resolution requirements can be reduced. The new traffic smoothing scheme proposed in this paper is simpler and more scalable than earlier solutions, yielding performance improvements as demonstrated below.

A. Existing Work

Because it simplifies optical packet switch design, slotted OPS is assumed in this paper, where Internet Protocol (IP) datagrams are segmented into fixed-length optical packets for transmission over the network core. Other work on slotted OPS networks ensures fairness through a capacity allocation algorithm and addresses contention problems by means of both core switch buffering and deflection routing [11]. Elsewhere, optical time-slot interchangers have been proposed to switch in the time domain, achieving high statistical multiplexing gain, while eliminating any requirement for wavelength converters [12].
Manuscript received March 25, 2009; revised November 6, 2009; accepted November 10, 2009; published November 25, 2009 (Doc. ID 109208).

Z. Lu is with Acorah Software Products Ltd., Wokingham, Berkshire RG40 1XS, United Kingdom.

D. K. Hunter (e-mail: [email protected]) is with the School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, United Kingdom.

Digital Object Identifier 10.1364/JOCN.1.000622

J. OPT. COMMUN. NETW., VOL. 1, NO. 7, DECEMBER 2009, p. 622. 1943-0620/09/070622-14/$15.00 © 2009 Optical Society of America

Link-utilization-based detection and transmission rate adjustment for Transmission Control Protocol (TCP)





ACK segments has been proposed to reduce congestion and thus reduce the size of optical buffers [13].

Also, a contention reduction scheme has been proposed to reduce core network contention through flow information signaling [1,3]. A rate-based pacing scheme for edge traffic smoothing using an explicit congestion control protocol was also proposed to reduce the need for optical buffering in core switches [2]. Furthermore, for networks employing TCP as a transport layer protocol, it has been shown [14] that electronic router buffer capacities can be decreased dramatically over that decided by the usual rule of thumb; however, the buffer size required in this case is dependent on the number of TCP connections sharing the same link. Very small buffers have been shown to be sufficient when carrying many TCP connections over each link, while nevertheless exhibiting acceptable performance [15–17]. Quality of service (QoS) differentiation through contention resolution has also been proposed, by exploiting recirculation buffering and deflection routing [18]. In the work discussed above, a variety of techniques and algorithms were proposed to reduce contention in the optical domain [1,11,12,18] through the use of large optical buffers, wavelength conversion, deflection routing, or a combination of these. Several of these publications propose methods for reducing optical packet loss when it occurs while congestion takes place [2,3,13]; this is in contrast to taking steps to avoid or reduce contention

by reacting to it appropriately. Indeed, network congestion can have serious consequences (namely, increased contention and packet loss), and the methods proposed for reducing it include both passive techniques and active signaling. A different approach abandons the traditional rule of thumb used for router buffer dimensioning in networks employing TCP; much smaller buffers are proposed for use in core networks carrying many TCP connections [14–17], which of course has implications for optical packet switching networks employing optical buffering. However, this has the disadvantage that the required router buffer capacity depends on the number of TCP flows present.

Fig. 1. (Color online) Simple example of an optical packet switched network using edge smoothing, showing user terminals, edge routers with electronic traffic smoothing, and core optical packet switches. One of the flows that exists between each pair of edge routers is shown in the diagram.

Essentially, previous work relies heavily on optical buffering, wavelength conversion, deflection routing, and complex signaling. However, these are not easy to implement in very high-speed optical networks, and combining several of them is even more difficult. In this paper, a scalable and easily implemented rate decomposition algorithm is proposed that does not rely on these techniques.

B. Motivation

This paper makes the contribution of proposing an effective, scalable, and simple scheme to resolve contention in OPS networks, without the need for wavelength conversion or deflection routing, and with a very small amount of core optical buffering. A single


class of service is assumed because, as described below, the delay in the core is uniform and bounded, and indeed packet loss can be eliminated. Furthermore, the parameters of the smoothing scheme of Section II.A below can be configured in most network scenarios to reduce delay to a level that is acceptable for all applications.

To demonstrate the principle of SFD in this paper, it is assumed that the routing algorithm ensures that no core link is overutilized, i.e., that the load presented to each link does not exceed its capacity. The implementation of a routing algorithm for the network core, with associated load balancing, is a separate topic, which is outside the scope of this paper.

SFD is simpler than existing edge smoothing schemes because each packet flow between a pair of edge routers is subdivided into decomposed subflows, the rates of which obey a simple mathematical relationship that is explained and justified below. In particular, it is shown that in conjunction with an appropriate buffering strategy in core optical packet switches, the use of such decomposed subflows permits further reductions to be made in core buffering capacities while still retaining acceptable packet loss performance.

In the scenario proposed here, IP traffic enters each edge router from customer networks and is converted into smoothed slotted OPS traffic that is injected into the core. The scheme reduces contention by decomposing each smoothed constant bit rate (CBR) traffic stream generated at the network edge into CBR subflows and by employing simple coordination between core switches that is based upon simple rules and requires no explicit signaling packets; it is introduced and explained in Section II. In large networks employing normal FIFO (first in first out) packet buffering in the core optical packet switches [1,2], smoothed traffic entering the core from each edge router would become bursty after traversing many core switches due to buffering and drop tail operation [19]; due to contention, conventional FIFO buffers often accumulate contended packets and transmit them as a burst. This is also explained in Section II. In this paper, edge scheduling of traffic flows that have been decomposed into smaller CBR units, and a simple core network algorithm for scheduling slots, are both introduced, overcoming this problem and dramatically reducing contention in large networks.

For analytical convenience, Poisson models are often used when analyzing Internet data traffic, whereas Markov models are well understood, and specify bounds on performance. However, they both often underestimate the network resources required [20]. Traffic measurements have shown self-similarity and heavy-tailed behavior in variable bit rate compressed video traffic [21,22], local area network (LAN) traffic [23], and wide area network (WAN) traffic [20]. In reality, Internet traffic is nonstationary, and periodically it changes as new applications and technologies emerge. Hence networking algorithms cannot be designed only based on current Internet traffic, specifically because

• The network infrastructure and protocols are

perpetually changing and vary between geographical areas due to design considerations based upon economics and politics.

• New applications are always emerging, such as peer-to-peer applications; traffic statistics may even vary for the same application, when used under different conditions.

• TCP will probably remain the dominant transport protocol for some time; however, the default behavior of many hosts may not suit a particular smoothing algorithm, because many TCP parameters typically have different default settings in different operating systems.

• With the increased use of real-time services, the volume of user datagram protocol (UDP) traffic is currently increasing. For example, the real-time transport protocol (RTP) and its control companion real-time control protocol (RTCP) are built on top of UDP. RTP/RTCP are designed for applications employing audio and video streaming. The changing proportions of applications employing either TCP or UDP will doubtless change the characteristics of Internet traffic in the future.

For these reasons, smoothing schemes that are designed only for a specific type of traffic statistic are not sustainable, and therefore SFD, the traffic smoothing scheme proposed in this paper, is not specific to a particular type of incoming traffic. Furthermore, it employs slotted packet transport in the optical core in order to simplify optical packet switch implementation.

Section II introduces the edge traffic smoothing scheme assumed and discusses the concepts that motivate SFD. Section III analyzes SFD mathematically and presents conditions for no packet loss to occur in core switches. Section IV presents the numerical results, while Section V concludes this paper.

II. TRAFFIC SMOOTHING WITH FLOW DECOMPOSITION

For completeness, this section summarizes the main aspects of implementing traffic smoothing to reduce optical buffering requirements in the core network and further discusses the motivation for SFD.

A. Edge Traffic Smoothing

A full description of the traffic smoothing scheme that was assumed in conjunction with SFD to obtain


the performance evaluation results reported in this paper can be found in a previous publication [1]; however, it is summarized here for completeness. Time is divided into intervals, called negotiation intervals. At the network edge, each incoming edge-to-edge traffic flow is smoothed by a buffer into a stream of fixed-length optical packets that may change the rate at the beginning of each negotiation interval, based upon stepwise rate estimation. A flow is defined as the data stream from one ingress edge router to another egress edge router (Fig. 1); each edge router has a buffer to carry the flow destined for each other edge router. Throughout each negotiation interval, the packets in each flow are transmitted in slotted form by the ingress edge router at a constant rate, much like CBR traffic in asynchronous transfer mode (ATM). At the beginning of each negotiation interval, the packet transmission rate over the core may be changed through renegotiation. The data rate of an edge-to-edge flow during a negotiation interval is fixed.

To facilitate estimation of the service rate to be requested for the next negotiation interval, time is split up into small units of say Δ = 5 ms, which are shorter than would be required for any foreseeable traffic type. After each such time unit, the moving average arrival rate into each buffer is recalculated using the moving average calculation of Eq. (1). Ui is the measured arrival rate for traffic entering the buffer during unit i, while Ri is the smoothed arrival rate, defined via Eq. (1) below. The service rate to be requested at the next renegotiation is obtained by taking the mean of Ri over all units in the last negotiation interval. The parameter α decides to what extent previous experience and to what extent the current measured rate should influence the new service rate and is determined by considering the arrival rate, the current service rate, and the available network capacity. This heuristic algorithm is based upon a moving average calculation of the arrival rate into the buffer and may be described as follows:

Ri+1 = αRi + (1 − α)(Ui + B/Δ).    (1)

Δ is the duration of an iteration of the algorithm, and B is the buffer occupancy. Hence B/Δ represents the additional transmission capacity required to empty the buffer during the next iteration; Ui + B/Δ is the total transmission capacity required. A detailed description of the above equation, and how it motivates the operation of the edge smoothing algorithm, can be found in an earlier paper [1].
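As a concrete illustration, the stepwise estimator of Eq. (1) can be sketched as below. The function and parameter names, and the example values α = 0.5 and Δ = 5 ms, are illustrative assumptions, not values prescribed by the paper:

```python
def smoothed_rate(prev_R, measured_U, buffer_bits, alpha=0.5, delta=0.005):
    """One iteration of Eq. (1): R_{i+1} = alpha*R_i + (1 - alpha)*(U_i + B/delta).

    prev_R      -- previous smoothed arrival rate R_i (bits/s)
    measured_U  -- arrival rate U_i measured during the last unit (bits/s)
    buffer_bits -- current buffer occupancy B (bits); B/delta is the extra
                   capacity needed to drain the buffer within one unit
    """
    return alpha * prev_R + (1 - alpha) * (measured_U + buffer_bits / delta)

def next_service_rate(R_values):
    """Rate requested at renegotiation: mean of R_i over the last interval."""
    return sum(R_values) / len(R_values)
```

In use, `smoothed_rate` would be applied once per unit Δ, and `next_service_rate` once per negotiation interval over the accumulated Ri values; the requested rate is then decomposed into subflows as described in Section II.B.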

As discussed below, unlike previous proposals, SFD does not generate signaling traffic; instead, the algorithm makes use of information contained in the slotted optical data packet header. At each edge electronic buffer in SFD, the parameters governing traffic smoothing are adjusted dynamically in response to

both the volume of incoming traffic and the occupancy of the buffer itself. Each optical packet header carries a small amount of information to facilitate coordination in the core. Because of the simplicity of the protocol, little overhead is required for this purpose: the flow ID, the subflow ID, and the subflow rate. Each core switch checks the flow ID and subflow ID to determine the rate and then schedules the packet onto the appropriate fiber delay line.

B. Flow Decomposition and Core Switch Coordination

To achieve a constant mean data rate during one smoothing interval, optical packets in the core might be scheduled in several ways: irregular unsmoothed slot arrivals [Fig. 2(a)]; regular slot arrivals [Fig. 2(b)]; or a combination of several subflows each with regular slot arrivals, i.e., decomposed flows [Fig. 2(c)]. Implementation of such decomposed flows is discussed and evaluated in this paper. However, the same flow if carried by each of these three methods must have the same mean traffic rate, which is approximately equal to the estimated rate for the next negotiation interval.

As a simple example, consider traffic injected into the core in each of the three scenarios identified above. Figure 2(a) contains unpredictable groups of slots that have not been smoothed and would incur serious core switch contention and corresponding loss due to the small buffers. Figure 2(b) shows how external traffic can be smoothed into regularly spaced constant rate streams at an ingress edge router during one negotiation interval; this is called general traffic smoothing (GTS) in this paper. However, GTS employs FIFO optical packet buffering in the core, and as a result, packets often become grouped together after traversing several intermediate core switches. These unpredictable groups of packets, resulting in bursty traffic, will degrade contention loss performance throughout the path to the destination egress edge router. Simulation results are described later that demonstrate this.

Fig. 2. Illustration of different flows injected into the OPS core: (a) random unsmoothed packets, (b) packets with constant interarrival times, (c) a simple example of decomposed flows for illustration.

Figure 2(c) shows the flow after being decomposed into several subflows by SFD and is chosen as a very


simple example to illustrate the concept of flow decomposition. In real implementations that are discussed later, a flow may be decomposed into many subflows, and each of these will be at a different rate. These subflows are recombined into the original single flow at the egress edge router, based upon sequence information held in each optical packet header. As discussed before, even a GTS flow that initially had regularly spaced packets can become bursty after traversing several core switches because of contention in the FIFO buffers between flows sharing the same multiplexed link. Early research work on ATM corroborated this point [19]. The purpose of SFD is to avoid the need for an FIFO packet buffering discipline in core switches and hence not disturb the regular timing of packets in each subflow. In this way, the input flows into any core switch follow the original smoothed patterns, even after traversing many core switches.

As shown in Section III, SFD can work well with a high load in a large network with very little FDL buffering. However, a number of important questions arise:

• Exactly how should a flow be decomposed into subflows? What should be the relative timing of these subflows? How is contention in the core avoided between different subflows from different ingress edge routers? These subflows must obey certain rules in order to avoid such contention. Not only is it shown later that such rules exist, but they are derived and presented.

• How should core optical packet switches schedule packets? In GTS, by traversing many core switches en route, packets tend to become grouped together, and the traffic becomes more bursty. By scheduling these decomposed subflows appropriately in SFD, the output subflows from each optical packet switch retain the uniform interarrival times of the input subflows; they are in turn the input subflows for the next core switch.

As described above, a single class of service is assumed, meaning that there is a single flow from each source edge router to each destination edge router, so SFD is not aware of individual flows (for example, those generated through TCP) that are produced by applications. For this reason, the processing burden is not heavy, especially because the algorithm used to decompose flows is very simple (see Section III), which simplifies control of core nodes. The algorithm only considers the relative positions of decomposed flows. For example, suppose that a routing change takes place, so that some flows originally destined for node A are diverted to node B instead. Node B still implements the shifting algorithm correctly, because each node does not expect any particular set of flows to arrive at its inputs, and the decomposition algorithm always ensures that any combination of flows is correctly shifted by each node. In other words, the scheduling algorithms work correctly independently of the route. Furthermore, in order to implement this, no internode information exchange is required, and the algorithm only requires that the relative (not absolute) positions of packets are maintained on each link throughout the core.

In a real implementation, an optical packet synchronization stage is required on each input of each core switch, in order to ensure that slots arriving on different switch inputs do so at the same time, just as in any slotted optical packet switching network.
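The paper's own decomposition rule is developed in Section III. Purely as an illustration of how a requested rate can be split into subflows whose interarrival times pairwise divide one another, the following binary-expansion sketch may help; the function names, and the restriction to a power-of-two interval, are assumptions of this sketch rather than the paper's algorithm:

```python
def decompose(packets_per_interval, interval_slots):
    """Split a rate of `packets_per_interval` packets every `interval_slots`
    slots (a power of two) into subflow interarrival times (in slots) that
    pairwise divide one another, using the binary expansion of the rate."""
    assert interval_slots & (interval_slots - 1) == 0, "power-of-two interval"
    assert packets_per_interval <= interval_slots, "link must not be overloaded"
    periods, k, p = [], 0, packets_per_interval
    while p:
        if p & 1:  # bit k set: one subflow carrying 2**k packets per interval
            periods.append(interval_slots // (1 << k))
        p >>= 1
        k += 1
    return periods
```

For example, 5 packets per 16-slot interval yields subflows with periods 16 and 4 slots (rates 1/16 and 1/4 of the link), whose aggregate rate is exactly 5/16 and whose periods are mutually divisible, as required by the relationship derived in Section III.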

C. Edge Buffer Requirements

The approach taken in this paper differs from conventional packet switching with small core router buffers and also differs from optical burst switching, which requires large edge buffers in order to assemble bursts. SFD smooths incoming traffic at the edge of the core, but the buffer requirements are limited because if it seems likely to overflow, rate renegotiation is triggered.

The statistical characteristics and performance of GTS was illustrated through simulation, where two edge OPS switches are connected through a core OPS switch. Each edge OPS switch has two traffic sources connected to it, which generate either Poisson traffic or self-similar traffic with a Hurst parameter of 0.8. The full link capacity is 10 Gbps and the simulated load is approximately 0.4. ΔR is manually configured to 25 Mbps with an 800 ms negotiation interval.
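Self-similar input of this kind is commonly approximated by aggregating on/off sources whose on and off durations are Pareto distributed, with shape parameter a = 3 − 2H (so a = 1.4 for H = 0.8). The paper does not specify its generator, so the sketch below, including all function and parameter names, is purely an illustrative assumption:

```python
import random

def pareto_onoff_arrivals(n_slots, n_sources=16, hurst=0.8, peak_rate=1, seed=1):
    """Approximate self-similar traffic by superposing on/off sources whose
    on/off durations are Pareto distributed with shape a = 3 - 2H."""
    a = 3.0 - 2.0 * hurst          # Pareto shape; a = 1.4 for H = 0.8
    rng = random.Random(seed)
    total = [0] * n_slots           # packets offered in each slot
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < n_slots:
            dur = max(1, int(rng.paretovariate(a)))  # heavy-tailed duration
            if on:
                for s in range(t, min(t + dur, n_slots)):
                    total[s] += peak_rate
            t += dur
            on = not on
    return total
```

Such a trace, fed into the edge smoothing buffer, reproduces the bursty peaks in occupancy discussed next.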

As expected, Figs. 3 and 4 show that the edge buffer occupancy when smoothing self-similar traffic has higher peaks, each of which may push the traffic

Fig. 3. Edge switch electronic buffer occupancy with general traffic smoothing and with Poisson traffic input.


smoothing scheme to renegotiate for a new rate. Comparing the second and third columns of Table I shows that smoothing self-similar traffic requires more edge electronic buffering than less bursty Poisson traffic. However, because optical buffering in the optical core network is difficult to implement and expensive, and because electronic memory for edge buffering is easily and cheaply implemented, this is not a problem.

III. ANALYSIS OF SFD

This section describes and justifies the simple algorithms used to implement SFD.

A. Scheduling of Decomposed Subflows

In this section, the scheduling algorithm for decomposed subflows is described and justified analytically, and scalability with respect to delay and synchronization is discussed.

In SFD, the relative positions of packets within asubflow are maintained when it traverses each coreswitch, so that all subflows in the core are always

TABLE I
STATISTICAL DATA FOR EDGE ELECTRONIC BUFFER OCCUPANCY WITH POISSON AND SELF-SIMILAR TRAFFIC INPUTS, MEASURED IN BITS

                      Poisson    Self-Similar
Minimum                     0               0
Maximum             2,480,000     197,905,400
Expected value        206,513       9,435,876
Sample mean           206,666      10,748,157
Standard deviation    234,864      29,050,646

Fig. 4. Edge switch electronic buffer occupancy with general traffic smoothing and with self-similar traffic input.

smoothed and in the form of regularly spaced packets. Thus core switch FDL buffering is used to maintain the relative positions of packets, rather than implementing conventional FIFO packet buffering. It is shown below that very little FDL buffering is required to achieve almost zero contention loss performance and maintain traffic throughout the core as subflows with regularly spaced packets.

To reduce or even eliminate contention, whenever a new subflow is introduced after renegotiation onto an optical packet switch output port, the new subflow and all subflows of lower rate on that output port are scheduled. Assume that scheduling takes place at time Ti, then again at time Ti+1. Between these two times, each packet in each subflow j will be delayed in this core optical packet switch by an amount di,j that is fixed to the same value for that subflow from time Ti until Ti+1. Then each such subflow, including the new one, is scheduled, starting with the one with the highest rate and proceeding in descending order of rate. To do this, the subflow is assigned a value of di,j so that its next packet occupies the first slot not occupied by subflows that have already been scheduled. This algorithm is justified analytically below. To maintain each subflow's packet arrival pattern, the following requirements must be satisfied:

• Requirement 1: Packet arrivals in each subflow must have constant interarrival times at all points in their path through the optical packet core.

• Requirement 2: It must be possible to schedule each subflow to be multiplexed onto a particular core optical packet switch output with the necessary delay in order to avoid contention.

Fig. 5. Both (a) and (b) are flows that are divided into subflows, each having constant interarrival times; however, the rates of the subflows in (a) do not obey the mathematical relationship that is defined in this paper, so it is not possible to multiplex them together without disturbing the interarrival times within a subflow. However, in (b), the mathematical relationship is adhered to, and hence it is not necessary when multiplexing to retime packets within the same subflow relative to one another.

In Fig. 5(a), the interarrival times of flows 2 and 3


cannot be preserved because the mathematical relationship defined below is not adhered to; hence, after multiplexing, it is necessary to disturb their interarrival times so that they are not constant for each subflow. On the other hand, in Fig. 5(b), the mathematical relationship below is obeyed, and hence the interarrival times within each subflow once they are multiplexed together remain constant, although each entire subflow may be shifted in time; this is compatible with the principles of SFD. Assume there are m subflows on a link in the network core, and that Xj, the constant interarrival interval between packets in a subflow, is measured in slots.

Condition 1: If Xj ≤ Xk, Xj must be a divisor of Xk (i.e., Xk must be an integer multiple of Xj), for 1 ≤ j ≤ m and 1 ≤ k ≤ m.

Condition 2: The link must not be overloaded, i.e.,

Σ_{i=1}^{m} 1/Xi ≤ 1.
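Conditions 1 and 2 are mechanical to check. The following hypothetical Python sketch (illustrative only, not part of the authors' implementation) tests both for a set of subflow interarrival times:

```python
def conditions_hold(intervals):
    """Check Condition 1 (every pair of interarrival times Xj, Xk is
    related by divisibility) and Condition 2 (sum of rates 1/Xj <= 1)."""
    c1 = all(max(a, b) % min(a, b) == 0 for a in intervals for b in intervals)
    c2 = sum(1 / x for x in intervals) <= 1
    return c1 and c2

print(conditions_hold([2, 4, 8, 8]))  # -> True  (1/2 + 1/4 + 1/8 + 1/8 = 1)
print(conditions_hold([4, 6]))        # -> False (4 does not divide 6)
```

Interarrival times that are integral powers of 2, as assumed later in this paper, satisfy Condition 1 automatically.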

Theorem 1:

If Condition 1 and Condition 2 are both satisfied, then all flows that have to be forwarded onto a particular link can be multiplexed together without contention and without destroying their fixed interarrival times, by employing appropriate scheduling.

Proof of Theorem 1:

Condition 1 already satisfies Requirement 1 above. The arrival time of packet number a+1 in subflow j is described by δj + aXj (a ≥ 0), where δj is the shift of subflow j. To satisfy Requirement 2, it should be possible to multiplex together all subflows without contention, although one or more subflows may have to undergo a fixed time delay in order to achieve this. First, consider two subflows j and k; the condition for no contention is that no packet in subflow j contends with a packet in subflow k, i.e., δj + aXj ≠ δk + bXk, for all integers a ≥ 0, b ≥ 0, 1 ≤ j ≤ m, and 1 ≤ k ≤ m, with j ≠ k.

When a new subflow is introduced onto a core network link, contention is avoided by scheduling all the lower-rate subflows. The proof of the algorithm's validity proceeds by induction. Let A be the set of subflows not having a lower rate than the new subflow, and let set B be all subflows on the link in question other than those in A, i.e., the set of subflows to be scheduled. H(i) is the hypothesis that i out of the |B| subflows have already been scheduled so that there is no contention among either them or those subflows in A. If i < |B|, and H(i) is true, the shift δr can be allocated for the subflow r that has the highest rate of those remaining to be scheduled. To carry out the scheduling, subflow r's delay in the switch is decided so that its first packet occupies the first free slot not already allocated to those subflows that have already been scheduled or are in set A. This is because if slot δr is free, then so will be all slots δr + aXr, for a ≥ 0; these slots cannot be occupied because the interarrival time for subflow r is an integer multiple of the interarrival times of all subflows that have already been scheduled or are in set A. Hence H(i+1) is true, and the desired result H(|B|) follows by induction, proving the theorem.
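The induction step above amounts to a greedy first-free-slot assignment. The following hypothetical Python sketch (the paper's evaluation code was written in C; this reconstruction is purely illustrative) schedules subflows in descending order of rate and asserts that every slot δr + aXr claimed by a newly placed subflow is indeed free, as the proof guarantees when Conditions 1 and 2 hold:

```python
from math import lcm  # Python 3.9+

def schedule_subflows(intervals):
    """Assign a shift delta to each subflow, in descending order of rate
    (ascending interarrival time X), placing its first packet in the
    first slot not occupied by already-scheduled subflows."""
    horizon = lcm(*intervals)              # one full period of the multiplex
    occupied = [False] * horizon
    shifts = {}
    for idx, x in sorted(enumerate(intervals), key=lambda p: p[1]):
        delta = next(s for s in range(horizon) if not occupied[s])
        for slot in range(delta, horizon, x):
            assert not occupied[slot]      # guaranteed by the induction step
            occupied[slot] = True
        shifts[idx] = delta
    return shifts

# Condition 1 (divisibility) and Condition 2 (1/2 + 1/4 + 1/8 + 1/8 = 1) hold:
print(schedule_subflows([2, 4, 8, 8]))     # -> {0: 0, 1: 1, 2: 3, 3: 7}
```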

B. Generation of Decomposed Subflows

Condition 1 is the basic requirement for all flows to be multiplexed without destroying their original patterns, i.e., any other condition based upon it can be used. A simple case that satisfies this condition is also of practical importance and is assumed in the remainder of this paper: here all Xj are integral powers of 2. If all subflows adhere to this condition, then by Theorem 1, the contended flows can always be scheduled before onward transmission in order to avoid contention, without changing the packet interarrival times.

The edge router smoothing algorithm for SFD differs from GTS because each smoothed flow will be decomposed into several subflows obeying Condition 1. The rate sum of these subflows should approximate the rate of the original single flow. It can be shown very easily that a smoothed flow with any rate can be decomposed into a small number of subflows with a summed rate approximating the rate of the smoothed flow. Therefore, if all flows are composed of subflows with a minimum capacity of a fraction 2^−i of the link capacity, i.e., having Xj = 2^i, then the maximum mismatch possible between the scheduled transmission rate and the actual transmission rate (assuming that the latter is always smaller) is 2^−i. Hence larger values of i generally imply a need for more subflows, but better matching of rates.
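Such a decomposition can be obtained by a greedy binary expansion of the smoothed flow's rate. In this hypothetical sketch (the function name and the i_max cutoff are illustrative, not from the paper), the returned values are the subflow interarrival times Xj = 2^k, and the residual undershoot is below 2^−i_max:

```python
def decompose(rate, i_max):
    """Greedily decompose a rate (a fraction of link capacity) into
    distinct subflow rates 2**-k with k <= i_max, i.e. subflows with
    interarrival times X = 2**k slots."""
    intervals, remaining = [], rate
    for k in range(i_max + 1):
        if remaining >= 2.0 ** -k:
            intervals.append(2 ** k)   # take a subflow of rate 2**-k
            remaining -= 2.0 ** -k
    return intervals, remaining        # remaining < 2**-i_max

intervals, err = decompose(0.30, 5)    # chooses rates 1/4 + 1/32 = 0.28125
print(intervals)                       # -> [4, 32]
```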

C. Core Packet Delay Jitter

Delay variation (or jitter) is important when supporting real-time services implemented by the application layer in the protocol stack. SFD greatly reduces this because the delay of each subflow is fixed within each negotiation interval. There is only a small processing overhead at intermediate switches, which is especially important because the short packet durations in a high-speed optical packet network result in reduction of the processing time available. Only the ID of each decomposed subflow, and the subflow packet interarrival time, need to be stored in each packet header to be identified by the core switches.

SFD does not require synchronization of TDM frames over the whole network. However, packet slots entering core switches must be aligned to slot boundaries by using a suitable variable optical delay on each input of each core switch. While this is an important issue, it is addressed elsewhere [24,25] but not in this


paper. Hence SFD only requires slots to follow their relative positions in a flow, e.g., if two flows arrive at a core switch and contend for one output port, and the slots in both flows are not aligned (two flows may arrive with half a slot difference in timing), then a core switch synchronization stage, i.e., a variable optical delay, is used to align the two flows to the slot boundaries. Once both flows are aligned to the slot boundaries, the core switch scheduling scheme can be employed. Unlike some other OPS networking proposals requiring heavy use of FDLs, this shifted OPS networking uses very little FDL buffering, because every delay is small and predictable.

D. Analytical Condition for Zero Packet Loss

When combining each set of shifted subflows within a core optical packet switch to make up a flow, the subflows are added to the flow in decreasing order of rate, and the first free slot that is chosen for a packet in flow i is Xp/2 + iXp, in order to distribute packets evenly, where Xp is the interarrival time in slots of the highest rate subflow. Hence, to compute an upper bound on the buffer depth required for no packet loss in any scenario, such a flow can be replaced by a sequence of packets with constant interarrival time S, which is the highest power of 2 less than or equal to the interarrival time of the subflow with the highest rate:

S = 2^⌊log2 n − log2 ρ⌋.   (2)

⌊x⌋ is the largest integer less than or equal to x. In the worst case, each flow will be shifted by exactly the same number of slots, so packets in each of the n flows will coincide. In that case, the buffer will fill up to its maximum extent when n packets arrive at once, one from each flow. Hence for no packet loss to occur, it is sufficient that Conditions 3 and 4 both hold:

• Condition 3: When n packets arrive at once, the buffer must be able to hold n−1 of them while the other one is transmitted. Hence it is necessary that the FDL buffer depth B satisfies B ≥ n−1.

• Condition 4: There must be sufficient time between packet arrivals for all n−1 packets in the buffer to be transmitted. Hence it is a requirement that S ≥ nmax, where nmax = ⌊1/ρ⌋ is the maximum number of flows that can be multiplexed onto the studied output port without overloading it.

If n=1, then contention is impossible, so it is sufficient to consider n ≥ 2. In this case, Condition 4 always holds:

S = 2^⌊log2 n − log2 ρ⌋ ≥ 2^(1+⌊log2(1/ρ)⌋) ≥ 2^(log2(1/ρ)) = 1/ρ ≥ ⌊1/ρ⌋ = nmax.

Hence Condition 3 alone, namely, B ≥ n−1, is a sufficient condition for no packet loss to occur in any scenario, a fact that is confirmed by Fig. 6(a) through Fig. 8(a).
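The worst case described above is easy to emulate numerically. This hypothetical sketch (parameter names are illustrative, not from the paper) models n simultaneous arrivals every S slots at an output port that transmits one packet per slot and buffers at most B packets:

```python
def lost_packets(n, S, B, num_batches=50):
    """Count losses when n packets arrive together every S slots at an
    output transmitting one packet per slot with an FDL buffer depth B."""
    queue, lost = 0, 0
    for slot in range(num_batches * S):
        if slot % S == 0:                # a batch of n simultaneous arrivals
            queue += n
        if queue:
            queue -= 1                   # one packet transmitted this slot
        overflow = max(0, queue - B)     # excess beyond the buffer is lost
        lost += overflow
        queue -= overflow
    return lost

print(lost_packets(n=5, S=8, B=4))   # -> 0   (B = n-1, Conditions 3 and 4 hold)
print(lost_packets(n=5, S=8, B=3))   # -> 50  (one packet lost per batch)
```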

IV. NUMERICAL RESULTS

This section presents results on the contention packet loss rate of SFD and demonstrates via simulation how delay jitter, which is eliminated in SFD within each negotiation interval, occurs with GTS [1].

A. Analysis of SFD in a Single Core Switch

To support the analysis of Section III, a program that enumerated and calculated the number of packets lost for all possible slot traffic patterns was written in C in order to compute the contention loss rate for SFD with different FDL depths, loads, and core switch sizes. The results from this program are shown in Figs. 6–11. With SFD, there are two possible

Fig. 6. Contention loss performance with (a) ρ=0.156 and n=5 and (b) ρ=0.438 and n=7. Where data for SFD is not shown, it is because the calculation evaluated to zero.



scenarios for FDL buffering: it can be shared among all switch output ports, or alternatively, each output port can have its own dedicated buffering; the latter is assumed in this analysis. The results were compared with those obtained with GTS, where Bernoulli traffic with FIFO buffering is assumed. With GTS, Bernoulli traffic in the core is assumed because the traffic on each link consists of many lower rate flows multiplexed together; these individual flows are of varied rates and have a constant bit rate when they were injected into the core. It is well known that multiplexing together many heterogeneous smaller flows results in traffic that tends toward a Poisson process for variable-length, unslotted packets [26], and also, in an analogous manner, such a mixture of traffic approaches a Bernoulli process in the slotted, discrete-time domain. The reader is referred to an earlier paper for details on how the packet loss for GTS with Bernoulli traffic was calculated [1].

Fig. 9. Loss performance of SFD and GTS with varying load.

Fig. 10. Loss performance of SFD and GTS with varying numbers of contending ports.

Fig. 7. Contention loss performance with ρ=1.0 for SFD and ρ=0.19, 0.3, and 1.0 for GTS with n=8. Where data for SFD is not shown, it is because the calculation evaluated to zero.

Fig. 8. Loss performance comparisons between SFD and GTS. Where data for SFD is not shown, it is because the calculation evaluated to zero.


When analyzing SFD, a particular output port (called the studied output port) on a particular core optical packet switch (called the studied switch) is examined. Each of the n flows arriving at the studied switch that are also destined for the studied output port are composed of one or more subflows. ρj is defined as the traffic load of flow j that is destined for the studied output port. Packets destined for other output ports are ignored here because they are not relevant to the analysis:

ρj = Σ_{i∈Ij} 1/Xi.

Ij is the set of subflows forming a flow j that is destined for the studied output. It is assumed below that the loads of all flows destined for the studied output port are equal, i.e., that ρ = ρj for 1 ≤ j ≤ n.

Using the notation discussed above, each individual subflow i has a constant packet interarrival time of Xi. w is defined as the maximum value of Xi, over all subflows destined for the studied output, where I = I1 ∪ I2 ∪ ⋯ ∪ In:

w = max_{i∈I} Xi.

By Condition 1, interarrival times for each flow being considered must either be w or a divisor of w. Define a traffic matrix T with n rows and w columns, where row i represents the traffic destined for the studied output on flow i in an interval of w slots; a 1 represents the presence of a packet, and a 0 represents no packet.

To determine packet loss, 57 different decompositions of a flow into subflows were evaluated by a custom simulator written in C (see Appendix A). For each of these decompositions, all possible traffic matrices were enumerated, and the corresponding packet loss was evaluated. In each case, w is equal to the largest subflow interarrival time, and the packet loss was evaluated for 16 integral values of n between 2 and 17 inclusive, making a total of 57×16 = 912 scenarios. For each such scenario, all subflows were shifted by all combinations of between 0 and w−1 time slots, making K = w^n different traffic matrices for each scenario. While it is assumed for simplicity in the calculation described by Eq. (3) below that each of these K traffic matrices is equally probable, this does not influence the packet loss result if it is evaluated as zero, because in this case there is no packet loss for any traffic matrix.

Fig. 11. Loss performance comparisons between SFD and GTS. X is the total number of subflows on the output link for which contention is being studied.

If i denotes the current matrix pattern index (1 ≤ i ≤ K) and B denotes the number of FDL optical packet buffer positions available at the studied output port, then the packet loss probability in such a core switch for each scenario is

L = (1/K) Σ_{i=1}^{K} (number of packets lost with matrix i)/(total number of packets in matrix i) = [1/(Knwρ)] Σ_{i=1}^{K} Li^B,   (3)

where Li^B is the number of lost packets with an FDL buffer depth of B with the ith possible matrix. Li^B is also calculated by the C program; each matrix is examined in turn, and by emulating the operation of a queue, the number of packets lost with the buffer depth B specified is determined, using the rules for scheduling described in Section III.
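A small-scale reconstruction of this enumeration can be sketched in Python (the authors' simulator was written in C; the names here are illustrative, and the within-flow offsetting of subflows described in Subsection III.D is omitted, so each flow below is shifted as a unit). For each combination of flow shifts, the arrival pattern over one period of w slots is built, a FIFO queue of depth B is emulated, and the quotient of lost packets over total packets corresponds to Eq. (3):

```python
from itertools import product

def loss_probability(flows, B, periods=3):
    """Average packet-loss ratio over all w**n shift combinations, in the
    spirit of Eq. (3).  `flows` lists each flow's subflow interarrival
    times, e.g. [[4], [4]]; each flow is shifted by 0..w-1 slots."""
    w = max(x for f in flows for x in f)
    total_lost = total_pkts = 0
    for shifts in product(range(w), repeat=len(flows)):
        arrivals = [0] * w                     # packets per slot, one period
        for f, d in zip(flows, shifts):
            for x in f:
                for slot in range(d % x, w, x):
                    arrivals[slot] += 1
        queue = lost = 0
        for _ in range(periods):               # repeat the period a few times
            for a in arrivals:
                queue += a
                if queue:
                    queue -= 1                 # one packet sent per slot
                over = max(0, queue - B)       # drops beyond buffer depth B
                lost, queue = lost + over, queue - over
        total_lost += lost
        total_pkts += periods * sum(arrivals)
    return total_lost / total_pkts

print(loss_probability([[4], [4]], B=1))   # -> 0.0 (one FDL position suffices)
```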

From Figs. 6(a) and 6(b), it can be seen that under different loads and numbers of input links, SFD always performs better than GTS for all values of FDL buffer depth shown. Also, with increased FDL buffering, the loss rate of SFD decreases more quickly than it does with GTS; the difference is more obvious at higher loads.

From Fig. 7, it can be seen that under very high load the difference between SFD and GTS is very significant. Indeed, for optical buffer capacities of up to six packets, the performance of SFD under full load is close to that of GTS with a load between 0.19 and 0.3.

Figures 8(a) and 8(b) show comprehensive loss performance comparisons between SFD and GTS for a range of varying parameters; to achieve the same contention loss performance, GTS has much higher FDL buffering requirements, while at the same time causing higher delay. The traffic load in Figs. 8(a), 8(b), and 11 is obtained by summing the rates of all individual subflows, resulting in the particular values of load between 0 and 1 shown in the graphs. A range of


loads is shown in order to corroborate the analysis of Subsection III.D for different values of n, the number of subflows.

Figure 9 shows the loss performance of SFD and GTS with varying load parameters. It also shows that better performance is achieved by SFD, with all other conditions being equal.

Figure 10 shows that SFD has superior loss performance for a range of numbers of contending ports. In particular, the performance is significantly improved when only a very small FDL buffer is provided.

Figure 11 shows that increasing the number of subflows has no negative effect; SFD exhibits an improvement in performance regardless of the number of subflows.

B. Delay Jitter in Multiple Core Switches in Series

In SFD, the interarrival times of each subflow are kept constant throughout the subflow's path through the core. This is one of the reasons that SFD scales to large networks, because the traffic statistics do not change with the number of hops through the core, i.e., they do not change with network size. However, with GTS, the traffic statistics change as a flow traverses the core, because the FIFO buffering has a tendency to make the traffic burstier. In this subsection, these statements are justified by simulation using OPNET [27].

By definition, a flow with packets that are more evenly distributed over time has a lower packet interarrival time deviation. The more grouped together the packets are, the larger the packet interarrival time deviation is. To demonstrate how a flow in GTS, or a subflow in SFD, changes after traversing many core switches, a simulation scenario was implemented. In Fig. 12, there are 20 core switches connected in series. The edge router named source will generate smoothed traffic destined for the edge router named sink. Here the traffic from source to sink is called the studied

Fig. 12. Simulation topology for investigating packet burstiness in the core with GTS.

flow. Each core switch is connected to three edge routers. These three edge routers generate smoothed traffic that contends with the studied flow. The traffic generated from these edge routers passes through only the next core switch before being dropped there, because its only purpose is to influence the studied flow through contention at the current core switch. The traffic load into each core switch is kept at 0.75. The studied flow is generated with a load of 0.125. For the purposes of the simulation, each packet slot has a duration of 1 ms. The smoothed traffic generated by the edge routers can be either in general form with simple feed-forward core buffering (GTS) or in decomposed form (SFD). By inspecting the flow's statistical properties at the output of each traversed core switch, different forms of smoothed traffic and their influences on the studied flow after traversing many core switches were studied.

Figure 13 shows probability density histograms of the interarrival time drift for the studied flow with GTS after traversing a number of core switches. The simulation time was 600 s for each number of core switches N. For a load of 0.125 and slot duration of 1 ms, the interarrival time (and also the mean interarrival time) for the studied flow originating from the source is 8 ms. Zero on the x-axis in Fig. 13 refers to interarrival times that are all equal to the mean. The negative area corresponds to interarrival times being shorter than the mean, and the positive area refers to them being longer. Before traversing any core switch, all interarrival times in the studied flow have zero drift, where the drift is the deviation from the original packet position; packets have equal interarrival times. After traversing many core switches, the drift has clearly increased, and fewer interarrival times remain the same, so packets do not retain their original

Fig. 13. Probability density histograms of interarrival time drift for the studied flow in GTS after traversing N core switches.


equal interarrival times and the traffic flow is more bursty.

In Fig. 14, at the output of each core switch the studied flow is clearly changed by GTS (nondecomposed), but is not changed by SFD (decomposed). Also with GTS, the deviation increases with the number of traversed core switches. Although the SFD scheduling algorithm described above was implemented in the OPNET simulations, the results in Fig. 14 show no noticeable delay jitter with SFD. For each simulation point, six simulation runs were carried out in order to calculate 95% confidence intervals. Each simulation run had a simulation time of 600 s.

V. CONCLUSIONS

In this paper, a traffic smoothing scheme called smoothed flow decomposition (SFD) was introduced, which is simpler and more robust than GTS, a previous proposal [1]. At each core switch, a simple distributed packet scheduling algorithm is used, which requires no signaling traffic for its coordination. Furthermore, it does not introduce delay jitter, meaning that SFD can reduce packet loss in any size of network, and performance does not degrade with large networks as with GTS, where traffic becomes more bursty as it progresses through the core. It is shown through simulation that with GTS, there is significant packet jitter in the core, which, within each negotiation interval, is eliminated in SFD, facilitating support for real-time applications. Indeed, analysis shows that the performance of SFD under full load is close to the performance of GTS with a much lower load of between 0.19 and 0.3. Also, it is shown that if the FDL buffer depth is at least one less than the number of

Fig. 14. Comparison of standard deviation of packet slot interarrival times for studied flows in the form of general smoothing and decomposed smoothing. The dotted lines denote 95% confidence intervals.

contending flows, there is no core packet loss in any of the scenarios modeled in this paper. Furthermore, this decomposition and coordination scheme is simple to implement with low computational requirements due to the simple packet scheduling algorithm employed.

Only the subflow ID, subflow optical packet interarrival time, and sequence number are required in each optical packet header to facilitate correct operation of SFD. SFD decomposes the smoothed flow into several subflows, with the result of greatly reducing computational complexity. Analysis and simulation results show that SFD has superior contention resolution performance to the nonsmoothed scenario and is moreover scalable to larger networks with more aggregated flows than GTS.

APPENDIX A

This Appendix describes how the 57 different flow patterns are constructed, which generate the results shown in Figs. 6–9. Each subflow has an interarrival time of either 2, 4, 8, 16, 32, 64, or 128 slots, and no two subflows have the same interarrival time. Each flow is made up of either 1, 2, 3, 4, or 5 subflows:

• With 1 subflow, the subflow has an interarrival time of either 2, 4, 8, 16, 32, 64, or 128 slots (7 different flow patterns).

• With 2 subflows, their interarrival times are two different values from {4, 8, 16, 32, 64, 128} (6!/(4!2!) = 15 different flow patterns).

• With 3 subflows, there are two categories. In the first category, the first two subflows have interarrival times of 4 and 8, while the third is either 16, 32, 64, or 128 (4 different flow patterns). In the second category, three different interarrival times are chosen from {8, 16, 32, 64, 128} (5!/(3!2!) = 10 different flow patterns).

• With 4 subflows, their interarrival times are four different values from {4, 8, 16, 32, 64, 128} (6!/(2!4!) = 15 different flow patterns).

• With 5 subflows, their interarrival times are five different values from {4, 8, 16, 32, 64, 128} (6!/(1!5!) = 6 different flow patterns).

This makes a total of 7+15+4+10+15+6 = 57 different flow patterns, forming a hierarchy of rates that provide an appropriate range of granularities for the simulation. For all flow patterns made up of two or more subflows, S in Eq. (2) is equal to half the lowest interarrival time. If there is only one subflow, S is equal to its interarrival time.
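This construction can be reproduced in a few lines (an illustrative Python sketch, not the authors' code):

```python
from itertools import combinations

def flow_patterns():
    """Enumerate the 57 subflow decompositions described in Appendix A."""
    pats = [(x,) for x in (2, 4, 8, 16, 32, 64, 128)]         # 7 patterns
    pats += list(combinations((4, 8, 16, 32, 64, 128), 2))    # 15 patterns
    pats += [(4, 8, x) for x in (16, 32, 64, 128)]            # 4 patterns
    pats += list(combinations((8, 16, 32, 64, 128), 3))       # 10 patterns
    pats += list(combinations((4, 8, 16, 32, 64, 128), 4))    # 15 patterns
    pats += list(combinations((4, 8, 16, 32, 64, 128), 5))    # 6 patterns
    return pats

print(len(flow_patterns()))   # -> 57
```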

ACKNOWLEDGMENT

The authors thank the anonymous reviewers of this paper, as well as Reza Nejabati and Martin Reed of


the University of Essex, and Joseph Sventek of Glasgow University, for their helpful comments on previous drafts.

REFERENCES

[1] Z. Lu, D. K. Hunter, and I. D. Henning, “Contention reduction in core optical packet switches through electronic traffic smoothing and scheduling at the network edge,” J. Lightwave Technol., vol. 24, no. 12, pp. 4828–4837, Dec. 2006.

[2] O. Alparslan, S. I. Arakawa, and M. Murata, “Rate-based pacing for small buffered optical packet-switched networks,” J. Opt. Netw., vol. 6, no. 9, pp. 1116–1128, Sept. 2007.

[3] Z. Lu and D. K. Hunter, “Dual-layer congestion control for TCP carried by optical packet switching with UDP background traffic,” J. Opt. Commun. Netw., vol. 1, no. 2, pp. A1–A16, July 2009.

[4] S. Yao, B. Mukherjee, and S. Dixit, “Advances in photonic packet switching: an overview,” IEEE Commun. Mag., vol. 38, no. 2, pp. 84–94, Feb. 2000.

[5] D. K. Hunter and I. Andonovic, “Approaches to optical Internet packet switching,” IEEE Commun. Mag., vol. 38, no. 9, pp. 116–122, Sept. 2000.

[6] R. S. Tucker, “The role of optics and electronics in high-capacity routers,” J. Lightwave Technol., vol. 24, no. 12, pp. 4655–4673, Dec. 2006.

[7] R. S. Tucker, P.-C. Ku, and C. J. Chang-Hasnain, “Slow-light optical buffers: capabilities and fundamental limitations,” J. Lightwave Technol., vol. 23, no. 12, pp. 4046–4066, Dec. 2005.

[8] J. J. He, D. Simeonidou, and S. Chaudhury, “Contention resolution in optical packet switching networks under long-range dependent traffic,” in Optical Fiber Communications Conf., 2000.

[9] S. Yao, B. Mukherjee, S. J. B. Yoo, and S. Dixit, “A unified study of contention resolution schemes in optical packet-switched networks,” J. Lightwave Technol., vol. 21, no. 3, pp. 672–683, Mar. 2003.

[10] F. Xue, Z. Pan, Y. Bansal, J. Cao, M. Jeon, K. Okamoto, S. Kamei, V. Akella, and S. J. B. Yoo, “End-to-end contention resolution schemes for an optical packet switching network with enhanced edge routers,” J. Lightwave Technol., vol. 21, no. 11, pp. 2595–2604, Nov. 2003.

[11] H. Zang, J. P. Jue, and B. Mukherjee, “Capacity allocation and contention resolution in a photonic slot routing all-optical WDM mesh network,” J. Lightwave Technol., vol. 18, no. 12, pp. 1728–1741, Dec. 2000.

[12] J. Ramamirtham and J. Turner, “Time sliced optical burst switching,” in IEEE INFOCOM, San Francisco, California, 2003.

[13] F. Xue and S. J. B. Yoo, “TCP-aware congestion control in optical packet switched networks,” in Optical Fiber Communications Conf., Atlanta, Georgia, Mar. 2003.

[14] G. Appenzeller, I. Keslassy, and N. McKeown, “Sizing router buffers,” in Proc. ACM SIGCOMM, vol. 34, no. 4, pp. 281–292, Oct. 2004.

[15] D. Wischik and N. McKeown, “Part I: Buffer sizes for core switches,” Comput. Commun. Rev., vol. 35, pp. 75–78, July 2005.

[16] G. Raina, D. Towsley, and D. Wischik, “Part II: Control theory for buffer sizing,” Comput. Commun. Rev., vol. 35, pp. 79–82, July 2005.

[17] M. Enachescu, Y. Ganjali, A. Goel, N. McKeown, and T. Roughgarden, “Part III: Routers with very small buffers,” Comput. Commun. Rev., vol. 35, pp. 83–90, 2005.

[18] T. Zhang, K. Lu, and J. P. Jue, “Differentiated contention resolution for QoS in photonic packet-switched networks,” J. Lightwave Technol., vol. 22, no. 11, pp. 2523–2535, Nov. 2004.

[19] A. Srikitja, M. A. Stover, T. Zhang, E. Zhong, S. Banerjee, D. Tipper, M. B. Weiss, and A. Khalil, “Analysis of traffic measurements on a wide area ATM network,” in IEEE GLOBECOM, 1996.

[20] V. Paxson and S. Floyd, “Wide-area traffic: the failure of Poisson modeling,” IEEE/ACM Trans. Netw., vol. 3, pp. 226–244, June 1995.

[21] M. W. Garrett and W. Willinger, “Analysis, modeling and generation of self-similar VBR video traffic,” in Proc. ACM SIGCOMM, vol. 24, pp. 269–280, Oct. 1994.

[22] J. Beran, R. Sherman, M. Taqqu, and W. Willinger, “Long-range dependence in variable-bit-rate video traffic,” IEEE Trans. Commun., vol. 43, pp. 1566–1579, 1995.

[23] W. Leland, M. Taqqu, W. Willinger, and D. Wilson, “On the self-similar nature of Ethernet traffic (extended version),” IEEE/ACM Trans. Netw., vol. 2, pp. 1–15, Feb. 1994.

[24] C. Guillemot, M. Renaud, P. Gambini, C. Janz, I. Andonovic, R. Bauknecht, B. Bostica, M. Burzio, F. Callegati, M. Casoni, D. Chiaroni, F. Clerot, S. L. Danielsen, F. Dorgeuille, A. Dupas, A. Franzen, P. B. Hansen, D. Hunter, A. Kloch, R. Krahenbuhl, B. Lavigne, A. Le Corre, C. Rafaelli, M. Schilling, J. C. Simon, and L. Zucchelli, “Transparent optical packet switching: the European ACTS KEOPS project approach,” J. Lightwave Technol., vol. 16, no. 12, pp. 2117–2134, Dec. 1998.

[25] I. Chlamtac, A. Fumagalli, L. G. Kazovsky, P. Melman, W. H. Nelson, P. Poggiolini, M. Cerisola, A. N. M. M. Choudhury, T. K. Fong, R. T. Hofmeister, C.-L. Lu, A. Mekkittikul, D. J. M. Sabido IX, C.-J. Suh, and E. W. M. Wong, “CORD: contention resolution by delay lines,” IEEE J. Sel. Areas Commun., vol. 14, no. 5, pp. 1014–1029, June 1996.

[26] M. Ghanbari, C. J. Hughes, M. C. Sinclair, and J. P. Eade, Principles of Performance Engineering for Telecommunication and Information Systems. The Institution of Electrical Engineers, 1997.

[27] OPNET Modeler web site, http://www.opnet.com.

Zheng Lu received his B.Eng. degree in electronic engineering from the PLA University of Science and Technology, Nanjing, China, and his M.Sc. degree (with distinction) in telecommunications and information systems from the University of Essex, Colchester, UK, in 2002 and 2003, respectively. In 2008, he graduated from the University of Essex with a Ph.D., which was awarded for research on protocols for optical-packet-switched networks. From

January 2007 until August 2008, he was a Research Officer at the University of Essex. Since September 2008, he has been working for corah Software Products Ltd., Wokingham, Berkshire, UK. His research interests concentrate on algorithms, protocol design, and modeling for data communications and wireless sensor networks.

David K. Hunter (M'09) became a Student Member (S) of IEEE in 1988, a Member (M) in 1990, and a Senior Member (SM) in 2000. In 1987, he obtained a first class honors B.Eng. in electronics and microprocessor engineering from the University of Strathclyde, Glasgow, UK, and a Ph.D. from the same university in 1991 for research into optical TDM switch architectures. After obtaining his Ph.D., he was a Research Fellow, then a Senior Research Fellow, at the

University of Strathclyde, researching optical networking and optical packet switching, and holding an EPSRC Advanced Fellowship from 1995 to 2000. After spending a year as a Senior Researcher in

Marconi Labs, Cambridge, UK, he moved to the University of Es-


sex, Colchester, UK, in August 2002, where he is a Reader in the School of Computer Science and Electronic Engineering. He has authored or co-authored over 125 publications. Dr. Hunter has acted as an external Ph.D. examiner for the Universities of Cambridge, London, Strathclyde, and Essex. From 1999 until 2003 he was an

Associate Editor for the IEEE Transactions on Communications,

and he was an Associate Editor for the IEEE/OSA Journal of Lightwave Technology from 2001 until 2006. He participated in editing a Special Issue of that journal, on Optical Networks, that was pub-

lished in December 2000. He is a Chartered Engineer, a Member of the IET, and a Professional Member of the ACM.