7/31/2019 OPNETWORK - QoS Bandwidth Allocation for PON - Final Draft3
QoS-Based Bandwidth Allocation Schemes for Passive Optical Networks

Nadav Aharony, Eyal Nave, Nir Naaman and Shay Auster
Department of Electrical Engineering
Technion - Israel Institute of Technology
Haifa, 32000, Israel
E-mail: {nadav@, eyal@, auster@, cn17s02@}comnet.technion.ac.il
Web: www.comnet.technion.ac.il/~cn17s02
Abstract
"Ethernet in the First Mile" (EFM) is an upcoming standard currently being drafted by the 802.3ah task force in the IEEE organization. One of the tracks in the standard deals with Ethernet over Passive Optical Network (EPON), a centralized point-to-multipoint network with a shared upstream channel. This paper deals with the problem of upstream bandwidth allocation in EPON networks. We suggest, analyze, and compare several bandwidth allocation algorithms. As a novel type of network, EPON presents many new challenges, and one of the main goals of this project was to get a feel for the network in order to decide on the next steps in its exploration. To that end, we have constructed an OPNET model of an EPON network based on the IEEE standard. The model contains one module for the network's central office, the Optical Line Terminal (OLT), and another module for the customer-premises device, the Optical Network Unit (ONU). The network model implements EPON's Time Division Multiple Access (TDMA) multiplexing scheme and supports the relevant messages as defined by the standard. Although designed for EPON, our OPNET model can be easily converted to simulate other shared, TDMA-based networks.

The new OPNET modules were designed as a test-bed for bandwidth allocation algorithms; they enable smooth insertion of a wide range of algorithms and easy extension of the currently supported features. We used our model to simulate several bandwidth allocation algorithms, starting from a simple static algorithm and advancing to more sophisticated algorithms that support Quality of Service (QoS) and guarantee fair bandwidth allocation. Our results demonstrate the impact of the bandwidth allocation algorithm on the overall network performance.
1. Introduction
Passive Optical Network (PON) is an emerging access technology, offering a high-bandwidth point-to-multipoint optical fiber network. It is called "passive" since there are no active devices (e.g., repeaters) along the route apart from the end units. The only interior elements used in such networks are passive combiners, couplers, and splitters. This greatly reduces the costs and complexity of the deployment and maintenance of the network. PONs are intended to solve the access network's bandwidth bottleneck by offering a cost-effective, flexible, and high-bandwidth solution. Today, new housing developments in many places around the world are built with fiber-based connections to the home, and network providers are conducting field tests and experiments with fiber access. Eventually, fiber access is predicted to replace the old copper infrastructure the world over.
A PON consists of two main types of end devices: an Optical Line Terminal (OLT) and an Optical Network Unit (ONU). The OLT resides in the Central Office (CO) and is connected to an uplink fiber and a downlink fiber, linking it to the network end-units. The OLT is responsible for control and management of the PON, and also acts as the gateway to the outside world and adjacent networks.

The ONU is the client side of the network and resides near or inside the Customer Premises (CP). An ONU may serve a single residence or business, or it may serve several subscriber residences or an entire apartment building. PONs use point-to-multipoint topologies; they may be connected as a tree, a bus, or a ring. Figure 1 illustrates the components of an EPON deployment.
Figure 1 - EPON Network Illustration
The term uplink refers to information flowing from the end units (the ONUs in our case) to the central-office equipment (the OLT). The term downlink refers to the information flowing from the OLT to the ONUs.

All communication within a PON is mediated by the OLT. On the downlink, the OLT broadcasts all information to the fiber. Each ONU filters out only the transmissions that are directed to it (see Figure 2). Encryption and authentication features may be applied to the traffic in order to make sure
that only the intended party/parties have access to the information. On the uplink, the ONUs send their traffic to the OLT, which then forwards it to its destination. ONUs are not able to directly "see" the upstream traffic from their network peers (see Figure 3).

The OLT is responsible for all bandwidth allocations in the network. An ONU is allowed to transmit only in time slots that have been assigned to it by the OLT. The OLT has a great amount of flexibility in implementing a bandwidth allocation algorithm; allocations may change dynamically over time to adapt to the different bandwidth requirements of the ONUs. The bandwidth allocation algorithm may implement anything from a strict and even division of the bandwidth among all ONUs to giving all bandwidth to a single ONU (both uplink and downlink). It is also up to the algorithm to maintain the level of QoS that has been guaranteed to each ONU. This work inspects different aspects of bandwidth allocation in PONs.
Figure 2 - EPON TDMA Downlink
Figure 3 - EPON TDMA Uplink
1.1 Ethernet over PON (EPON)
"Ethernet in the First Mile" (EFM) is an upcoming standard currently being drafted by the 802.3ah task force in the IEEE organization. It will define an access standard based on the Ethernet protocol for several media architectures and physical link types. One of the tracks in the standard deals with Ethernet over Passive Optical Network (EPON).

EPON deals with a symmetric point-to-multipoint connection at very high speeds (currently 1 Gb/s; 10 Gb/s and possibly even more in the future). The protocol does not limit the number of users, though a typical scenario refers to the connection of up to 64 end points per PON fiber. A main advantage of using Ethernet datagrams over the PON is the fact that most networks on both sides of the PON (the customer's and the service provider's) are based on Ethernet. Using Ethernet in the access link will save unnecessary format conversions. Another great benefit is that Ethernet equipment is widely available "off the shelf" and is manufactured by many vendors.
A unique network management protocol has been devised by the EFM task force: the Multi-Point Control Protocol (MPCP) [1]. MPCP defines a TDMA multiplexing scheme in which the upstream channel is divided into time slots. Each ONU is allocated time slots in which to send its uplink traffic. The time slots are in granularity of a Time Quantum (TQ), which is defined as the time that it takes to broadcast 2 octets of data. The MPCP constitutes an absolute timing model. A global clock exists in the OLT, and the ONUs set their local clocks to the OLT's clock. All control messages in the network are time-stamped with the local clock, and through them the devices are able to synchronize their network clocks.

According to the standard, ONU devices must support 802.1Q queuing, meaning a queuing system that supports 8 priorities (or traffic classes). The standard does not define how the priorities must be used. The priority queue may also be used for QoS purposes, where each traffic type is given a distinct priority.
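To make the TQ granularity concrete, the arithmetic can be sketched as follows (a minimal illustration assuming the 1 Gb/s line rate mentioned above; the helper names are ours, not from the standard):

```python
# Sketch of MPCP time-quantum (TQ) arithmetic. Assumes the 1 Gb/s EPON line
# rate discussed above; helper names are illustrative, not from the standard.

LINE_RATE_BPS = 1_000_000_000   # 1 Gb/s upstream channel
TQ_BITS = 2 * 8                 # a TQ is the time to transmit 2 octets

def tq_duration_s(line_rate_bps=LINE_RATE_BPS):
    """Duration of one TQ in seconds at the given line rate."""
    return TQ_BITS / line_rate_bps

def bytes_to_tq(n_bytes):
    """Whole TQs needed to transmit n_bytes (rounded up)."""
    return -(-(n_bytes * 8) // TQ_BITS)   # ceiling division

# At 1 Gb/s one TQ lasts 16 ns, and a 1500-byte frame occupies 750 TQs.
```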
1.2 Bandwidth Allocation in EPON
This project concentrates on upstream bandwidth allocation in EPON. It is important to note that the 802.3ah standard does not dictate the bandwidth allocation algorithm and leaves it open to the implementation of each vendor/manufacturer. There are many possibilities for the management of the bandwidth.

The simplest regime is "static allocation", in which each ONU is allocated a constant bandwidth allocation, or grant, which is re-allocated at constant intervals. This type of allocation is very inefficient, since the end stations may not require the entire grant bandwidth all the time, and the wasted bandwidth might have been allocated to someone else. For this reason, more advanced allocation algorithms have been devised that are dynamic in nature. These algorithms usually have access to input about the current, near real-time needs of the end stations, and also have access to the service level agreement (SLA) definitions of each end-user. These schemes usually attempt to achieve fairness in the allocations to end-users of the same class.

Dynamic algorithms may work either on a continuous timeline or as cycle based. A continuous-timeline algorithm receives bandwidth requests from the end stations and allocates bandwidth according to them in a continuous fashion. A cycle-based algorithm divides the timeline into consecutive "cycles" and calculates the bandwidth allocation for an entire cycle at a time.
Ultimately, the bandwidth allocation processes in the OLT are supposed to reflect the QoS and SLA requirements and implement them in this segment of the network. The OLT is also responsible for taking into consideration all aspects of
equipment and physical delays in order to avoid collisions. Collisions occur when more than one ONU's signal reaches the OLT's receiver at the same time.

The grant information is sent in MPCP GATE messages, which tell each ONU its start time to transmit and the length of the allocated transmission. In order to make correct allocation decisions, the ONUs send MPCP REPORT messages, informing the OLT of their queue status. Each allocation algorithm might use this information in a different way. Even if an ONU did not request any bandwidth for the next cycle, it would typically be allocated a minimal grant that would allow it to at least send a REPORT message for the cycle afterwards.

EPON's bandwidth allocation scheme is different from that of other shared access networks, such as data-over-cable networks. The standard for data over cable, DOCSIS [3], allows contention between end stations for the available bandwidth. In EPON there is no contention, and therefore there is no danger of data collision. The downside of this mode is the obligation to allocate the minimal grant to each end station. The difference between the two networks is that the total available bandwidth of EPON is very large compared to coaxial cables (1 Gbps compared to only 30 Mbps), and the number of users is much smaller (a few dozen compared to a few hundred or thousands). Because of this, the minimal grants for the ONU reports are negligible.
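The claim that the minimal REPORT grants are negligible can be sanity-checked with a back-of-the-envelope calculation (the cycle length, REPORT frame size, and ONU count below are our assumed figures, not values from the standard):

```python
# Rough overhead estimate for per-cycle REPORT grants (assumed figures):
# 64 ONUs, each granted one 64-byte minimal frame per 2 ms cycle, 1 Gb/s uplink.

N_ONUS = 64
REPORT_BITS = 64 * 8      # minimal Ethernet frame carrying a REPORT
CYCLE_S = 2e-3            # assumed allocation cycle length
LINE_RATE_BPS = 1e9

overhead_bps = N_ONUS * REPORT_BITS / CYCLE_S
fraction = overhead_bps / LINE_RATE_BPS
# The minimal grants consume roughly 1.6% of the upstream capacity.
```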
1.3 Goals
In this project we attempt to explore some of the aspects of QoS bandwidth allocation in the 802.3ah EPON architecture. Since this is a novel network model and a very extensive field, this project can offer a starting point for more advanced research on the subject.

The bandwidth allocation process can be divided into several sub-processes (that may or may not be independent of each other):

- Gathering the input for the decision-making process (such as bandwidth requests or the bandwidth definitions for each end unit).
- Dividing the available bandwidth between the end units (determining the quota for each) within a defined time frame.
- Scheduling the allocated quotas of all end units for the defined time frame.
- Informing the end units when they are allowed to broadcast (parallel to the frequency of the scheduling changes).

For each of these sub-processes there are many work modes and algorithms that may be thought up and compared. Since there are numerous approaches that can be used, the goal of this project is not to provide a complete and optimal solution, but rather to present a preliminary comparison of several algorithms, in order to get a feel for where to go next. The current project concentrates on a narrow section of the network, mainly the bandwidth allocation in the uplink direction.
OPNET has been chosen as the simulation environment for the project. Since EPON is still a novel technology and the standard is still being developed, there were no ready-made simulation modules. A new environment and set of modules had to be designed and implemented.
2. Problem Definition
The problem we set out to investigate is the problem of bandwidth allocation on the EPON's uplink, with the following guidelines:

Two types of network traffic are defined:

o Committed Rate (CR): A specified bit rate that the ONU must be allocated by the network if it requests it. The requirement is for the average bit rate over one second, and there are no requirements for minimum delays between the bandwidth allocations within this time frame.

o Best Effort (BE): Traffic that is not promised to the ONU in the SLA, and will be allocated only if all CR requirements in the network have been fulfilled and there is still available bandwidth to allocate. The only guideline is the attempt to allocate the available bandwidth as fairly as possible between the ONUs.
Fairness is defined as follows:

o Don't ask, don't get: An ONU that did not request any BE bandwidth will not be allocated any.

o History window: A certain history will be kept regarding the amount of BE bandwidth allocated to each ONU. ONUs that were allocated less bandwidth in this history window will be entitled to receive bandwidth until they are aligned with the others. The window size is not defined; it is a parameter to be tested, but note that too large a window might enable a single ONU that was quiet for a long time to take over all of the available BE bandwidth and deny service to the others.

o Among ONUs that have the same amount of allocated history, fairness is defined as an equal division of the bandwidth.
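A minimal sketch of these fairness rules (function and variable names are our own; the real algorithm in Section 4.3 also handles CR traffic and cycle bounds):

```python
# Sketch of the BE fairness rules above (illustrative names, not the paper's
# code): ONUs that requested nothing get nothing; among requesters, the ONU
# with the least BE bandwidth in its history window is served first.

def allocate_be(requests, history, available):
    """requests/history: dicts onu_id -> TQs; returns dict of BE grants."""
    grants = {onu: 0 for onu in requests}
    # "Don't ask, don't get": only ONUs with a positive request compete.
    pending = {o: r for o, r in requests.items() if r > 0}
    while available > 0 and pending:
        # Serve the ONU with the smallest (history + grant-so-far) first.
        onu = min(pending, key=lambda o: history.get(o, 0) + grants[o])
        grants[onu] += 1          # allocate one TQ at a time (simple, slow)
        pending[onu] -= 1
        available -= 1
        if pending[onu] == 0:
            del pending[onu]
    return grants
```

For example, with requests `{1: 5, 2: 5, 3: 0}`, history `{1: 10, 2: 0}`, and 8 available TQs, ONU 3 receives nothing, ONU 2 (less history) is filled first, and ONU 1 gets the remainder.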
3. OPNET Implementation
In this section we describe the EPON environment that was implemented and its components. Since there was no available OPNET module for the EPON standard, we constructed an EPON modeling environment from scratch. This required simulating both the physical PON characteristics and the upcoming 802.3ah standard. Even though the standard is not complete at this time, the aspects of 802.3ah that were implemented are those at the final stages of ratification. They were also designed with the option to easily modify them, if necessary.
3.1 Network Model Assumptions
Several assumptions have been made in order to simplify the simulation environment and to allow better focus on the bandwidth allocation problem:
Static network configuration: The number of ONUs remains constant, i.e., no ONUs are joining or leaving the network. Accordingly, the network registration processes defined in 802.3ah were not implemented.

Single-subscriber ONUs: Each ONU is assumed to be serving only a single subscriber (and not, for example, an entire apartment building). When more than one distinct subscriber is served by an ONU, each subscriber will have his own SLA definitions and will need to receive specific allocations according to them. This will require increased complexity from the network devices and the allocation algorithms.

Bandwidth allocated per ONU: All bandwidth allocations are done per ONU. The OLT does not instruct the ONU how to utilize the granted bandwidth. This is the simplest work mode and probably the most common, at least for the initial deployments of EPON. More advanced allocation models are also available. For example, the "granted entity" might not be the ONU but a specific IP address at the customer's premises or specific traffic types. Another example is the allocation of a separate grant for each queue priority.

Global clock: All simulation modules use the same global clock provided by the network. Consequently, the exchange of synchronization information, which is done in reality, is not implemented.

No packet loss: The initial model assumes that no packets are lost. Future versions would take into account the realistic bit error rate of about 10^-12.

No propagation delays: To simplify the initial model, propagation delays have been ignored. Propagation delays will be added to future versions.

Part of the plans for future work includes expanding the simulation to remove some of the stated leniencies, in order to make it more realistic.
3.2 TDMA Model and the Bandwidth Allocation Concept
The TDMA timing regime was implemented using OPNET's global simulation time and the scheduling of self-interrupts and remote interrupts in advance. These interrupts invoke processes that remove packets from the queue and transmit them, make REPORT packets, schedule future grants, etc. The ONUs set interrupts to start transmitting at offsets based on their GATE's start time and length. With the assumption that the OLT allocates the grants without overlaps, there should be no collisions of uplink data between different ONUs.

The timeline is divided into cycles, which are defined as an attribute of the OLT. The division into cycles is not mandatory and is a matter of our implementation of the standard. The OLT utilizes several "markers" to schedule actions for itself in the future. The most significant marker on the timeline is a pointer to the time of the next cycle's start. All the other action markers are set accordingly. As in reality, the GATE allocations must be received by the ONUs before the cycle starts, which provides them with ample time to transmit at their assigned time. For transmissions, there is another marker, "send_grants", which schedules an interrupt at a certain offset before the next cycle's start. This interrupt tells the OLT to send the GATE messages that have already been prepared by the algorithm. Before this can happen, another marker, "do_schedule", must instruct the OLT to actually "execute" the algorithm and make the necessary calculations.

There are also other time markers that are used by the ONUs to schedule their GATE start and stop times, and to simulate the REPORT message generation and the actual sending of the outstanding packet queue in a realistic manner.

As mentioned above, the basic unit of time is a Time Quantum, or TQ. All allocations are in multiples of TQ units. All control packets created are time-stamped with the current simulation time. Figure 4 depicts an example of the timeline from the OLT and ONU points of view (in a 3-ONU scenario):
Figure 4 - Bandwidth allocation timing model; a 3-ONU example.

The top timeline depicts the OLT's point of view; the bottom timeline is seen by the ONUs. As mentioned, the timelines are divided into cycles. The cycles are marked in the OPNET environment by the next_cycle_start interrupt. The OLT's interrupts are scheduled in advance, one cycle at a time. As described above, before the cycle starts, two actions must be completed: the actual allocation for that cycle, and the broadcast of all GATE messages for the upcoming grant. Each ONU is allocated its grant, in which it is allowed to send its data. At the beginning of each grant (or at its end) the ONU sends its REPORT message. The reports reach the OLT and the most recent report from each ONU is saved. When the do_schedule interrupt is invoked, the last report received is used by the allocation algorithm. In the depicted example, the REPORTs from ONUs 1 and 2 are received in time to be considered for cycle N-1, whereas ONU 3's REPORT has to wait until cycle N.

Note: The described timing model may easily be changed by changing the interrupt allocation scheme. For example, the allocation algorithm may be executed several times before the grants are sent, optimizing the original execution.
3.3 Modeling MPCP in OPNET
As mentioned earlier, the MPCP defines different types of
messages for managing the EPON. Currently the only MPCP
3.4 OLT Node Module
data passed through the fiber, and passes it on to the packet_classifier process, which determines if the packet is a control packet destined for the OLT, or a data packet. The data packets are routed to the port_1_transmitter and passed on, out of the EPON network. Control packets are passed on to the scheduler process, which is the main process of the OLT device. The PON downstream is connected to the downlink fiber of the network through the pon_transmitter. It receives packets from olt_q, the OLT's queue process, which is based on OPNET's abc_fifo process model. The queue receives GATE messages from the scheduler, and optional downlink traffic from the outside. Since we concentrate on the upstream process, the downstream is used mostly to deliver the OLT's control messages. The external ports Tx and Rx can be used to test a TCP/IP connection between a station on the PON and the outside.
Figure 8 - OLT node model implemented in OPNET

The resource_manager is responsible for holding node-wide information (such as its MAC address) and for executing node initialization actions, if needed.
The scheduler (Figure 9) is responsible for managing the EPON upstream, and for implementing the timing model described earlier. It is composed of the following states:

init: Initializes the process model and specifically the selected allocation algorithm. Also jump-starts the TDMA scheme by allocating the initial cycle interrupts.

idle: The default state in between process interrupts.

msg_arrival: Receives control messages that were routed to the scheduler process, and distributes them according to the message type. Currently it handles only REPORT messages, but it may be easily adapted to handle more.

store_reports: This state receives the REPORT and processes it as required by the selected allocation algorithm.

schedule: The key state of the scheduler process. The actual execution of the bandwidth allocation algorithm is done within this state. It relies on the formatted REPORT data from the store_reports state, and its output is in the format of a list of grant data for the upcoming cycle.

send_grants: Uses the output of the schedule state to prepare standard GATE messages and broadcast them to the PON.
Figure 9 - OLT scheduler state machine
3.5 ONU Node Module
Similar to the OLT, the ONU device is also composed of PON upstream and PON downstream data paths (see Figure 10). The packets received from the pon_receiver connected to the PON's downlink are passed to the packet_filter, which passes on only the packets destined for the specific ONU. Other packets are destroyed in the packet_filter_sink. Next, the packet_classifier determines if the packet is a control packet or a data packet. The data packets are routed to the appropriate external port. Control packets are passed on to the scheduler.

The different external ports are used to easily insert simulation traffic into the different priority queues, each of which corresponds to a different port number (currently only four priorities are used).
Figure 10 - ONU node model implemented in OPNET

The resource_manager is responsible for holding node-wide information (such as its MAC address) and executing node initialization actions, if needed.
The scheduler (Figure 11) is responsible for the implementation of the timing model from the ONU's point of view. It is composed of the following states:

init: Initializes the process model.

idle: The default state in between process interrupts.

msg_arrival: Similar to the OLT state of the same name. Currently, it handles only GATE messages.

gate_arrival: This state receives the GATE message and is responsible for extracting the allocated grant data and for scheduling interrupts accordingly.

gate_start: Conducts the necessary actions at the start time of each grant.
Figure 11 - ONU scheduler state machine
onu_q, the ONU's queue process (as depicted in Figure 12), is implemented as an active queue, meaning that it is autonomous in the insertion and extraction of packets, according to its limitations and the allocated grants. As mentioned earlier, it comprises 8 sub-queues, in accordance with 802.1Q. Control messages receive the highest priority. The queue may perform in one of three modes:

- Unlimited sub-queue size.
- Limited queue size with a specific size for each sub-queue.
- Limited queue size with shared memory for the sub-queues, meaning that their sizes may vary and a high-priority packet in one sub-queue may cause a tail-drop of a lower-priority packet in another sub-queue.
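The shared-memory mode can be sketched as follows (an illustrative model, not the OPNET process code; in this sketch priority 0 is the highest, and eviction tail-drops the lowest-priority non-empty sub-queue first):

```python
# Sketch of the shared-memory queue mode: sub-queues share one size budget,
# and an arriving higher-priority packet may tail-drop lower-priority ones.
# Illustrative only; names are ours, not the OPNET model's. Priority 0 = highest.
from collections import deque

class SharedMemoryQueue:
    def __init__(self, capacity_bits, n_priorities=8):
        self.capacity = capacity_bits
        self.used = 0
        self.subq = [deque() for _ in range(n_priorities)]

    def insert(self, priority, size_bits):
        """Try to enqueue a packet; returns False if it must be dropped."""
        while self.used + size_bits > self.capacity:
            # Tail-drop from the lowest-priority non-empty sub-queue that is
            # strictly lower-priority than the arriving packet.
            victim = next((p for p in range(len(self.subq) - 1, priority, -1)
                           if self.subq[p]), None)
            if victim is None:
                return False        # nothing lower-priority to evict
            self.used -= self.subq[victim].pop()
        self.subq[priority].append(size_bits)
        self.used += size_bits
        return True
```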
The queue performs three main tasks:

- Insertion of packets: managed by the ins_tail state, according to the selected work mode for the queue size.
- Packet transmission: the beginning of the GATE is set up by the start_gate state, and the actual transmission is managed by send_head. Within the GATE, packets are transmitted one by one. A single packet is sent, and then the process sets an interrupt for the next packet to be sent at the actual time that the previous packet finishes, according to the ONU line rate. For this implementation the queue has two idle states: one for when the ONU is not transmitting (branch state) and one for when the ONU is within a grant's timeframe (gate_idle).
- Generation of a REPORT message: at the appropriate interrupt, the make_report state creates a REPORT packet and assigns its fields with appropriate values.
Figure 12-ONU queue state machine
4. Bandwidth Allocation Algorithms
Three bandwidth allocation algorithms have been implemented: static, semi-static, and dynamic. The algorithms are described in the following subsections.
4.1 Static Allocation
This is perhaps the simplest algorithm possible, and the one we used in our model development and validation stage. A cycle size parameter is specified, setting the size of each allocation cycle in units of TQ. The allocated grant size for each ONU is the same, and is set by the simple calculation:

    grant = cycle size / number of ONUs

This division implements fairness in the grant allocation. However, other divisions are also possible, as long as the sum of the static allocations does not exceed the cycle's size. There is no consideration of the actual needs of each ONU, and each receives the same allocation in every cycle. Figure 13 depicts an example of the algorithm's allocation for 3 ONUs.
Figure 13 - Static Allocation Illustration
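The static division can be sketched in a few lines (our naming; a real allocator would also account for guard times between ONU transmissions):

```python
# Sketch of the static allocation rule: every ONU gets the same grant every
# cycle, regardless of its actual needs. Names are illustrative.

def static_grants(cycle_size_tq, n_onus):
    """Equal grant per ONU; any remainder TQs are simply left unallocated."""
    grant = cycle_size_tq // n_onus
    return [grant] * n_onus

# 3 ONUs sharing a 3000-TQ cycle each receive a 1000-TQ grant, every cycle.
```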
4.2 Semi-Static Allocation Algorithm
This simple algorithm is somewhat of a hybrid between the static allocation and the dynamic one. The semi-static algorithm, as opposed to the static algorithm, uses the REPORTs collected by the OLT from the ONUs to determine which ONUs requested bandwidth. Each REPORT acts as a Boolean variable that is False if the ONU did not request any bandwidth, and True if the ONU requested bandwidth. The algorithm ignores the actual size of the bandwidth requests. Each ONU (both idle and requesting) is granted a minimal allocation in every cycle, which is sufficient for it to send a REPORT packet with requests for the next cycle. The remainder of the cycle's length is divided equally among the ONUs that had bandwidth requests for this cycle. Unlike the static algorithm, the semi-static does not allocate bandwidth to ONUs that are idle. Figure 14 presents a sample output of the semi-static algorithm for 3 ONUs in the network.
Figure 14 - Semi-Static Allocation Example
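A sketch of the semi-static rule (names and the handling of the remainder are our assumptions; the cycle is assumed large enough to cover the minimal grants):

```python
# Sketch of the semi-static algorithm: every ONU gets a minimal REPORT grant,
# and the rest of the cycle is split equally among ONUs whose REPORT showed a
# non-zero request. Names and remainder handling are illustrative.

def semi_static_grants(cycle_size_tq, requests, min_grant_tq):
    """requests: list of requested TQs per ONU (0 = idle)."""
    n = len(requests)
    grants = [min_grant_tq] * n            # everyone may send a REPORT
    active = [i for i, r in enumerate(requests) if r > 0]
    if active:
        share = (cycle_size_tq - n * min_grant_tq) // len(active)
        for i in active:
            grants[i] += share
    return grants
```

For example, in a 1000-TQ cycle with a 10-TQ minimal grant and requests `[5, 0, 9]`, the idle ONU keeps only its minimal grant while the two active ONUs split the remaining 970 TQs.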
4.3 Dynamic Allocation Algorithm
The dynamic algorithm allows for a dynamic allocation of bandwidth, which varies from cycle to cycle and is adapted to the end-units' SLAs, the network's current requirements, and fairness in the division of excess bandwidth. The algorithm currently supports two types of traffic: best-effort traffic (BE) and committed-rate traffic (CR), which is bandwidth that the ONU is entitled to but that does not have delay/jitter limitations. Delay and jitter constraints force the allocation algorithm to schedule the grants with added definitions of mandatory lengths and intervals, which add a great deal to the complexity of the allocation problem.

Three important elements are defined:

Window: A window is defined as the time interval to be used for measuring whether the network complies with the ONU's SLA constraints. In addition, it defines the history interval during which the algorithm enforces fairness in the allocation of BE traffic among the network's ONUs.

Cycle: Each window is divided into several equal-length cycles (the total number of cycles per window is a parameter of the algorithm). The cycle bounds the total amount of bandwidth that can be allocated to the entire network for each execution of the algorithm. Thus, the delay between the ONU requests and the corresponding responses is controlled.

Sub-cycle: A sub-cycle is enabled when the total sum of requests is lower than the remaining cycle size. In this case, each ONU receives a minimal grant and all requests are granted.

The algorithm has two work modes, based on the network load: a simple mode for low traffic loads and a complex mode for high traffic loads. Decision points are defined along the timeline, where the algorithm chooses the work mode for the next cycle or window. Low-load is selected if the sum of requests is less than the cycle length; in this case the algorithm allocates a sub-cycle. If the sum of requests is more than the cycle length, the OLT enters high-load mode and remains in it for at least the duration of one window (see Figure 15).
Figure 15 - Dynamic algorithm mode arbitration
As mentioned, each window is divided into consecutive cycles. Within them, the algorithm allocates the CR requests first; if there is room left in the cycle, the algorithm allocates the BE requests according to the algorithm's fairness guidelines. Due to length constraints, the details of the BE allocation mechanism are omitted from this paper. The main idea is that an ONU is allocated BE bandwidth according to the history of the BE allocations that it has received since the beginning of the current window. ONUs that have been allocated less BE bandwidth get higher priority.
Figure 16 - Dynamic algorithm bandwidth allocation to three ONUs in low-load (sub-cycles) and high-load (windows and sub-cycles)
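The mode arbitration of Figure 15 can be sketched as a small decision function (our naming; the paper's decision points and window bookkeeping are simplified here):

```python
# Sketch of the dynamic algorithm's mode decision: a sub-cycle is allocated
# while total demand fits in a cycle; once demand exceeds it, the OLT stays in
# high-load mode for at least one full window. Names are illustrative.

def choose_mode(total_requests_tq, cycle_size_tq, high_load_until, now, window_s):
    """Returns (mode, updated high_load_until timestamp)."""
    if total_requests_tq > cycle_size_tq:
        return "high", now + window_s       # (re)enter high-load mode
    if now < high_load_until:
        return "high", high_load_until      # stay for the rest of the window
    return "low", high_load_until           # demand fits: allocate a sub-cycle
```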
5. Simulation Results
5.1 Initial testing
Before starting the algorithm tests and comparisons, a simple test scenario was constructed in order to give a general perspective on how the prototype functions and to convey the general characteristics of the EPON as reflected by the simulation. The following section shows examples of several of the properties tested.
ONU queue population vs. grant size

These tests were conducted in order to verify the behavior of the ONU according to the allocated grant size. The allocation was done using the static allocation algorithm. A constant priority-1 source is used for all the simulations. The source was active between seconds 1 and 2 and then halted. The tests differ in the size of the allocated grant. The size of the grant was set by setting the number of ONUs in the PON for the allocation algorithms. The more ONUs are connected, the smaller the grant that each one receives. The queue size is unlimited for these tests, in order to see how high the queue fills up in each case.
Three scenarios were tested: one where the grant allocation rate is about the rate of the source bandwidth (titled "large-GATEs"); one where the allocated rate is much smaller than the source bandwidth, so queue explosion was expected (titled "small-GATEs"); and one where the grant size is in between (titled "medium-GATEs"). The results are depicted in Figure 17, showing that the network acts as expected.

Figure 17 - Queue size vs. simulation time; TOP - "large-GATEs", MIDDLE - "medium-GATEs", BOTTOM - "small-GATEs"
ONU queue overflow mode test
In this experiment, we set out to test the shared-memory mode of the ONU's sub-queues. The total queue size was set to 400,000 bits, and two priorities were fed with source data. The source for the higher priority was active between seconds 1-2, and the lower priority source was active between seconds 1-2.5. Figure 18 shows how the higher priority dominates the total queue, dropping the lower priority's packets. When the higher priority source is done and its packets are transmitted, the lower priority is again able to add packets to the queue and transmit them.
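One plausible reading of this shared-memory behavior is a push-out policy: a high-priority arrival may evict queued low-priority packets, while a low-priority arrival is simply tail-dropped when the shared buffer is full. The sketch below assumes that policy; the actual OPNET model may implement the eviction differently.

```python
from collections import deque

class SharedMemoryQueue:
    """Two priority sub-queues sharing one buffer (capacity in bits)."""

    def __init__(self, capacity_bits):
        self.capacity = capacity_bits
        self.high, self.low = deque(), deque()
        self.used = 0

    def enqueue(self, size_bits, high_priority):
        """Return True if the packet was queued, False if dropped."""
        if high_priority:
            # Push-out: evict low-priority packets until the arrival fits.
            while self.used + size_bits > self.capacity and self.low:
                self.used -= self.low.popleft()
            if self.used + size_bits <= self.capacity:
                self.high.append(size_bits)
                self.used += size_bits
                return True
        elif self.used + size_bits <= self.capacity:
            self.low.append(size_bits)
            self.used += size_bits
            return True
        return False

    def dequeue(self):
        """Serve strictly by priority, as the ONU does at grant time."""
        q = self.high or self.low
        if not q:
            return None
        size = q.popleft()
        self.used -= size
        return size
```

Under this policy the higher priority dominates the shared memory exactly as in Figure 18: while its source is active, low-priority packets are squeezed out, and only once it drains can the lower priority occupy the buffer again.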
Figure 18 - ONU's sub-queues' size in bits vs. simulation
time
End-to-end delay vs. the number of active ONUs
This simple test shows how the end-to-end delay increases with the number of ONUs. All sources are constant bit-rate of about 30 Mbps. The static allocation algorithm is used. Note that the delay starts to increase only after crossing the threshold of about 30 ONUs (which corresponds to a static allocation of about 30 Mbps, the same as the traffic sources' rate), and continues with a linear increase. The specific delay value is a function of the cycle size, but the main idea here was to see the network's trend. Data was collected through the execution of a series of simulations with a varying number of ONUs.
Figure 19 - End-to-end delay in seconds vs. number of
ONUs in PON
5.2 Algorithm Comparison
For the comparison of the three algorithms' behavior, a 16-ONU scenario was constructed. We wanted to create a diverse environment that would allow us to examine as many aspects of comparison as possible within the same simulation:
The network's packet sources generated Ethernet packets. The packet length is drawn from an exponential distribution with a mean of 3000 bits, but packets of over 1500 bytes are discarded (the Ethernet MTU size). The packet inter-arrival time is exponentially distributed. The average source bit-rate was determined by setting different inter-arrival mean values. Three source modes were defined:
o High Load - 100 Mbps
o Medium Load - 50 Mbps
o Low Load - 5 Mbps
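Such a source can be sketched as a generator of (arrival time, packet length) pairs. We redraw oversized packets rather than silently losing their bandwidth; whether the simulation redraws or simply drops them is not stated, so that detail (and the derivation of the inter-arrival mean from the target rate) is an assumption.

```python
import random

MTU_BITS = 1500 * 8      # Ethernet MTU; oversized draws are rejected
MEAN_LEN_BITS = 3000     # mean packet length from the scenario

def packet_stream(target_mbps, duration_s, seed=None):
    """Yield (arrival_time_s, length_bits) pairs for one traffic source.

    Lengths are exponential with mean 3000 bits, redrawing any draw
    over 1500 bytes; inter-arrival times are exponential with a mean
    chosen so the average rate is roughly target_mbps (ignoring the
    slight mean shift that the MTU truncation introduces).
    """
    rng = random.Random(seed)
    mean_iat = MEAN_LEN_BITS / (target_mbps * 1e6)  # seconds per packet
    t = 0.0
    while True:
        t += rng.expovariate(1.0 / mean_iat)
        if t >= duration_s:
            return
        length = rng.expovariate(1.0 / MEAN_LEN_BITS)
        while length > MTU_BITS:        # reject and redraw oversize
            length = rng.expovariate(1.0 / MEAN_LEN_BITS)
        yield t, length
```

Note that the MTU rejection lowers the effective mean length a few percent below 3000 bits, so the realized rate sits slightly under the nominal target.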
The 16 ONU sources were configured as follows:
o 8 ONUs - High load
o 4 ONUs - Medium load
o 3 ONUs - Low load
o 1 ONU's source was idle throughout the simulation.
Roughly half of the ONU sources of each load type were defined to be stable - they had the same packet generation parameters throughout the simulation. The rest of the ONUs were defined to have a burst that begins some time after the simulation starts and ends some time before it finishes.
The simulation's duration was 0.69 seconds. The timeline was divided as follows:
o Segment 0 - 0 to 0.0001 seconds: Init margin, all sources idle.
o Segment 1 - 0.0001 to 0.05 sec: Only stable ONU sources are active. The mean of the network's total bit-rate was 510 Mbps.
o Segment 2 - 0.05 to 0.3 sec: The bursty ONU sources join the stable ones, raising the mean total bit-rate to a peak of 1015 Mbps.
o Segment 3 - 0.3 to 0.5 sec: The bursty ONU sources all go down to a mean bit-rate of 5 Mbps. The mean total bit-rate is now 545 Mbps.
o Segment 4 - 0.5 to 0.69 sec: The bursty ONU sources stop transmitting; only the stable ones remain. Mean total bit-rate of 510 Mbps.
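The segment totals are consistent with the source mix above, assuming the "roughly half" stable split is 4 high, 2 medium, and 2 low ONUs (our assumption; the paper does not give the exact split):

```python
# Reproduce the per-segment mean bit-rate totals from the source mix.
HIGH, MEDIUM, LOW = 100, 50, 5           # Mbps per source mode
stable = 4 * HIGH + 2 * MEDIUM + 2 * LOW          # assumed stable split
bursty_peak = 4 * HIGH + 2 * MEDIUM + 1 * LOW     # remaining 7 ONUs
bursty_count = 4 + 2 + 1

seg1 = stable                            # only stable sources active
seg2 = stable + bursty_peak              # bursty sources join at full rate
seg3 = stable + bursty_count * LOW       # bursty sources drop to 5 Mbps
seg4 = stable                            # bursty sources stop

print(seg1, seg2, seg3, seg4)            # 510 1015 545 510
```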
Allocation algorithm configuration:
o TQ - 16*10^-9 [sec]
o Cycle size - 500 [µsec]
o Cycles per Window - 10 (for the dynamic algorithm)
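Assuming the cycle size is 500 µs (the unit appears garbled in the extracted text, and 500 seconds would be inconsistent with a 0.69-second run) and a 1 Gbps line rate, the configuration translates into per-cycle grant units as follows:

```python
# Convert the allocation configuration into per-cycle grant units.
# Assumptions: 500-microsecond cycle, 1 Gbps line rate.
TQ_S = 16e-9                 # one time quantum, from the configuration
CYCLE_S = 500e-6             # assumed cycle size
LINE_RATE_BPS = 1e9          # assumed EPON line rate

tq_per_cycle = CYCLE_S / TQ_S        # grant slots available per cycle
bits_per_tq = LINE_RATE_BPS * TQ_S   # bits carried by one TQ

print(round(tq_per_cycle), round(bits_per_tq))   # 31250 16
```

So under these assumptions each cycle offers 31,250 TQ of 16 bits each to divide among the 16 ONUs.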
Sample Results:
Figure 20 and Figure 21 show the total number of bits that a high-load ONU was allocated and its queuing delay, respectively, produced by each algorithm in accordance with the timeline described above. Other results, not shown here, indicate that the queue size presents similar behavior to that of the queuing delays. Note that the queuing delay is proportional to the queue size. Figure 22 and Figure 23 show the same for low-load ONUs, and Figure 24 and Figure 25 show the same for medium load.
Figure 20-Total Bits Granted [bits] vs. Time [sec] for a
High-Load ONU
Figure 21-Queueing Delay [sec] vs. Time [sec] for a High-
Load ONU
In a 16-ONU scenario, the static algorithm would allocate about 30 Mbps per ONU. Clearly, a 100 Mbps high-load source will not be able to send all of its data and its queue would explode.
During time segment 1, the dynamic and semi-static algorithms utilize the fact that they do not allocate bandwidth to non-requesting ONUs and are thus able to allocate more bandwidth to requesting ONUs. In this segment, the total requests are smaller than the network capacity, so the dynamic algorithm is able to allocate each ONU the amount it requested. The semi-static division of available bandwidth is also sufficient even for the high-load ONU's request. Thus, the queuing delay of packets for these two algorithms during this segment tends towards zero.
During segment 2, the bursty ONUs start requesting bandwidth, so the dynamic and semi-static algorithms have to take them into consideration. The semi-static algorithm generates an output very similar to the static algorithm (all ONUs except the idle one are requesting some amount, so the cycle's bandwidth is divided equally among 15 ONUs instead of 16 in the static). The dynamic algorithm is able to allocate the low and medium loads according to their specific requests. When these ONUs request less bandwidth than their
allowed limit, the dynamic algorithm is able to allocate the additional bandwidth to the high-load ONUs.
During segment 3, the bursty ONUs' requests fall to 5 Mbps, but they are still requesting bandwidth. So while the dynamic algorithm adapts its allocations to the decrease in requests, the semi-static algorithm still allocates the same amount as before.
During segment 4, the bursty ONUs stop requesting bandwidth. Here, the semi-static algorithm can stop taking them into consideration and can allocate more bandwidth to the other ONUs.
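The per-segment differences can be illustrated with simplified caricatures of the three per-cycle allocation rules. These are not the paper's exact algorithms (the real dynamic algorithm uses windows, sub-cycles, and BE history); the function names and the grant unit (TQs) are our assumptions.

```python
def static_alloc(requests, cycle_tq, num_onus):
    """Static: every ONU gets an equal share, requesting or not."""
    return {onu: cycle_tq // num_onus for onu in range(num_onus)}

def semi_static_alloc(requests, cycle_tq, num_onus):
    """Semi-static: the cycle is split equally among requesting ONUs."""
    active = [o for o in range(num_onus) if requests.get(o, 0) > 0]
    return {o: cycle_tq // len(active) for o in active} if active else {}

def dynamic_alloc(requests, cycle_tq, num_onus):
    """Dynamic (simplified): satisfy small requests exactly, then give
    the leftover to ONUs that asked for more than an equal share."""
    share = cycle_tq // max(len(requests), 1)
    grants = {o: min(r, share) for o, r in requests.items()}
    leftover = cycle_tq - sum(grants.values())
    hungry = [o for o, r in requests.items() if r > share]
    for o in hungry:                     # spread the slack evenly
        grants[o] += leftover // len(hungry)
    # If no ONU is hungry, any remaining slack would feed BE allocation.
    return grants
```

For a cycle of 3000 TQ with one low-load ONU requesting 100 TQ and one high-load ONU requesting 4000 TQ, static gives 1000 TQ to everyone, semi-static gives the two requesters 1500 each, and dynamic gives the low load exactly 100 and hands the remaining 2900 to the high load, which matches the segment-2 behavior described above.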
Some additional observations:
According to Figure 22, the low-load ONU seems to be allocated smaller grants by the semi-static than by the static algorithm. This seems to contradict the fact that the smallest allocation possible for a requesting ONU is

Grant = Cycle Size / Number of ONUs

the same as in the static algorithm, so the static should never be able to allocate more than the semi-static. A closer look reveals the explanation: the low load's request rate may sometimes be smaller than the REPORT packet frequency in the semi-static algorithm. This means that there will be some cycles where the low-load request will be zero even when the source is active. In these cycles the ONU will not be allocated any bandwidth for data. Consequently, the total allocation slope is more moderate.
Figure 22-Total Bits Granted [bits] vs. Time [sec] for a
Low-Load ONU
Figure 23 - Queueing Delay [sec] vs. Time [sec] for a Low-Load ONU
The phenomenon described in the previous observation also affects the queuing delay. A packet arriving during a cycle in which the ONU was not allocated any bandwidth will be delayed more than when there is an allocation (such as in the static case). This is seen in Figure 23: there is an increase in the delays for the semi-static and the dynamic algorithms in time segment 2, where the network operates in high-load mode, in comparison to the static algorithm.
The delays in the dynamic algorithm during segment 2 are higher than the semi-static delays. This is because the semi-static algorithm allocates more bandwidth than actually requested, so packets arriving after a REPORT was sent may also be transmitted before they are actually reported. This lowers the overall delay of a packet in the queue.
When the dynamic algorithm operates in low-load mode, the sub-cycles allocate all requests, and the next sub-cycle starts right after the current one ends. As seen in Figure 21, Figure 23, and Figure 25, the delays tend towards zero after the queues stabilize, a feature that is not possible in a work-mode that defines a fixed cycle size, such as the static and semi-static algorithms and the high-load mode of the dynamic algorithm.
Figure 24-Total Bits Granted [bits] vs. Time [sec] for a
Medium-Load ONU
Figure 25 - Queueing Delay [sec] vs. Time [sec] for a Medium-Load ONU
Figure 26 shows the queue delay for a high-load burst ONU. At 0.3 sec the source drops from 100 Mbps to 5 Mbps, and it shuts down completely after 0.5 seconds. In the figure we see that all three algorithms manage to empty the queue and handle the burst event. What is seen clearly is that the dynamic algorithm manages to empty the queue very fast, followed by the semi-static and then the static algorithm.
Figure 26-Queueing Delay [sec] vs. Time [sec] for a High-
Load Burst ONU
Figure 27, Figure 28, and Figure 29 show the total allocation of each algorithm to each of the ONU types. Figure 27 shows the results for the dynamic algorithm. In it we see how the allocation depends on the ONU's specific needs, so each of the six types is treated separately. The reason that burst ONUs continue to receive noticeable allocations even after they shut down is that the total allocation also includes allocations for REPORT messages, and the dynamic algorithm working in low-load mode requests a large number of REPORTs from all network stations (but remember that in low network loads the bandwidth dedicated to those REPORTs would otherwise go unutilized). Another observation is that the low load of 5 Mbps is negligible.
Figure 27-Bits Granted [bits] vs. Time [sec] for Each Load
Type Using Dynamic Algorithm
In Figure 28 we see that the semi-static algorithm groups together the medium and high-load ONUs, and that the allocations increase at similar rates for all requesting ONUs. The exception is the low-load mode, which displays a lower slope, as was explained previously.
Figure 29 shows the expected result that the static allocation treats all ONUs in the same manner and allocates the same bandwidth to each of them.
Figure 28-Bits Granted [bits] vs. Time [sec] for Each Load
Type Using Semi-Static Algorithm
Figure 29-Bits Granted [bits] vs. Time [sec] for Each Load
Type Using Static Algorithm
6. Conclusions
Although the research is only in its beginning stages, several conclusions are already evident:
For heterogeneous-source networks, the dynamic algorithm achieves the best division of the network bandwidth, as it adapts the allocations to each end-station's needs. It also handles high-load bursts most effectively.
For ONUs with low-load sources in a highly loaded network, the dynamic algorithm shows the worst delay performance. This is because the other algorithms allocate more than the low-load ONUs need, and they may use the extra allocations to send new packets faster without the need to report them. On the other hand, the rest of the ONUs in the network suffer more because there is wasted bandwidth while they still have outstanding requests.
For the same reason, the static allocation provides better delay results for the low-load ONUs than the semi-static algorithm, since it keeps allocating bandwidth even if it does not receive a request for it.
The downsides of the static algorithm are clear: it wastes a lot of empty grants that may be needed by other ONUs, and it prevents over-subscription of the network, limiting the number of ONUs and the bandwidth allocated per ONU.
For ONUs that request less bandwidth than the semi-static algorithm eventually allocates them, the semi-static algorithm takes on the downsides of the static algorithm as described above. On the other hand, it handles situations with idle ONUs well. Time segments 3 and 4 show these two behavior characteristics.
7. Current Activities and Future Work
We are currently in the process of exploring the dynamic algorithm further, mostly learning its behavior characteristics with different attribute parameters and work modes (such as testing the algorithm's behavior without the low-load mode, or with a limit on the minimal size of a sub-cycle). Concurrently, we are working on enhancements to the simulation environment, adding more features and bringing the simulation closer to realistic results (such as the addition of propagation delays to the network).
Future work includes adding traffic types to the algorithm, such as delay-critical traffic. Another future direction is the testing of additional allocation algorithms of different natures.