12th IFIP/IEEE International Symposium on Integrated Network Management 2011
QoS Adaptation in Inter-Domain Services
Fernando Matos, Alexandre Matos, Paulo Simoes and Edmundo Monteiro
CISUC-DEI, University of Coimbra, Dep. Eng. Informatica 3030-290 Coimbra, Portugal {fmmatos, aveloso, psimoes, edmundo}@dei.uc.pt
Abstract-QoS adaptation is an important process in inter-domain service management, since it can guarantee correct service provisioning when unexpected events occur. However, in inter-domain environments, human intervention and manual operations hamper the deployment of expeditious mechanisms to adapt the QoS of services. This paper presents an approach based on SOA principles to perform QoS adaptation in these environments. The approach allows providers to determine new QoS parameters, renegotiate these parameters with other providers along the provisioning path, update their contracts and enforce the changes in the equipment configuration in an automatic and on-demand fashion. The QoS adaptation process can be triggered by technical (QoS violations, infrastructure malfunctions) or business (financial problems) issues. In addition, if a provider's configuration in the provisioning path already supports the new requirements, that provider is not affected by the adaptation, thus decreasing the processing time.
Index Terms-Adaptation, inter-domain, QoS, SOA
I. INTRODUCTION
Managing inter-domain QoS-aware services is a highly challenging task due to the intrinsic heterogeneity of inter-domain environments. Providers with different equipment, policies and business goals have to interact and cooperate with each other to provide a service that meets the customer requirements. There are no automated mechanisms that completely support this interaction, resulting in a need for manual work and human intervention. If providers want to establish a service contract, their employees have to arrange meetings or make phone calls in order to negotiate the service terms. To configure their equipment, they need to exchange faxes (or e-mails) containing the information necessary to forward the traffic between domains and to guarantee the QoS requirements. Moreover, the equipment configuration and the network management are done manually [1]. Such situations turn inter-domain service management into a cumbersome and complex task.
However, as transmission and connection technologies evolve and customers' demands increase, the necessity to develop automated mechanisms to deal with inter-domain management becomes more evident. This necessity also increases due to the advent of new paradigms that stimulate a more dynamic interaction between providers. For instance, in the Internet of Things [2], the number of devices connected to the Internet will rise, and so will the number of connections between the providers that offer services for these devices. Another example is cloud computing along with its three application scenarios [3], namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), which can lead to situations where the providers of these services must interact intensively to satisfy a customer request.
When dealing with the inter-domain environment challenge, Service Oriented Architecture (SOA) is a logical choice. In SOA, services are defined in a loosely coupled manner, which allows them to be combined in order to create a more complex service. SOA also follows a business orientation perspective, in which a service can be viewed as an abstraction of a business-related real object [4]. Following this reasoning, providers can create and offer services that encapsulate their different business views and at the same time cooperate to fulfill the desired requirements.
Some relevant works following SOA principles have already been proposed to help providers offer and manage inter-domain services, such as IPsphere [5] and the Service Delivery Framework (SDF) [6]. IPsphere proposes a business layer-like framework in which providers can reach business agreements in order to offer and provide services. SDF is a generic framework that defines a set of policies and standards to manage the service life-cycle (from concept to termination). However, although these works address the problem of inter-domain service provisioning, they provide no effective mechanisms to deal with adaptation scenarios.
During a service provisioning scenario, it may be necessary to adapt the QoS of the service by changing its configuration. This adaptation process can be caused by several technical reasons, such as infrastructure malfunctions, QoS violations, communication problems, etc. It can also be triggered by business factors. For instance, a customer may request a service downgrade due to financial constraints or a provider may request an adaptation in a VoIP service in order to give priority to a video service. If some of these events occur, providers need to determine the new QoS parameters, renegotiate these parameters with the other providers along the provisioning path, update their contracts and enforce the changes in the equipment configuration.
In this paper, an automatic and on-demand QoS adaptation approach for inter-domain services is presented. It is based on SOA principles, where a service is composed of smaller services (i.e. service elements) owned by different providers. Service element offers are published in a service directory and providers interact with each other through Web Service calls. This approach employs a weak adaptation [7], in which
providers can adapt the QoS of a service by reconfiguring one or more service elements in the provisioning path, so that the service can meet new QoS requirements. The contributions of this paper are as follows. First, it allows providers to perform QoS adaptation of inter-domain services in an automatic and on-demand fashion. Second, it permits adapting a service not only due to technical issues (e.g. QoS requirement violations), but also due to business-related requests (e.g. service downgrade). Furthermore, some providers may not be affected by the adaptation if their original configurations already support the new requirements, thus decreasing the processing time. The proposed approach was integrated in the QoS model for business layers [8] developed in the context of the Global Business Framework (GBF) [9], which is a business layer framework to manage inter-domain services.
The rest of the paper is organized as follows: Section II discusses related work on adaptation. Section III gives an overview of how inter-domain QoS-aware services are provided using the GBF along with the QoS model. In Section IV, the inter-domain QoS adaptation approach is described, while in Section V results from a testbed scenario are presented. Finally, Section VI concludes the paper.
II. RELATED WORK
The work presented in [10] proposes an approach to adapt multimedia traffic in DiffServ-based networks. In DiffServ networks, a resource at the edge of the network (e.g. a router) marks incoming traffic with a pre-defined value, so that it can be forwarded according to pre-established configurations. In the proposed approach, a bandwidth monitor in the core of the network informs a Policy Decision Point (PDP) about the network conditions. The PDP then uses this information to send appropriate policies to Policy Enforcement Points (PEP), which are located in edge devices. In turn, the PEPs can apply these new policies to the incoming traffic, thus allowing the traffic to be treated according to the current network conditions. Although this proposal uses a dynamic adaptation approach, it does not deal with inter-domain scenarios. Moreover, it does not consider adapting the traffic according to business demands.
In [11], the authors present a service management framework for wireless environments that can adapt services according to statistical data. This framework has a component named Service Adaptation Logic (SEAL) that employs information about user preferences and device capabilities to adapt the service in a proactive fashion. For instance, SEAL can realize that videos with certain parameter values, such as a basic quality level (from user preferences) and the DivX codec (from device capabilities), are more often requested by users. By using this information, SEAL can transcode the videos with these two parameters before users request them, thus increasing service availability. Despite its proactive approach, however, this framework only adapts services that have not yet been requested; running services are not affected by the adaptation process.
The work presented in [12] is an architecture to perform dynamic adaptation at the edge of networks. This architecture, called Distributed Dynamic Quality of Service (DDQoS), has two main components: DDQoS-Core, which resides in core routers, and DDQoS-Edge, which resides in edge routers. The DDQoS-Core monitors the traffic at the core network and informs the DDQoS-Edge if data losses are perceived in traffic classes. The DDQoS-Edge then performs adaptation at the network edge to improve the performance of priority traffic (static adaptation). By using reports from the DDQoS-Core and learning techniques, the DDQoS-Edge can also perform adaptation before data losses occur in the core (dynamic adaptation).
In [13], the authors present a framework called Quality of Experience (QoE)-aware Real-time Multimedia Management (QoE2M) to manage the quality level of end-to-end multimedia applications. QoE2M is designed with well-defined interfaces that can interact with resource allocation controllers to gather information about the network status. Moreover, it can also extract video characteristics from the Real-Time Transport Protocol (RTP) header. By using this information, QoE2M is able to adapt applications according to network conditions or the user's device capabilities. For instance, it can downgrade a video application if there is no network bandwidth available to transmit the video or the user's device does not support the codec. Although this work uses a dynamic approach and considers QoE characteristics, it does not discuss implementation or validation issues.
A solution based on QoS parameter matching and optimization (QMO) to enhance the IP Multimedia Subsystem (IMS) is proposed in [14]. This solution aims to give IMS-compliant providers the capability to adapt services, among other functionalities. To guide the adaptation process, the solution uses two profiles: (i) generic client profiles, which contain information about user equipment, access network constraints, and application preferences; and (ii) service profiles, which contain information about the different versions of the services that can be used according to the user preferences (the client profile can be considered an instance of a user preference). By using this information, the QMO is able to change from one service version to another when an event occurs. For instance, a reduction in the access network bandwidth (specified in the client profile) caused by a modification in the user preference may trigger the service version change.
III. GLOBAL BUSINESS FRAMEWORK AND QoS MODEL
Before introducing the QoS adaptation approach, an overview of the Global Business Framework (GBF) [9] and the QoS model for business layers [8], along with their main entities, is presented. This overview shows how these entities interact with each other to provide an inter-domain QoS-aware service, which may eventually be adapted.
In previous work, an integrated solution to provide inter-domain QoS-aware services in an automatic and on-demand fashion was presented. This solution comprises the GBF and a QoS model for business layers. The GBF is a framework based on IPsphere [5] and inspired by SOA principles that aims to support providers in publishing, offering, composing and providing inter-domain services. It uses a Business Layer (BL) as a communication channel, in which providers exchange information in order to reach business agreements and manage their services. The QoS model, in its turn, allows providers to define quality levels for their services (service classes) and create composite services that satisfy the customers' QoS requirements and the business expectations of providers [15]. Figure 1 presents the GBF structure with the QoS model. It is assumed that the providers that use the GBF create a federation; therefore, its members can follow common conventions, such as a unique identifier for providers, services and service elements.
Fig. 1. QoS model integrated into the GBF architecture
An important component in the GBF is the BL, which providers use as a common communication channel to exchange information by means of Web Services technology. The Business Managers are the components responsible for communicating with each other through the BL on behalf of providers. The QoS Managers are the components that handle the QoS aspects of the service provisioning process, by composing a service path that satisfies the QoS service requirements from customers and the providers' business expectations.
The information exchange in the BL aims to help providers in offering, searching, contracting, and providing inter-domain services. An inter-domain service is a service composed of one or more service elements (smaller services). Both the services and the service elements are owned and offered by providers. A provider of services is a Service Owner (SO) and a provider of service elements is an Element Owner (EO). SOs and EOs publish their service (or service element) offers at the Universal Description, Discovery, and Integration (UDDI) registry, which is a service registry commonly used in Web Services environments. The service and service element offers are represented by Service Specification Templates (SST) and Element Specification Templates (EST), respectively. These templates are XML documents (inspired by TEQUILA [16] and [17-19]) that contain technical and business information about the services and the service elements. There are three types of service elements offered by EOs:
• Connection Elements (CoE): Service elements that offer connection services between providers;
• Access Elements (AcE): Service elements that offer access services to customers; and
• Application Elements (ApE): Service elements that offer services other than the two aforementioned ones (e.g. email server, online storage, video streaming, VoIP, etc.).
As stated before, these service elements are represented by means of ESTs. Figure 2 shows an example of the EST of a CoE. The ESTs of each type of service element differ slightly from each other: the ESTs of ApEs and AcEs contain information about the service-specific parameters the element offers and the QoS parameters of its access link, respectively, while the EST of a CoE contains information about the link capacity it offers (delay, jitter, etc.) and the domains it can connect. All templates also contain the web service addresses of their respective providers. These web services are used by providers as the means to send and receive information from each other.
<ServiceElementTemplate>
  <ServiceElementOwner>
    <OwnerId>0351</OwnerId>
    <OwnerName>Portugal Telecom</OwnerName>
  </ServiceElementOwner>
  <ServiceElementDescription>
    <ElementId>2351</ElementId>
    <ElementName>ConnectionElement</ElementName>
    <ElementType>CoE</ElementType>
    <ElementClass>silver</ElementClass>
    <PublicationDate>2009-10-11</PublicationDate>
  </ServiceElementDescription>
  <SLS>
    <CarrierId>pt.pt</CarrierId>
    <CarrierDomain>pt</CarrierDomain>
    <ReachableCarriers>
      <ReachableCarrier>
        <ReachCarrierId>ft.fr</ReachCarrierId>
        <ReachCarrierDomain>fr</ReachCarrierDomain>
      </ReachableCarrier>
      <ReachableCarrier>
        <ReachCarrierId>bt.uk</ReachCarrierId>
        <ReachCarrierDomain>uk</ReachCarrierDomain>
      </ReachableCarrier>
    </ReachableCarriers>
    <QoS>
      <Name>silver</Name>
      <Parameters>
        <PerformanceParameters>
          <Delay>
            <Qualitative>medium</Qualitative>
            <QuantitativeMaximum>200</QuantitativeMaximum>
            <QuantitativeMinimum>101</QuantitativeMinimum>
            <Unit>ms</Unit>
          </Delay>
          <Bandwidth>
            <Qualitative>medium</Qualitative>
            <QuantitativeMaximum>2500</QuantitativeMaximum>
            <QuantitativeMinimum>1000</QuantitativeMinimum>
            <Unit>kbps</Unit>
          </Bandwidth>
        </PerformanceParameters>
      </Parameters>
    </QoS>
  </SLS>
</ServiceElementTemplate>
Fig. 2. Element Specification Template of a CoE
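As a sketch of how an SO might consume such a template, the following Python fragment parses a cleaned-up EST and extracts the fields used later during path composition. The XML values mirror Fig. 2 but are illustrative; the field selection is an assumption, not the paper's actual parser.

```python
import xml.etree.ElementTree as ET

# Cleaned-up EST fragment mirroring Fig. 2 (values are illustrative).
EST_XML = """
<ServiceElementTemplate>
  <ServiceElementDescription>
    <ElementId>2351</ElementId>
    <ElementType>CoE</ElementType>
    <ElementClass>silver</ElementClass>
  </ServiceElementDescription>
  <SLS>
    <CarrierDomain>pt</CarrierDomain>
    <ReachableCarriers>
      <ReachableCarrier><ReachCarrierDomain>fr</ReachCarrierDomain></ReachableCarrier>
      <ReachableCarrier><ReachCarrierDomain>uk</ReachCarrierDomain></ReachableCarrier>
    </ReachableCarriers>
    <QoS>
      <Parameters>
        <PerformanceParameters>
          <Delay>
            <QuantitativeMaximum>200</QuantitativeMaximum>
            <Unit>ms</Unit>
          </Delay>
        </PerformanceParameters>
      </Parameters>
    </QoS>
  </SLS>
</ServiceElementTemplate>
"""

def parse_est(xml_text):
    """Extract the fields an SO needs for path composition from an EST."""
    root = ET.fromstring(xml_text)
    return {
        "element_id": root.findtext(".//ElementId"),
        "element_class": root.findtext(".//ElementClass"),
        "domain": root.findtext(".//CarrierDomain"),
        "reachable": [e.text for e in root.iter("ReachCarrierDomain")],
        "max_delay_ms": float(root.findtext(".//Delay/QuantitativeMaximum")),
    }

est = parse_est(EST_XML)
```

Because the federation agrees on the template vocabulary, a parser like this can stay schema-driven rather than provider-specific.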
When a customer wants a service, he/she contacts the SO through the Customer Entry Interface (CEI), which can be a web portal or a client application. The CEI contacts the Operational Support Systems (OSS) to forward the customer request to the SO. The SO then searches the UDDI for service elements that can be composed to create a service path that satisfies the customer request.
To compose the service path, the SO uses its internal policies as guides. Considering the customer service requirements, the SO obtains a service policy into which these requirements fit. This policy dictates how the service must be composed by, among other things, stating the maximum (or minimum) number of service element types that must be part of the service and the QoS parameter values the service must guarantee. These private rules allow providers to offer services according to their business objectives.
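The policy lookup can be sketched as matching the customer requirements against each policy's guarantees. The paper does not specify the policy structure, so the policy names, fields and values below are invented for illustration.

```python
# Hypothetical service policies; the paper only states that a policy fixes
# element-type counts and QoS bounds, so names and values are invented.
POLICIES = {
    "silver":  {"guaranteed_bandwidth_kbps": 1000,  "guaranteed_delay_ms": 200},
    "premium": {"guaranteed_bandwidth_kbps": 10000, "guaranteed_delay_ms": 100},
}

def select_policy(requirements, policies=POLICIES):
    """Return the first (name, policy) pair whose guarantees cover the
    customer requirements: enough bandwidth, low enough delay."""
    for name, policy in policies.items():
        if (policy["guaranteed_bandwidth_kbps"] >= requirements["bandwidth_kbps"]
                and policy["guaranteed_delay_ms"] <= requirements["delay_ms"]):
            return name, policy
    return None, None

name, policy = select_policy({"bandwidth_kbps": 800, "delay_ms": 250})
```

In this sketch a request for 800 kbps tolerating 250 ms of delay falls into the cheaper silver policy; a heavier request would fall through to premium or be rejected.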
Once the SO obtains the service policy, it starts the service composition process by comparing all concave QoS parameters (e.g. bandwidth) of the ESTs found at the UDDI with those from the service policy. If a parameter offered by an EST does not satisfy the parameter value defined by the service policy, the service element associated with the EST is excluded from the composition. After that, the SO uses the reachability information of the CoE ESTs (see Figure 2) to create a graph representation containing all possible paths that can be formed by using the remaining service elements. Finally, the SO calculates the additive and multiplicative QoS parameter values (e.g. delay and packet loss, respectively) of each path and compares them to the values defined by the service policy. If any parameter does not fulfill the policy requirements, the path associated with this parameter is excluded. At the end of this process, a set of paths that can fulfill the policy requirements remains. The SO can then sort the paths according to some parameter and present the best option to the customer. For instance, the path with the cheapest price can be used to provide the service. It is worth stressing that the composition process described above is fully automatic.
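Assuming bandwidth is the only concave parameter and delay and packet loss are the additive and multiplicative ones, the three composition steps can be sketched as follows. All CoE offers, domains and bounds are invented for illustration and do not come from the paper's testbed.

```python
# Illustrative CoE offers: (id, source domain, reachable domains,
# bandwidth kbps, delay ms, packet-loss fraction).
COES = [
    ("coe1", "pt", ["fr"], 2000, 50, 0.01),
    ("coe2", "fr", ["uk"], 1500, 60, 0.02),
    ("coe3", "pt", ["uk"], 800,  30, 0.01),  # fails the bandwidth bound
]

POLICY = {"min_bandwidth_kbps": 1000, "max_delay_ms": 150, "max_loss": 0.05}

def compose_paths(src, dst, coes, policy):
    # Step 1: concave check -- drop elements below the bandwidth bound.
    usable = [c for c in coes if c[3] >= policy["min_bandwidth_kbps"]]
    # Step 2: depth-first search over the reachability graph.
    paths = []
    def walk(domain, path, visited):
        if domain == dst:
            paths.append(path)
            return
        for c in usable:
            if c[1] == domain and c[0] not in visited:
                for nxt in c[2]:
                    walk(nxt, path + [c], visited | {c[0]})
    walk(src, [], set())
    # Step 3: additive (delay) and multiplicative (loss) checks per path.
    ok = []
    for p in paths:
        delay = sum(c[4] for c in p)
        delivered = 1.0
        for c in p:
            delivered *= (1 - c[5])
        if delay <= policy["max_delay_ms"] and (1 - delivered) <= policy["max_loss"]:
            ok.append([c[0] for c in p])
    return ok

paths = compose_paths("pt", "uk", COES, POLICY)
```

Here coe3 is removed in step 1 despite reaching the destination directly, so the surviving path chains coe1 and coe2; its total delay (110 ms) and compound loss (about 3%) pass the policy checks.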
The approach of comparing individual parameter values to compose the service gives providers the possibility to define their service classes as they see fit. It is not necessary to create standard service classes that providers must apply to their service elements. For instance, Table I shows possible QoS parameter values for two service classes of a CoE from two different EOs. In this case, each EO published two templates (one for each service class).
TABLE I
QoS PARAMETER VALUES OF A CoE

Service class   QoS parameter   EO 1        EO 2
Basic           Jitter          10 ms       5 ms
Basic           Bandwidth       512 kbps    1024 kbps
Basic           Packet loss     5%          3%
Silver          Jitter          5 ms        3 ms
Silver          Bandwidth       1024 kbps   2048 kbps
Silver          Packet loss     3%          1%
In this example, considering that jitter, delay and bandwidth might be mandatory QoS parameters for CoEs, EOs may define the parameter values they think are most appropriate for any service class, which gives EOs the flexibility to differentiate their service elements from those of other EOs.
After the customer chooses one path to provide the service, the SO must reach agreements with all EOs that compose that path. To accomplish that, the SO creates a Service Level Agreement (SLA) to be established with each EO and with the customer, as illustrated in Figure 3.
Fig. 3. SLA establishment
In the case that all parties agree with the SLA terms, the EOs create configuration scripts according to the QoS requirements stated in the SLAs. These configuration scripts are enforced on the EOs' equipment so that the service can be provisioned. By following the process described in this section, providers are able to provide inter-domain services with QoS guarantees. However, they must also guarantee that the service provisioning can adapt, since contract violations, communication and infrastructure problems, and even modification requests can occur.
IV. QoS ADAPTATION PROCESS
The QoS adaptation process allows providers to perform modifications in inter-domain QoS-aware services that are being provisioned. These modifications aim to maintain the provisioning of the service, instead of terminating it, when technical problems or contract violations occur. The adaptation can also be carried out to satisfy a request from one of the players of the provisioning process.
During the provisioning process, the SLAs are the documents that formalize the proper service execution. SLAs are represented as XML documents and they define the QoS parameter thresholds the service must respect, along with the penalties incurred when these thresholds are violated. Moreover, the SLA also comprises other information such as the identification of the provider and the requestor of the service, monitoring methods, reporting schedule, and financial rates, among others. The terms used in the SLA, as well as in the templates, are well-defined and agreed among the members of the federation of providers, thus facilitating the information exchange and the automatic data handling. Figure 4 shows an example of an SLA established between an SO and an EO. Due to lack of space, this SLA presents only a subset of the overall information.
After all players of the provisioning process (customer, SO and EOs) have reached an agreement and the service is being executed, it is necessary to use some sort of monitoring approach to check if the service complies with the agreed
<SLA>
  <SLAId>10301</SLAId>
  <Business>
    <Service>
      <ServiceId>1043</ServiceId>
      <Type>ApplicationElement</Type>
      <Description>Video streaming</Description>
    </Service>
    <Parties>
      <RequestorId>SO_001</RequestorId>
      <ProviderId>EO_009</ProviderId>
    </Parties>
    <Financial>
      <Currency>$</Currency>
      <ActivationCharge>100</ActivationCharge>
      <InterruptionCharge>50</InterruptionCharge>
      <AdaptationCharge>30</AdaptationCharge>
    </Financial>
    <Violations>
      <Violation>
        <ViolationCode>V_001</ViolationCode>
        <ViolationParameter>Frame rate</ViolationParameter>
        <ViolationDescription>Minimum frame rate not guaranteed</ViolationDescription>
        <ViolationAction>Service adaptation</ViolationAction>
      </Violation>
    </Violations>
  </Business>
  <SLS>
    <ServiceParameters>
      <FrameRate>
        <Value>24</Value>
        <Unit>fps</Unit>
      </FrameRate>
      <Compression>
        <Value>MPEG-4</Value>
        <Unit>Unitless</Unit>
      </Compression>
    </ServiceParameters>
  </SLS>
</SLA>
Fig. 4. SLA example
requirements. It is not the purpose of this paper to propose a monitoring solution, since this is a research topic of its own. Therefore, we assume that the SO contracts a trusted third-party entity (e.g. a member of the federation) to monitor the service (Figure 5), which is a well-accepted solution [20]. In this case, a monitor agent (MA) monitors the traffic between service elements, using the information comprised in an SLA established between the SO and the monitoring entity. The agent is aware of the parameter thresholds that it must check for a specific service and is responsible for sending alert messages to the SO at a pre-defined time interval or in the case of threshold violations, which can cause an adaptation.
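A monitor agent of this kind can be sketched as a simple threshold checker over periodic traffic samples. The parameter names and thresholds below are illustrative (they echo the frame-rate violation of the SLA example), not the actual MA implementation.

```python
# Thresholds the MA learned from its SLA with the SO
# (parameter names and values are illustrative).
THRESHOLDS = {"frame_rate_fps": {"min": 24}, "jitter_ms": {"max": 3}}

def check_sample(sample, thresholds):
    """Return an alert message for every threshold the sample violates."""
    alerts = []
    for param, bounds in thresholds.items():
        value = sample.get(param)
        if value is None:
            continue  # parameter not present in this sample
        if "min" in bounds and value < bounds["min"]:
            alerts.append({"param": param, "value": value, "kind": "below_min"})
        if "max" in bounds and value > bounds["max"]:
            alerts.append({"param": param, "value": value, "kind": "above_max"})
    return alerts

# A sample whose frame rate dips below the agreed minimum.
alerts = check_sample({"frame_rate_fps": 20, "jitter_ms": 2}, THRESHOLDS)
```

Each non-empty alert list would be wrapped in a sendAlert message to the SO, which may then trigger the adaptation process.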
Fig. 5. Service monitoring
The adaptation can be triggered by three different situations:
• Violation messages from monitor agents: Third-party agents contracted to monitor the service can send messages to the SO if they detect SLA violations;
• Requests from customers: A customer may request adaptations of the service according to his/her needs. For instance, a service level downgrade can be requested due to financial restrictions;
• Requests from EOs that are providing service elements: An EO may request an adaptation due to technical problems (e.g. heavy traffic conditions, equipment malfunctions, etc.) or business demands.
To better understand the process triggered by the aforementioned situations, Figure 6 presents a sequence diagram with the actions performed during the QoS adaptation. By using the SLA identification included in the adaptation request (sent by the customer or the EO) or in the alert message (sent by the MA), the SO is able to identify the service that it must adapt. After the SO identifies the service, it must define the new parameters of the service elements that compose the service (defineElementParams). For instance, a customer may request a QoS level upgrade of his/her video streaming service from the Silver to the Premium class. The SO then searches its service policies for the new parameter values the Premium video service must guarantee (e.g. a bandwidth of 10 Mb/s instead of 5 Mb/s).
Once the SO determines the new parameter values, it sends confirmation messages to the EOs (confirmElementParams) to confirm whether they can guarantee the new values or not. It is worth mentioning that the SO may not send confirmation messages to some EOs if the new parameter values still fit the current configuration of the service elements of these EOs. Considering the above video streaming upgrade example, if the CoEs are already configured to guarantee a maximum bandwidth of 10 Mb/s, there is no need to reconfigure them.
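The decision of which EOs actually receive a confirmElementParams message reduces to comparing each element's current configuration with the new target values. The element names and numbers below are illustrative, loosely mirroring the upgrade example.

```python
# Current configurations of the service elements and the new target values
# (all names and numbers are illustrative).
CURRENT = {
    "AcE1": {"bandwidth_mbps": 5},
    "CoE1": {"bandwidth_mbps": 10},   # already covers the Premium target
    "CoE2": {"bandwidth_mbps": 10},
    "ApE1": {"bandwidth_mbps": 5},
}
TARGET = {"bandwidth_mbps": 10}

def eos_to_confirm(current, target):
    """Only EOs whose configuration falls short of the target get a
    confirmElementParams message; the others are left untouched."""
    return sorted(eo for eo, cfg in current.items()
                  if any(cfg.get(p, 0) < v for p, v in target.items()))

affected = eos_to_confirm(CURRENT, TARGET)
```

Skipping the already-compliant CoEs is exactly what keeps the adaptation time from growing proportionally with the path length.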
After the EOs receive the confirmation message from the SO, they check if they can assure the new parameter values (isResourceAvailable). This is done by using a simple reservation-in-advance scheme. Each EO has a database containing information about its resources. When receiving the confirmElementParams message, the EO verifies whether its resources can meet the requirements. If so, it marks its resources as booked and responds affirmatively to the confirmation message. Although this is a simple booking scheme, the approach can support the deployment of more advanced ones. When all EOs respond affirmatively, the SO recomposes the SLAs (recomposeSLA) previously established during the composition process. In this step, all agreements between the SO and the EOs that are affected by the QoS adaptation are reformulated, as well as the agreement between the SO and the customer. Obviously, the penalties associated with the adaptation process defined in the SLAs, such as financial compensations, are applied. However, although billing is an important aspect that must be handled in inter-domain provisioning, it is out of the scope of this work.
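The reservation-in-advance scheme can be sketched as a capacity counter per resource. The class, method name and capacity figures below are illustrative, not the EOs' actual database layout.

```python
# A minimal reservation-in-advance scheme, as described in the text.
# Resource granularity and capacities are illustrative.
class ResourceDB:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.booked = 0

    def is_resource_available(self, extra_mbps):
        """Book the extra bandwidth if the remaining capacity covers it;
        the booking is the affirmative answer to confirmElementParams."""
        if self.booked + extra_mbps <= self.capacity:
            self.booked += extra_mbps
            return True
        return False

db = ResourceDB(capacity_mbps=100)
first = db.is_resource_available(60)   # booked
second = db.is_resource_available(60)  # rejected: only 40 Mb/s left
```

A real scheme would also release bookings when an adaptation is aborted, which this sketch omits.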
When the new SLAs are finalized, the SO sends them to the customer and to all EOs that were affected by the
[Sequence diagram participants: customer, SO, EO 1 ... EO n, MA; requestAdaptation and sendAlert arrive as asynchronous messages]
1 - defineElementParams
2 - confirmElementParams
3 - isResourceAvailable
4 - recomposeSLA
5 - establishSLA
6 - reconfigureElement
7 - reconfigureResource
Fig. 6. QoS adaptation sequence diagram
adaptation request (establishSLA). At this moment, all players that received the new SLAs must decide whether they agree with the SLA terms or not. If so, the SO sends a reconfiguration request (reconfigureElement) to the EOs. The EOs then create and enforce a configuration script with the new parameter values on their resources (reconfigureResource). At the end of this process, the adapted service is ready to continue its execution.
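Putting the steps together, the SO-side orchestration might look like the following sketch, where the Web Service calls over the BL are replaced by plain callbacks. Only the step names come from the sequence diagram; every other identifier and return convention is invented.

```python
# SO-side orchestration of the adaptation steps (1-7 in the sequence
# diagram's terms), with EO and customer interactions stubbed out.
def adapt_service(sla_id, define, confirm, recompose, establish, reconfigure):
    params = define(sla_id)                      # 1 defineElementParams
    affected = confirm(params)                   # 2-3 confirm + booking
    if affected is None:
        return "aborted"                         # some EO refused the values
    slas = recompose(sla_id, params, affected)   # 4 recomposeSLA
    if not establish(slas):                      # 5 establishSLA
        return "rejected"                        # a party declined the terms
    for eo in affected:
        reconfigure(eo, params)                  # 6-7 reconfigure element/resource
    return "adapted"

# Stub callbacks for a dry run.
log = []
result = adapt_service(
    "10301",
    define=lambda s: {"bandwidth_mbps": 10},
    confirm=lambda p: ["AcE1", "ApE1"],
    recompose=lambda s, p, a: ["sla-new"],
    establish=lambda slas: True,
    reconfigure=lambda eo, p: log.append(eo),
)
```

Note that reconfiguration only reaches the affected EOs, in line with the partial-adaptation behavior described above.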
Figure 7 summarizes the adaptation process by showing a service upgrade request example. Suppose a customer requests the upgrade of his/her video streaming service from the Silver to the Premium class. Each service class has its own set of parameter values (for the sake of simplicity, only four parameters are considered). There are four service elements interacting in order to provide the service: AcE1, CoE1, CoE2 and ApE1. Each pattern in the small rectangles represents the current configuration of the service element. As can be seen in the figure, only AcE1 and ApE1 need to be adapted. Since CoE1 and CoE2 still support the Premium service requirements, they are not affected by the adaptation process.
The aforementioned set of operations enables providers to adapt QoS-aware services in an automatic and dynamic fashion. This feature enhances the management of inter-domain services, since providers do not need to manually interfere
[Figure: service elements AcE1, CoE1, CoE2 and ApE1 before and after adaptation; only AcE1 and ApE1 are reconfigured]
Silver service: Jitter 3 ms; Bandwidth 5 Mb/s; Encoding MPEG-4; Frame rate 20 fps
Premium service: Jitter 1 ms; Bandwidth 10 Mb/s; Encoding MPEG-4; Frame rate 24 fps
Fig. 7. QoS adaptation representation (service upgrade)
in the service provisioning every time an adaptation request is performed, thus considerably decreasing operational times. Moreover, depending on the adaptation that is required, only a subset of the EOs that compose the service need to adapt their service elements, which also decreases the operational times. The approach also handles adaptation requests related to business issues and not only when technical problems occur.
The next section presents performance results from a testbed running a service adaptation use case.
V. QoS ADAPTATION SIMULATION RESULTS
To validate the QoS adaptation process, a prototype was developed using the Java programming language and Web Services technology. Performance tests were undertaken in an inter-domain testbed use case. In this use case, a request is performed by the customer to adapt a video streaming service running in a BGP/MPLS Virtual Private Network (VPN). Figure 8 shows the testbed topology, which is composed of two intermediary domains, represented by CoEs, and two endpoint domains, represented by one ApE and one AcE. According to RFC 4364 [21] terminology, there are two Customer Edge (CE) routers, two Provider Edge (PE) routers and two Autonomous System Border Routers (ASBR). This topology illustrates a realistic scenario, since it has been verified that most of the inter-domain traffic (80% to 90%) exchanged by an ISP travels only a few AS hops away (two to four hops) [22]. The testbed was deployed on five PCs. One PC hosts the SO while the other PCs host the EOs (each PC hosts one EO). The Dynamips [23] router emulator was used to emulate Cisco 7200 routers, while Dynagen [24] was used as a front-end.
[Topology: Endpoint 1 (AcE), ISP1 (CoE), ISP2 (CoE), Endpoint 2 (ApE)]
Fig. 8. Testbed topology
During the service execution, the SO starts the QoS
adaptation process triggered by a customer request. The first
performance test aims to verify the time the SO takes to handle
an adaptation as the number of EOs to adapt increases. Figure
9 shows the times measured in four different scenarios, each
represented by a curve; the number of EOs affected by the
adaptation varies from one to four across the scenarios. In each
scenario, 2,000 successive adaptation requests were performed,
and after every 200 requests the average time to handle them was
measured (each point on the X axis).
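The windowed averaging used to produce each plotted data point can be sketched as follows (a Python illustration with made-up timing values; the actual prototype was implemented in Java):

```python
# Average the handling times of 2,000 successive requests in
# windows of 200, yielding the 10 data points plotted per curve.
def windowed_averages(times_ms, window=200):
    """Return the average of each consecutive window of samples."""
    return [sum(times_ms[i:i + window]) / window
            for i in range(0, len(times_ms), window)]

# Example: 2,000 simulated handling times (a constant 2285 ms here,
# purely illustrative) collapse into 10 averaged points.
samples = [2285.0] * 2000
points = windowed_averages(samples)
assert len(points) == 10
```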
The chart shows that as the number of EOs increases, the
time to adapt the QoS of the service increases as well. This is
expected behavior: more service elements to adapt mean more SLA
recomposition and establishment and more equipment
reconfiguration. However, the increase is not proportional to the
increase in the number of EOs. The average time to handle an
adaptation request is 2285 ms when one EO is affected and 2540 ms
when four EOs are affected, which
Fig. 9. Service adaptation times (increasing number of EOs)
shows the good scalability of the approach, especially
considering that most inter-domain traffic crosses only two to
four hops. Moreover, adapting a service in under 3 seconds is an
extremely low processing time compared to the time required when
manual intervention is involved.
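From the reported averages one can estimate the marginal cost of each additional EO, which makes the sub-linear growth concrete (a back-of-the-envelope check, not part of the original evaluation):

```python
# Reported average adaptation times from Figure 9.
one_eo_ms = 2285    # one affected EO
four_eos_ms = 2540  # four affected EOs

# If times grew proportionally, four EOs would take 4 * 2285 = 9140 ms.
proportional_ms = 4 * one_eo_ms

# Instead, each additional EO adds only about 85 ms on average.
marginal_ms = (four_eos_ms - one_eo_ms) / 3
print(marginal_ms)              # 85.0
print(four_eos_ms / one_eo_ms)  # ~1.11, far below the 4x of proportional growth
```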
The second test aims to verify the time an EO takes to handle an
adaptation as the number of requesting SOs increases. Four
scenarios were used in this test, with one, two, three and four
SOs, respectively. In each scenario, 2,000 successive adaptation
requests were generated by each SO; in the scenarios with more
than one SO, the requests from different SOs were issued
simultaneously. Figure 10 shows the results, where each curve
represents a scenario. After every 200 successive requests, the
average time to handle them was measured (each point on the X
axis).
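The concurrent-SO load pattern of this test can be modeled as follows (a simplified Python sketch with a mock EO; the names and the lock-based handler are illustrative assumptions, as the prototype itself used Java and Web Services):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Mock EO: handles one adaptation request at a time; the lock
# models serialized access to SLA records and equipment state.
class MockEO:
    def __init__(self):
        self._lock = threading.Lock()
        self.handled = 0

    def handle_adaptation(self, so_id, request_id):
        with self._lock:
            self.handled += 1
            return (so_id, request_id)

def run_scenario(num_sos, requests_per_so=2000):
    """Each SO issues its requests concurrently against one EO."""
    eo = MockEO()

    def so_workload(so_id):
        for r in range(requests_per_so):
            eo.handle_adaptation(so_id, r)

    with ThreadPoolExecutor(max_workers=num_sos) as pool:
        for so_id in range(num_sos):
            pool.submit(so_workload, so_id)
    return eo.handled

# e.g. the four-SO scenario issues 4 * 2000 = 8000 requests in total
assert run_scenario(4, requests_per_so=2000) == 8000
```

With more SOs contending for the same EO, per-request handling time rises, which is the effect the second test measures.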
Fig. 10. Service adaptation times (increasing number of SOs)
The results in Figure 10 show that as the number of SOs
performing requests increases, the time an EO takes to handle one
request also increases, which is again anticipated behavior. As
in the first test, the increase in time is not proportional to
the increase in the number of SOs, attesting to the efficiency of
the approach. Moreover, the time spent by the EO to adapt its
service element, including equipment configuration, is
considerably low (about 1 second).
VI. CONCLUSION
This paper presented an approach for QoS adaptation in
inter-domain services, developed in the context of a QoS model
for business layers with the GBF. The approach is based on SOA
principles, where a service is composed of smaller services owned
by different providers. It aims to enhance inter-domain service
management by enabling providers to adapt QoS-aware services in
an automatic and dynamic manner. Using this approach, a provider
can handle a QoS adaptation triggered by requests from customers
or from other providers, or by alert messages from monitoring
agents. The QoS adaptation process determines the new QoS
parameter values the service must satisfy and contacts each
provider affected by the adaptation, sending it the new parameter
values. These providers then reconfigure their resources in order
to continue the service provisioning under the new requirements.
An advantage of the approach is that only affected providers need
to reconfigure their equipment: if a provider's current equipment
configuration already supports the new requirements, there is no
need to contact that provider.
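This selective dispatch can be sketched as follows (a Python illustration with hypothetical types, parameter names and threshold semantics; the actual SO/EO interaction uses Web Services):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    # Current guarantees of this provider's service element, e.g.
    # {"bandwidth_mbps": 10, "max_delay_ms": 50} (illustrative keys).
    config: dict

    def supports(self, required):
        """True if the current configuration already meets the new
        bandwidth floor and delay ceiling (hypothetical semantics)."""
        return (self.config["bandwidth_mbps"] >= required["bandwidth_mbps"]
                and self.config["max_delay_ms"] <= required["max_delay_ms"])

def adapt_service(providers, new_params):
    """Contact only the providers whose configuration does not
    already satisfy the new QoS parameter values."""
    contacted = []
    for p in providers:
        if not p.supports(new_params):
            p.config.update(new_params)   # stands in for SLA renegotiation
            contacted.append(p.name)      # and equipment reconfiguration
    return contacted

path = [Provider("AcE", {"bandwidth_mbps": 20, "max_delay_ms": 40}),
        Provider("CoE-1", {"bandwidth_mbps": 10, "max_delay_ms": 80}),
        Provider("ApE", {"bandwidth_mbps": 8, "max_delay_ms": 60})]
# Upgrade request: only CoE-1 and ApE fall short and are contacted;
# AcE already satisfies the new parameters and is skipped.
print(adapt_service(path, {"bandwidth_mbps": 15, "max_delay_ms": 50}))
```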
To evaluate the proposed approach, tests were undertaken in
an inter-domain scenario using the GBF prototype developed
along with the QoS model. The results showed that the time to
adapt a service is considerably low compared to the cumbersome
human-based interactions between providers and the manual
configuration of equipment. The approach also presented good
scalability: as more EOs were added and more requests were
performed (by increasing the number of SOs), the increase in the
time to adapt a service was small.
Plans for future work include the analysis of methods for
monitoring the service and the study of mechanisms to deal
with billing issues (e.g. financial compensation among the
players involved in the adaptation process). The analysis of
storage needs is also the subject of further study.
ACKNOWLEDGMENT
This work was partially funded by FCT (scholarship
contracts SFRH/BD/16261/2004 and SFRH/BD/29103/2006).
REFERENCES
[1] R.L. Vianna, E.R. Polina, C.C. Marquezan, L. Bertholdo, L.M.R. Tarouco, M.J.B. Almeida, and L.Z. Granville. An evaluation of service composition technologies applied to network management. In Integrated Network Management, 2007. IM '07. 10th IFIP/IEEE International Symposium on, pages 420-428, 2007.
[2] P. Spiess, S. Karnouskos, D. Guinard, D. Savio, O. Baecker, L. Souza, and V. Trifa. SOA-based integration of the internet of things in enterprise services. In Proceedings of the 2009 IEEE International Conference on Web Services, pages 968-975. IEEE Computer Society, 2009.
[3] Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lindner. A break in the clouds: towards a cloud definition. ACM SIGCOMM Computer Communication Review, 39(1):50-55, 2009.
[4] T. Hau, N. Ebert, A. Hochstein, and W. Brenner. Where to start with SOA: criteria for selecting SOA projects. In Hawaii International Conference on System Sciences, Proceedings of the 41st Annual, page 314, 2008.
[5] IPsphere Forum. IPsphere framework technical specification (Release 1), June 2007.
[6] TMF. Service delivery framework reference architecture. http://www.tmforum.org/ServiceDeliveryFramework/4664/home.html, 2009.
[7] M. Salehie and L. Tahvildari. Self-adaptive software: Landscape and research challenges. ACM Transactions on Autonomous and Adaptive Systems, 4(2):1-42, 2009.
[8] F. Matos, A. Matos, P. Simoes, and E. Monteiro. A QoS model for business layers. In The 24th International Conference on Information Networking (ICOIN 2010), Busan, Korea, January 2010.
[9] A. Matos, F. Matos, P. Simoes, and E. Monteiro. A framework for the establishment of inter-domain, on-demand VPNs. In Network Operations and Management Symposium, 2008. NOMS 2008. IEEE, pages 232-239, Salvador, Brazil, April 2008.
[10] T. Ahmed, R. Boutaba, and A. Mehaoua. A measurement-based approach for dynamic QoS adaptation in DiffServ networks. Computer Communications, 28(18):2020-2033, 2005.
[11] A.M. Hadjiantonis, M. Charalambides, and G. Pavlou. An adaptive service management framework for wireless networks. Vehicular Technology Magazine, IEEE, 2(3):6-13, 2007.
[12] L. Cruvinel, T. Vazao, F. Silva, and A. Fonseca. Dynamic QoS adaptation for multimedia traffic. In Computer Communications and Networks, 2008. ICCCN '08. Proceedings of 17th International Conference on, pages 1-7, 2008.
[13] M. Mu, E. Cerqueira, F. Boavida, and A. Mauthe. Quality of experience management framework for real-time multimedia applications. Int. J. Internet Protoc. Technol., 4(1):54-64, 2009.
[14] L. Skorin-Kapov, M. Mosmondor, O. Dobrijevic, and M. Matijasevic. Application-level QoS negotiation and signaling for advanced multimedia services in the IMS. Communications Magazine, IEEE, 45(7):108-116, 2007.
[15] F. Matos, A. Matos, P. Simoes, and E. Monteiro. A service composition approach for inter-domain provisioning. In 6th International Conference on Network and Services Management (CNSM), Niagara Falls, Canada, October 2010.
[16] TEQUILA Consortium. Tequila - D1.1: Functional architecture definition and top level design, 2000.
[17] M. Georgievski and N. Sharda. A taxonomy of QoS parameters and applications for multimedia communications. In International Conference on Internet and Multimedia Systems and Applications, IMSA 2003, pages 13-15, Kauai, Hawaii, USA, August 2003.
[18] C. Bouras and A. Sevasti. Service level agreements for DiffServ-based services' provisioning. Journal of Network and Computer Applications, 28(4):285-302, 2005.
[19] G. Dobson and A. Sanchez-Macian. Towards unified QoS/SLA ontologies. In Services Computing Workshops, 2006. SCW '06. IEEE, pages 169-174, 2006.
[20] P. Jacobs and B. Davie. Technical challenges in the delivery of interprovider QoS. IEEE Communications Magazine, 43(6):112-118, June 2005.
[21] E. Rosen and Y. Rekhter. BGP/MPLS IP virtual private networks (VPNs). RFC 4364, February 2006.
[22] B. Quoitin, C. Pelsser, L. Swinnen, O. Bonaventure, and S. Uhlig. Interdomain traffic engineering with BGP. Communications Magazine, IEEE, 41(5):122-128, 2003.
[23] Dynamips Cisco 7200 simulator. http://www.ipflow.utc.fr/index.php/Cisco_7200_Simulator.
[24] Dynagen. http://dynagen.org/.