
Computers & Operations Research 32 (2005) 1147–1164
www.elsevier.com/locate/dsw

Web server QoS models: applying scheduling rules from production planning

Nong Ye∗, Esma S. Gel, Xueping Li, Toni Farley, Ying-Cheng Lai
Department of Industrial Engineering, Arizona State University, P.O. Box 875906, Tempe, AZ 85287, USA

    Abstract

Most web servers, in practical use, use a queuing policy based on the Best Effort model, which employs the first-in-first-out (FIFO) scheduling rule to prioritize web requests in a single queue. This model does not provide Quality of Service (QoS). In the Differentiated Services (DiffServ) model, separate queues are introduced to differentiate QoS for separate web requests with different priorities. This paper presents web server QoS models that use a single queue, along with scheduling rules from production planning in the manufacturing domain, to differentiate QoS for classes of web service requests with different priorities. These scheduling rules are Weighted Shortest Processing Time (WSPT), Apparent Tardiness Cost (ATC), and Earliest Due Date. We conduct simulation experiments and compare the QoS performance of these scheduling rules with the FIFO scheme used in the basic Best Effort model with only one queue, and the basic DiffServ model with two separate queues. Simulation results demonstrate better QoS performance using WSPT and ATC, especially when requested services exceed the capacity of a web server.
© 2003 Elsevier Ltd. All rights reserved.

    Keywords: Web server; Quality of Service (QoS); Scheduling rules; Simulation model

    1. Introduction

The World Wide Web has become one of the most popular and important applications on the Internet. A web server receives numerous requests for its services, which it cannot handle at the same time, and will typically use a buffer or queue to store incoming requests awaiting service. Requests in the queue are typically stored in order of arrival. The web server will take the request at the front of the queue, and service it first. This is an example of first-in-first-out (FIFO) scheduling. Most existing web servers provide services based on the Best Effort model, using FIFO scheduling.

∗ Corresponding author. Tel.: +1-480-965-7812; fax: +1-480-965-8692. E-mail address: [email protected] (N. Ye).

0305-0548/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/j.cor.2003.10.001



Quality of Service (QoS) has three main attributes: timeliness, precision, and accuracy [1,2]. A service request can be considered a process with an input and output. Timeliness measures how fast an output is produced for a given input. Precision measures the quantity, and accuracy measures the quality, of the output. We often see such measures as delay, response time, and jitter used for the timeliness attribute; throughput, bandwidth, and loss rate (e.g., packet drop rate) used for the precision attribute; and error rate used for the accuracy attribute. Tradeoffs often need to be made among these attributes.

The Best Effort model of a web server does not provide QoS because the completion time of a request depends on how many requests are already in the queue, and thus cannot be predicted for timeliness assurance. Furthermore, in any set of web requests, some may have a high priority, while others have a low priority. Bhatti and Friedrich [3] discuss classifying web requests into high, medium, and low priorities using such information as IP addresses and requested web sites. The Best Effort model does not support the differentiation of requests based on their priority. Instead, requests are treated “fairly” based on their arrival time.

The lack of QoS in the Internet today makes it a highly vulnerable system in its current role of supporting critical operations in many sectors of our society. QoS has been considered as a “must” for the next-generation Internet [4,5]. Existing work on QoS for the Internet falls into two general frameworks: Integrated Services (IntServ) and Differentiated Services (DiffServ), both of which are concerned mostly with timeliness assurance.

In the IntServ framework, applications are placed in two classes based on timeliness requirements: real-time (hard or strict) and elastic (soft) [6–11]. IntServ aims to provide per-flow QoS by reserving bandwidth along a source–destination path to assure timeliness of data delivery. Many scheduling rules and admission control strategies have been proposed in the IntServ framework to assure a bound on end-to-end timeliness [12–15]. Because it requires that every hop on the end-to-end path maintain the state of all bandwidth reservations for each Internet connection, IntServ is not scalable. Furthermore, a bound on end-to-end timeliness usually depends on the number of hops on an end-to-end path, which is difficult to predict, implying that there can exist no absolute bound on end-to-end timeliness.

The DiffServ framework marks data at the edges of a network with two classes of priority: premium and best effort, which, respectively, have a high and low priority for being serviced [16–18]. In a router or host, two separate queues with different capacities are used, one for each class of data. Because the vast majority of data is classified as best effort, the capacity of the premium queue is usually much smaller. Data in the premium queue is served first, whereas data in the best effort queue is served only when the premium queue is empty. Lu et al. [19] use a feedback control method to address the relative delay or jitter in a web server for assuring QoS. Some studies use load balancing to address QoS [20–22]. Still other QoS methods based on DiffServ have been explored [23–25].

Research in providing QoS on a web server takes different approaches. Ferrari [26] investigates the effect of aggregation on performance using Priority Queuing (PQ) and Weighted Fair Queuing (WFQ) scheduling algorithms. Chen and Mohapatra [27] exploit the dependence among session-based requests and propose a dynamic weighted fair sharing (DWFS) scheduling algorithm to control overloads in web servers.

We investigate Internet QoS by providing QoS on web servers. We use only one queue for all web requests, and an advanced scheduling rule to differentiate services and improve overall QoS.


Many advanced scheduling rules have been developed for production planning in the manufacturing domain [28]. If we consider a web server as a production system, those scheduling rules can be adopted. In this study, we investigate the application of three scheduling rules from production planning. These scheduling rules are Weighted Shortest Processing Time (WSPT), Apparent Tardiness Cost (ATC), and Earliest Due Date (EDD). EDD is also known as Earliest Deadline First (EDF) [29]. We compare the QoS performance of these rules with that of FIFO in the basic Best Effort model with only one queue, and that of a basic DiffServ model for a web server with two separate queues.

The rest of this paper is organized as follows: First, we define the QoS models of a web server using WSPT, ATC, and EDD, along with basic Best Effort and DiffServ models. Next, we describe the simulation of these web server models using a commercial simulation tool for computer networks (OPNET), and the simulation experiments conducted under various data conditions to discover the QoS performance of the models. Finally, we present the simulation results and summarize our findings.

    2. Web server QoS models

We introduce high-level frameworks for web server QoS models to provide QoS at the application layer. The units queued in these frameworks are considered complete web requests. Similar to real-time applications where jitter and delay are critical to performance, timeliness is also important to web requests, especially for some commercial web sites where a late response may cause clients to turn to rival web sites. With the increasing development of e-commerce, it is important to provide timeliness on web sites to attract users; loss of users means loss of profit. Our study focuses on the server side of a web site. Any incoming request is treated as a new request, regardless of whether it is a resubmitted, previously dropped request or an entirely new request.

QoS requirements on the Internet primarily come from end users. A web server provides services to end users, who can specify their QoS requirements for these services. Web requests on the Internet today do not come with any indication of their priority or other QoS requirements, such as expected delay. We anticipate that in the future web requests will be accompanied by such data in order to allow for QoS assurance on the Internet. In this study, we assume that each web request comes with the following information:

• Priority
• Requested data (from which we derive its size)
• Due date (required completion time of request)
• Arrival time (determined at time of arrival).

In our study, we assume that it is not entirely up to the web client to set the priority; clients cannot set priorities without web server authentication. Otherwise, everyone, including malicious users, would give themselves the highest priority. Clients’ access history, user name and password, IP address, and current status (e.g., a customer with a full shopping cart or with a purchasing history gains a higher weight than a customer without a purchasing history or a full cart) can be used to prioritize the requests. Web QoS allows the incoming requests to be categorized as high, medium, or low priority based on IP address, requested URL, and so on [5].


    Fig. 1. QoS model of a sequential web server.

The complete round trip time, or delay, of a web request includes time waiting in the queue, finding and retrieving requested data, and sending it to the web client. In this study, for each web request, we ignore the time spent finding and retrieving the requested data, and consider only the waiting time in the queue and the time sending requested data to the requesting web client.

In this paper, we model a web server as a sequential single server, which means that the web server processes incoming requests one at a time, as shown in Fig. 1. This concept is different from the traditional multiprocessor or multithreaded web server model. We argue that this is reasonable in terms of web server QoS. For example, if bandwidth were the performance bottleneck under scrutiny, then we would argue that high priority requests should not share bandwidth with lower priority requests, thereby requiring a multiprocessor or multithreaded solution.

In our study, we are addressing scheduling issues relating to the web server itself. Such solutions could be scalable to a multiprocessor or multithreaded web server, when applied at each level of redundancy. In this section we present the models used in our experiments.

2.1. The basic Best Effort model

The basic Best Effort model of a web server consists of three elements, as shown in Fig. 1: incoming web requests, a queue to sort and keep incoming requests before they are extracted for servicing, and a server to process incoming requests and provide web services. The basic Best Effort model uses only the arrival time to sort incoming requests in the queue using the FIFO rule. A request is dropped by the web server if it has been in the queue for longer than some “timeout” threshold (for example, 90 s).
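A minimal Python sketch of this FIFO-with-timeout behavior is given below. It assumes request objects that carry an `arrival` timestamp; the `TIMEOUT` constant and the `fifo_dequeue` helper are illustrative names, not part of the authors' simulation.

```python
from collections import deque

TIMEOUT = 90.0  # seconds a request may wait before it is dropped (example threshold)

def fifo_dequeue(queue: deque, now: float):
    """Best Effort service: take requests strictly in arrival order,
    dropping any request that has already waited longer than TIMEOUT."""
    while queue:
        req = queue.popleft()       # requests were appended in arrival order
        if now - req.arrival > TIMEOUT:
            continue                # timed out while waiting; drop it
        return req
    return None                     # queue is empty; the server goes idle
```

Incoming requests would simply be appended to the deque on arrival, with no admission decision.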

The Internet and most corporate intranets are built using the IP protocol, which is a connectionless protocol that provides no guarantees of service time or the relative ordering of packets. For the Best Effort web servers in such networks, there is no admission control scheme. Hence, if client requests are placed into the web server queue faster than they are removed for processing, congestion occurs, resulting in delayed requests and requests dropped due to the server’s timeout threshold. Thus, the Internet can only provide a single level of service: that of Best Effort.


Fig. 2. Basic DiffServ model.

2.2. The basic DiffServ model

Under the DiffServ policy, requests are categorized into priority classes and placed in separate queues. The server always processes the higher priority queue before serving any of the lower priority queues. DiffServ architectures generally define two types of classification: (1) the behavior aggregate classifier and (2) the multi-field classifier.


2.3. WSPT

Under the WSPT rule, incoming requests are sequenced in decreasing order of the priority value w_j/p_j, where w_j is the given priority weight for request j, and p_j is its service time, which is calculated from the size of the requested data, d_j, as follows:

p_j = d_j / s.   (1)

In Eq. (1), s is the service rate or data transmission rate (the amount of data that can be transmitted per time unit) of the web server. Note that the higher the weight and the shorter the service time of a web request, the higher its priority value will be.

When a new request arrives at the web server, its priority value is computed, and the request is then inserted into the queue according to this value. Before a new request is inserted into the queue, we first make an admission control decision based on its expected completion and due dates. Deferring an incoming request at the very beginning of the transaction, rather than in the middle, is a desirable scheme for an overloaded web server. First of all, it avoids further frustration on the client side by refusing to accept requests for which it cannot satisfy the respective QoS requirements (e.g., limits on cycle time, lateness, tardiness, etc.). Secondly, it keeps the queue levels relatively stable, resulting in less variable output.

QoS requirements are determined by a web request’s QoS factor. We define the QoS factor of an incoming request as:

Q_j = D_j - T_j - W_j.   (2)

In Eq. (2), D_j is the due date of web request j, T_j is its arrival time, and W_j is its predicted waiting time (i.e., the sum of the processing times of the requests ahead of request j). According to our assumed admission control scheme, if Q_j is less than zero, request j will be rejected.
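As a concrete illustration, the short Python sketch below implements Eq. (1), the QoS factor of Eq. (2), and WSPT ordering for a simple in-memory queue. It is a minimal sketch under the paper's assumptions; the `Request` fields, the `wspt_admit` helper, and the `SERVICE_RATE` constant are illustrative names rather than the authors' simulation code.

```python
from dataclasses import dataclass

SERVICE_RATE = 12_697_600.0   # bits per second; the queue service rate used later in Section 4

@dataclass
class Request:
    weight: float      # priority weight w_j
    size_bits: float   # requested data size d_j
    due_date: float    # absolute due date D_j
    arrival: float     # arrival time T_j

def service_time(req: Request) -> float:
    """Eq. (1): p_j = d_j / s."""
    return req.size_bits / SERVICE_RATE

def wspt_admit(queue: list, req: Request) -> bool:
    """Admission control (Eq. (2)) followed by WSPT insertion.

    The predicted waiting time W_j is the total service time of the
    requests already in the queue; a request with Q_j < 0 is rejected."""
    predicted_wait = sum(service_time(r) for r in queue)      # W_j
    qos_factor = req.due_date - req.arrival - predicted_wait  # Q_j = D_j - T_j - W_j
    if qos_factor < 0:
        return False                                          # reject (counts as a drop)
    queue.append(req)
    # WSPT: serve the request with the largest w_j / p_j first.
    queue.sort(key=lambda r: r.weight / service_time(r), reverse=True)
    return True
```

The server would then pop requests from the front of the sorted list whenever it becomes free.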

    2.4. ATC

The ATC rule combines WSPT and the Minimum Slack (MS) first rule [28]. MS is a dynamic dispatching rule that orders jobs in increasing order of slack, where the slack of job j at time t is defined as max{d_j - p_j - t, 0}, where d_j denotes the due date of job j and p_j denotes its processing time as in Eq. (1).

Under the ATC rule, jobs are scheduled one at a time; that is, every time the machine becomes free, a ranking index is computed for each remaining job. The job with the highest ranking index is then selected to be processed next. The index is defined as

I_j(t) = \frac{w_j}{p_j} \exp\left(-\frac{\max\{d_j - p_j - t,\, 0\}}{k\,\bar{p}}\right).   (3)

In Eq. (3), k is a scaling parameter that can be determined empirically, and \bar{p} represents the average processing time of the remaining jobs in the queue at time t. We can see that if k is big enough, the ATC rule will reduce to WSPT, since the ATC index I_j(t) \to w_j/p_j as k \to \infty. In this paper, we use the following equation in place of Eq. (3) to index incoming requests using the ATC rule:

I^*_j(t) = \frac{w_j}{p_j} \exp\left(-\frac{\max\{d_j - t,\, 0\}}{k\,\bar{p}}\right).   (4)


In Eq. (4), the processing time p_j is eliminated from the slack term because the slack of the request equals its due date minus its waiting time, which we previously defined as the time elapsed between the time a request comes into the web server and the time it gets processed.
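A minimal sketch of this indexing step, reusing the hypothetical `Request` and `service_time` helpers from the WSPT sketch above, might look as follows; the function name, the empty-queue fallback for the average processing time, and the default k = 100 (the value used later in the experiments) are assumptions for illustration.

```python
import math

def atc_index(req, now, queue, k=100.0):
    """Ranking index I*_j(t) of Eq. (4):
    (w_j / p_j) * exp(-max{d_j - t, 0} / (k * p_bar)).
    The request with the largest index is placed at the head of the queue."""
    p_j = service_time(req)
    # p_bar: average processing time of the requests currently waiting;
    # fall back to the request's own service time if the queue is empty (assumption).
    p_bar = sum(service_time(r) for r in queue) / len(queue) if queue else p_j
    slack = max(req.due_date - now, 0.0)
    return (req.weight / p_j) * math.exp(-slack / (k * p_bar))
```

Insertion by descending index then mirrors the WSPT insertion above, with the same admission control applied first.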

    2.5. EDD

Sequencing jobs in increasing order of due date (EDD) minimizes the maximum lateness of a set of jobs with given due dates and processing times [28]. While we know that EDD is optimal for the case in which all jobs to be scheduled are available at time zero, it is harder to find an optimal policy for the case in which jobs are released at different points in time, as is the case in a web server. This is due to the fact that when preemption of jobs is not allowed, the optimal schedule is not necessarily a non-idling schedule.

Having noted this point, we adopt a static EDD rule in our web server model by placing the incoming requests into the queue according to their due date. Requests with an earlier due date are placed in the front of the queue, and processed before requests with a later due date.
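One simple way to realize this ordering is a min-heap keyed on the due date, as in the hedged sketch below (the paper's simulation instead inserts by the reciprocal of the due date, as described in Section 4); the tie-breaking counter and function names are illustrative.

```python
import heapq
import itertools

_tiebreak = itertools.count()   # breaks ties so request objects are never compared directly

def edd_insert(heap, req):
    """Static EDD: keep a min-heap keyed on the due date, so the request
    with the earliest due date always sits at the head of the queue."""
    heapq.heappush(heap, (req.due_date, next(_tiebreak), req))

def edd_next(heap):
    """Pop the request with the earliest due date."""
    _, _, req = heapq.heappop(heap)
    return req
```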

    3. QoS measures

We define four QoS measures in this paper: number of dropped jobs per unit time (drop rate), average waiting time in the system, average lateness, and throughput.

Under the basic Best Effort and DiffServ policies, request drops only occur when the request’s waiting time in the queue reaches the timeout threshold. Under the WSPT, ATC, and EDD policies, request drops may happen at admission control, as well as when the waiting time exceeds the due date while requests are waiting in the queue. Waiting time is used to measure the responsiveness of the web server. Lateness represents the gap between when a request is completed and its due date, and can be negative or positive. A negative lateness indicates that a request completed before its due date, and a positive lateness indicates that a request was tardy. Lateness depicts how well the due date requirement of the request is met.
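The sketch below shows one way these four measures could be computed from simulation logs over a steady-state window (600–4000 s is the window used later in Section 5); the record layout and function name are assumptions, not the authors' instrumentation.

```python
def qos_measures(completed, dropped, window=(600.0, 4000.0)):
    """Drop rate, average waiting time, average lateness, and throughput.

    `completed` holds (finish_time, waiting_time, due_date, size_bits) records
    for processed requests and `dropped` holds drop times; both layouts are
    illustrative assumptions. Assumes the window contains completed requests."""
    lo, hi = window
    span = hi - lo
    done = [c for c in completed if lo <= c[0] <= hi]
    drops = [t for t in dropped if lo <= t <= hi]

    drop_rate = len(drops) / span                              # dropped requests per second
    avg_wait = sum(c[1] for c in done) / len(done)             # average waiting time (s)
    avg_lateness = sum(c[0] - c[2] for c in done) / len(done)  # completion minus due date (s)
    throughput = sum(c[3] for c in done) / span                # bits per second
    return drop_rate, avg_wait, avg_lateness, throughput
```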

    4. Simulation model and experiments

We simulate a web server under the policies discussed above using the OPNET Modeler 8.1.A simulation environment. The simulation experiments were conducted on a Micron PC with a single Pentium 4 1.9 GHz CPU and 512 MB of RAM running on the Windows 2000 operating system. Fig. 3 depicts the simulation model based on the web server system presented in Fig. 1. The three generator modules generate web requests with different priority weights. The forwarder module forwards the requests to the queue, where the admission control scheme is implemented. The requests are then placed into the queue based on the scheduling rule currently in use. The sink module destroys the requests after the web server processes them.

We define two fields in the data packet format: weight and due date. We assume that the requested data size follows a Pareto distribution with scale parameter 65536 and shape parameter 1.4 [31,32]. Thus the mean of the requested data size is 65536 × 1.4/(1.4 − 1) = 229,376 bits, about 28 Kbytes.


    Fig. 3. The topology of the QoS web server simulation.

We calculate the processing time by dividing the requested data size by a constant, deterministic service rate. For example, if the requested data is 6000 bytes and the service rate of the server is 240,000 bytes per second, the processing time for this request is assumed to be 0.025 s.

The weight field is a 4-bit integer that contains the weight information of the request. The weight value is recorded in the weight field of a request packet when the request is generated. There are multiple choices available to select the proper weight value [3]. For simplicity, we only define three priority classes. In Fig. 3, generators 1, 2, and 3 generate requests with weights 1, 5, and 10, respectively, corresponding to low, medium, and high priority requests.

The due date field in the packets shows the time a request is due. If an application cannot receive a response as required, its QoS requirements cannot be met. Of course, whether or not an application can get its response on time depends not only on how fast the web server processes the request, but also on the Internet transmission delay. How to guarantee a lower bound on delay between routers is beyond the scope of this paper. Hence, we focus on the due date, by which the web server is required to have processed a request, and assume that the client will receive its response once the server sends it. We use the same distribution to model the randomness of the due dates for all classes of requests because we assume that they request the same URLs. We also assume that the due dates are normally distributed with a mean of 2 s and a standard deviation of 0.2 s.
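To make the workload concrete, the sketch below generates one class of requests under these distributional assumptions (exponential interarrival times, Pareto document sizes, and due dates drawn relative to the arrival time, which is an assumption consistent with the reported lateness values). It reuses the hypothetical `Request` class from the WSPT sketch and is not the OPNET generator module.

```python
import random

def generate_requests(weight, mean_interarrival, duration, seed=0):
    """One generator module, sketched: exponential interarrival times,
    Pareto(scale 65536 bits, shape 1.4) document sizes, and due dates
    drawn as arrival + Normal(2 s, 0.2 s)."""
    rng = random.Random(seed)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)   # exponential interarrival
        if t >= duration:
            break
        size = 65536.0 * rng.paretovariate(1.4)         # mean 65536 * 1.4 / 0.4 = 229,376 bits
        due = t + rng.gauss(2.0, 0.2)                   # due date relative to arrival (assumption)
        requests.append(Request(weight=weight, size_bits=size, due_date=due, arrival=t))
    return requests

# Overwhelming traffic case (rates given below and in Table 1), simulated for 4000 s.
low    = generate_requests(weight=1,  mean_interarrival=0.025, duration=4000.0, seed=1)
medium = generate_requests(weight=5,  mean_interarrival=0.025, duration=4000.0, seed=2)
high   = generate_requests(weight=10, mean_interarrival=0.05,  duration=4000.0, seed=3)
```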

In our experiments, we use two traffic conditions: overwhelming and light. We use an exponential distribution to model randomness in the arrival of requests to the web server. For the overwhelming traffic condition, generators 1 and 2 generate requests at a rate of 40 requests per second, while generator 3 generates requests at a rate of 20 requests per second. Hence, the total traffic generated is equal to (40 + 40 + 20) × 229,376 bits per second. We set the queue service rate at 12,697,600 bits per second, about T1 speed of 1.55 Mbps, which is around 55.36% of the generated web traffic. The server can process about 55 requests per second.

In the light traffic case, the arrival rates for low and medium priority requests are reduced to 16 requests per second, while the arrival rate for the high priority requests is reduced to 8 requests per second. Hence, the total traffic generated reduces to only 72% of the traffic the web server can handle. In both the overwhelming and light traffic cases, we use 100 for the scaling parameter k in the ATC policy. We set the simulation duration to 4000 s. The parameters for our simulation experiments are given in Table 1.


Table 1
Simulation parameters

Traffic case    Class    Weight   Request interarrival time   Due date (s)     Document size (bits)
Overwhelming    LOW      1        Exponential(0.025)          Normal(2, 0.2)   Pareto(65536, 1.4)
                MEDIUM   5        Exponential(0.025)          Normal(2, 0.2)   Pareto(65536, 1.4)
                HIGH     10       Exponential(0.05)           Normal(2, 0.2)   Pareto(65536, 1.4)
Light           LOW      1        Exponential(0.0625)         Normal(2, 0.2)   Pareto(65536, 1.4)
                MEDIUM   5        Exponential(0.0625)         Normal(2, 0.2)   Pareto(65536, 1.4)
                HIGH     10       Exponential(0.125)          Normal(2, 0.2)   Pareto(65536, 1.4)

Fig. 4. FSM for sequential web server.

We use a Finite State Machine (FSM) to simulate processing in a web server. The model actually includes two FSMs. The ARRIVAL state itself is an FSM in which requests come into the queue based on QoS rules. The second FSM includes three states: START, DONE, and IDLE. The FSMs are shown in Fig. 4.

We describe the processing handled in each state of the FSM shown in Fig. 4 for each of the web server models in our investigation:

ARRIVAL state. Best Effort model: When a request comes in, it enters the ARRIVAL state, and will be placed into the queue using the FIFO rule.

Basic DiffServ model: After a request enters the ARRIVAL state, we get the priority ‘weight’ value from the packet and place it into the respective queue.

WSPT model: First we execute the admission control algorithm and drop any requests with a QoS factor less than zero. For new requests, we calculate their priority w_j/p_j, with the service time p_j from Eq. (1), and insert each request into the queue based on its priority.

ATC model: As an extension of WSPT, we also implement the admission control scheme at the very beginning of the ARRIVAL state. For a new incoming request, we calculate its index using Eq. (4) as its priority for queue placement. We set the scaling parameter to k = 100 in our simulation.


EDD model: We perform the admission control scheme when the process enters the ARRIVAL state. We extract the due date information of a new request from the packet’s due date field. We use its reciprocal as the priority and insert the request into the queue accordingly.

START state. When the server is not busy, a process enters the START state. In this state, we will remove a request from the head of the queue and schedule it based on its processing time. The FSM will then stay in the START state until the current request is processed.

DONE state. After a request is processed, the FSM enters the DONE state, and the server becomes free again. If there are no jobs waiting for processing, the FSM will go to IDLE; otherwise, the FSM will go back to START to process the waiting requests.

IDLE state. The server is free in the IDLE state. When there are no requests waiting in the queue, the FSM remains in the IDLE state. When a request comes into the ARRIVAL state and is inserted into the queue, the FSM will transition from IDLE to START.
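A compact Python sketch of this sequential-server loop, reusing the hypothetical `Request` class and `SERVICE_RATE` constant from the earlier sketches, is given below. The `queue_policy` object with `admit` and `pop_next` hooks is an assumed interface standing in for the ARRIVAL-state logic of each model, and drops due to due-date expiry while waiting are omitted for brevity.

```python
def run_server(queue_policy, requests, service_rate=SERVICE_RATE):
    """Sketch of the FSM in Fig. 4: ARRIVAL admits and inserts requests,
    START removes the head of the queue, DONE frees the server, and IDLE
    waits for the next arrival. Non-preemptive, one request at a time."""
    pending = sorted(requests, key=lambda r: r.arrival)
    queue, completed, dropped = [], [], []
    now, i = 0.0, 0

    while i < len(pending) or queue:
        # ARRIVAL: admit every request that has arrived by `now`.
        while i < len(pending) and pending[i].arrival <= now:
            if not queue_policy.admit(queue, pending[i], now):
                dropped.append(now)     # record the drop time (per-class bookkeeping omitted)
            i += 1
        if queue:
            # START: take the request at the head of the queue and serve it.
            req = queue_policy.pop_next(queue, now)
            finish = now + req.size_bits / service_rate
            completed.append((finish, now - req.arrival, req.due_date, req.size_bits))
            now = finish                # DONE: the server becomes free again.
        elif i < len(pending):
            now = pending[i].arrival    # IDLE: jump to the next arrival.
    return completed, dropped
```

The completed-request records use the same layout as the measurement sketch in Section 3, so the two sketches can be chained for a rough end-to-end experiment.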

    5. Experimental results

In this section, we provide detailed results on our simulation experiments for the heavy traffic and light traffic cases and provide a discussion of our results for each of the QoS performance metrics.

5.1. Overwhelming traffic case

We first present the results of the overwhelming traffic scenario. Due to the “bursty” nature of web traffic, it is not surprising that a web server can become overwhelmed even though its workload was quite small a minute earlier. A malicious attack that sends a high volume of requests to a web server is another source of overwhelming traffic. We argue that even under the overwhelming traffic condition, our QoS-enabled web server can still provide QoS.

The simulation results for this section are given in Table 2. In our tabulated results, STD stands for standard deviation, which is calculated over the time from 600 to 4000 s (a time interval in which the system is in a steady state). BE stands for the Best Effort policy and DS stands for the basic DiffServ policy. The number in each cell can also be considered a percentage of requests since the columns of each row total the actual number of requests.

5.1.1. Waiting time

The waiting time performance of the Best Effort, basic DiffServ, WSPT, ATC, and EDD policies is shown in Fig. 5. We take all processed requests into account and calculate the waiting time in the steady-state window.

We observe that the overall performance is dramatically enhanced under the ATC, EDD, and WSPT policies, as shown in Table 2. These policies employ an admission control scheme, which discards requests whose QoS requirements cannot be met before they are even placed in the queue. This results in a smaller average number of requests waiting in the queue and consequently a shorter waiting time. The average number of requests waiting in the queue is about 25 under the WSPT policy, and 67 under the EDD policy. Under Best Effort and basic DiffServ, the average number of waiting requests is about 8989 and 7279, respectively.


Table 2
Experimental results of the overwhelming traffic case

Priority class          BE          DS          WSPT        ATC         EDD

Waiting time of requests (s)
All      Mean      89.96725    56.39645    0.100961    0.104629    0.2742334
         STD       0.004698    7.965471    0.012236    0.013271    0.0301226
High     Mean      89.96703    3.776491    0.021563    0.022253    3.776491
         STD       0.005094    10.2084     0.003994    0.00413     10.2084
Medium   Mean      89.96749    89.95819    0.039224    0.040503    0.273149
         STD       0.00474     0.007214    0.005459    0.005846    0.03198
Low      Mean      89.96762    89.958      0.231369    0.240477    0.274335
         STD       0.004737    0.006525    0.029947    0.032699    0.032978

Requests dropped (requests per second)
All      Mean      45.34254    46.44422    10.44669    10.63292    46.50868
         STD       11.2525     12.06754    1.174351    1.191997    14.39948
High     Mean      9.001148    0.066765    0.180685    0.199215    9.21321
         STD       2.279903    0.612841    0.067248    0.075172    2.850334
Medium   Mean      18.24159    23.28894    1.001089    1.046383    18.75024
         STD       4.625595    6.022406    0.170591    0.17572     5.782265
Low      Mean      18.10011    23.08864    9.265216    9.387631    18.54573
         STD       4.4637      5.869901    1.061894    1.07052     5.851304

Lateness of requests (s)
All      Mean      87.964869   54.39429    −1.87864    −1.8751     −1.45155
         STD       0.0124608   7.964984    0.210136    0.2099      0.046376
High     Mean      87.96774    0.4149      −1.95503    −1.95416    −1.4516044
         STD       0.023457    10.20868    0.219513    0.219381    0.051434
Medium   Mean      87.96471    87.95174    −1.93868    −1.93742    −1.45239
         STD       0.01794     0.022036    0.217253    0.217093    0.047151
Low      Mean      87.96591    87.96192    −1.77454    −1.76515    −1.4509
         STD       0.016003    0.028651    0.022221    0.024757    0.049007

Throughput of requests (bits per second)
All      Mean      12696420    12696350    12697660    12697573    12697636
         STD       2968060     4493832     7463.187    5814.585    3727158
High     Mean      2311544     4241935     3396161     3373713     2290618
         STD       562505.6    1302286     186548.3    178843.3    670560.8
Medium   Mean      5015363     3963317     6083328     6073930     5023274
         STD       1321620     1529590     165117.2    162480      1967933
Low      Mean      5369555     4491137     3218229     3249982     5383762
         STD       2870234     2748046     158414.8    164200.2    3589084

The WSPT and ATC policies also contribute to a stabilized waiting time, as shown in Fig. 5. We notice that the variance of the overall waiting time under the basic DiffServ policy is quite large, which can also be seen in Table 2.


Fig. 5. Overwhelming traffic case: overall waiting time of requests.

Fig. 6. Overwhelming traffic case: overall drop of all (above) and high priority (below) requests.

5.1.2. Drop rate

Fig. 6 shows the overall drop rate of all requests and of high priority requests. We observe that the overall drop rate using Best Effort, basic DiffServ, and EDD is about 45 requests per second, as shown in Table 2. However, only about 10 requests per second are dropped under ATC and WSPT, about 22% of the Best Effort figure.


Fig. 7. Overwhelming traffic case: throughput of all (above) and high priority (below) requests.

This is because the WSPT and ATC policies take processing time into account and discriminate against requests with a long processing time.

In the high priority class of requests, the basic DiffServ policy has the lowest drop rate because the requests in the low priority queue can only be processed when the high priority queue is empty. We can see from the lower chart in Fig. 6 that some high priority requests may still be dropped. This can happen when the timeout threshold is reached, for example, when a high priority request comes into the system while a request that asked for a large document is being processed.

5.1.3. Lateness

From Table 2, we note that ATC, WSPT, and EDD can meet the lateness QoS requirement. On the other hand, Best Effort and basic DiffServ cannot. Under DiffServ, most of the high priority requests can meet the lateness QoS requirement (negative lateness); however, the lateness of some requests can be over 60 s (STD).

5.1.4. Throughput

The mean throughput of all requests is about 12,697,600 bits per second, the service rate of the server, because the web server is overwhelmed (see Table 2). We find that differences in the standard deviation of the throughput among the five policies are quite large, as shown in Fig. 7.

We observe from the lower chart in Fig. 7 that for high priority class requests, the basic DiffServ policy yields the highest throughput among the five models and EDD has the lowest. The more requests that are dropped, the lower the throughput, and vice versa.


For medium priority requests, WSPT has the highest throughput, about 21% more than Best Effort and 53% more than basic DiffServ. For low priority requests, WSPT and ATC have the lowest throughput and standard deviation (see Table 2). The Best Effort and EDD policies have the highest throughput; however, the standard deviations are quite large.

5.1.5. Requests waiting in queue

The average number of requests waiting in the queue under each of the policies is shown in Table 2. There are only 25 requests waiting in the queue under the WSPT and ATC policies, while EDD has about 67 requests. Best Effort and basic DiffServ have about 8989 and 7279 requests waiting in the queue, respectively.

5.2. Light traffic case

We now present the results of the light traffic scenario. The simulation results for this section are given in Table 3.

5.2.1. Waiting time

The overall waiting time under the ATC, Best Effort, basic DiffServ, EDD, and WSPT policies is shown in Fig. 8. WSPT and ATC have similar performance. The overall waiting time for Best Effort and basic DiffServ is more than 15 times longer than that for the WSPT and ATC policies.

From Table 3 and Fig. 8, we can see that the standard deviation of the waiting times of the Best Effort and basic DiffServ policies is very large, about 5.87 for Best Effort and 5.49 for basic DiffServ. Congestion can happen under the Best Effort and basic DiffServ models at some points, leading to longer waiting times. A request may need to wait more than 40 s before getting service.

5.2.2. Drop rate

Table 3 also includes the drop rate information. There are no requests dropped under the Best Effort and basic DiffServ policies. There is a tradeoff between waiting time and requests dropped: a small number of requests are dropped under the ATC, WSPT, and EDD policies, due to the admission control scheme; however, a shorter waiting time is gained.

5.2.3. Lateness

We find that, on average, there is no violation of due date requirements in the light traffic scenario under any of the policies, because the mean lateness, as shown in Table 3, is negative. But we should keep in mind that, as shown in Fig. 8, the waiting time of some requests can exceed 40 s, and thus the lateness can be positive.

5.2.4. Throughput

Since the incoming traffic and service rates are the same under all policies, they produce almost the same level of throughput in the light traffic scenario, because the system is stable and able to handle the incoming traffic. The throughput of the Best Effort and basic DiffServ policies is a little higher than that of the other policies because no requests are dropped, resulting in a higher arrival (and hence, throughput) rate.


Table 3
Experimental results of the light traffic case

Priority class          BE          DS          WSPT        ATC         EDD

Waiting time of requests (s)
All      Mean      1.530703    1.493246    0.092163    0.090235    0.124289
         STD       5.878742    5.493077    0.04996     0.046535    0.062083
High     Mean      1.529657    0.354413    0.068523    0.069562    0.126305
         STD       5.887196    1.552743    0.03778     0.036143    0.063747
Medium   Mean      1.530479    1.825389    0.078798    0.079292    0.121852
         STD       5.875447    6.939911    0.044733    0.042569    0.056926
Low      Mean      1.531522    1.827767    0.117916    0.112029    0.125791
         STD       5.878587    6.948821    0.065669    0.059874    0.070296

Requests dropped (requests per second)
All      Mean      0           0           1.352074    1.441486    1.659334
         STD       0           0           4.215406    4.232963    4.426103
High     Mean      0           0           0.217108    0.232718    0.312424
         STD       0           0           0.777847    0.781258    0.828979
Medium   Mean      0           0           0.50021     0.53609     0.676685
         STD       0           0           1.694942    1.698867    1.787868
Low      Mean      0           0           0.635015    0.672957    0.67051
         STD       0           0           1.746721    1.756716    1.810628

Lateness of requests (s)
All      Mean      −0.47009    −0.50748    −1.91247    −1.91452    −1.87361
         STD       5.881148    5.495734    0.047722    0.04479     0.068402
High     Mean      −0.47449    −1.65009    −1.93752    −1.9364     −1.87521
         STD       5.889278    1.557602    0.046556    0.045589    0.07022
Medium   Mean      −0.46995    −0.17499    −1.92551    −1.92505    −1.87535
         STD       5.877842    6.941919    0.044774    0.042893    0.065856
Low      Mean      −0.46819    −0.17192    −1.88658    −1.89277    −1.87127
         STD       5.881193    6.951396    0.063165    0.058128    0.076792

Throughput of requests (bits per second)
All      Mean      8995437     9022972     8599323     8592518     8624697
         STD       1564300     1543513     1198582     1187294     1203524
High     Mean      1713773     1719019     1651044     1644867     1636118
         STD       360182.2    333671.4    315147.4    314850.4    317698.7
Medium   Mean      3598025     3609039     3460339     3463858     3459718
         STD       883359.1    913816      747808.3    744651.2    746697.3
Low      Mean      3683673     3694949     3487975     3483828     3528896
         STD       1390708     1345147     1327450     1326310     1347935

5.2.5. Requests waiting in queue

There are about 3.6 requests waiting in the queue under the WSPT and ATC policies, 5.3 requests under EDD, 61.5 requests under basic DiffServ, and 62.2 requests under Best Effort. The smaller number of waiting requests in the queue contributes to the shorter waiting time.


Fig. 8. Light traffic case: overall waiting time.

    6. Conclusions and future work

In this paper, we demonstrate how to model a sequential web server as a single machine and apply WSPT, ATC, and EDD queuing disciplines to differentiate the services, and thus provide QoS. We compare these policies with the Best Effort and basic DiffServ policies.

We propose that most web servers can be modeled with the Best Effort model using the FIFO queuing discipline. We then verify that the Best Effort model cannot provide quality of service, especially in regard to waiting time. Furthermore, our results show that with the basic DiffServ model, the performance of high priority requests cannot be guaranteed because those requests cannot meet the lateness QoS requirement. Performance for the other classes of requests is no better than what the Best Effort policy provides. As we can see from our experimental results, the basic DiffServ policy cannot provide QoS for a web server.

We introduce an admission control scheme, which contributes to a tremendous improvement in performance. Thanks to the admission control scheme, the overall waiting times of the WSPT, ATC, and EDD models are much shorter than those of the Best Effort and basic DiffServ models. The waiting time of the Best Effort policy is about 900 times longer than that of WSPT in the overwhelming traffic case. In the light traffic case, the waiting time under Best Effort is also about 15 times longer than that under WSPT. This reveals that the admission control scheme is effective in maintaining timeliness for an overwhelmed web server.

We have shown that the WSPT and ATC dispatching rules can be used to provide differentiated services. From the simulation results, the performance of ATC is quite similar to that of WSPT when the scaling parameter k is set to 100. We also create other ATC scenarios with different scaling parameters and find that there is no significant difference if the scaling parameter k is larger than 10 in our simulation.


We can also safely conclude that our QoS models not only provide good performance when the server is overloaded but also work well under a light traffic condition. In a light traffic scenario, the QoS models provide better waiting times for high priority requests at the cost of a very small number of dropped lower priority requests.

However, as Pinedo [28] stated, real-world scheduling problems are different from the mathematical models in academia. For example, WSPT is a static rule which is not time dependent. It assumes that there are n jobs to be scheduled and that the problem is solved after the n jobs are scheduled. For a web server, requests are submitted by clients continuously, so WSPT may not be the optimal scheduling rule for minimizing the total weighted completion time. Another important aspect is that stochastic models usually use special distributions which may not closely represent the behavior of the real system. Here, we use the requested document size divided by the service rate to determine the processing time of a request, with the request size following a Pareto distribution. For a web server, the processing time of a request may also be influenced by the load and configuration of the web server.

In spite of these small differences, scheduling rules from manufacturing can provide valuable insights into scheduling problems in an information infrastructure. From the results of our research, we surmise that some manufacturing scheduling rules may be used to develop a framework for QoS-enabled web servers. To implement our web server QoS models, other information about web requests, such as hostname and port, along with the requested links, is also required. We suggest that further work be done to investigate the network costs and delays of implementing models based on these frameworks.

    Acknowledgements

This work is sponsored by the Air Force Research Laboratory-Rome (AFRL-Rome) under grant number F30602-01-1-0510, and by the Department of Defense (DoD) and the Air Force Office of Scientific Research (AFOSR) under grant number F49620-01-1-0317. The US government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either express or implied, of AFRL-Rome, DoD, AFOSR, or the US Government.

    References

[1] Lawrence TF. The quality of service model and high assurance. In: Proceedings of the IEEE High Assurance Systems Engineering Workshop, Washington, DC, 1997.
[2] Ye N. QoS-centric stateful resource management in information systems. Information Systems Frontiers 2002;4(2):149–60.
[3] Bhatti N, Friedrich R. Web server support for tiered services. IEEE Network 2000;13(5):64–71.
[4] Strnadl C. At your service: QoS for the Internet. IEEE Multimedia 2002;9(1):93–5.
[5] Bhatti N, Bouch A, Kuchinsky A. Integrating user-perceived quality into web server design. Computer Networks 2000;33(1–6):1–16.
[6] Izzo P. Gigabit networks: standards and schemes for next-generation networking. New York: Wiley; 2000. p. 177–233 [chapter 6].
[7] Braden R, Clark D, Shenker S. Integrated services in the Internet architecture: an overview, RFC 1633, IETF, 1994.
[8] Braden R, Zhang L, Berson S, Herzog S, Jamin S. Resource reservation protocol (RSVP) version 1, functional specification, RFC 2205, IETF, 1997.
[9] Wroclawski J. The use of RSVP with IETF integrated services, RFC 2210, 1997.
[10] Wroclawski J. Specification of the controlled-load network element service, RFC 2211, 1997.
[11] Shenker S, Partridge C, Guerin R. Specification of guaranteed quality of service, RFC 2212, 1997.
[12] Almeida J, Dabu M, Manikutty A, Cao P. Providing differentiated levels of services in web content hosting. First Workshop on Internet Server Performance, Madison, WI, 1998.
[13] Cherkasova L, Phaal P. Session-based admission control: a mechanism for peak load management of commercial web sites. IEEE Transactions on Computers 2002;13(6):669–85.
[14] Li K, Jamin S. A measurement-based admission-controlled web server. Proceedings of IEEE INFOCOM 2000, 2000.
[15] Zhang H. Service disciplines for guaranteed performance service in packet-switching networks. Proceedings of the IEEE 1995;83(10):1374–96.
[16] Chandra S, Ellis CS, Vahdat A. Differentiated multimedia web services using quality aware transcoding, INFOCOM, 2000.
[17] Nichols K, Blake S, Baker F, Black D. Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers, RFC 2474, 1998.
[18] Nichols K, Jacobson V, Zhang L. A two-bit differentiated services architecture for the Internet, 1999. ftp://ftp.ee.lbl.gov/papers/dsarch.pdf.
[19] Lu C, Abdelzaher TF, Stankovic JA, Son SH. A feedback control approach for guaranteeing relative delays in web servers. IEEE Real-Time Technology and Applications Symposium (RTAS 2001), Taipei, Taiwan, 2001.
[20] Conti M, Gregori E, Panzieri F. Load distribution among replicated web servers: a QoS-based approach. Second Workshop on Internet Server Performance in conjunction with ACM SIGMETRICS 99/FCRC, Atlanta, GA, 1999.
[21] Engelschall RS. Load balancing your web site: practical approaches for distributing HTTP traffic, 1998. www.webtechniques.com/archives/1998/engelschall.
[22] Shan Z, Lin C, Marinescu DC, Yang Y. Modeling and performance analysis of QoS-aware load balancing of web-server clusters. Computer Networks 2002;40(2):235–56.
[23] Rhee Y-J, Hyun E-S, Kim T-Y. Connection management for QoS service on the Web. Journal of Network and Computer Applications 2002;25(1):57–68.
[24] Abdelzaher TF, Shin KG, Bhatti N. Performance guarantees for web server end-systems: a control-theoretical approach. IEEE Transactions on Parallel and Distributed Systems 2002;13(1):80–96.
[25] Striegel A, Manimaran G. Packet scheduling with delay and loss differentiation. Computer Communications 2002;25(1):21–31.
[26] Ferrari T. End-to-end performance analysis with traffic aggregation. Computer Networks 2000;34(6):905–14.
[27] Chen H, Mohapatra P. Overload control in QoS-aware web servers. Computer Networks 2003;42(1):119–33.
[28] Pinedo M. Scheduling: theory, algorithms, and systems. Englewood Cliffs, NJ: Prentice-Hall; 1995.
[29] Guerin R, Peris V. Quality-of-service in packet networks: basic mechanisms and directions. Computer Networks 1999;31(3):169–89.
[30] Silberschatz A, Galvin PB, Gagne G. Operating system concepts, 6th ed. New York: Wiley; 2003. p. 158–61 [chapter 6].
[31] von Seggern D. CRC standard curves and surfaces. Boca Raton, FL: CRC Press; 1993. p. 252–3.
[32] Arlitt MF, Williamson CL. Web server workload characterization: the search for invariants. ACM SIGMETRICS Performance Evaluation Review 1996:126–37.

