
Int. J. Human-Computer Studies 65 (2007) 3–16

www.elsevier.com/locate/ijhcs

Worm damage minimization in enterprise networks

Surasak Sanguanpong, Urupoj Kanlayasiri*

Department of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok 10900, Thailand

Available online 30 October 2006

Abstract

Attackers utilize many forms of intrusion via computer networks; currently, worms are an important vector with the potential for widespread damage. No existing defence strategy is effective and rapid enough to mitigate worm propagation. Therefore, it is extremely important for organizations to better understand worm behaviour and adopt a strategy to minimize the damage due to worm attacks. This paper describes an approach to minimize the damage due to worm infection in enterprise networks. The approach includes: (1) analyzing the effect of parameters influencing worm infection: openness, homogeneity, and trust; (2) predicting the number of infected nodes by fuzzy decision; and (3) optimizing the trust parameter to minimize the damage by fuzzy control. Experiments using real worm attacks show that the selected parameters are strongly correlated with actual infection rates, the damage prediction produces accurate estimates, and the optimization of the selected parameter can lessen the damage from worm infection.

© 2006 Elsevier Ltd. All rights reserved.

Keywords: Worm; Worm infection; Fuzzy decision; Fuzzy control; Network security

1. Introduction

The pervasive use of networked computing in almost all aspects of the knowledge economy raises concerns about network security and the potential damage due to intrusion. Worms are pieces of executable code or programs that can automatically replicate themselves on machines by exploiting vulnerable services. The Morris worm of 1988, which spread by exploiting vulnerabilities in hosts running variants of BSD UNIX (Staniford et al., 2002), demonstrated that a worm can rapidly infect a large number of widely dispersed systems. In the last few years, a dramatic increase in worm outbreaks has occurred, including Code Red, Code Red II, Nimda, and Slammer.

Code Red infected hosts running Microsoft Internet Information Server by exploiting an .ida vulnerability (Moore and Shannon, 2002). The next version of Code Red, Code Red II, proved even more dangerous than the original because Code Red II was not memory resident; therefore, rebooting an infected machine did not halt the worm. Analyses of Code Red appear in Moore and Shannon (2002) and Moore et al. (2003). Nimda used multiple mechanisms for infection: from client to client via email, from client to client via open network shares, and from web server to client via browsing of compromised web sites (CERT/CC, 2001). In 2003, Slammer rapidly spread across the Internet by exploiting a buffer overflow vulnerability in Microsoft's SQL Server or Microsoft SQL Server Desktop Engine (CERT/CC, 2003a); it is regarded as the fastest worm in history.

Recent worms such as Blaster and Sasser infect vast numbers of desktop computers in enterprise networks by exploiting vulnerabilities in the default configuration of desktop operating systems. This differs from the behaviour of Code Red and Slammer, which exploited holes in optional components of servers. Blaster used a vulnerability in Microsoft's DCOM RPC interface, enabling a remote attacker to execute arbitrary code with local system privileges or to cause a denial of service condition (CERT/CC, 2003b). Sasser exploited an LSA buffer overflow bug; like Blaster, it used a public exploit for the LSA vulnerability in order to obtain a system-level command shell on its victims (Yuji and Derek, 2004). The worm propagates by FTP download and then executes a copy of the worm executable from a basic FTP service installed on the attacking system. Blaster and Sasser are representative of the current trend of attacks that produce great damage to enterprise networks.

There are three broad strategies (Moore et al., 2003) for limiting attacks by worms: prevention, treatment, and containment. Prevention aims to reduce the size of the vulnerable population; secure design in software engineering and the application of good security practices in network administration are forms of prevention (Necula, 1997; Cowan et al., 1998; Wagner et al., 2000). Treatment includes measures to detect and eradicate worms: intrusion detection systems (Cheung et al., 1999; Toth and Kruegel, 2002; Williamson, 2002), antivirus software, and system updates are examples of tools for treatment. Containment (Kephart and White, 1993; Moore et al., 2003; Eustice et al., 2004) is exemplified by content filtering and address blacklisting. However, none of these strategies is effective and rapid enough to adequately mitigate worm propagation. Therefore, it is extremely important for organizations to better understand the behaviour of worm infections in order to assess their vulnerability and adopt a strategy to minimize the damage due to worm attacks.

This paper describes an approach to minimize the damage due to worm infection in enterprise networks. The approach includes analyzing the effect of parameters influencing worm infection, predicting the number of infected nodes, and optimizing a key parameter to minimize the damage. Prediction of the number of infected nodes is performed by developing measurements of different factors and then fusing them by a fuzzy decision process. The optimization of a trust parameter to minimize the worm damage is performed by automatic parameter tuning using a fuzzy controller that employs rules incorporating qualitative knowledge of the effect of the parameter. Fuzzy logic (Zadeh, 1994) is used for prediction and optimization in this problem because the measures are uncertain and imprecise, and human experts have intuition about, or knowledge of, how the parameters that characterize worm attacks affect infection.

The rest of this paper is organized as follows. Section 2 describes related work. We give an overview of the approach in Section 3. Section 4 presents the analysis of measures influencing worm infection. The prediction of worm damage is described in Section 5. Section 6 presents the parameter tuning by a fuzzy controller to minimize worm damage. Experiments are presented in Section 7. Finally, Section 8 gives the concluding remarks.

2. Related work

Wang et al. (2000) investigated several factors influencing worm infection: system topology, node immunity, temporal effects, propagation selection, multiple infections, and stochastic effects. This simulation study considered hierarchical and cluster topologies with selective immunization of hosts. Both topologies support critical infrastructure, which contrasts with the fully connected, open nature of the Internet. Ellis (2003) describes an analytical framework for worm infection in terms of relational description and attack expression. The four conditions for infection are targeting, vulnerability, visibility, and infectability, which are used to calculate the set of infectable hosts.

Several studies attempt to estimate the damage and predict the spread of worms. There are several approaches to model the spread of worms in networks, principally the Epidemiological model (Kephart and White, 1991, 1993), the two-factor worm model (Zou et al., 2002), and the Analytical Active Worm Propagation (AAWP) model (Chen et al., 2003). The Epidemiological model is a simple model that explains the spread of computer viruses by employing biological epidemiology. The number of infected hosts depends on vulnerability density and scanning rate. In this model, the infection initially grows exponentially until the majority of hosts are infected, then the incidence slows toward a zero infection rate.

The two-factor worm model describes the behaviour of a worm based on two factors: the dynamic countermeasures by ISPs and users, and a slowed-down worm infection rate. This model explains the observed data for Code Red and the decrease in scanning attempts during the last several hours before it ceased propagation. The AAWP model extends the model of worms that employ random scanning to cover local subnet scanning worms. Parameters in this model include the number of vulnerable machines, the size of hitlists, the scanning rate, the death rate, and the patching rate. AAWP models the behaviour of Code Red II better than previous models.

Unlike the above models, our approach does not require observing variables during attacks. Therefore, it can be used to predict worm damage before the attack occurs. The model does not rely on the attack type and configuration of the worm programme. Such factors are: (1) the scanning rate in the Epidemiological and AAWP models and (2) the size of hitlists in the AAWP model. In addition, our prediction does not depend on human countermeasures as in the AAWP model and the two-factor worm model, because we cannot observe these factors before the attack occurs.

To predict worm damage, an important task of fuzzy decision that is often difficult and time consuming is the determination of membership functions. Traditionally this can be performed by experts, but this is not always the most suitable method. Several techniques, such as inductive reasoning (Kim and Russell, 1989, 1993; Kim, 1997), neural networks, and genetic algorithms, have been used to generate membership functions and production rules. In inductive reasoning, as long as the data are not dynamic, the method produces good results (Ross, 1997). The inductive reasoning method uses the entire data set to formulate membership functions and, if the data set is not large, the method is computationally inexpensive. Compared to neural networks and genetic algorithms, inductive reasoning has the advantage that it may not require a convergence analysis, which for neural networks and genetic algorithms is computationally very expensive.


To adjust the parameter for minimizing worm damage, we employ the concept of fuzzy feedback control. In control theory, there has been much interest in using control methods for adjusting parameters. Some examples are the control of performance metrics (Ogata, 1997), queue length in Lotus Notes (Parekh et al., 2002), and buffer length in Internet routers (Misra et al., 2000). All of these methodologies require knowledge of how to construct functions that control the parameters. The fuzzy control paradigm is based on interpolative and approximate reasoning (Phillips and Harbor, 1996) and is a generally model-free paradigm. It uses fuzzy rules to encode knowledge, thereby avoiding skill-intensive tasks such as worm signature construction or analysis of a worm's detailed design. Alternatively, artificial neural networks are based on analogical learning and try to learn the nonlinear decision surface through adaptive and converging techniques. However, this method requires high data availability of inputs and outputs, and the consideration of performance criteria in convergence analysis.

3. An approach

The aim of the approach is to minimize the damage due to worm infection in enterprise networks. The approach is composed of three main parts: factor analysis, damage prediction, and parameter tuning. Factor analysis analyses the parameters that influence worm infection. Damage prediction is in charge of predicting the number of infected nodes due to worm attacks. Parameter tuning optimizes a key factor to minimize the worm damage. An overview of the approach is shown in Fig. 1.

Fig. 1. The worm damage minimization approach: evaluation factors extracted from the host and network configuration feed the damage prediction, and key factors are adjusted by parameter tuning in the enterprise network.

In this approach, we initially define the enterprise network, the scanning worm, the worm damage, and the network full infection time. The enterprise network is a heterogeneous IP network consisting of a number of subnetworks. The scanning worm assumption is that worms target vulnerabilities of the desktop computers that make up the majority of hosts in the enterprise network; scanning worms require no user intervention for their execution. The term worm damage is defined as the number of infected hosts in the enterprise network. Finally, the network full infection time is the time point at which the number of infected hosts saturates; in other words, when the rate of increase of infection reaches zero, we have reached the time of network full infection.
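As a concrete illustration of the last definition, the sketch below (a hypothetical helper, not from the paper; the time series is made up) computes the network full infection time from sampled infected-host counts by returning the first sampling point at which the count stops increasing:

def full_infection_time(times, infected_counts):
    """Return the first time point at which the number of infected
    hosts stops increasing (the network full infection time)."""
    for i in range(1, len(infected_counts)):
        if infected_counts[i] <= infected_counts[i - 1]:
            return times[i]
    return times[-1]  # still growing at the end of the observation window

# Made-up example: the count saturates at t = 19.
times = list(range(1, 26))
counts = [2, 5, 9, 20, 38, 60, 85, 105, 120, 132, 140, 146, 150, 153,
          155, 156, 157, 158, 158, 158, 158, 158, 158, 158, 158]
print(full_infection_time(times, counts))  # -> 19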

4. Factor analysis

The general behaviour of worms (Ellis, 2003; Kenzle and Elder, 2003; Weaver et al., 2003; Wegner et al., 2003) includes three processes: scanning, attacking, and propagating. We study and define parameters that relate to these three processes: openness, homogeneity, and trust. Openness describes the quantity of hosts that can be scanned; homogeneity defines the area of infection (the more hosts share the same vulnerability, the more hosts can be infected); and trust determines the relations among hosts that worms use for propagation. Three factors are extracted from the host and network configuration: openness (O), homogeneity (H), and trust (T). The worm damage (D) can be given as a function of these factors:

D = G(O, H, T).  (1)

4.1. Openness

Openness describes the vulnerability of enterprise networks to scanning by worms. Typically, machines that are hidden from scanning by worms are safer than visible ones. The visibility can be configured by Network Address Translation (NAT) or firewall technology. Openness can be measured by the ratio of the number of hosts that can be scanned by any host to the total number of hosts:

O = \frac{\sum_j |x_s(e_j)|}{n},  (2)

where e_j is the collection of hosts on subnetwork j, x_s is a function that selects the hosts in e_j that can be connected to via TCP, UDP or ICMP from outside the network j, and n is the total number of hosts on the network. For example, for the network E shown in Fig. 2, if the gateway G1 configures NAT for the network E3, then the enterprise network E has O = 0.66.

Fig. 2. Extracted parameters of enterprise network.
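To make Eq. (2) concrete, here is a minimal sketch (the data structures and host counts are hypothetical, chosen only so that the result matches the 0.66 of the Fig. 2 example) that computes O given, for each subnetwork, the set of hosts reachable from outside it:

def openness(subnet_hosts, externally_reachable):
    """O = (sum over subnets j of |x_s(e_j)|) / n, where x_s(e_j) is the set
    of hosts on subnet j reachable via TCP/UDP/ICMP from outside subnet j."""
    n = sum(len(hosts) for hosts in subnet_hosts.values())
    scannable = sum(len(externally_reachable[j]) for j in subnet_hosts)
    return scannable / n

# Illustrative network E: E1 and E2 are fully exposed, E3 sits behind NAT.
subnet_hosts = {"E1": set(range(33)), "E2": set(range(33, 66)), "E3": set(range(66, 100))}
externally_reachable = {"E1": subnet_hosts["E1"], "E2": subnet_hosts["E2"], "E3": set()}
print(openness(subnet_hosts, externally_reachable))  # -> 0.66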

4.2. Homogeneity

Homogeneity measures the density of hosts that can be attacked by a worm. When a worm attacks a host, it will exploit other hosts through the same vulnerability. In this study, we assume that the operating system, rather than application software, represents the mode of vulnerability. Therefore, H is defined as the homogeneity of operating systems over the hosts on the network:

H = \frac{1}{n} \max_{k \in K} m(k),  (3)

where K is the set of operating system types on the network, m(k) is the number of hosts running operating system k, and n is the total number of hosts on the network. For the example network E shown in Fig. 2, operating system b has the maximum number of hosts, and H = 0.53.
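A small sketch of Eq. (3) under the same assumptions (the per-host operating-system labels below are hypothetical, chosen only to reproduce the H = 0.53 example):

from collections import Counter

def homogeneity(host_os):
    """H = (1/n) * max_k m(k): the fraction of hosts running the most
    common operating system on the network."""
    counts = Counter(host_os)
    return max(counts.values()) / len(host_os)

# Illustrative network of 100 hosts: OS 'b' is the most common (53 hosts).
host_os = ["b"] * 53 + ["a"] * 30 + ["c"] * 17
print(homogeneity(host_os))  # -> 0.53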

4.3. Trust

Trust is a relationship between a trustor and a trustee. The trustor allows the trustee to use or manipulate its resources, or to influence the trustor's decision to use resources or services provided by the trustee. A trust relationship can be represented by a directed graph. We use a nondeterministic finite-state automaton A to describe the trust relationship of desktop computers in the enterprise network, where A = (Q, P, f, q0, F) consists of a set Q of states, an input alphabet P, a transition function f between states in Q which depends on the input, a starting state q0, and a subset F of Q consisting of the final states. The set of states Q is the group of machines in the enterprise network. The function f represents the propagation of a worm. q0 is the starting node that the worm first exploits, and F contains the set of possible attacked nodes. The input for the function f is assumed to be a constant. Then, T can be calculated by

T = \frac{\sum_{i=1}^{n} [m(F \mid q_0 = i) - 1]}{n(n - 1)},  (4)

where m(F | q0 = i) is the number of elements in the set F when the starting node is i, and n is the number of elements in the set Q. In Fig. 2, the directed graph of nodes illustrates an example trust relationship. Using Eq. (4), in the enterprise network E, T is equal to 0.17. As an example of the calculation, m(F | q0 = a of E1) is equal to 4.
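Eq. (4) amounts to a reachability count over the directed trust graph. The sketch below (the edge list is hypothetical; the real graph is the one drawn in Fig. 2) reads m(F | q0 = i) as the number of hosts the worm can eventually reach when it starts at host i, including i itself:

def trust(nodes, edges):
    """T = sum_i [m(F | q0 = i) - 1] / (n * (n - 1)), where m(F | q0 = i) is
    the size of the set of hosts reachable from i by following trust edges."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    n = len(nodes)
    total = 0
    for start in nodes:
        seen, stack = {start}, [start]
        while stack:                      # depth-first reachability
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        total += len(seen) - 1            # exclude the starting host itself
    return total / (n * (n - 1))

# Tiny illustrative graph: a trusts b and c, b trusts c, d is isolated.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("b", "c")]
print(round(trust(nodes, edges), 2))  # -> (2 + 1 + 0 + 0) / 12 = 0.25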

5. Damage prediction

The damage prediction uses a fuzzy rule-based system to construct the decision surface. Three steps are performed: fuzzification, inference, and defuzzification, as shown in Fig. 3. The fuzzy sets for inputs and output are as follows:

Input: evaluation factors (O, H, T). Fuzzy set: {Low, Middle, High}.
Output: worm damage (D). Fuzzy set: {Normal, Critical}.

Fig. 3. Fuzzy rule-based system: input, fuzzification, inference with production rules, defuzzification, output.

The primary result of prediction is a "Normal" or "Critical" condition. In addition, it is transformed into the number of infected hosts by the defuzzification process. The exact partitioning of input and output spaces depends upon membership functions. Triangular shapes specify the membership functions of the inputs by inductive reasoning. The damage threshold, which is defined by an organization, divides the output into two classes. Expert knowledge is used to generate production rules. For fuzzy inference, we use the minimum correlation method, which truncates the consequent fuzzy region at the truth of the premise. The centroid defuzzification method is adopted to yield the expected value of the solution fuzzy region.

5.1. Fuzzification

Membership functions can be constructed using the essential characteristics of inductive reasoning. The induction is performed by the entropy minimization principle (Kim, 1997). A key goal of entropy minimization analysis is to determine the quantity of information in a given data set. The entropy of a probability distribution is a measure of the uncertainty of the distribution. It draws a threshold line between two classes of sample data, as in Fig. 4. This classifies the samples while minimizing the entropy for an optimum partitioning. We select a threshold value x in the range between x1 and x2. This divides the range into two regions, [x1, x] and [x, x2], or p and q, respectively. The entropy (Christensen, 1964) for a given value of x is

S(x) = p(x) S_p(x) + q(x) S_q(x),  (5)


where

S_p(x) = -[p_1(x) \ln p_1(x) + p_2(x) \ln p_2(x)],  (6)

S_q(x) = -[q_1(x) \ln q_1(x) + q_2(x) \ln q_2(x)],  (7)

and where p_k(x) and q_k(x) are the conditional probabilities that a class k sample is in the region [x1, x1+x] and [x1+x, x2], respectively; p(x) and q(x) are the probabilities that all samples are in the region [x1, x1+x] and [x1+x, x2], respectively, with

p(x) + q(x) = 1.

The entropy estimates of p_k(x), q_k(x), p(x), and q(x) are calculated as follows:

p_k(x) = \frac{n_k(x) + 1}{n(x) + 1},  (8)

q_k(x) = \frac{N_k(x) + 1}{N(x) + 1},  (9)

p(x) = \frac{n(x)}{n},  (10)

q(x) = 1 - p(x),  (11)

where n_k(x) is the number of class k samples in [x1, x1+x], n(x) is the total number of samples in [x1, x1+x], N_k(x) is the number of class k samples in [x1+x, x2], N(x) is the total number of samples in [x1+x, x2], and n is the total number of samples in [x1, x2].

The value of x in the interval [x1, x2] that gives the minimum entropy is chosen as the optimum threshold value. This x divides the interval [x1, x2] into two subintervals. In the next step we conduct the segmentation again on each of the subintervals; this process determines secondary threshold values, and the same procedure is applied to calculate them. Fuzzy sets of inputs are defined by triangular shapes with these optimum threshold values. In Fig. 5, μ is the degree of membership for the membership functions of inputs and output.

Fig. 4. Basic concept of entropy minimization.

Fig. 5. The membership functions of inputs and output for damage prediction.
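The threshold search of Eqs. (5)-(11) can be sketched as follows (a straightforward, unoptimized reading of the formulas; the one-dimensional sample data are made up):

import math

def entropy_threshold(samples):
    """Find the threshold x in [x1, x2] minimizing S(x) of Eq. (5) for
    samples given as (value, class_label) pairs with labels 1 and 2."""
    values = sorted(v for v, _ in samples)
    n_total = len(samples)

    def entropy(x):
        p_region = [s for s in samples if s[0] <= x]   # lower region
        q_region = [s for s in samples if s[0] > x]    # upper region
        nx, Nx = len(p_region), len(q_region)
        Sp = Sq = 0.0
        for k in (1, 2):
            pk = (sum(1 for _, c in p_region if c == k) + 1) / (nx + 1)   # Eq. (8)
            qk = (sum(1 for _, c in q_region if c == k) + 1) / (Nx + 1)   # Eq. (9)
            Sp -= pk * math.log(pk)                                        # Eq. (6)
            Sq -= qk * math.log(qk)                                        # Eq. (7)
        p = nx / n_total                                                   # Eq. (10)
        q = 1.0 - p                                                        # Eq. (11)
        return p * Sp + q * Sq                                             # Eq. (5)

    # Candidate thresholds at midpoints between consecutive sample values.
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:]) if a < b]
    return min(candidates, key=entropy)

# Made-up data: class 1 clusters low, class 2 clusters high.
data = [(0.1, 1), (0.2, 1), (0.25, 1), (0.4, 1), (0.55, 2), (0.6, 2), (0.8, 2), (0.9, 2)]
print(entropy_threshold(data))  # prints a threshold near 0.475, the clean split point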

5.2. Inference

A set of production rules is expressed as follows:

Rule 1: If (x1 is A1^1) and (x2 is A2^1) and ... and (xw is Aw^1), then y is B1.
Rule 2: If (x1 is A1^2) and (x2 is A2^2) and ... and (xw is Aw^2), then y is B2.
...
Rule z: If (x1 is A1^z) and (x2 is A2^z) and ... and (xw is Aw^z), then y is Bz.

Here, xj (1 <= j <= w) are input variables, y is an output variable, and Aj^i and Bi (1 <= i <= z) are fuzzy sets characterized by membership functions. The numbers of input and output variables are three and one, respectively. A total of 27 production rules were generated from expert experience. The rules are shown in Fig. 6.

Fig. 6. The production rules for fuzzy decision. The 27 rules map (O, H, T) to D as follows:
O = Low: all nine combinations of H and T give D = Normal.
O = Middle: (H = High, T = High) gives D = Critical; the other eight combinations give D = Normal.
O = High: (H = High, T = High), (H = High, T = Middle), and (H = Middle, T = High) give D = Critical; the other six combinations give D = Normal.

5.3. Defuzzification

The results of the fuzzy decision are defuzzified to numerical values (the number of infected hosts) as shown in Fig. 7. In these graphs, the Z-axis values are the fraction of infected hosts. The values on the X- and Y-axes represent (1) H and T in Fig. 7(a), (2) O and T in Fig. 7(b), and (3) O and H in Fig. 7(c). Comparison of the three graphs for a given maximum value (1.0) of O, H, and T shows the effect on worm damage. These surfaces show that the factors have different effects on the fraction of infected hosts over a broad range of values.

Fig. 7. The decision surfaces of worm damage prediction: (a) O = 1.0, (b) H = 1.0, (c) T = 1.0.
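A compact sketch of the decision machinery described above (triangular membership functions, minimum-correlation inference over rules of the Fig. 6 form, centroid defuzzification). The membership breakpoints are placeholders rather than the paper's entropy-derived values, and only two of the 27 rules are included:

import numpy as np

def tri(a, b, c):
    """Triangular membership function with feet a, c and peak b (a < b < c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Placeholder fuzzy sets for the inputs O, H, T and the output damage D.
IN = {"Low": tri(-0.4, 0.0, 0.4), "Middle": tri(0.1, 0.5, 0.9), "High": tri(0.6, 1.0, 1.4)}
OUT = {"Normal": tri(-0.3, 0.0, 0.3), "Critical": tri(0.3, 1.0, 1.7)}

# Two rules taken from Fig. 6: (O, H, T) -> D.
RULES = [(("High", "High", "High"), "Critical"),
         (("Low", "Low", "Low"), "Normal")]

def predict_damage(o, h, t):
    """Minimum-correlation inference and centroid defuzzification."""
    xs = np.linspace(0.0, 1.0, 201)
    aggregate = np.zeros_like(xs)
    for (lo, lh, lt), label in RULES:
        strength = min(IN[lo](o), IN[lh](h), IN[lt](t))               # truth of the premise
        clipped = np.minimum(strength, [OUT[label](x) for x in xs])    # truncate consequent
        aggregate = np.maximum(aggregate, clipped)                     # aggregate fuzzy region
    if aggregate.sum() == 0:
        return 0.0
    return float((xs * aggregate).sum() / aggregate.sum())             # centroid

print(round(predict_damage(0.9, 0.9, 0.9), 2))  # high O, H, T -> estimate near the Critical end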

6. Parameter tuning

Worm propagation is the process of copying worm code to as many other machines as possible. This behaviour accelerates the spread of the worm to infection saturation in a very short time. A key parameter of host configuration that supports the propagation process is the trust relationship between hosts. Therefore, controlling trust may lessen the worm damage. We adjust the trust parameter using a fuzzy feedback control that employs rules incorporating qualitative knowledge of the effect of the parameter. An example of qualitative knowledge in worm propagation is "trust relationship among hosts has a convex downward effect on the network full infection time in the enterprise network". Our studies using real worm attacks suggest that such a scheme can delay the network full infection time, so that the mitigation of worm propagation has more time to proceed.

6.1. The effect of trust

The higher the trust (T) is, the smaller the network full infection time (Tf) is. In the best case, T should be zero, at which point Tf is maximized. Unfortunately, T cannot be zero because some applications require trust relations for their operations. To demonstrate the effect of T on Tf, we conducted real attacks in which the value of T was varied. Fig. 8 displays the relation between network full infection time and trust while openness and homogeneity are held fixed. The squares indicate the average full infection times measured at different trust values in a network of 150 hosts. As can be seen, for very small T, Tf is nearly constant because the trust relations have little effect on propagation. On the other hand, as T becomes larger, Tf decreases because a larger T creates more scanner hosts. Our proposed fuzzy feedback control estimates the appropriate T for the largest Tf; for example, in Fig. 8 the appropriate T lies in the interval between 0.2 and 0.4.

6.2. Fuzzy feedback control

A fuzzy feedback control handles a new worm attack without a system patch or signature knowledge. The trust value is gradually changed by the fuzzy controller to reduce the damage in the next cycle of infection. The architecture of the fuzzy feedback control system is shown in Fig. 9. The feedback loop operates in discrete time. The first component in the feedback loop is a monitor that measures the network full infection time, Tf. The next part is a differentiator whose output is the change in network full infection time (dTf) between the current and previous monitoring values. Moving further along the feedback loop, there is the fuzzy controller that determines the change in trust for the next time interval. The fuzzy controller has two inputs: dTf and the change in trust of the prior time interval (dT*). The controller's output is the change in trust for the next interval (dT). An integrator converts this output, based on the prior T and the minimum trust requirement, into an actual T that is applied to the enterprise network.

Fig. 8. The relation of network full infection time and trust.

Fig. 9. Fuzzy feedback control system.

The fuzzy controller is derived from expert knowledge to approximate and construct the control surface. The control system design is based on interpolative and approximate reasoning. Three steps are performed: fuzzification, inference, and defuzzification. The exact partitioning of input and output spaces depends upon membership functions. Triangular shapes specify the membership functions of inputs and output. In Fig. 10, μ is the degree of membership for the membership functions of inputs and output. The fuzzy sets for inputs and output are as follows:

Input: dTf, dT*. Fuzzy set: {Negative, Zero, Positive}.
Output: dT. Fuzzy set: {Negative, Positive}.

Expert experiences are used to generate IF-THEN rules based on the knowledge of the relation between dTf and dT*. The qualitative knowledge in worm propagation in this study is "trust relationship among hosts has a convex downward effect on the network full infection time in the enterprise network", as displayed in Fig. 8. A rule base is a set of production rules, as in Fig. 11. For fuzzy inference, we use the minimum correlation method, which truncates the consequent fuzzy region at the truth of the premise. The centroid defuzzification method is adopted to yield the expected value of the solution fuzzy region.


Fig. 10. The membership functions of inputs and output for fuzzy feedback control system.

Fig. 11. The production rules for fuzzy feedback control system:

IF dTf is Negative AND dT* is Zero, THEN dT is Positive.
IF dTf is Negative AND dT* is Positive, THEN dT is Negative.
IF dTf is Positive AND dT* is Negative, THEN dT is Negative.
IF dTf is Positive AND dT* is Zero, THEN dT is Positive.
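The feedback loop of Fig. 9 can be sketched in a few lines. The controller below is a simplified stand-in (crisp sign logic rather than the full fuzzy inference over the Fig. 11 rules), and the measurement function passed in is a made-up probe standing for one real attack cycle; the sketch only shows how the monitor, differentiator, controller, and integrator fit together:

def controller(d_tf, d_t_prev, step=0.05):
    """Simplified stand-in for the fuzzy controller: choose the sign of the
    next trust change from dTf and the previous change dT*."""
    if d_tf > 0:                      # last change increased full infection time
        return step if d_t_prev > 0 else -step   # keep moving the same way
    if d_tf < 0:                      # last change decreased full infection time
        return -step if d_t_prev > 0 else step   # reverse direction
    return step                        # no observed effect: relax trust upward

def tune_trust(measure_full_infection_time, t0=0.6, t_min=0.1, cycles=10):
    """Feedback loop: monitor -> differentiator -> controller -> integrator."""
    trust, d_t_prev = t0, -0.05
    tf_prev = measure_full_infection_time(trust)
    for _ in range(cycles):
        trust = round(min(1.0, max(t_min, trust + d_t_prev)), 2)  # integrator with trust floor
        tf = measure_full_infection_time(trust)                    # monitor (one attack cycle)
        d_tf = tf - tf_prev                                        # differentiator
        d_t_prev = controller(d_tf, d_t_prev)                      # next change in trust
        tf_prev = tf
    return trust

# Toy stand-in for a real attack cycle: Tf plateaus below T = 0.3, then falls.
toy_tf = lambda t: 45.0 if t <= 0.3 else 45.0 - 40.0 * (t - 0.3)
print(round(tune_trust(toy_tf), 2))  # -> 0.3, near the knee of the toy Tf curve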


7. Experiments

The test environment consists of a class C heterogeneous IP network subdivided into three wired subnets and one wireless subnet. There are 200 hosts consisting of desktop computers and laptops running a mixture of Windows NT, Windows 2000, Windows XP, Solaris, and Linux operating systems. In this network, a router connects the four subnets with six Ethernet switches and two IEEE 802.11b wireless access points. The bandwidth of the core network is 100 Mbps.

The experiments aim to investigate the usefulness of the parameters, to evaluate the performance of the damage prediction, and to demonstrate the ability of parameter adjustment to delay the network full infection time. Blaster and Sasser, which attack the default configuration of desktop computers in enterprise networks and require no user intervention, are used in the experiments. Code Red and Slammer were not chosen because they target server applications and attack optional components of applications. Nimda was not selected since it requires user intervention for some modes of infection; hence its behaviour is difficult to simulate.

In the experiments, two variants of Blaster and two variants of Sasser randomly attack the test network. Infection experiments were performed for 192 different test configurations that are combinations of different values of the three factors O, H, and T. The openness value is varied by NAT for computers in subnets. The homogeneity is the density of hosts running the Windows family. Finally, the configuration of file transfer and file sharing services is used to represent trust conditions. The minimum requirement of trust in this study is assumed to be 0.1.

During worm execution, the number of infected computers is calculated at the average time of full infection for each test configuration. The fraction of infected nodes is translated into two classes using the damage threshold 0.3: Normal (D < 0.3) and Critical (D >= 0.3). The damage threshold is the condition defined by the organization. A total of 1728 experimental results have been collected; of these, 864 are used for generating membership functions and the other 864 for evaluating the performance of the damage prediction and parameter tuning.

There are two main reasons why we perform real attacks rather than simulations. First, a real attack can provide the conditions of a practical configuration setup, the effects of environmental factors, and the stochastic behaviours of attacks. The other reason is that it is useful to set up host populations as on real networks. One class C network can represent the actual address space of a small or medium enterprise network. We can directly observe the consequence of an attack in a realistic topology.


7.1. Usefulness of factors

We first study the full infection condition of real attacks in enterprise networks. Worms were released in two networks of 200 hosts: a real network and an ideal network. The configuration of the real network is a combination of O, H, and T values; on the other hand, for the ideal network each of O, H, and T is equal to 1 (the highest damage if an attack occurs). We study the network full infection time of the two network configurations. As can be seen from Fig. 12, the hosts in the real network are not all infected. The result reveals that some hosts are protected from attacks by their configuration parameters. In addition, we can observe the network full infection times from the experiment: they are time points 19 and 21 (the time of zero increase of infection) for the real network and the ideal network, respectively.

In general, there is no exact answer to the question of which factors are most influential for infection. It is believed that the factors that influence the worm infection significantly can be used to predict worm damage. To observe the effect of a single variable on the infection, Fig. 13 shows the number of infected hosts (Y-axis) as a function of one variable when the other two are held fixed. As can be seen, the number of infected hosts increases as the factor value increases. This means that the proposed factors affect the number of infected hosts significantly. We can use these parameters for damage prediction.

We also consider the factors that are useful for classifying the worm damage into two classes based on the damage threshold. To perform this, we analyse the Receiver Operating Characteristic (ROC) curve, which presents the variation of true positives (Y-axis) according to the change in false positives (X-axis). Fig. 14 shows the ROC curves with respect to different damage thresholds of classification. Several thresholds are considered: 0.3, 0.4, 0.5, and 0.6.

The ROC curve indicates that a factor is effective if its curve lies above the no discrimination line. In addition, we can compare the significance of factors for binary classification by comparing the area under their ROC curves: the greater the area, the more effective the factor is for classification. As can be seen in Fig. 14, most factors are above the no discrimination line. Homogeneity is likely the most significant factor in classification for all damage thresholds in this study.

Fig. 12. The network full infection condition.

Fig. 15 shows ROC curves for combinations of factors with a damage threshold 0.3. Fig. 15(a) shows ROC curves of the combinations O+H, O+T, and H+T. Fig. 15(b) shows the ROC curve of the combination O+H+T. Comparison of the ROC curves in Figs. 14 and 15 shows that a combination of factors can produce a much more effective classification than any single factor. O+H and O+H+T are the most effective combinations of factors for classification in this study.
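For completeness, ROC points and the area under the curve can be computed from per-configuration factor values and the Normal/Critical labels as sketched below (a generic computation, not the paper's code; the data are invented):

def roc_points(scores, labels):
    """Sweep a decision threshold over the factor values and return
    (false-positive rate, true-positive rate) pairs; label 1 = Critical."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for _, label in pairs:
        tp += label
        fp += 1 - label
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Invented example: homogeneity values for eight test configurations and
# whether each configuration turned out Critical (1) or Normal (0).
h_values = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
critical = [1, 1, 1, 0, 1, 0, 0, 0]
print(round(auc(roc_points(h_values, critical)), 2))  # -> 0.94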

7.2. Performance of damage prediction

The experiments were conducted with different population sizes and network architectures. The output of the predictor is the number of infected hosts. We can analyse the performance of the predictor by comparing the predicted number of infected hosts with the actual number. Prediction accuracy is measured using root mean squared error (RMSE) and mean absolute error (MAE). RMSE is the most commonly used measure of prediction accuracy; if it is significantly greater than MAE, there are test cases in which the prediction error is significantly greater than the average error. MAE is the average of the difference between predicted and actual values over all test cases; it is the average prediction error. Table 1 shows that there is no significant difference between RMSE and MAE for the three network sizes and architectures with a damage threshold of 0.3.

The fraction of infected nodes is divided into two categories: Normal (D < 0.3) and Critical (D >= 0.3). Table 2 shows the prediction rate (true-positive rate) and false-positive error rate of heterogeneous networks with the damage threshold 0.3. As can be seen, the prediction rate is more than 83% for all population sizes. A greater network size does not imply a higher false-positive error rate, because it can be better than that of a smaller network.
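RMSE and MAE over the test configurations are the usual quantities; a small sketch (the predicted and actual fractions below are made up):

import math

def rmse(predicted, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def mae(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Made-up predicted vs. actual fractions of infected hosts for five test cases.
pred = [0.12, 0.35, 0.28, 0.61, 0.05]
act = [0.10, 0.42, 0.30, 0.55, 0.08]
print(round(rmse(pred, act), 3), round(mae(pred, act), 3))  # -> 0.045 0.04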

7.3. Performance of parameter tuning

In general, the state of a worm attack in a network can be described as in Fig. 16: hosts that are vulnerable to a worm are called susceptible hosts; hosts that have been infected and can infect others are called infectious hosts; and hosts that are immune or dead, such that they cannot be infected by a worm, are called removed hosts. Kephart and White (1993) proposed the SIS (susceptible-infected-susceptible) model to study the spread of computer viruses, which assumes that a cured computer can be reinfected. It is not suitable for modeling worm propagation, since once an infected computer is patched or disconnected, it is more likely to be immune to this worm.

Fig. 13. Variation of worm damage according to (a) openness, (b) homogeneity, and (c) trust, with the other two factors held fixed in each panel.

In this paper, however, we consider the introduction of a new worm. Therefore, there is no system patch or signature knowledge at the time of infection, and the removed state is not considered. Each attack event operates in discrete time as in the susceptible-infected-susceptible model. For different trust configurations, the worms are released randomly and we monitor the network full infection time. The infected hosts are then cleaned and the worm is released again for the next cycle of the experiment. This attack cycle is similar to the situation of a new worm spreading in an organization: even though no patches or signature knowledge are available, the operating system on the infected nodes is reinstalled so that the organization can continue its business. Hence, these hosts are still at risk from this worm.

Fig. 17 shows the performance of the fuzzy controller in a wired network of 150 hosts. Fig. 17(a) shows the full infection time under a stationary trust value; the top plot shows the trust value of 0.6. As can be seen, the full infection time changes due to the stochastic behaviour of the worm attack. Fig. 17(b) shows that the fuzzy feedback controller seeks the trust value that delays the full infection time for the network; the top plot shows the value of trust. We see the trust value start at 0.6 and keep decreasing until it converges to a value that provides a higher full infection time. The full infection time of the network gradually increases from 31 to 47 min over 10 cycles of attacks.

Fig. 17. Fuzzy control performance in wired network: (a) full infection time with fixed trust and (b) full infection time with fuzzy control (trust and full infection time in minutes, plotted against attack cycle).


Fig. 14. Factor effect of classification with the change of damage threshold: (a) 0.3, (b) 0.4, (c) 0.5, and (d) 0.6. Each panel plots sensitivity (true positives) against 1 - specificity (false positives) for O, H, T, and the no discrimination line.


We also study the trust parameter tuning in the wireless network of 50 hosts. Fig. 18(a) shows the full infection time under a stationary trust value of 0.8. The performance of the fuzzy controller is shown in Fig. 18(b): the trust values are changed in three steps, 0.8, 0.4, and 0.2, and the full infection time gradually increases to 22 min. There is no significant difference in the performance of fuzzy feedback control between wired and wireless networks in this study, as can be seen by comparing Figs. 17 and 18. From the experimental results, the optimization of the trust parameter can lessen the damage from worm infection.

Fig. 18. Fuzzy control performance in wireless network: (a) full infection time with fixed trust and (b) full infection time with fuzzy control (trust and full infection time in minutes, plotted against attack cycle).

Practical fine-tuning of the trust parameter requires an automated system to adjust the trust value of hosts in the enterprise network. The trust value is based on the configuration of file sharing between hosts. When a new worm infects computers in the enterprise network, we monitor the damage and the network full infection time. After the full infection, infected hosts are cleaned and a new trust value is recommended by the fuzzy controller.

To adjust the trust value automatically, an Automated Trust Control (ATC) framework will enable or disable the file sharing (or other related options) configuration of hosts. It can be implemented by installing an agent on each host. The "trust control agent" can enable or disable the trust configuration. The "trust control master" station communicates with and controls the agents on the hosts to adjust the trust value.
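As a rough illustration of the ATC idea (entirely hypothetical: the master, its port, and the message format are invented here, and the paper reports the trust control master as still being implemented), a master could push enable/disable commands to per-host agents over a simple socket protocol:

import json
import socket

AGENT_PORT = 9099  # hypothetical port on which the trust control agents listen

def push_trust_setting(agent_hosts, enable_file_sharing):
    """Trust control master: tell each agent to enable or disable the
    file-sharing configuration that realizes the trust relationship."""
    command = json.dumps({"file_sharing": bool(enable_file_sharing)}).encode()
    for host in agent_hosts:
        with socket.create_connection((host, AGENT_PORT), timeout=2) as conn:
            conn.sendall(command)

# Example: after the fuzzy controller recommends a lower trust value, disable
# file sharing on the subset of hosts chosen to realize that value.
# push_trust_setting(["10.0.1.12", "10.0.1.13"], enable_file_sharing=False)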


Fig. 15. Combination of factors with the damage threshold 0.3: (a) two factors (O+H, O+T, H+T) and (b) three factors (O+H+T). Each panel plots sensitivity (true positives) against 1 - specificity (false positives).

Table 1
The prediction accuracy, RMSE (MAE)

Network type              100 nodes        150 nodes        200 nodes
Wired networks            0.083 (0.068)    0.064 (0.052)    0.088 (0.071)
Wireless networks         0.111 (0.080)    0.149 (0.108)    0.108 (0.081)
Heterogeneous networks    0.135 (0.089)    0.116 (0.076)    0.145 (0.099)

Table 2
The prediction rate and false-positive error rate

Number of nodes                  100      150      200
Prediction rate (%)              90.91    83.33    90.91
False-positive error rate (%)    0        4.16     4

Fig. 16. State diagram of worm attack: susceptible (S), infectious (I), and removed (R) states, with transitions labelled attack, clean, and patch.


A practical point of reference for implementing an ATC framework is the Network Admission Control (NAC) framework from Cisco (Cisco Systems, 2005). The ATC framework uses a trust control master, which differs from the NAC framework. Currently, we are implementing the trust control master. The overall architecture of the framework is shown in Fig. 19.

Fig. 19. Automated trust control framework: a trust control master station coordinating trust control agents installed on each host.

8. Concluding remarks

This paper describes an approach for minimizing the damage due to worm infection in enterprise networks. The approach includes analyzing the effect of factors influencing worm infection, predicting the number of infected nodes, and optimizing the key parameter to minimize the damage. Experiments using real worm attacks on a variety of test cases in enterprise networks were conducted to study the parameters influencing worm infection, to evaluate the performance of the damage prediction model, and to demonstrate the reduction in damage by parameter tuning. The experimental results show that the selected parameters are strongly correlated with actual infection rates, the damage prediction produces accurate estimates, and the optimization of the trust parameter can lessen the damage from worm infection. These results suggest that this approach can be beneficial in terms of both management and operations. It provides quantitative information useful for risk analysis, security investment, policy development, and incident response with respect to worm threats.

Acknowledgements

Yuen Poovarawan, James Brucker, Pirawat Watanapongse, and Yodyium Tipsuwan made very helpful suggestions, for which the authors are grateful. The referees made valuable comments that improved the quality of the paper enormously. This research is supported by the Kasetsart University Research and Development Institute (Grant KURDI-85.49).

References

CERT/CC, 2001. CA-2001-26 Nimda Worm. CERT Advisory.

CERT/CC, 2003a. CA-2003-04 MS-SQL Server Worm. CERT Advisory.

CERT/CC, 2003b. CA-2003-20 W32/Blaster Worm. CERT Advisory.

Chen, Z., Gao, L., Kwiat, K., 2003. Modeling the spread of active worms. In: Proceedings of the IEEE Symposium on Security and Privacy, pp. 1890–1900.

Cheung, S., Crawford, R., Dilger, M., Frank, J., Hoagland, J., Levitt, K., Rowe, J., Staniford-Chen, S., Yip, R., Zerkle, D., 1999. The design of GrIDS: a graph-based intrusion detection system. Computer Science Department, UC Davis, Report No. CSE-99-2.

Christensen, R., 1964. Foundations of Inductive Reasoning. Entropy Ltd., Lincoln, MA.

Cisco Systems, 2005. Implementing network admission control phase one configuration and deployment. Cisco white paper OL-7079-01.

Cowan, C., Pu, C., Maier, D., Walpole, J., Bakke, P., Beattie, S., Grier, A., Wagle, P., Zhang, Q., Hinton, H., 1998. StackGuard: automatic adaptive detection and prevention of buffer-overflow attacks. In: Proceedings of the Seventh USENIX Security Conference, pp. 63–78.

Ellis, D., 2003. Worm anatomy and model. In: Proceedings of the ACM Worm'03, pp. 42–50.

Eustice, K., Kleinrock, L., Markstrum, S., Popek, G., Ramakrishna, V., Reiher, P., 2004. Securing Nomads: the case for quarantine, examination and decontamination. In: Proceedings of the ACM New Security Paradigms Workshop, pp. 123–128.

Kenzle, D.M., Elder, M.C., 2003. Recent worms: a survey and trends. In: Proceedings of the ACM Worm'03, pp. 1–10.

Kephart, J.O., White, R.S., 1991. Directed-graph epidemiological models of computer virus prevalence. In: Proceedings of the IEEE Symposium on Security and Privacy, pp. 343–359.

Kephart, J.O., White, R.S., 1993. Measuring and modeling computer virus prevalence. In: Proceedings of the IEEE Symposium on Security and Privacy, pp. 2–14.

Kim, C.J., 1997. An algorithmic approach for fuzzy inference. IEEE Transactions on Fuzzy Systems 5 (4), 585–598.

Kim, C.J., Russell, B.D., 1989. Classification of faults and switching events by inductive reasoning and expert system methodology. IEEE Transactions on Power Delivery 4 (3), 1631–1637.

Kim, C.J., Russell, B.D., 1993. Automatic generation of membership function and fuzzy rule using inductive reasoning. In: Proceedings of the Industrial Fuzzy Control and Intelligent Systems, pp. 93–96.

Misra, V., Wei, B.G., Towsley, D., 2000. Fluid based analysis of a network of AQM routers supporting TCP flows with an application to RED. In: Proceedings of the ACM SIGCOMM, pp. 151–160.

Moore, D., Shannon, C., 2002. Code-Red: a case study on the spread and victims of an Internet worm. In: Proceedings of the ACM SIGCOMM Internet Measurement Workshop, pp. 273–284.

Moore, D., Shannon, C., Voelker, G., Savage, S., 2003. Internet quarantine: requirements for containing self-propagating code. In: Proceedings of the IEEE INFOCOM Conference, pp. 1901–1910.

Necula, G.C., 1997. Proof-carrying code. In: Proceedings of the 24th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 106–119.

Ogata, K., 1997. Modern Control Engineering, third ed. Prentice-Hall, Englewood Cliffs, NJ.

Parekh, S., Gandhi, N., Hellerstein, L., Tilbury, D.M., Bigus, J.P., 2002. Using control theory to achieve service level objectives in performance management. In: Proceedings of the IEEE/IFIP Symposium, pp. 841–854.

Phillips, C.L., Harbor, R.D., 1996. Feedback Control Systems, third ed. Prentice-Hall, Englewood Cliffs, NJ.

Ross, J.T., 1997. Fuzzy Logic with Engineering Applications. McGraw-Hill, Singapore.

Staniford, S., Paxson, V., Weaver, N., 2002. How to own the Internet in your spare time. In: Proceedings of the 11th USENIX Security Symposium, pp. 149–167.

Toth, T., Kruegel, C., 2002. Connection-history based anomaly detection. In: Proceedings of the IEEE Workshop on Information Assurance and Security, pp. 30–35.

Wagner, D., Foster, J.S., Brewer, E.A., Aiken, A., 2000. A first step towards automated detection of buffer overrun vulnerabilities. In: Proceedings of the Network and Distributed System Security Symposium, pp. 3–17.

Wang, C., Knight, J., Elder, M., 2000. On computer viral infection and the effect of immunization. In: Proceedings of the 16th Annual Computer Security Applications Conference, pp. 246–256.

Weaver, N., Paxson, V., Staniford, S., Cunningham, R., 2003. A taxonomy of computer worms. In: Proceedings of the ACM Worm'03, pp. 11–18.

Wegner, A., Dubendorfer, T., Plattner, B., Hiestand, R., 2003. Experiences with worm propagation simulations. In: Proceedings of the ACM Worm'03, pp. 34–41.

Williamson, M., 2002. Throttling viruses: restricting propagation to defeat malicious mobile code. HP Laboratories Bristol Report No. HPL-2002-172.

Yuji, U., Derek, S., 2004. Windows local security authority service remote buffer overflow. eEye Digital Security Published Advisories.

Zadeh, L.A., 1994. Fuzzy logic, neural networks, and soft computing. Communications of the ACM 37 (3), 77–84.

Zou, C.C., Gong, W., Towsley, D., 2002. Code Red worm propagation modeling and analysis. In: Proceedings of the ACM CCS'02, pp. 138–147.