
Shrew Attack in Cloud Data Center Networks

Zhenqian Feng, Bing Bai, Baokang Zhao, Jinshu Su

Computer Department, National University of Defense Technology

Changsha, China

{fengzhenqian1983, nudt.bb}@gmail.com, {bkzhao,sjs}@nudt.edu.cn

Abstract— Multi-tenancy and lack of network performance isolation among tenants together make the public cloud vulnerable to attacks. This paper studies one of the potential attacks, namely, the low-rate denial-of-service (DoS) attack (or Shrew attack for short), in cloud data center networks (DCNs). To explore the feasibility of launching a Shrew attack from the perspective of a normal external tenant, we first leverage a loss-based probe to identify the locations and capabilities of the underlying bottlenecks, and then make use of the low-latency feature of DCNs to synchronize the participating attack flows. Moreover, we quantitatively analyze the necessary and sufficient traffic for an effective attack. Using a combination of analytical modeling and extensive experiments, we demonstrate that a tenant could initiate an efficient Shrew attack with extremely little traffic, e.g., milliseconds-long bursts, which are very difficult for switching boxes and counter-DoS mechanisms to detect. We identify that both a conventional protocol assumption and the new features of DCNs enable such a Shrew attack, and that new techniques are required to thwart it in DCNs.

Keywords-Data Center Network; Denial of Service; TCP

I. INTRODUCTION

Recent cloud computing provides elastic services, e.g., Amazon EC2 and Microsoft Azure, to public tenants. In this service model, tenants can rent or release VMs (Virtual Machines) and upload images as needed. While the computing resources on servers are well allocated and isolated through VMs, the network resources are directly shared among different tenants [13], [4]. The lack of efficient bandwidth guarantees and traffic isolation makes such cloud services vulnerable, since malicious tenants may launch attacks towards co-resident tenants in the same cloud data center.

As one of the formidable attacks, Shrew attacks [9], [14],

[15] are widely studied on the Internet. However, the feasibility

of launching such attacks in a high-bandwidth and low-latency

environment like DCNs is still unknown. We firmly believe

that a quantitative understanding of the prerequisites and the

resultant damages of Shrew attacks would be essential and

helpful to the architecture and counter-measure designers. To

the best of our knowledge, we are the first to explore it.

There are both obstacles and conveniences in launching Shrew

attacks in DCNs. For a malicious tenant, there are at least two

obstacles to overcome before attacking. (1) DCN architectures

feature high aggregate bandwidth and aim for non-blocking

communication between each node pair. In this spirit, there is

no explicit persistent bottleneck link in DCNs. For example,

in Fattree [1] and BCube [3], the output rates of network

interface cards are universally not higher than those of the

intermediate switching ports. Therefore, the attacker should

identify or artificially create temporary bottlenecks. (2) For

security and management considerations, the VM placement

schemes and underlying flow paths are concealed from tenants.

The attacker should probe the potential targets with only the

rented VMs.

Nevertheless, the low-latency nature of DCNs paves the way

for Shrew attacks. For instance, the average RTT (Round-Trip

Time) in DCNs is extremely short, e.g., 100s of microseconds.

This makes it easy to synchronize and aggregate attack traffic into large flows, without the need for complicated flow-coordination mechanisms [15]. Further, large flows help identify and maintain the temporary bottlenecks.

Based upon the above discussions, we organize the Shrew

attack with given VMs as follows.

(a) Probe the potential bottlenecks. Flows traversing the same bottleneck experience the same level of congestion, and thus very similar loss rates, regardless of the background traffic. Based on this, we can group the attack flows by their loss rates.

(b) Attack the bottlenecks group by group. After identifying the potential bottlenecks, one VM acts as the controller and issues requests to synchronize multiple flows from a certain group to congest the bottlenecks. The necessary and sufficient burst lengths are specified in the requests. To predict the burst length for each group, we build an analytical model.

The remainder of the paper is organized as follows. Section II introduces the probe method to detect the potential bottlenecks. The Shrew attack is described and modeled in Section III. Sections IV and V present the evaluation and potential counter-measures, respectively. Related work and a brief conclusion are presented at the end.

II. PROBE THE BOTTLENECKS

Identifying the bottlenecks is an indispensable step towards

a meaningful attack in a “blackbox” network. Two key factors

might blind the attack, namely, where the bottleneck is located and how much traffic is needed to congest it. We discuss them one by one in the following subsections.

A. Location of Bottleneck

Figure 1 exemplifies two cases indicating where the bottleneck links are located in DCNs: (a) shows the bottleneck uplink in an oversubscribed tree (commonly seen in practical DCNs), and (b) shows the bottleneck downlink in a fattree.

Fig. 1. The locations of bottleneck links

Such bottleneck links are potential attack targets, since

when they get congested all the traversing flows would drop

packets and encounter a potential timeout. There are many proposals for detecting shared bottlenecks on the Internet [7], [12], [8]; however, they all share the assumption that a dominant, persistent bottleneck exists. In DCNs, by contrast, the bottlenecks are dynamic and transient, if they exist at all. In this spirit, utilities based upon this assumption are questionable when applied in DCNs.

Latency in DCNs cannot easily be leveraged to infer hop counts. In most cases, the base RTT is about 100s of microseconds, while the queueing delay and

unknown endpoint scheduling cost may range from 100s of

microseconds to several milliseconds.

Bottlenecks are dynamically and transiently created by the traversing flows; therefore, the problem of identifying their locations is equivalent to grouping the attack flows that share the same bottleneck. We adopt a loss-based method to group the sending VMs by their flow paths to the destination. The rationale behind this choice is two-fold. First, flows traversing the same bottleneck experience the same level of congestion, and thus very similar loss rates, regardless of the background traffic. This observation largely helps us determine which VMs are co-located under the same ToR switch or share the longest flow paths. Moreover, the loss rate also increases monotonically with the logical hop count of the underlying flow paths, given that the aggregate flow rates are high enough to congest the intermediate switch buffers. This observation further makes it clear which VM groups are farther away from the destination than others. After grouping all the rented VMs, the tenant can schedule flows issued from the same group to attack their shared bottleneck links.
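To make the grouping step concrete, a minimal sketch follows; it is our illustration rather than the paper's measurement code, and it assumes the per-flow loss rates of one probe round have already been collected (e.g., with iperf). The tolerance threshold is a hypothetical parameter.

```python
# Minimal sketch: group sending VMs whose measured loss rates are close,
# i.e., VMs that presumably share the same bottleneck. The loss rates are
# assumed to have been measured externally; the tolerance is illustrative.

def group_by_loss_rate(loss_rates, tolerance=0.03):
    """loss_rates: dict mapping VM id -> loss rate observed in one probe round.
    VMs whose loss rates differ by less than `tolerance` end up in one group."""
    groups = []  # each entry: {"rate": representative rate, "vms": [...]}
    for vm, rate in sorted(loss_rates.items(), key=lambda kv: kv[1]):
        for group in groups:
            if abs(group["rate"] - rate) < tolerance:
                group["vms"].append(vm)
                break
        else:
            groups.append({"rate": rate, "vms": [vm]})
    return [g["vms"] for g in groups]

# Hypothetical probe round (VM id -> loss rate):
print(group_by_loss_rate({"a1": 0.21, "a2": 0.20,
                          "b1": 0.42, "b2": 0.41,
                          "c1": 0.67}))
# -> [['a2', 'a1'], ['b2', 'b1'], ['c1']]
```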

B. Capacity of Bottleneck

Current DCNs are generally equipped with high-bandwidth

switches and links, such as 1Gbps or even 10Gbps. For

simplicity, we just assume that 1Gbps is the bottleneck rate.

The buffer size that a switch port can utilize represents

another aspect of capacity, namely, the capacity of handling

the burst traffic. To infer the maximum buffer room each port

can occupy, we again leverage the low-latency or easy-sync

feature of DCNs. We first randomly select two VMs from a

group which shares the same bottleneck, then simultaneously

generate two flows to the same receiver. The receiver records

the duration from the flow start to when packet losses appear.

This duration is expected as follows:

t = B / (Rinput + ε − C)    (1)

where B denotes the buffer room that a port could utilize,

Rinput is the total input rate of our specified flows, ε represents

the rate of the background traffic and C the link rate. As the

background traffic is unknown and changing with time, the

resultant durations may vary a lot. To filter the impact from the

unknown traffic, we choose to probe multiple times and select

the maximum value as the final duration, since the duration decreases as the background traffic rate increases. With the duration, flow rates, and link rate known, it is easy to deduce the per-port buffer room. As the widely deployed switches are commodity, shallow-buffered devices, each port can only occupy a small buffer room. Given that the expected buffer room is not that large, the above filtering is enough for us to approach it.
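As an illustration of the deduction based on equation (1), the sketch below computes the per-port buffer size from measured loss-onset durations; it is our own sketch, with the probe durations assumed to be measured by the receiver and the background rate ε approximated as zero for the maximum-duration probe.

```python
# Sketch of the buffer deduction from equation (1): t = B / (Rinput + eps - C).
# Durations are measured by the receiver (seconds from flow start to first
# loss). Since eps (background rate) is unknown, we take the maximum duration,
# which corresponds to the smallest background load, and approximate eps ~ 0.

PACKET_BITS = 1500 * 8   # bits in a full-size (1.5KB) packet
LINK_RATE = 1e9          # C: bottleneck link rate (bit/s), 1Gbps here

def deduce_buffer_packets(durations, input_rate):
    """durations: measured times (s) until packet loss, one per probe round.
    input_rate: Rinput, aggregate rate (bit/s) of our synchronized probe flows.
    Returns the deduced per-port buffer room in full-size packets."""
    t = max(durations)                          # filter out background traffic
    buffer_bits = t * (input_rate - LINK_RATE)  # B = t * (Rinput + eps - C), eps ~ 0
    return buffer_bits / PACKET_BITS

# Hypothetical example: two synchronized ~940Mbps flows, first loss after ~1.4ms.
print(round(deduce_buffer_packets([0.0011, 0.0014, 0.0013], 2 * 0.94e9)))  # ~103
```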

III. SHREW ATTACK

The Shrew attack works as follows. At first, the receiver

issues requests to each sender of a certain group. On receiving

a request, each sender periodically generates on-off traffic with the burst length specified in the request.
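For illustration, a simplified sender loop is sketched below (the actual experiments adapt iperf); the request fields and addresses are hypothetical, and the sender simply blasts UDP during each on-period.

```python
# Simplified sketch of one on-off (square-wave) attack sender. The real tool
# in the paper is an adapted iperf; this is only an illustration. The burst
# length and idle period are assumed to come from the controller's request.

import socket
import time

def run_sender(dst_ip, dst_port, burst_len_s, idle_s, payload_bytes=1400):
    """Send UDP as fast as possible for burst_len_s, stay idle for idle_s, repeat."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    packet = b"\x00" * payload_bytes
    while True:
        deadline = time.monotonic() + burst_len_s
        while time.monotonic() < deadline:        # on-period: saturate the path
            sock.sendto(packet, (dst_ip, dst_port))
        time.sleep(idle_s)                        # off-period: a multiple of the RTO

# Hypothetical parameters: 8ms bursts separated by 300ms idle periods.
# run_sender("10.0.0.1", 5001, burst_len_s=0.008, idle_s=0.3)
```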

The key to launching an efficient attack is determining the necessary and sufficient burst length, since an overly long burst would risk exposing the attacker while an overly short one may

not bring down the victim’s performance thoroughly. Previous

work [9] claimed that the burst length should be larger than

RTTs of all the victim TCP flows, namely,

Tbst ≥ RTTi,  i = 1, ..., n    (2)

That’s true for the Internet case, but not sufficient for the DCN

case. We begin our model with a single victim TCP flow and

then figure out the necessary and sufficient burst length to

suppress the aggregate traversing flows.

A. Notations and Assumptions

Fig. 2. Periodic attack flows

Figure 2 illustrates the periodic flows that we leverage to

launch the Shrew attack. Between every two successive active

bursts, there is an idle duration which usually corresponds to

a multiple of TCP’s retransmission timeout values.


TABLE I
NOTATIONS FOR THE MODEL

B      Switch buffer size, counted in packets
τ      Time to transmit a packet
N      Number of DoS attack flows
δ      Normalized rate of attack flows, δ ∈ [0, 1]
Tbst   Burst length of a persistent attack
λ      Inter-arrival time of the attack flows
R0     Basic RTT of the victim TCP without queueing
W      Congestion window of the victim TCP

Table I introduces the related notations for our model. We assume the victim TCP has already entered its steady state before

the attack, and a round corresponds to the round-trip, namely,

the duration for the forefront packet of a full TCP window to

arrive at the switch twice successively. In the following, we

let k denote the round number. For convenience, we further

presume all packets are of the same size (e.g., 1.5KB in our

testbed).

B. Analytical Model for Burst Length

We decompose Tbst into two parts: the time to saturate the switch buffer (T1) and the time to endure enough packet losses for a timeout (T2). For the former, as more Shrew packets pile into the buffer, both the RTT and the queue length increase until the switch buffer overflows; they evolve as follows:

R(k) = R0 + Q(k) × τ    (3)

Q(k) = Q(k − 1) + W + (φ − 1) × R(k − 1) / τ    (4)

where R(k) and Q(k) are the RTT and queue length in the

kth round respectively, and φ indicates the effective aggregate

rate of Shrew flows,

φ = (N × Tbst × δ) / (Tbst + (N − 1) × λ)    (5)

Note that Q(k) accounts for four parts: the cumulative queue length from the last round (Q(k − 1)), packets from the victim TCP connection (W), packets arriving from the attack flows, and packets draining from the output port, where the last two together contribute (φ − 1) × R(k − 1)/τ.

Also note that the RTT increases sharply with the queue length

and eventually is dominated by the queueing delay, which is

the fundamental difference between DCNs and the Internet.

Letting Q(k′) = B, one can find the round in which the buffer begins to overflow. We then get:

T1 = Σ_{i∈[1,k′]} R(i),   where k′ = Q⁻¹(B)    (6)

For the latter, we simply assume that an additional round

after the buffer gets full would cause enough packet losses for

TCP to timeout, hence:

T2 = R(k′) = R0 + B × τ    (7)

Note that the timeout conditions are dependent on the TCP

variants and the schemes deployed for switch buffer manage-

ment, which would complicate our discussion. We will show later that this approximation introduces only negligible inaccuracy in the burst length.

Thus we can conclude that the time to launch an efficient

attack is Tbst ≥ T1 + T2 by combining (6) and (7).

The above system of equations serves as the generic model

of the burst length in a Shrew attack. Unfortunately, it is hard to give a closed-form solution, as the equations are recursively coupled. To solve this, we leverage experimental observations to approach it. As DCNs feature low latency and it is easy to synchronize multiple attack flows, we simply ignore the inter-arrival times, namely λ, in our model. Therefore,

φ = (N × Tbst × δ) / (Tbst + (N − 1) × λ) ≈ N × δ    (8)

The rationale for this approximation is that the inter-

arrival times (λ) are often within 100s of microseconds, while

the burst lengths (Tbst) are generally at the millisecond level.
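Using the approximation in (8), the model can be evaluated numerically; the sketch below is our illustration of equations (3), (4), (6), and (7) with the notation of Table I, and the parameter values in the example are only illustrative.

```python
# Numerical evaluation of the burst-length model (equations (3)-(8)).
# Notation follows Table I. Parameter values below are illustrative only:
# 1Gbps link (tau = 12 us per 1.5KB packet), 113-packet buffer.

def burst_length(B, W, R0, N, delta, tau):
    """Return (T1, T2, Tbst) with phi ~= N * delta (equation (8))."""
    phi = N * delta
    Q, R, T1 = 0.0, R0, 0.0
    while Q < B:
        dQ = W + (phi - 1) * R / tau       # eq. (4): net queue growth per round
        if dQ <= 0:
            raise ValueError("flows too weak to overflow the buffer")
        Q += dQ
        R = R0 + Q * tau                   # eq. (3): RTT grows with the queue
        T1 += R                            # eq. (6): sum the per-round RTTs
    T2 = R0 + B * tau                      # eq. (7): one more round of losses
    return T1, T2, T1 + T2

tau = 1500 * 8 / 1e9                       # ~12 us to transmit one packet at 1Gbps
T1, T2, Tbst = burst_length(B=113, W=20, R0=200e-6, N=1, delta=0.94, tau=tau)
print(f"T1={T1*1e3:.1f}ms T2={T2*1e3:.1f}ms Tbst={Tbst*1e3:.1f}ms")
```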

IV. EVALUATION

We set up a testbed consisting of 24 servers and several Quanta LB4G 48-port Gigabit Ethernet switches, each of which has a 4MB buffer per chip shared among 24 ports. Each server has an Intel Xeon 2.2GHz dual-core CPU, 32GB DRAM and a Broadcom BCM5709C NetXtreme II Gigabit Ethernet NIC. The operating system is Windows Server 2008 R2 Enterprise 64bit. All the switches adopt the first-in-first-out, tail-drop scheme by default. All links work at 1Gbps.

We built the topology of our testbed as a multi-level tree, as

illustrated in Figure 3. Two VMs are set up on each server. We randomly choose a VM from one of the servers as the receiver (R

in Figure 3), and some or all of the others as senders. Servers

in each group (A∼D) are connected to the same ToR switch.

Fig. 3. Testbed: a three-level tree

We adapt the open-source measurement tool iperf to gen-

erate traffic. All sending VMs are initially set up as listeners.

The receiver acts as the global controller and issues back-to-

back requests to the sending VMs, where each request contains

information such as burst length, flow rate and packet size.

On receiving such requests, the sending VMs generate traffic

accordingly. Note that since broadcast service may not be provided in practical DCNs, we use back-to-back unicast

requests instead of broadcast ones.
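A minimal sketch of the controller side is shown below; it is our illustration (the experiments use an adapted iperf), and the request format, control port, and addresses are hypothetical. Because the intra-DCN RTT is only 100s of microseconds, issuing the unicast requests back-to-back already starts the senders' bursts nearly simultaneously.

```python
# Sketch of the controller (the receiver VM) issuing back-to-back unicast
# requests to the listening sender VMs of one group. The JSON request format
# and the control port are hypothetical; the real tool is an adapted iperf.

import json
import socket

def issue_requests(sender_ips, burst_len_s, rate_mbps, packet_size,
                   control_port=6000):
    request = json.dumps({"burst_len_s": burst_len_s,
                          "rate_mbps": rate_mbps,
                          "packet_size": packet_size}).encode()
    for ip in sender_ips:                 # back-to-back unicasts, no broadcast
        with socket.create_connection((ip, control_port), timeout=1.0) as conn:
            conn.sendall(request)

# Hypothetical example: synchronize three senders of one group for 8ms bursts.
# issue_requests(["10.0.1.2", "10.0.1.3", "10.0.1.4"],
#                burst_len_s=0.008, rate_mbps=940, packet_size=1500)
```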

A. Group VMs by Loss Rate

We randomly choose three VMs from each group (A∼D) and initiate a flow from each VM to the receiver. In each round, we increase the flow rate by 100Mbps until the resultant loss rates can distinguish the remaining flows. Meanwhile, there are other services running on all the servers, so the network natively carries background traffic.

TABLE II
GROUPING VMS BY LOSS RATES

VM ID   Rflow (Mbps)
        100    200    300    400
v01     0.21   0.66   0.67
v03     0.21   0.66   0.67
v04     0.21   0.66   0.67
v08     0.20   0.42
v09     0.20   0.42
v12     0.20   0.42
v13     0.21   0.69   0.83   0.78
v15     0.21   0.67   0.82   0.78
v17     0.21   0.67   0.83   0.78
v19     0.21   0.67   0.83   0.75
v22     0.21   0.67   0.83   0.75
v23     0.21   0.67   0.82   0.75

Table II shows the loss rates over four rounds with Rflow ranging from 100Mbps to 400Mbps. When Rflow equals 100Mbps, the downlink of switch S2, which is directly connected to server R, becomes the bottleneck; all other links are not congested, so all flows see similar loss rates. As Rflow increases, flows from the groups closer to the receiver experience much lower loss rates, because they encounter less congestion than the farther away ones. For instance, when Rflow is 200Mbps, flows from VMs belonging to group B have the lowest loss rates (0.42). Flows from the last two groups (C and D) show a fuzzy but still visible difference in their loss rates. The similarity between their loss rates stems from the symmetric locations of the source VMs and the same specified flow rates, while the background traffic makes their loss rates slightly dissimilar. Even if they shared the same loss rates, e.g., without interference from the background traffic, it would do little harm to our Shrew attack, as all these flows have similar attack power.

B. Per-Port Buffer Utilization

After grouping the VMs, we randomly choose one group

to deduce the buffer room that each port could utilize. For

example, we select three VMs on different servers from group A to measure the buffer size of the output port between switches S1 and S5. Three flows are synchronized to congest the bottleneck link, and the receiver records the time until packet loss emerges. To verify that our approach is tolerant of background traffic, we randomly choose three servers from each group (12 flows in total) and generate all-to-all traffic ranging from 10Mbps to 100Mbps. Note that the aggregate rate of the background traffic that a port receives would vary from

110Mbps to 1.1Gbps.

We deduce the per-port buffer size according to equation (1),

and Figure 4 illustrates all the values, where each point denotes

a test. The theoretical value of the buffer size is about 113 full-size (∼1.5KB) packets (consistent with the 4MB chip buffer shared among 24 ports, i.e., roughly 170KB per port), while the maximum experimental value reaches 108. We thus conclude that even with the interference

from the background traffic, the deduced buffer room still

matches the expected value well.

Fig. 4. Deduced buffer size (in packets) that each port can utilize: experimental vs. theoretical values across test runs.

C. Burst Length

To verify our analytic model of the burst length, we set up a long-lived TCP connection as the victim flow and one or more periodic square-wave flows to attack it. For the victim TCP flow, we restrict its receive window to control the congestion window (W in our model). To suppress the victim flow, we vary the number of attack flows (N) to evaluate the necessary burst lengths (Tbst). As there is always unavoidable overhead, the attack flows can only achieve about 940Mbps instead of 1Gbps; thus we set δ to 0.94 for the analytic model.

Fig. 5. Tbst vs. number of packets per round of the victim TCP: burst length (ms) of the attack flow for Ndos = 1, 2, 5 (model vs. experiment).

Figure 5 shows the resultant burst lengths under two changing factors, namely, the number of attack flows and the rate of the victim TCP flow (indirectly implied by the number of packets per round). As either of these two factors increases, the necessary burst length to suppress the victim TCP flow decreases, as expected. This is easy to explain: as more packets per round traverse the bottleneck link, the switch buffer overflows much faster. We are also surprised to find that it is so easy to launch a Shrew attack with extremely short burst traffic, e.g., less than 10ms even for the worst case.

Moreover, if the idle period is specified as 300ms (the RTO


value for Microsoft Windows OS), the long-term average rate

of the attack flows would be only about 30Mbps (e.g., a 10ms burst at 940Mbps repeated every 310ms).

V. DISCUSSION ON COUNTER-MEASURES

The Shrew attack can be launched with extremely short periodic bursts of traffic, which cannot be easily distinguished from normal traffic by conventional mechanisms. To thwart such an attack, attention should be paid to its root causes and the preconditions for launching it.

A. TCP Enhancement

One of the root causes of the Shrew attack lies in the mismatch between the high retransmission timeout bound and the low latency in DCNs. The straightforward counter-measure is to lower the retransmission timer so as to achieve a much faster recovery, such as the proposal against TCP Incast [5] where a value of 1ms or shorter is used. However, an attacker could also shorten the idle period of the attack traffic to match the renewed retransmission timeout. Moreover, such a modification incurs additional cost.

B. Rate Limiting

There’s an implied assumption that the attack flows can

make full use of the link capacity. Generally this is true, as

current DCNs provision best-effort services so as to achieve

a better utilization. However, as more attentions are paid to

the network performance isolation [4], [13], there’s a trend to

limit the flow rates in various ways. We claim that it only

partly mitigates the damages of the attack, since a determined

tenant could rent more VMs. More importantly, rate limiting

itself trades the network utilization for performance isolation.

C. Random Path Selecting

As TCP is sensitive to packet reordering, current path-selection schemes in DCNs defer to per-flow scheduling [2]. If the flow path could be randomly chosen among all the candidate (or equivalent) paths at a much finer granularity, the attack would be difficult to control. The flowlet [6] has been proposed as a good tradeoff between load balancing and packet reordering. However, the performance of flowlet switching depends largely on the inter-flowlet spacing. As the end-to-end latency itself is very low in DCNs, its applicability there is hard to predict.

VI. RELATED WORK

The Shrew attack is a long-standing concern on the Internet [9], [15]. To the best of our knowledge, we are the first to explore it in DCNs. While many counter-measures [14] exist for addressing it, we claim that the new features of DCNs call for new techniques.

There are a number of works on shared path or bottleneck identification on the Internet [7], [12]. As the Internet environment is heterogeneous and complicated, the challenge lies in how to efficiently capture the correlations between shared flows. For instance, [12] proposes loss and delay correlation tests among flow pairs to determine shared bottlenecks. Katabi [7] computes the correlation among a set of flows at the receiver through an entropy function. In DCNs, however, no explicit bottlenecks exist, so the correlations of shared flows are hard to retrieve.

Only a few works focus on detecting the properties of DCNs. Liu [10] studies similar problems and presents the idea of distinguishing the distances of VMs by the received bandwidth, based upon the observation that farther away flows experience more bandwidth competition and splitting. That may be true in a clean network, but not in noisy ones. Ristenpart et al. [11] experimentally verify the feasibility of stealing information, given that the victim's VM is co-located with the malicious tenants' VMs. Their work is complementary to ours and could help identify the potential victims.

VII. CONCLUSIONS

The Shrew attack leverages the low-latency feature of DCNs to congest the bottleneck within milliseconds. As it takes effect in an extremely short period, current solutions either only partly mitigate the impact or make an inevitable tradeoff between performance isolation and resource utilization. A thorough counter-measure is left as our future work.

ACKNOWLEDGEMENT

This research project was partially or fully sponsored by the National 863 Development Plan of China under Grant 2008AA01A325, the National Grand Fundamental Research 973 Program of China under Grant 2009CB320503, the Network Technology Innovation Team of the Ministry of Education, and the Network Technology Innovation Team of Hunan Province.

REFERENCES

[1] M. Al-Fares, A. Loukissas, and A. Vahdat. A scalable, commodity data center network architecture. In Proc. Sigcomm, 2008.
[2] M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat. Hedera: Dynamic flow scheduling for data center networks. In Proc. NSDI, 2010.
[3] C. Guo et al. BCube: A high performance, server-centric network architecture for modular data centers. In Proc. Sigcomm, 2009.
[4] C. Guo et al. SecondNet: A data center network virtualization architecture with bandwidth guarantees. In Proc. CoNEXT, 2010.
[5] V. Vasudevan et al. Safe and effective fine-grained TCP retransmissions for datacenter communication. In Proc. Sigcomm, 2009.
[6] S. Kandula, D. Katabi, S. Sinha, and A. Berger. Dynamic load balancing without packet reordering. In CCR, 2007.
[7] D. Katabi, I. Bazzi, and X. Yang. A passive approach for detecting shared bottlenecks. In Proc. ICNP, 2001.
[8] M. Kim, T. Kim, Y. Shin, S. Lam, and E. Powers. Scalable clustering of Internet paths by shared congestion. In Proc. Infocom, 2006.
[9] A. Kuzmanovic and E. W. Knightly. Low-rate TCP-targeted denial of service attacks (the Shrew vs. the mice and elephants). In Proc. Sigcomm, 2003.
[10] H. Liu. A new form of DoS attack in a cloud and its avoidance mechanism. In Proc. CCSW, 2011.
[11] T. Ristenpart, E. Tromer, H. Shacham, and S. Savage. Hey, you, get off of my cloud: Exploring information leakage in third-party compute clouds. In Proc. CCS, 2009.
[12] D. Rubenstein, J. Kurose, and D. Towsley. Detecting shared congestion of flows via end-to-end measurement. In IEEE/ACM ToN, 2002.
[13] A. Shieh, S. Kandula, A. Greenberg, C. Kim, and B. Saha. Seawall: Performance isolation for cloud datacenter networks. 2011.
[14] H. Sun, J.C.S. Lui, and D.K.Y. Yau. Defending against low-rate TCP attacks: Dynamic detection and protection. In Proc. ICNP, 2004.
[15] Y. Zhang, Z.M. Mao, and J. Wang. Low-rate TCP-targeted DoS attack disrupts Internet routing. In Proc. NDSS, 2007.
