
Computer Communications 34 (2011) 1788–1797


TCP-aware network coding with opportunistic scheduling in wireless mobile ad hoc networks

Tebatso Nage, F. Richard Yu *, Marc St-Hilaire
Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, Canada


Article history: Received 21 July 2010; Received in revised form 6 April 2011; Accepted 8 April 2011; Available online 30 April 2011

Keywords: Network coding; Opportunistic scheduling; TCP performance; Mobile ad hoc networks; Reception report

0140-3664/$ - see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.comcom.2011.04.001

* Corresponding author. Tel.: +1 613 520 2600 x2978. E-mail addresses: [email protected] (T. Nage), [email protected] (F.R. Yu), [email protected] (M. St-Hilaire).

Abstract

This paper presents a scheme that employs TCP-aware network coding with opportunistic scheduling to enhance TCP throughput in wireless mobile ad hoc networks. Specifically, it considers a TCP parameter, the congestion window size, and wireless channel conditions simultaneously to improve TCP throughput performance. The scheme is evaluated using ns-2 simulations in different scenarios. The results show that the proposed scheme gives approximately 35% throughput improvement in a high-mobility environment and about 33% throughput increase in a no- or low-mobility environment compared to traditional network coding with opportunistic scheduling. This paper also proposes a new adaptive-W (i.e., adaptive Waiting time) scheme whose objective is to adaptively control the waiting time of overheard packets stored in a buffer, achieving a tradeoff between throughput and overhead.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Network coding (NC) is a new transmission paradigm pioneered by Ahlswede et al. [1]. In recent years, it has generated huge research interest, especially in wireless communications. The main attraction of this novel concept is that it is bandwidth efficient and achieves high throughput gains [2–7]. Experiments [8] have shown that network coding makes it possible to perform peer-to-peer live multimedia streaming with finer granularity. Network coding can be regarded as an extension of traditional routing, in which intermediate nodes store and forward packets. The basic idea of network coding is that packets are intelligently mixed (or coded) together at an intermediate node into one coded packet, which is then broadcast. This process not only reduces the number of transmissions in the network but also makes each transmission carry more information. Furthermore, with network coding, more bandwidth becomes available for new data to be transmitted, resulting in higher network throughput [9].

Network coding comes in different forms. Some studies consider what is referred to as random network coding [3,4,8,9]. There are also physical-layer network coding [10,11] and XOR network coding [9,12,13]. In random network coding, encoding coefficients are randomly chosen from a finite field, and a linear combination of packets is then performed to generate a coded packet. At the receiver, decoding becomes possible if and only if the receiver can build a full-rank transfer matrix from the coefficients extracted from received coded packets [8]. This means that if there are K encoding coefficients in a coded packet, the receiver will have to receive at least K independent versions of the coded packet for successful decoding. Physical-layer network coding, on the other hand, exploits the simultaneous reception of electromagnetic (EM) waves at the air interface [10], where the signals are aggregated. This approach achieves higher network throughput than random network coding. However, it introduces interference at the node's air interface and hence high design complexity. The last form of network coding, which is the one considered in this paper, employs the binary XOR operation to code packets. It can be regarded as a special case of random network coding where the Galois field is of size 2. XOR network coding is simple and cheap because the same operation is used both at the sender and the receiver. Also, its implementation is less complex than that of other random network coding schemes.

Existing network coding schemes, such as COPE [6], require the exchange of information among neighboring nodes in order to correctly encode and decode data packets [14]. However, this leads to high packet overhead and degrades system performance by introducing additional delay, congestion, energy consumption and inefficient bandwidth utilization. Several approaches have been proposed to combat this problem. In [15], Chou et al. proposed a buffer model which employs traditional generation-based network coding to minimize packet overhead.

Fig. 1. XOR network coding [12].


Although some performance improvements were reported in [15], this approach is susceptible to the packet losses caused by flushing old generations in network coding.

In [4], Jin and Li designed an adaptive random network coding scheme for WiMAX by adapting the number of upstream nodes and dynamically adjusting the block size in response to channel conditions. Prasad et al. [14] proposed a new encoding strategy, XOR-TOP, which employs local topology to estimate the native (non-coded) packets available at neighboring nodes. They claim that their scheme, which does not employ information exchange among neighboring nodes, can always accurately identify coding opportunities from the local topology. However, wireless link conditions are unpredictable, so there is a chance of erroneously estimating the native packets available at neighboring nodes, especially in a high-mobility environment in which the local network topology changes frequently. Katti et al. [6] employed a bit-map format in the packet's reception report. Even though the representation of reception reports in their approach is compact and effective, their fixed-W (i.e., fixed Waiting time) scheme has shortcomings. For example, if a node has many neighboring nodes, more packets from different nodes are likely to be overheard, which could increase overhead.

Although network coding was only recently introduced in the literature, a substantial amount of research has been done to enhance its performance. However, to the best of our knowledge, no work reported in the literature considers Transmission Control Protocol (TCP) dynamics and opportunistic scheduling (OS) simultaneously for network coding in wireless mobile ad hoc networks.

Therefore, this paper presents a robust and resilient TCP-aware network coding scheme with opportunistic scheduling for wireless mobile ad hoc networks based on a cross-layer design approach. In addition, a new adaptive-W (i.e., adaptive Waiting time) scheme is proposed to enhance the basic XOR network coding scheme. The scheme we put forward adaptively controls packet waiting time in a local buffer to achieve a tradeoff between throughput and overhead.

The motivation behind this work is as follows. If TCP and wireless channel information are considered simultaneously in network coding when determining transmission data rates, throughput may be maximized in wireless mobile ad hoc networks. Furthermore, deploying a modified version of TCP would be costly; an efficient scheme which employs the current TCP variant (TCP Reno) as is, without any changes, could cut down deployment cost. Moreover, the information exchange required in network coding can degrade network performance, so an adaptive scheme for controlling overhead could maximize network throughput and provide efficient bandwidth utilization.

The contributions of this paper are as follows. First, we introduce and design a robust and resilient scheme which simultaneously considers TCP and channel information in traditional network coding to maximize TCP throughput in wireless mobile ad hoc networks. Second, we propose an adaptive-W scheme whose aim is to adaptively control packet waiting time in the buffer, called the packetPool, to achieve a tradeoff between throughput and overhead.

Simulation results show that in high-mobility environments, there is approximately 35% performance improvement when TCP-aware network coding is employed, compared to traditional network coding with opportunistic scheduling. When the adaptive-W scheme is used in the traditional network coding upon which the TCP-aware network coding scheme is built, significant gains are achieved in terms of bandwidth utilization and throughput.

The rest of the paper is organized as follows. Section 2 describes the system model, followed by TCP-aware network coding with opportunistic scheduling in Section 3. The adaptive-W scheme is discussed in Section 4, and Section 5 presents simulation results and discussion. Finally, conclusions and future work are given in Section 6.

2. System models

In this section, we first present some background information about network coding. Then, the wireless channel model is described, followed by the TCP model used.

2.1. Network coding

Network coding is performed at the data link layer. The following subsections describe how information is exchanged between the nodes, how coded packets are transmitted, how coding opportunities are found, and how channel and TCP information are used in the coding process.

2.1.1. Information exchange

In order to facilitate network coding, network nodes are made to snoop on all transmissions going on in the network. They capture and store overheard packets in their buffers (also known as the packetPool) for a particular time interval (e.g., 0.5 s) [6]. For example, consider the network shown in Fig. 1, taken from [12]. In this network, three packets are to be forwarded: P1 from A to C, P2 from C to A and P3 from E to D. It is assumed that each node can hear the communications of its neighbors; for example, node C is within the communication range of nodes B, D and E. Initially, each node transmits its packet to node B.


At the same time, neighboring nodes also overhear the transmission and store the information in their packetPool. As a result, after each transmission to node B, the network will be in the state shown at the bottom of Fig. 1. It is important to note that each node also adds the packet it was sending to its own packetPool. As will be described later, node B can then code packets P1, P2 and P3 into a single packet, and each recipient will still be able to decode its intended packet.
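To make the exchange in Fig. 1 concrete, the following minimal Python sketch (not taken from the paper; the packet contents and the helper name are purely illustrative) shows node B XOR-coding P1, P2 and P3 into one packet, and node A recovering P1 from the two packets already in its packetPool.

```python
def xor_packets(*packets):
    """Bitwise XOR of equal-length packets (real systems pad to a common length)."""
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

# Native packets of Fig. 1 (contents made up for illustration, all the same length).
P1 = b"packet-from-A-to-C"
P2 = b"packet-from-C-to-A"
P3 = b"packet-from-E-to-D"

# Node B codes the three native packets into a single transmission.
coded = xor_packets(P1, P2, P3)

# Node A already holds P2 (its own packet) and P3 (overheard from E) in its
# packetPool, so it cancels them out and recovers its intended packet P1.
assert xor_packets(coded, P2, P3) == P1
```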

When a node gets an opportunity to transmit its packet, it includes information about the packets it has overheard from the network in the XOR header of the outgoing packet. The XOR header, shown in Fig. 2, includes information about the following:

- Packets that have been encoded (i.e., information about the native packets coded together).
- Packets overheard from the network (i.e., the reception report).
- The current TCP sending rate of the flows from which the packets originated.

When neighboring nodes overhear this outgoing packet, they extract the reception report from its XOR header and update their local information.
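For illustration, the XOR header of Fig. 2 could be represented as sketched below. The field names and the Python representation are our own assumptions; only the sizes of the reception-report fields (one byte for the report count and five bytes per reported packet) come from the paper (Section 4.1).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReportEntry:
    """One overheard packet being reported (5 bytes per entry in Section 4.1)."""
    source_addr: int   # node the packet was overheard from
    packet_id: int     # identifier of the overheard native packet

@dataclass
class XorHeader:
    # IDs of the native packets coded together in this transmission.
    encoded_packet_ids: List[int] = field(default_factory=list)
    # Reception report: packets this node has overheard (1-byte count + 5 bytes each).
    reception_report: List[ReportEntry] = field(default_factory=list)
    # Current TCP sending rate (congestion window) of the flow the packet belongs to.
    tcp_window_size: int = 0

    def report_length_bits(self) -> int:
        """Length of the reception report, l_c = 8 + 40*N bits (Section 4.1)."""
        return 8 + 40 * len(self.reception_report)
```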

2.1.2. Transmitted coded packets

It is important to note that coded packets are transmitted not in broadcast mode but in unicast mode. However, as mentioned in Section 2.1.1, all nodes are set in promiscuous mode, which enables them to snoop on all transmissions in the network. Therefore, even though coded packets are transmitted in unicast mode, nodes that are not the intended recipients can still overhear the transmission. The main reason for choosing unicast transmission is that the intended recipient has to send back an acknowledgment, which is not the case for broadcast transmission. If no acknowledgment is received by the sender, a retransmission is carried out. This approach increases the chances of coded packets being successfully received by their intended recipients.

2.1.3. Searching for coding opportunities

Finding a set of packets to code is carried out in three phases. Phase one only employs the reception reports received from neighboring nodes. When a node has data to transmit, the scheduler first checks the reception reports received from neighboring nodes to determine which packets to code together.

Fig. 2. XOR header.

A coding opportunity is said to exist if: (1) there is a set of packets C = {P1, P2, ..., Pn}, with two or more packets heading to a different set of next hops H = {hop1, hop2, ..., hopn}; (2) every hop has the other n − 1 packets of set C in its local packetPool, so that it can decode successfully; and (3) none of the packets in set C was generated by the node carrying out the encoding. In Fig. 1, node B has found packets P1, P2 and P3 that can be coded together. Note that none of these packets was generated by node B. Also, no two or more packets are heading to the same next hop. Furthermore, each intended receiver has the other n − 1 packets of the set to be coded. Therefore, packets P1, P2 and P3 have all passed the first phase of network coding.
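A possible way to express this phase-one check in code is sketched below. The data layout (tuples of packet id, next hop and source, plus a dictionary of reception reports) is an assumption made for illustration, not the paper's implementation.

```python
def coding_opportunity(candidates, reception_reports, my_node_id):
    """Phase-one check for a coding opportunity.

    candidates:        list of (packet_id, next_hop, source_node) tuples forming set C
    reception_reports: dict mapping next_hop -> set of packet_ids reported to be in
                       that neighbor's packetPool
    """
    if len(candidates) < 2:
        return False                                  # need two or more packets

    next_hops = [hop for _, hop, _ in candidates]
    if len(set(next_hops)) != len(next_hops):
        return False                                  # (1) packets must head to different hops

    packet_ids = {pid for pid, _, _ in candidates}
    for pid, hop, _ in candidates:
        needed = packet_ids - {pid}                   # the n-1 packets this hop must hold
        if not needed <= reception_reports.get(hop, set()):
            return False                              # (2) hop could not decode

    if any(src == my_node_id for _, _, src in candidates):
        return False                                  # (3) none generated by the coding node

    return True

# Fig. 1 at node B: every recipient holds the other two native packets.
reports = {"A": {"P1", "P3"}, "C": {"P2", "P3"}, "D": {"P1", "P2"}}
print(coding_opportunity([("P1", "C", "A"), ("P2", "A", "C"), ("P3", "D", "E")],
                         reports, my_node_id="B"))    # True
```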

2.1.4. Channel and TCP information

In the second phase of network coding, packets that are in set C but have poor wireless links (links between the coding node and the destinations) are eliminated from the set. In the final phase, the node combines the remaining packets and chooses the best transmission data rate for the coded packet. The data rate is chosen on the basis of the channel and TCP information the node has. Note that each packet carries information about the current TCP sending rate of the flow it is coming from; this information is stored in the XOR header of the packet, as discussed in Section 2.3.1. The intermediate node therefore extracts this information from the XOR header of each packet to be coded and uses it in the decision-making process.

2.2. Wireless channel model

The wireless channel model used is the TwoRayGround model in a 1 km × 1 km area. We set the packet error rate (Pe) to 0.01. The number of MAC retransmissions (Nretrans) is set to 6. For simplicity of presentation and simulation, the TCP packet size is set to 140 bytes and there is no packet fragmentation. The packet size was determined from the required packet error rate, which involves the bit error rate (BER), frame error rate (Fe), retransmissions, signal-to-interference-plus-noise ratio, transmit power and other factors. As a result, the packet size is equal to the frame size. The transmit power (Ptr) is 1 W. Perfect channel estimation is assumed, and it is also assumed that the sender has already received the channel information from the receiver. Using (1)–(3) below, we can determine the required SINRs for the different modulation schemes shown in Table 1:

P_e = 1 - \left(1 - F_e^{\,N_{retrans}+1}\right)^{N_{fr}}, \qquad (1)

F_e = 1 - (1 - \mathrm{BER})^{L_{fr}}, \qquad (2)

\mathrm{BER} = K_1 \exp\!\left(\frac{-K_2\,\gamma\,P_{tr}}{2^{q}-1}\right). \qquad (3)

In the above equations, K1 and K2 are constellation- and coding-specific constants, Nfr is the number of frames, γ is the instantaneous received signal-to-noise ratio (SNR), Lfr is the length of the frame (in bits), and q is the number of bits representing each symbol in the constellation (see [16] for more details).
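As a sanity check of (1)–(3), the short script below plugs in the BPSK row of Table 1 and reproduces a packet error rate close to the 0.01 target; it assumes a 160-byte frame (the value used in the numerical analysis of Section 4.1) and unit transmit power.

```python
import math

def ber(K1, K2, sinr_db, q, P_tr=1.0):
    """Eq. (3): bit error rate at a given SINR for a scheme with q bits/symbol."""
    gamma = 10 ** (sinr_db / 10)                 # dB -> linear
    return K1 * math.exp(-K2 * gamma * P_tr / (2 ** q - 1))

def frame_error_rate(ber_value, L_fr_bits):
    """Eq. (2)."""
    return 1 - (1 - ber_value) ** L_fr_bits

def packet_error_rate(F_e, N_retrans=6, N_fr=1):
    """Eq. (1)."""
    return 1 - (1 - F_e ** (N_retrans + 1)) ** N_fr

# BPSK row of Table 1: K1 = 0.5, K2 = 1.5, required SINR = 6.549 dB, q = 1.
b = ber(0.5, 1.5, 6.549, q=1)                    # ~5.7e-4
Fe = frame_error_rate(b, L_fr_bits=160 * 8)      # ~0.52 for a 160-byte frame
Pe = packet_error_rate(Fe)                       # ~0.01, the target packet error rate
print(b, Fe, Pe)
```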

2.3. TCP model

TCP Reno, which is the most widely deployed version of the TCP protocol [17], is used in our simulations.

Table 1
Modulation schemes and SINRs.

Modulation scheme | Required SINR (dB) | K1  | K2
BPSK              | 6.549              | 0.5 | 1.5
QPSK              | 8.471              | 0.2 | 2.5
QAM16             | 10.989             | 0.2 | 7
QAM64             | 16.13              | 0.2 | 9


TCP Reno has three phases: (i) slow start, (ii) congestion avoidance and (iii) fast recovery. Even though this strategy is better than the one used in earlier TCP versions, it is inefficient in the sense that it can recover at most one dropped packet per round-trip time [18]. Its performance therefore deteriorates considerably in wireless networks where channel errors are frequent.

2.3.1. Accessing the TCP window size

Since a cross-layer approach is employed in the proposed solution, layers in the protocol stack have to communicate with each other. In particular, the data link layer has to be provided with the current TCP window size from the transport layer without changing the classical operation of TCP Reno. This is done by annotating the current TCP window size in the XOR header (see Fig. 2) of a TCP packet that has just been generated at the transport layer, before it is passed down the protocol stack. Upon arrival at the data link layer, the window size information is extracted from the XOR header and a local variable holding the current window size is updated before the packet is inserted into the interface queue. Note that some packets coming down the protocol stack from the network layer may not be generated by the node itself but only forwarded by it. Therefore, to ensure that the correct information is extracted, the source IP address of the packet is checked against that of the node; if it differs, the local variable is not updated. For simplicity of presentation and simulation, we assume that a node only generates one TCP session at a time. If there are multiple TCP sessions, the data link layer needs to keep information per TCP flow. Since we take a cross-layer design approach in this paper, we assume that obtaining and keeping information per TCP flow is not problematic in practice, and that the increase in complexity caused by this design is justified by the performance improvements brought by the proposed scheme.
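A minimal sketch of this cross-layer hand-off is given below. The class and field names are hypothetical; the point is simply that the transport layer stamps the current congestion window into the XOR header, and the data link layer only accepts the value when the packet was generated locally.

```python
class Packet:
    def __init__(self, src_ip, tcp_cwnd=None):
        self.src_ip = src_ip
        self.xor_header = {}                 # simplified XOR header (see Fig. 2)
        if tcp_cwnd is not None:             # annotated at the transport layer
            self.xor_header["tcp_window_size"] = tcp_cwnd

class DataLinkLayer:
    def __init__(self, my_ip):
        self.my_ip = my_ip
        self.current_cwnd = None             # local copy of this node's congestion window
        self.interface_queue = []

    def enqueue(self, packet):
        # Update the local variable only for self-generated packets so that
        # forwarded packets from other sources do not overwrite it.
        if packet.src_ip == self.my_ip and "tcp_window_size" in packet.xor_header:
            self.current_cwnd = packet.xor_header["tcp_window_size"]
        self.interface_queue.append(packet)

# Usage: a locally generated packet updates the window, a forwarded one does not.
dll = DataLinkLayer(my_ip="10.0.0.1")
dll.enqueue(Packet(src_ip="10.0.0.1", tcp_cwnd=42))   # current_cwnd -> 42
dll.enqueue(Packet(src_ip="10.0.0.9", tcp_cwnd=7))    # current_cwnd stays 42
```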

3. TCP-aware network coding with opportunistic scheduling

In this section, the TCP-aware network coding with opportunistic scheduling scheme is proposed and then compared with two other schemes from the literature: (1) traditional network coding and (2) traditional network coding with opportunistic scheduling.

3.1. Traditional network coding

In traditional network coding, when a node gets an opportunity to transmit, it first searches for any available coding opportunity. This is performed using only the reception reports received from neighboring nodes. In other words, such a node has no information about the conditions of its outgoing channels. Moreover, it has no information on how TCP parameters change in response to channel conditions and no capability to react to those changes. Therefore, there is a high probability of spurious coding opportunities being found, which means that some intended recipients may not be able to receive or decode coded packets due to channel errors. As a result, TCP retransmissions and timeout events will be frequent, resulting in low performance.

3.2. Traditional NC with opportunistic scheduling

A better approach is traditional network coding with opportunistic scheduling. In this scheme, when a node gets an opportunity to transmit, it searches for coding opportunities using a more advanced approach: the reception reports received from neighbors and the channel conditions are both taken into consideration in order to determine which set of packets to code together. This ensures that the native packets contained in the coded packet have good-quality links, and hence a high probability of successful packet reception. It also takes advantage of instantaneous link conditions by employing rate adaptation. However, it introduces stochastic halts of data transfer, controlled by the scheduler, when links are poor: the node waits until link conditions improve. This leads to frequent TCP timeout events, which degrade TCP performance [18]. Furthermore, if the link quality is low and a low channel rate is allocated while the TCP sending rate is high, congestion will occur, resulting in TCP timeouts and hence low performance.

3.3. Proposed TCP-aware NC with opportunistic scheduling

A more robust and resilient scheme is TCP-aware network coding with opportunistic scheduling. In this scheme, TCP dynamics and link conditions are both considered to determine which packets to code and which data rate to use. This allows the TCP sender to respond quickly and efficiently to any changes in the dynamic link conditions. Specifically, at the sender, let N be the largest TCP window size and let η, α and β be threshold parameters such that 1 ≤ η < α < β ≤ N. In this work, η, α and β are constants; the derivation of these parameters is left for future work.

If n is the current congestion window size and β < n ≤ N, the TCP sending rate is very high. The scheduler therefore allocates the highest channel rate (QAM64) irrespective of the link quality [19]. This is done because, if the channel quality is low and a low channel rate is allocated while the TCP sending rate is high, packet losses due to congestion are inevitable. The advantage of transmitting at a high channel rate in this situation is that the medium becomes available quickly to other nodes with good links, thereby promoting network resource sharing (note that carrier-sense medium access is employed). Transmitting with a high-order modulation scheme on a low-quality channel can, however, lead to a high bit error rate. This is accounted for by employing adaptive coding to ensure that a particular bit error rate is obtained (see Table 1 and [16]). Furthermore, if packets are lost due to a low-quality channel to the intended receiver, other nodes whose incoming links are of good quality can overhear such packets and forward them by network coding to the intended receiver. When α < n ≤ β, the lowest modulation scheme that can be allocated is QAM16. If the link quality is good enough to allow the QAM64 channel rate, the scheduler allocates QAM64 instead of QAM16. This is done to ensure that TCP reacts quickly enough to take advantage of the available bandwidth [20]. For η < n ≤ α, QPSK is the lowest channel rate that can be allocated; again, if the link quality allows transmission with a higher modulation scheme, the scheduler allocates the corresponding scheme. When the TCP congestion window is in the range 1 ≤ n ≤ η, the lowest modulation scheme that can be used is BPSK; if the link quality is higher, higher modulation schemes are allocated accordingly.
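Our reading of this rate-selection rule can be summarized in a few lines of Python, shown below as a sketch rather than the paper's implementation; η, α and β are left as parameters (the defaults are the values used later in the simulations), and a return value of None stands for the deep-fade case discussed next.

```python
# Modulation schemes ordered from lowest to highest data rate (Table 1).
SCHEMES = ["BPSK", "QPSK", "QAM16", "QAM64"]

def tcp_floor(cwnd, eta, alpha, beta):
    """Lowest modulation scheme implied by the current congestion window n."""
    if cwnd > beta:
        return "QAM64"        # very high sending rate: highest rate regardless of link
    if cwnd > alpha:
        return "QAM16"
    if cwnd > eta:
        return "QPSK"
    return "BPSK"

def select_rate(cwnd, channel_scheme, eta=20, alpha=80, beta=140):
    """channel_scheme is the highest scheme the instantaneous link supports, or
    None when the link is in deep fade.  Returns the scheme to transmit with,
    or None meaning 'defer for about one coherence time and retry'."""
    floor = tcp_floor(cwnd, eta, alpha, beta)
    if floor == "QAM64":
        return "QAM64"                           # transmit even on a poor link
    if channel_scheme is None:
        return None                              # deep fade: induce a short delay
    # Otherwise use the higher of the TCP floor and what the channel allows.
    return max(floor, channel_scheme, key=SCHEMES.index)
```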

Quite often, the wireless link can be in a deep fade, which means that even the lowest available modulation scheme cannot be used. In this situation, no channel is allocated, resulting in no transmission. This is done not only to save power but also to make the wireless medium quickly available to other nodes with good-quality links. However, TCP will time out if no ACK is received. Moreover, by not transmitting, a node prevents its neighbors from receiving updated information about overheard packets via reception reports, so the chances of coding become very limited. To avoid this, a short delay (e.g., 1 ms [21]), corresponding to the channel coherence time, is induced to allow the channel quality to improve, after which transmission resumes.

Table 4
Different channel conditions and various TCP states.

Wireless links            | ba     | bc   | bd
Link quality (data rate)  | broken | QPSK | QAM16
TCP window size (state)   | QAM64  | BPSK | QPSK


Even though transmission can be carried out while the link quality is still poor, chances are that some other neighbors with good links will receive the reception reports and overhear the packets. This process facilitates coding opportunities [21]. At the intermediate nodes, the information about the TCP sending rate annotated in the XOR header of a packet is used to determine the data rate to be used for either a coded or a non-coded packet. When there is a coding opportunity, the intermediate node uses the strategy discussed above to determine the set of packets to code. Note that the final set of packets to be coded may contain packets requiring different transmission data rates. To determine the best data rate, the scheme chooses the lowest data rate required by any of the native packets in the set. Even though this decision does not favor packets requiring high data rates, it ensures that both the TCP dynamics and the link condition requirements are met.

3.4. Examples of decision making in NC schemes

Consider Fig. 1 and Table 2. Node B intends to combine packets P1, P2 and P3. Table 2 shows the quality of each wireless link involved by indicating the modulation scheme suitable for that link. It also shows the state of the current TCP window size of the flow each packet is coming from, again expressed as the modulation scheme suitable for that window size.

If node B uses traditional network coding only, it allocates the BPSK channel rate for the coded packet, since it has neither TCP nor channel information. Node B also chooses the BPSK channel rate if it uses traditional network coding with opportunistic scheduling, because the weakest link (link ba in this case) limits the transmission data rate. If node B uses TCP-aware network coding with opportunistic scheduling, it chooses the QPSK channel rate for the coded packet. The criterion is that, for each native packet, the data rate is determined by whichever of the TCP information and the channel information maximizes the data rate; having determined the data rates for all packets, the lowest of these rates is chosen so that both the TCP and the channel requirements are satisfied for the coded packet. For example, according to Table 2, node B makes the per-link decisions shown in Table 3, from which it chooses the QPSK channel rate. Note, however, that the native packet destined to node A is highly likely to be lost, since the coded packet is transmitted with a higher modulation scheme than its link supports. The advantage of this choice is that queue space quickly becomes available to accommodate incoming packets, which would not be the case with the other methods.
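The decision in Tables 2 and 3 can be reproduced with a couple of lines: each link's decision is the faster of its channel-supported rate and its TCP-implied rate, and the coded packet then uses the slowest of these per-link decisions. The sketch below simply re-derives Table 3 and the QPSK choice from the Table 2 inputs.

```python
SCHEMES = ["BPSK", "QPSK", "QAM16", "QAM64"]
rank = SCHEMES.index

# Per-link requirements from Table 2: (channel-supported rate, TCP-implied rate).
links = {
    "ba": ("BPSK",  "QAM64"),
    "bc": ("QPSK",  "BPSK"),
    "bd": ("QAM16", "QPSK"),
}

# Per-link decision: whichever of the two maximizes the data rate (Table 3).
decisions = {l: max(ch, tcp, key=rank) for l, (ch, tcp) in links.items()}
# -> {'ba': 'QAM64', 'bc': 'QPSK', 'bd': 'QAM16'}

# The coded packet takes the lowest of the per-link decisions, here QPSK.
coded_rate = min(decisions.values(), key=rank)
print(decisions, coded_rate)
```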

Now suppose that link ba is broken or so poor, as shown in Table 4, that even the lowest modulation scheme cannot be used for transmission. If node B uses traditional network coding alone, it will still code all three packets and allocate the BPSK channel rate for the coded packet; node A, however, will not receive the coded packet at all. If node B uses traditional network coding with opportunistic scheduling, it will not code all three packets. Instead, it will code packets P1 and P3, since their links are good.

Table 2
Data rates required by TCP and wireless channels.

Wireless links            | ba    | bc   | bd
Link quality (data rate)  | BPSK  | QPSK | QAM16
TCP window size (state)   | QAM64 | BPSK | QPSK

Table 3
Required data rates for TCP-aware NC.

Wireless links            | ba    | bc   | bd
Decision (on data rate)   | QAM64 | QPSK | QAM16

Node B will then choose the QPSK channel rate for the coded packet. If node B uses TCP-aware network coding with opportunistic scheduling, it will likewise code packets P1 and P3 and allocate the QPSK channel rate for the coded packet. Note that if there is no coding opportunity and packet P2, destined to node A, is at the head of the packet queue, node B induces a short delay (e.g., 1 ms [21]) to allow the channel conditions to improve, after which it resumes transmission.

4. Adaptive control of packet overhead

In order to provide additional performance improvement, it is important to consider packet overhead minimization. In this section, we propose the adaptive-W (i.e., adaptive Waiting time) scheme, whose purpose is to adaptively control the packet waiting time to improve throughput performance.

4.1. Numerical analysis

Through numerical analysis and results, we show that packet overhead is a significant issue in network coding. The frame error rate can be calculated as

F_e = 1 - (1 - \mathrm{BER})^{L_{fr}}, \qquad (4)

where BER is the bit error rate and Lfr is the length of the frame (in bits). Let lc be the variable length (in bits) of the reception report in the XOR header, so that Lfr = Ld + lc, where Ld (in bits) is the length of the data part of the frame. Consider two reception reports of lengths lci and lcj with lci < lcj, and let ΔFe = Fej − Fei, where Fei = 1 − (1 − BER)^Lfr,i and Fej = 1 − (1 − BER)^Lfr,j. Therefore, we have

\Delta F_e = (1 - \mathrm{BER})^{L_d}\left\{(1 - \mathrm{BER})^{l_{ci}} - (1 - \mathrm{BER})^{l_{cj}}\right\}. \qquad (5)

From (5), since |1 − BER| < 1, we have ΔFe ≥ 0 whenever lci ≤ lcj. Setting lci = 0 as the reference length of the reception report in the XOR header, we obtain

\Delta F_e = (1 - \mathrm{BER})^{L_d}\left\{1 - (1 - \mathrm{BER})^{l_{cj}}\right\}, \qquad (6)

where lcj represents the overhead introduced by the reception report in the XOR header. The more time packets spend in the packetPool, the longer the XOR header and hence the packet itself. Let W be the average time spent by each packet in the packetPool and λ the average arrival rate of packets into the packetPool. Also, let N be the average number of packets in the packetPool, so that N = λW by Little's theorem. From Fig. 2, the XOR header has one byte reserved for the number of reports in the reception report and 5 bytes reserved for the information of each reported packet; therefore lcj = 8 + 40N bits. Let W ∈ {10 ms, 50 ms, 100 ms, 500 ms} and, for simulation purposes, consider Ld = 160 bytes and BER = 5.69917364 × 10^−4, which corresponds to a packet error rate PER = 0.01 with Nretrans = 6 and Nfr = 1 (i.e., frame size = packet size). Note that coded packets are not transmitted in broadcast mode; instead, one of the native packets in the coded packet is set as the MAC destination. This is done to facilitate successful reception at all the intended recipients through possible retransmissions. If W = 10 ms and λ = 10 pkts/s, then N = λW = 0.1, giving ΔFe = 3.28644 × 10^−3, which corresponds to a 0.63% increase over the Fe of a packet without a reception report, as shown in Fig. 3. Note that ΔFe indicates by how much Fe increases as the frame size grows due to the reception-report overhead.


For example, in Fig. 3, if a packet's waiting time is 0.2 s and the packet arrival rate is 50 pkts/s, then the frame error rate increases by about 20%.
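The figures quoted above follow directly from (6) together with N = λW and lcj = 8 + 40N. The helper below (a sketch using the parameter values of this section) reproduces both the 0.63% and the roughly 20% relative increases.

```python
BER = 5.69917364e-4            # bit error rate used in this section
L_D = 160 * 8                  # data part of the frame, in bits

def relative_fe_increase(W, lam):
    """Relative increase of the frame error rate caused by the reception report."""
    N = lam * W                              # average packets in packetPool (Little's theorem)
    l_cj = 8 + 40 * N                        # reception-report length in bits
    Fe_base = 1 - (1 - BER) ** L_D           # frame error rate without a reception report
    dFe = (1 - BER) ** L_D * (1 - (1 - BER) ** l_cj)    # Eq. (6)
    return dFe / Fe_base

print(relative_fe_increase(W=0.010, lam=10))   # ~0.0063 -> about 0.63% increase
print(relative_fe_increase(W=0.200, lam=50))   # ~0.19   -> roughly 20% increase
```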

When a node gets an opportunity to transmit data packets, it goes through the entire set of packets previously overheard from the network and extracts the information for the reception report to be included in the outgoing packet. If network traffic is high, nodes overhear many packets in a given time interval, and vice versa. With high network traffic, many packets are therefore kept in the packetPool, which results in long reception reports. This introduces an additional increase of Fe, as shown in Fig. 3, and hence performance deterioration. Fig. 4 shows that if the average time spent by packets in the packetPool, W, is as long as 0.5 s and the average packet arrival rate is 100 pkts/s, the overhead introduced by the reception report in a packet reaches 160% (that is, approximately 50 packets in the packetPool, as shown in Fig. 5), more than doubling the packet length. Although this creates more coding opportunities, it is an inefficient use of network and node resources, because long reception reports consume a lot of bandwidth, transmit power and node memory.

Fig. 4. Effect of packet arrival rate in XOR header.


4.2. Fixed-W scheme

In the fixed-W scheme (employed in the COPE method [6]), W is kept constant, indicating the amount of time packets should spend in the packetPool regardless of the network traffic level and the required frame error rate. Therefore, λ is the only parameter that affects N. That is, if network traffic happens to be very high, so that a promiscuous node overhears packets in abundance, the packetPool will hold a large number of overheard packets. Note that this scheme assumes each node has an infinite amount of memory to accommodate all overheard packets, which is unrealistic in practice. This scheme will be used to assess the performance improvement of the proposed scheme presented below.


Fig. 5. Effect of packet arrival rate on number of packets in packetPool.

4.3. Proposed adaptive-W scheme

This scheme adaptively controls the packet waiting time in the packetPool given the inter-arrival rate of overheard packets (λ) and the required frame error rate Fe. Specifically, for every ΔFe that can be tolerated, there is a corresponding number of packets that can be stored in the packetPool. Let X represent this number of packets that may reside in the packetPool for a particular Fe.


Fig. 3. Impact of inter-arrival rate of overheard pkts on frame error rate.

Whenever a packet is overheard from the network, or a packet has just been sent by the node itself, a copy of it is added to the packetPool. For each addition, the scheme ensures that there can never be more than X packets in the packetPool, so that the required Fe is satisfied. The waiting time (W) for each packet in the packetPool is updated every 10 ms. If many packets are added within a 10 ms interval, packets in the packetPool wait for a shorter time before being discarded; if few packets are added within that interval, packets wait longer. The basic idea of this approach is to control the packet overhead due to the reception reports that are generated from the information of packets in the packetPool.
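One way to realize this control, assuming that the tolerated ΔFe is translated into the packet budget X by inverting (6), is sketched below. The 10 ms update period and the lc = 8 + 40N overhead model come from this paper; the function names and the exact inversion are our own assumptions.

```python
import math

def max_pool_size(delta_Fe_tol, BER=5.69917364e-4, L_d_bits=160 * 8):
    """Invert Eq. (6): largest X such that a report of 8 + 40*X bits keeps the
    absolute frame-error-rate increase within delta_Fe_tol."""
    base = (1 - BER) ** L_d_bits
    l_c_max = math.log(1 - delta_Fe_tol / base) / math.log(1 - BER)
    return max(0, int((l_c_max - 8) // 40))

def adaptive_W(delta_Fe_tol, arrival_rate, **kwargs):
    """Adaptive W: waiting time so that, at the estimated arrival rate lambda,
    no more than X packets accumulate in packetPool (re-evaluated every 10 ms)."""
    X = max_pool_size(delta_Fe_tol, **kwargs)
    return X / arrival_rate if arrival_rate > 0 else float("inf")

# Tolerating an absolute increase of ~0.1 in Fe (about 20% relative, Fig. 3)
# gives X of roughly 9-10 packets, so W shrinks as the arrival rate grows:
print(adaptive_W(0.10, arrival_rate=50))    # ~0.18 s at 50 pkts/s
print(adaptive_W(0.10, arrival_rate=100))   # ~0.09 s at 100 pkts/s
```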

Note that λ is related to the network traffic level and the wireless channel conditions. If network traffic increases, more packets are likely to be overheard by nodes per unit time, and vice versa. Also, if the incoming channel quality is poor most of the time, more packets will be dropped and will never make their way into the packetPool.

W is the time packets can spend in the packetPool. This time interval can change due to several factors. Therefore, W is an element of {t1, t2, ..., tn}, where t1, t2, ..., tn ∈ R+ are optimized times packets can spend in the packetPool. Specifically, consider Fig. 3, and suppose the ΔFe that can be tolerated for a specific frame error rate is 20%.


Fig. 6. Impact of node mobility.


Also, assume that the current average packet arrival rate is λ = 50 pkts/s, so that W = 0.2 s. This corresponds to N = 10 packets in the packetPool, as illustrated in Fig. 5, and 30% overhead introduced by the reception report, as shown in Fig. 4. If the network traffic level changes so that the average packet arrival rate becomes λ = 100 pkts/s, the proposed scheme dynamically changes W to 0.1 s. This still maintains the required Fe. It also keeps the average number of packets in the packetPool at N = 10 (Fig. 5), which means approximately the same number of coding opportunities can still be generated, and the reception report overhead is kept at 30%. If, later on, the average packet arrival rate is halved (back to 50 pkts/s), then W is dynamically changed back to 0.2 s, again maintaining the required Fe.

Note that by adaptively controlling the packet overhead, the packet loss rate is minimized. Furthermore, bandwidth, node memory and transmit power are utilized more efficiently than in the fixed-W scheme. Nevertheless, this scheme introduces additional system complexity, because it has to collect information on the traffic level from the network by estimating the average inter-arrival rate of overheard packets. Based on this information and the required Fe, the scheme then dynamically chooses the best W from the set of optimized times t1, t2, ..., tn mentioned above.

5. Simulation results and discussion

Two sets of simulations were performed. In the first set, we analyze the performance of the proposed TCP-aware network coding with opportunistic scheduling scheme. The second set presents the results for the proposed adaptive-W scheme.


Fig. 7. Impact of traffic increase.

5.1. TCP-aware network coding with opportunistic scheduling

Network Simulator 2 (ns-2.33) was used to perform all simulations. The simulated time was set to 30 s, since there was no significant change in the simulation results after 30 s. Destination-Sequenced Distance Vector (DSDV) was employed as the routing mechanism. We implemented our scheme in the extended IEEE 802.11 and WirelessPhy modules developed for ns-2.33 by Mercedes-Benz Research and Development North America and the University of Karlsruhe.

For simulation purposes, η, α and β were set to 20, 80 and 140, respectively. A total of 300 simulation runs were carried out for each case and the average throughput was computed; this relatively large number of samples was chosen to ensure that the results truly reflect the population. The 95% confidence interval was also computed for each scenario and is represented by vertical bars on the graphs. Three simulation scenarios were defined: (i) mobility, (ii) increasing network traffic, and (iii) increasing number of network nodes. In the mobility scenario, there were 9 nodes in the network and five TCP sessions were generated. For the increasing network traffic scenario, there were 19 nodes placed at fixed predefined locations. In the last scenario, nodes were added to the network at fixed predefined locations, and five TCP sessions were generated.

From Fig. 6, we can see that for TCP-aware network coding with opportunistic scheduling, the TCP throughput is significantly higher when the speed of the nodes is below 5 m/s than when it is above 35 m/s. This is attributed to the relatively stable topology, which allows the routing layer to forward packets over the best links. In a high-mobility environment, however, the topology changes frequently and the routing layer (using DSDV) is too slow to keep up; some nodes therefore end up forwarding packets to nodes that may be out of communication range or whose links are poor, and the throughput drops.

Despite this, TCP-aware network coding with opportunistic scheduling performs far better than traditional network coding with opportunistic scheduling. The reason is that the former employs a short induced delay that facilitates coding opportunities and link quality improvement. Even if the topology is changing frequently and a link is bad, TCP does not suffer much because other nodes with good links can pick up overheard packets and forward them via network coding. The latter, on the other hand, waits until the link is good before it can transmit; although this saves power and avoids interference, it not only reduces the chances of coding but also degrades the TCP throughput at a node because of timeouts. The TCP throughput of the traditional network coding scheme falls much more slowly than that of the other schemes. This is explained by the abundance of coding opportunities that remain available despite the transition from a low-mobility to a high-mobility environment: even though link conditions change often, more packets are routed via network coding through the network, so mobility does not have a significant impact provided the nodes are close together.

Fig. 7 illustrates the results of increasing the network traffic for 19 network nodes, all traveling at a fixed speed of 15 m/s.


Fig. 9. Impact of network traffic on packet overhead.


Fig. 10. Impact of traffic on TCP throughput.


It can be seen that the increase in network throughput is directly proportional to the increase in network traffic. However, it is expected that when the network traffic reaches the saturation point, the TCP throughput will stabilize or may even diminish, as intermediate nodes become overwhelmed by the amount of traffic to be routed through the network. It can also be seen from Fig. 7 that when the network traffic is low, no scheme achieves a significant performance improvement; low network traffic simply does not present coding opportunities. As traffic increases, TCP-aware network coding with opportunistic scheduling shows a promising performance improvement over the other schemes. This indicates that when traditional network coding is upgraded to TCP-aware network coding and combined with opportunistic scheduling in wireless mobile ad hoc networks, performance can be significantly improved.

Fig. 8 shows the results of a scenario where nodes were added to the network with 5 TCP sessions. Note that there is no performance improvement when nodes are added to the network. Although additional nodes create more routing options, the downside is that more routing messages are generated. Moreover, increased interference among the crowded nodes affects the wireless channels between them, which decreases the throughput of the network; in a given time T, a node therefore has a lower chance of being granted channel access. Despite this, TCP-aware network coding with opportunistic scheduling performs significantly better than the other schemes.

5.2. Adaptive-W scheme

In order to analyze the impact of packet overhead on the system, another set of simulations was carried out. In these simulations, only traditional network coding, which is the foundation of the main scheme, was considered. Furthermore, a shadowing channel model was used with the following parameters: path loss exponent of 2.8, shadowing standard deviation of 6 dB, and reference distance of 1 m. Ns-2.33 was used to carry out all simulations. Three simulation scenarios were defined: (i) mobility, (ii) network traffic increase, and (iii) network node increase.

In the network traffic increase scenario, the network had 25 nodes with 100 m separation between nodes placed at fixed locations forming a square Bravais lattice in a two-dimensional plane. Traffic was increased in steps of 2 TCP sessions, from 8 to 16 sessions. Note that as network traffic increases, nodes overhear packets in abundance.


Fig. 8. Effect of node increase.

Therefore, without adaptive control of the overheard packets in the packetPool, high packet overhead is generated, as demonstrated by the fixed-W scheme in Fig. 9. Fig. 10 shows that this adversely affects throughput because of inefficient bandwidth utilization. With the adaptive control mechanism, however, the packet overhead is kept low to meet the required Fe. Therefore, as shown in Fig. 10, this leads to better TCP throughput thanks to efficient bandwidth utilization; less node memory and less transmit power are also used.

In the mobility scenario, the initial locations, routes and final locations of the nodes were predefined, and nodes moved towards and past each other. The initial locations had 100 m separation between nodes. Traffic was fixed at 16 TCP sessions. Fig. 11 shows that for node speeds up to 30 m/s, the adaptive-W scheme exhibits a significant TCP throughput improvement over the fixed-W scheme. This is because, when nodes move towards each other at relatively low speed, they spend more time within each other's communication range. In the adaptive-W scheme, the estimated inter-arrival rate of packets is then roughly the same for most nodes, so, for a given required Fe, the computed average packet waiting time (W) is also roughly the same for most nodes. This facilitates successful encoding and decoding of packets.


Fig. 11. Impact of mobility on TCP throughput.


Fig. 12. Effect of nodes on TCP throughput.


On the other hand, the low performance of the fixed-W scheme is due to packets being delayed or lost because of the high packet overhead. Despite the additional coding opportunities created by the large number of overheard packets in the packetPool, performance deteriorates because of this overhead.

Note, however, that in the adaptive-W scheme, TCP throughput deteriorates considerably when nodes move at speeds beyond 30 m/s. This is attributed to the frequent changes in network topology, which redistribute the network traffic flow. As nodes move further away from each other, the λ estimates of the individual nodes differ widely, so the W values they compute also differ. As a result, the chances of some intended receivers failing to decode coded packets are very high, since some nodes may have already discarded the packets needed for decoding. In the fixed-W scheme, on the other hand, TCP throughput decreases more slowly, mainly because overheard packets spend a long time in the packetPool, which facilitates successful decoding of incoming coded packets. Even so, the adaptive-W scheme continues to perform far better than the fixed-W scheme. The take-away point of this analysis is that when nodes are crowded, packet overhead is the dominant factor in the overall TCP throughput, but when nodes move away from each other, the dominant factor is how often coded packets can be successfully decoded at the receivers.

In the network node increase scenario, the number of nodes was increased from 25 to 41 in steps of 4. New nodes were placed among the existing nodes, and 16 TCP sessions were generated. In this scenario, there is no performance change as the number of network nodes increases, as shown in Fig. 12. This is mainly due to the increase in the number of routing messages generated in the network, which consume more network resources. Moreover, increased interference among the crowded nodes affects the wireless channels between them, which decreases the throughput of the network. Nevertheless, the adaptive-W scheme exhibits better performance than the fixed-W scheme.

6. Conclusions and future work

In this paper, we proposed a robust and resilient TCP-aware network coding with opportunistic scheduling scheme that employs network coding, TCP dynamics, rate adaptation and channel information to enhance TCP performance in wireless mobile ad hoc networks. Simulation results showed that our scheme exhibits significant performance gains compared to other schemes in low- and high-mobility environments, when network traffic increases, and when the number of network nodes increases. In a high-mobility environment, about 35% performance improvement is achieved with TCP-aware network coding with opportunistic scheduling compared to traditional network coding with opportunistic scheduling; in a no- or low-mobility environment, the former achieves approximately 33% performance improvement over the latter.

We also introduced an adaptive-W scheme whose aim is to adaptively control the waiting time of packets to improve TCP throughput performance. Simulation results showed that when nodes travel at speeds below 30 m/s, the adaptive-W scheme achieves a significant throughput increase compared to the fixed-W scheme.

Our findings in this study show that the routing layer does not always perform well, especially when links are poor due to stochastic fluctuations of the instantaneous channel conditions. Also, in a high-mobility environment, our scheme showed the steepest decline in TCP performance as nodes start traveling at high speed. As part of future work, we intend to investigate these issues. We also intend to optimize the parameters indicating the state of the TCP congestion window so as to further enhance performance, and to investigate the strategy employed in choosing the transmission data rates for coded packets, taking channel and TCP information into account. Moreover, instead of fixing the induced delay used to facilitate the improvement of link conditions, we plan to analyze channel state prediction techniques to further improve performance.

References

[1] R. Ahlswede, N. Cai, S.-Y. Li, R. Yeung, Network information flow, IEEE Trans. Inform. Theory 46 (2000) 1204–1216.

[2] D. Katabi, S. Katti, W. Hu, H. Rahul, M. Medard, On practical network coding for wireless environments, in: Proceedings of the Int'l Zurich Seminar on Communications, 2006, pp. 84–85.

[3] E. Fasolo, M. Rossi, J. Widmer, M. Zorzi, On MAC scheduling and packet combination strategies for practical random network coding, in: Proceedings of the IEEE ICC'07, 2007, pp. 3582–3589.

[4] J. Jin, B. Li, Adaptive random network coding in WiMAX, in: Proceedings of the IEEE ICC'08, May 2008, pp. 2576–2580.

[5] L. Scalia, F. Soldo, M. Gerla, PiggyCode: a MAC layer network coding scheme to improve TCP performance over wireless networks, in: Proceedings of the IEEE GLOBECOM'07, November 2007, pp. 3672–3677.

[6] S. Katti, H. Rahul, W. Hu, D. Katabi, M. Medard, J. Crowcroft, XORs in the air: practical wireless network coding, IEEE/ACM Trans. Netw. 16 (2008) 497–510.

[7] D. Nguyen, T. Tran, T. Nguyen, B. Bose, Hybrid ARQ-random network coding for wireless media streaming, in: Proceedings of the ICCE'08, June 2008, pp. 115–120.

[8] M. Wang, B. Li, Lava: a reality check of network coding in peer-to-peer live streaming, in: Proceedings of the IEEE INFOCOM'07, May 2007, pp. 1082–1090.

[9] A. Campo, A. Grant, Robustness of random network coding to interfering sources, in: Proceedings of the 7th Australian Communications Theory Workshop, February 2006, pp. 120–124.

[10] K. Lu, S. Fu, Y. Qian, Capacity of random wireless networks: impact of physical-layer network coding, in: Proceedings of the IEEE ICC'08, May 2008, pp. 3903–3907.

[11] H.-M. Zimmermann, Y.-C. Liang, Physical layer network coding for unicast applications, in: Proceedings of the IEEE VTC'08S, May 2008, pp. 2291–2295.

[12] H. Yomo, P. Popovski, Opportunistic scheduling for wireless network coding, in: Proceedings of the IEEE ICC'07, June 2007, pp. 5610–5615.

[13] K. Li, X. Wang, Cross-layer design of wireless mesh networks with network coding, IEEE Trans. Mobile Comput. 7 (2008) 1363–1373.

[14] R. Prasad, H. Wu, D. Perkins, N.-F. Tzeng, Local topology assisted XOR coding in wireless mesh networks, in: Proceedings of the ICDCS'08, June 2008, pp. 156–161.

[15] P. Chou, Y. Wu, K. Jain, Practical network coding, in: Allerton Conference on Communication, Control and Computing, 2003.

[16] X. Wang, G.B. Giannakis, A.G. Marques, A unified approach to QoS-guaranteed scheduling for channel-adaptive wireless networks, Proc. IEEE 95 (2007) 2410–2431.

[17] K. Daoud, B. Sayadi, HAD: a novel function for TCP seamless mobility in heterogeneous access networks, in: Proceedings of the IEEE VTC'07F, October 2007, pp. 1451–1455.

[18] Y. Wu, Z. Niu, J. Zheng, A network-based solution for TCP in wireless systems with opportunistic scheduling, in: Proceedings of the IEEE PIMRC'04, 2004.

[19] M. Ghaderi, A. Sridharan, H. Zang, D. Towsley, R. Cruz, TCP-aware channel allocation in CDMA networks, IEEE/ACM Trans. Netw. 8 (2009) 14–28.

[20] K. Igarashi, K. Yamazaki, Flight size auto tuning for broadband wireless networks, in: Proceedings of the IWCMC'2009, Leipzig, Germany, 2009.

[21] Y. Huang, M. Ghaderi, D. Towsley, W. Gong, TCP performance in coded wireless mesh networks, in: Proceedings of the IEEE SECON'08, June 2008, pp. 179–187.