
RDP 7.0 + IPQ TEST REPORT

Prepared by IPeak Networks Incorporated

22 October 2010


Table of Contents


• Executive Summary

• Test Data Highlights – IPQ Improvements

• Interpretation

• WAN Scenarios

• Satellite Scenarios

• Retransmission Analyses

• Conclusions

• Full Test Results


Executive Summary


IPQ is a very effective RDP optimizer for lossy networks. When measured in terms of bandwidth utilization, IPQ improved RDP 7.0 performance in every test where packet loss was greater than zero. High bandwidth applications benefited the most, with up to a 235% improvement in data transfer speeds, optimizing application performance under lossy network conditions and providing corresponding improvements in the user experience, i.e., far fewer delays and a more fluid and productive experience.

Packet Loss and High Bandwidth Applications (e.g., PPT, Flash)

• Loss and latency cause TCP "clamping" and reduced bandwidth [1]

• Reduced bandwidth is experienced as reduced application performance.

IPQ protects against loss, prevents TCP clamping, increases bandwidth, and restores application performance.

Low bandwidth applications also benefit, with the near-complete elimination of TCP retransmissions and a significant reduction in the user's experience of lag.

Packet Loss and Low Bandwidth Applications (e.g., Word)

• Loss and latency trigger TCP retransmissions

• TCP retransmissions are experienced as delayed application response or "lag"

IPQ protects against loss, reduces retransmissions, and reduces lag.

Next Steps - RDP 7.0 + IPQ over WiFi

The tests documented in this report were reasonably representative of real-world “wired” network connections with a range of variations in available bandwidth, network latency, and levels of packet loss. However, IPeak Networks’ experience suggests that the range of variations did not include those typically encountered with real-world WiFi networks, i.e., ample bandwidth, low latency and packet loss levels frequently well in excess of 5%. IPeak Networks will test RDP 7.0 + IPQ over WiFi and report the results at a later date.

[1] TCP Clamping: When TCP detects packet loss, it artificially reduces the speed of the stream; the higher the loss and latency, the more severe the reduction in speed.


Test Results Highlights – IPQ Improvements


IPQ IMPROVEMENTS - All Test Scenarios

RDP 7.0 Compared to RDP 7.0 + IPQ. IPQ Improvement columns: Test Time Reduction | Ticks to Seconds | Bandwidth (bytes) | BW (bps) | Throughput | Speed.

WAN - PPT (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S1 PPTx)

3% loss: 00:03:39 | 220 | 31,370,691 | 139,943 | 13.69% | 22.73%

5% loss: 00:21:24 | 887 | 35,680,844 | 306,104 | 15.30% | 67.56%

WAN - Word (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S3 Typing & Scrolling (Word))

0%, 3%, 5% loss: Same. Low bandwidth apps are not impacted by bandwidth constraints induced by the interaction of packet loss and TCP. See the separate analysis of the impact of TCP retransmissions.

WAN - Flash (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S4 Flash)

3% loss: 00:00:40 | N/A | 6,116,530 | 73,696 | 12.84% | 10.87%

5% loss: 00:07:00 | N/A | -8,383,195 | 291,821 | -16.15% | 61.42%

Satellite - PPT (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S1 PPTx)

0% loss: 00:07:02 | 398 | 2,264,966 | 82,754 | 1.03% | 15.71%

3% loss: 00:52:46 | 3,129 | 2,858,453 | 227,334 | 1.50% | 97.40%

5% loss: 01:08:18 | 4,730 | 95,498,433 | 444,188 | 53.37% | 234.70%

Satellite - Word (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S3 Typing & Scrolling (Word))

0%, 3%, 5% loss: Same. Low bandwidth apps are not impacted by bandwidth constraints induced by the interaction of packet loss and TCP. See the separate analysis of the impact of TCP retransmissions.

Satellite - Flash (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S4 Flash)

3% loss: 00:19:28 | N/A | -6,954,746 | 283,329 | -12.28% | 121.12%

5% loss: 00:22:16 | N/A | -6,357,954 | 231,176 | -11.58% | 118.23%

RETRANSMISSION ANALYSIS (values at 3% / 5% packet loss)

WAN - Word (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S3 Typing & Scrolling (Word))

Retransmissions with RDP 7.0: 11.47% / 14.89%
Retransmissions with RDP 7.0 + IPQ: 0.57% / 1.60%
Net Reduction with IPQ: 95.03% / 89.25%

Satellite - Word (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S3 Typing & Scrolling (Word))

Retransmissions with RDP 7.0: 11.49% / 13.73%
Retransmissions with RDP 7.0 + IPQ: 1.27% / 2.38%
Net Reduction with IPQ: 88.91% / 82.67%
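The improvement columns reported above can be reproduced from the raw RDP 7.0 and RDP 7.0 + IPQ measurements given later in the Full Test Results. A minimal sketch (the function name is ours; the input values are the Satellite PPT 5%-loss row from the Full Test Results):

```python
def ipq_improvement(rdp, ipq):
    """Return (absolute delta, percent improvement) for a metric pair."""
    delta = ipq - rdp
    return delta, 100.0 * delta / rdp

# Satellite - PPT at 5% packet loss (Full Test Results)
bw_delta, throughput_pct = ipq_improvement(rdp=178_940_587, ipq=274_439_020)  # bandwidth, bytes
speed_delta, speed_pct = ipq_improvement(rdp=189_255, ipq=633_443)            # average BW, bps

print(f"Bandwidth delta: {bw_delta:,} bytes, throughput +{throughput_pct:.2f}%")
# Bandwidth delta: 95,498,433 bytes, throughput +53.37%
print(f"Speed delta: {speed_delta:,} bps, speed +{speed_pct:.2f}%")
# Speed delta: 444,188 bps, speed +234.70%
```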


Interpretation of Highlights


WAN SCENARIOS

WAN and PowerPoint - Significant Reduction in Test Time, Increase in Average Speed

In these tests, packet loss caused significant delays, uneven application performance, and an unsatisfactory user experience. With IPQ protection against that packet loss, the delays were all but eliminated, the application performance was smooth and steady, and the user experience was comparable to the experience over the LAN.

Test Times Reduced

Over the WAN with 3% loss, this test required an additional 00:08:30 to complete when compared to the Calibration baseline test time of 00:41:01 [See Full Test Results]. With IPQ protection against that packet loss, 00:03:39 of that time was recovered.

The IPQ improvement when the WAN suffers 5% packet loss is even more dramatic. At 5% packet loss, this test required an additional 00:27:36 to complete when compared to the baseline. With IPQ protection against packet loss, 00:21:24 of that additional time was recovered.

With zero packet loss over the WAN, this test ran in 00:44:44. With 5% packet loss over the WAN, this test ran in 01:08:37. With IPQ protection against packet loss, this test ran in 00:47:13. With IPQ, the impact of packet loss on the test completion time was practically eliminated.
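The recovered-time figures follow directly from the raw completion times; a quick sketch using the 5%-loss WAN PPT numbers above (the helper function is ours):

```python
def to_seconds(hms: str) -> int:
    """Convert an hh:mm:ss string to total seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

baseline = to_seconds("00:41:01")   # LAN calibration run
lossy    = to_seconds("01:08:37")   # WAN, 5% packet loss, RDP 7.0 alone
with_ipq = to_seconds("00:47:13")   # WAN, 5% packet loss, RDP 7.0 + IPQ

extra_time = lossy - baseline       # additional time caused by packet loss
recovered  = lossy - with_ipq       # time recovered by IPQ
print(extra_time // 60, extra_time % 60)   # 27 36  -> 00:27:36 additional
print(recovered // 60, recovered % 60)     # 21 24  -> 00:21:24 recovered
```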

Speed (bps) Increased

IPQ improved average data transfer speeds in these tests. With IPQ Protection against 3% packet loss, average speeds were improved by more than 22%. At 5% packet loss, IPQ protection enabled an average data transfer speed improvement of more than 67%.



WAN SCENARIOS (cont'd)

WAN and Word – Notable Reduction in Lag

Test Times, Speeds Unchanged

The measurements of test times, throughput, and average data transfer speeds in this test scenario confirmed that low bandwidth applications such as Word are not impacted by the bandwidth constraints induced by the interaction of packet loss and TCP. Test results with/without packet loss and with/without IPQ were not materially different.

Retransmissions Reduced, User Experience Enhanced

However, packet loss did cause delays in application response times, resulting in a notable deterioration in the user experience. With IPQ protection, the number of TCP retransmissions required to deliver user input to the application and return the application's response to the user was greatly reduced; that is, the "lag" experienced by the user was significantly reduced. See the separate Retransmission Analyses for more.



WAN SCENARIOS (cont'd)

WAN and Flash – Playout Times Corrected, Data Transfer Speeds Improved

Like PowerPoint, Flash is a high bandwidth application, and as with the WAN PPTx tests, packet loss caused significant delays, uneven and even unusable application performance, and an unacceptable user experience in this test scenario. With IPQ protection against the packet loss, delays were greatly reduced, application performance was smooth and steady, and the user experience was comparable to the experience over the LAN, i.e., Flash was usable.

Test Times Reduced

Over the WAN with 5% loss, this test ran in 00:14:34, almost 6X the actual 00:02:30 runtime on the LAN [See Full Test Results]. With IPQ protection against the packet loss, this test time was reduced by more than half.

Speed (bps) Increased

IPQ improved average data transfer speeds in these tests. With IPQ Protection against 3% packet loss, average speeds were improved by more than 10%. At 5% packet loss, IPQ protection enabled an average data transfer speed improvement of more than 61%.



SATELLITE CONNECTION SCENARIOS

Satellite Connection and PowerPoint – Extreme Reduction in Test Time, Extreme Increase in Average Speed

As with the WAN PPTx tests, packet loss caused significant delays, uneven application performance, and an unsatisfactory user experience in the Satellite PPTx tests. With IPQ protection against that packet loss, the delays were reduced dramatically, the application performance was far more fluid and steady, and the user experience was much closer to the experience over the LAN.

Test Times Reduced

Over a Satellite Connection with 3% loss, this test required an additional 01:07:36 to complete when compared to the Calibration baseline test time of 00:41:01 [See Full Test Results]. With IPQ protection against that packet loss, 00:52:46 of that time was recovered.

The IPQ improvement when the Satellite Connection suffers 5% packet loss is even more dramatic. At 5% packet loss, this test required an additional 01:25:03 to complete when compared to the baseline. With IPQ protection against packet loss, 01:08:18 of that additional time was recovered.

With zero packet loss over the Satellite Connection, this test ran in 00:55:27. With 5% packet loss over the Satellite Connection, this test ran in 02:06:04. With IPQ protection against packet loss, this test ran in 01:08:18. With IPQ, the impact of packet loss on the test completion time was dramatically reduced.

Speed (bps) Increased

IPQ improved average data transfer speeds in these tests by as much as 2.3X. With IPQ Protection against 3% packet loss, average speeds were improved by more than 97%. At 5% packet loss, IPQ protection enabled an average data transfer speed improvement of more than 234%.



SATELLITE CONNECTION SCENARIOS (cont'd)

Satellite Connection and Word – Notable Reduction in Lag

Test Times, Speeds Unchanged

The measurements of test times, throughput, and average data transfer speeds in this scenario confirm that low bandwidth applications such as Word are not impacted by the bandwidth constraints induced by the interaction of packet loss and TCP. Test results with/without packet loss and with/without IPQ were not materially different.

Retransmissions Reduced, User Experience Enhanced

However, packet loss did cause delays in application response times, resulting in a notable deterioration in the user experience. With IPQ protection, the number of TCP retransmissions required to deliver user input to the application and return the application's response to the user was greatly reduced; that is, the "lag" experienced by the user was significantly reduced. See the separate Retransmission Analyses for more.



SATELLITE CONNECTION SCENARIOS (cont'd)

Satellite Connection and Flash – Playout Times Improved, Data Transfer Speeds Doubled

Like PowerPoint, Flash is a high bandwidth application, and as with the Satellite Connection PPTx tests, packet loss caused frequent, prolonged delays and a largely unusable user experience in this test scenario. With IPQ protection against that packet loss, the delays were greatly reduced, and application performance improved, as did the user experience.

Test Times Reduced

Over the Satellite Connection with 5% loss, the Flash test completed in 00:37:26, many times the actual Flash running time of 00:02:30 on the LAN [See Full Test Results]. With IPQ protection against packet loss, the test time was reduced by a full 00:22:16, bringing Flash over a Satellite Connection into the realm of possibility.

Speed (bps) Increased

IPQ improved average data transfer speeds in these tests. With IPQ Protection against 3% packet loss, average speeds were improved by more than 121%. At 5% packet loss, IPQ protection enabled an average data transfer speed improvement of more than 118%.



RETRANSMISSION ANALYSES

WAN and Word – Dramatic Reductions in Retransmissions and Lost Keystrokes

The analysis of the retransmissions with and without IPQ was prompted by the observation of high levels of lag, or delays in the application response to user input, i.e., the typing and scrolling activities. Keyboard input that fails to reach the virtual machine and application due to packet loss requires retransmission. The application response is delayed until the retransmission is made successfully and the keystroke is communicated to the application. Similarly, the application response to the keystroke is also subject to potential loss, also requiring retransmission from application to user. Whether lost en route to or from the application or the user – or both – the loss and retransmission manifest as lag.

TCP retransmissions are at once symptomatic of packet loss and the programmatic response to packet loss. IPeak Networks undertook this additional retransmission analysis to provide a set of underlying measurements that may be correlated to the user experience of lag.

Retransmissions and Lost Keystrokes Reduced

At 3% packet loss, while typing and scrolling over the WAN, retransmissions accounted for 11.47% of the data transferred. In effect, approximately one of every nine keystrokes was lost due to packet loss and required retransmission. With IPQ protection against the packet loss, retransmissions accounted for 0.57% of the data transferred, a net reduction in retransmissions of more than 95%. With IPQ, the number of keystrokes lost due to packet loss was reduced to roughly one in 200.

When typing and scrolling at 5% packet loss, retransmissions accounted for 14.89% of the data transferred. IPQ protection reduced the retransmissions in this test by almost 90% and the lost keystrokes were reduced from about one in six to fewer than one in sixty.
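The "one in N keystrokes" figures follow directly from the retransmission percentages, treating the retransmitted share of data as the share of lost keystrokes; a small sketch (the helper function is ours):

```python
def one_in_n(retransmission_pct: float) -> float:
    """Approximate 'one keystroke in N lost', given retransmissions as a % of data."""
    return 100.0 / retransmission_pct

# WAN - Word figures from the retransmission analysis
print(round(one_in_n(11.47)))      # 9   -> about one in nine keystrokes at 3% loss
print(round(one_in_n(14.89), 1))   # 6.7 -> roughly one in six at 5% loss
print(one_in_n(1.60))              # 62.5 -> fewer than one in sixty with IPQ at 5%
```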



RETRANSMISSION ANALYSES (cont'd)

Satellite Connection and Word – Dramatic Reductions in Retransmissions and Lost Keystrokes

As with the WAN analysis, this retransmission analysis was prompted by the observation of high levels of lag in the application response to the typing and scrolling activities. Keystrokes lost en route to the virtual machine, and application responses lost en route back to the user, both require TCP retransmission, and that retransmission manifests as lag. The analysis again provides a set of underlying measurements that may be correlated to the user experience of lag.

Retransmissions and Lost Keystrokes Reduced

Retransmissions represented 11.49% of the data transferred when typing and scrolling over a Satellite Connection with 3% packet loss; roughly one in nine keystrokes was lost. With IPQ protection against packet loss, retransmissions were reduced by almost 90 percent and lost keystrokes were reduced from one in nine to one in 62. At 5% packet loss, retransmissions were reduced by more than 80% and lost keystrokes were reduced from approximately one in six to fewer than one in forty.



Conclusions


The results of these tests confirm that packet loss and latency, both individually and in combination, have a seriously damaging effect on the performance of both high bandwidth and low bandwidth applications running in virtual desktops. It follows that reducing latency and packet loss is desirable in order to preserve application performance and support a satisfactory user experience of the virtual desktop.

There is little that can be done to reduce network latency other than locating the data centers and remote desktop users in close physical proximity to one another, a strategy that is antithetical to the vision of a ubiquitous and even global deployment of virtualized computing. It is therefore reasonable to focus on the performance gains and user experience benefits that may be realized through the use of technologies that reduce packet loss.

The results of these tests further confirm that IPQ is an effective solution for the problem of network packet loss. When measured in terms of reduced test time-to-completion, increased throughput, increased average data transfer speeds, and reduced TCP retransmissions, IPQ proved to be very effective in optimizing RDP 7.0 performance in all tests where packet loss was greater than zero.

High bandwidth applications benefited greatly with up to 235% improvement in data transfer speeds helping to optimize application performance under lossy network conditions and provide corresponding improvements in the user experience, i.e., far fewer delays and a more fluid and productive experience.

Low bandwidth applications also benefit with the near elimination of TCP retransmissions, a vast reduction in the number of keystrokes lost as a function of packet loss, and a corresponding reduction in the user’s experience of lag.


Full Test Results

CALIBRATION

LAN - Calibration (100 Mbps LAN Connection; Latency: 1 ms; Configured Loss: 0%)

MS Script | Test Time (hh:mm:ss) | MS Ticks | Bandwidth (bytes)
S1 PPTx | 00:41:01 | 2,452,515 | 183,812,710
S3 Word | 00:36:06 | 2,172,125 | 34,103,281
S4 Flash | 00:02:30 | N/A | 509,124,290

TEST SCENARIOS - RDP 7.0 Compared to RDP 7.0 + IPQ

Columns per run: Test Time (hh:mm:ss) | MS Ticks | Bandwidth (bytes) | Ave BW (bps). IPQ Improvement columns: Test Time Reduction | Ticks to Seconds | Bandwidth (bytes) | BW (bps) | Throughput | Speed.

WAN - PPT (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S1 PPTx)

0% loss
RDP 7.0: 00:44:44 | 2,685,047 | 247,630,521 | 738,094
RDP 7.0 + IPQ: 00:45:15 | 2,686,375 | 242,534,920 | 714,652

3% loss
RDP 7.0: 00:49:36 | 2,929,766 | 229,070,528 | 615,781
RDP 7.0 + IPQ: 00:45:57 | 2,709,828 | 260,441,219 | 755,724
IPQ Improvement: 00:03:39 | 220 | 31,370,691 | 139,943 | 13.69% | 22.73%

5% loss
RDP 7.0: 01:08:37 | 3,570,844 | 233,162,979 | 453,074
RDP 7.0 + IPQ: 00:47:13 | 2,683,969 | 268,843,823 | 759,178
IPQ Improvement: 00:21:24 | 887 | 35,680,844 | 306,104 | 15.30% | 67.56%

WAN - Word (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S3 Typing & Scrolling (Word))

0% loss
RDP 7.0: 00:36:19 | 2,170,453 | 35,976,731 | 132,085
RDP 7.0 + IPQ: 00:36:16 | 2,167,594 | 36,221,913 | 133,169

3% loss
RDP 7.0: 00:36:18 | 2,169,703 | 41,433,142 | 152,188
RDP 7.0 + IPQ: 00:36:21 | 2,171,312 | 35,490,767 | 130,182

5% loss
RDP 7.0: 00:36:06 | 2,170,671 | 43,550,881 | 160,853
RDP 7.0 + IPQ: 00:36:15 | 2,169,844 | 34,730,128 | 127,743

WAN - Flash (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S4 Flash)

0% loss
RDP 7.0: 00:06:27 | N/A | 61,459,752 | 1,270,486
RDP 7.0 + IPQ: 00:05:47 | N/A | 53,160,539 | 1,225,603

3% loss
RDP 7.0: 00:09:22 | N/A | 47,618,310 | 677,841
RDP 7.0 + IPQ: 00:09:32 | N/A | 53,734,840 | 751,536
IPQ Improvement: 00:00:40 | N/A | 6,116,530 | 73,696 | 12.84% | 10.87%

5% loss
RDP 7.0: 00:14:34 | N/A | 51,907,366 | 475,125
RDP 7.0 + IPQ: 00:07:34 | N/A | 43,524,171 | 766,946
IPQ Improvement: 00:07:00 | N/A | -8,383,195 | 291,821 | -16.15% | 61.42%

Satellite - PPT (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S1 PPTx)

0% loss
RDP 7.0: 00:55:27 | 3,280,609 | 219,053,050 | 526,728
RDP 7.0 + IPQ: 00:48:25 | 2,883,094 | 221,318,016 | 609,482
IPQ Improvement: 00:07:02 | 398 | 2,264,966 | 82,754 | 1.03% | 15.71%

3% loss
RDP 7.0: 01:48:37 | 6,377,594 | 190,129,083 | 233,395
RDP 7.0 + IPQ: 00:55:51 | 3,249,000 | 192,987,536 | 460,728
IPQ Improvement: 00:52:46 | 3,129 | 2,858,453 | 227,334 | 1.50% | 97.40%

5% loss
RDP 7.0: 02:06:04 | 7,410,235 | 178,940,587 | 189,255
RDP 7.0 + IPQ: 00:57:46 | 2,679,797 | 274,439,020 | 633,443
IPQ Improvement: 01:08:18 | 4,730 | 95,498,433 | 444,188 | 53.37% | 234.70%

Satellite - Word (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S3 Typing & Scrolling (Word))

0% loss
RDP 7.0: 00:36:20 | 2,171,687 | 35,453,614 | 130,105
RDP 7.0 + IPQ: 00:36:19 | 2,170,047 | 35,727,567 | 131,171

3% loss
RDP 7.0: 00:36:42 | 2,185,281 | 40,139,624 | 145,830
RDP 7.0 + IPQ: 00:36:24 | 2,176,188 | 37,099,227 | 135,895

5% loss
RDP 7.0: 00:37:09 | 2,209,593 | 43,892,686 | 157,533
RDP 7.0 + IPQ: 00:36:23 | 2,170,204 | 34,154,059 | 125,164

Satellite - Flash (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S4 Flash)

0% loss
RDP 7.0: 00:09:19 | N/A | 40,235,190 | 575,817
RDP 7.0 + IPQ: 00:10:24 | N/A | 46,281,557 | 593,353

3% loss
RDP 7.0: 00:32:16 | N/A | 56,611,952 | 233,934
RDP 7.0 + IPQ: 00:12:48 | N/A | 49,657,206 | 517,263
IPQ Improvement: 00:19:28 | N/A | -6,954,746 | 283,329 | -12.28% | 121.12%

5% loss
RDP 7.0: 00:37:26 | N/A | 54,896,274 | 195,534
RDP 7.0 + IPQ: 00:15:10 | N/A | 48,538,320 | 426,711
IPQ Improvement: 00:22:16 | N/A | -6,357,954 | 231,176 | -11.58% | 118.23%

RETRANSMISSION ANALYSIS - RDP 7.0 Compared to RDP 7.0 + IPQ

Low bandwidth apps are not impacted by bandwidth constraints induced by the interaction of packet loss and TCP; see the separate analysis of the impact of TCP retransmissions.

WAN - Word (1.5 Mbps, T1 / WAN Deployment; Latency: 100 ms; MS Script: S3 Typing & Scrolling (Word))

0% loss
RDP 7.0: 00:36:19 | 2,170,453 | 35,952,064 | 131,995
RDP 7.0 + IPQ: 00:36:16 | 2,167,594 | 36,221,913 | 133,169

3% loss
RDP 7.0: 00:36:18 | 2,169,703 | 36,681,986 | 134,736
RDP 7.0 + IPQ: 00:36:21 | 2,171,312 | 35,288,418 | 129,439

5% loss
RDP 7.0: 00:36:14 | 2,171,094 | 37,200,800 | 136,893
RDP 7.0 + IPQ: 00:36:15 | 2,173,047 | 33,573,688 | 123,489

Retransmissions with RDP 7.0 (3% / 5%): 11.47% / 14.89%
Retransmissions with RDP 7.0 + IPQ (3% / 5%): 0.57% / 1.60%
Net Reduction with IPQ (3% / 5%): 95.03% / 89.25%

Satellite - Word (8 Mbps, Satellite Connection; Latency: 300 ms; MS Script: S3 Typing & Scrolling (Word))

0% loss
RDP 7.0: 00:36:20 | 2,171,687 | 35,453,158 | 130,103
RDP 7.0 + IPQ: 00:36:19 | 2,170,047 | 35,727,111 | 131,169

3% loss
RDP 7.0: 00:36:42 | 2,185,281 | 35,528,018 | 129,075
RDP 7.0 + IPQ: 00:36:24 | 2,176,188 | 36,626,711 | 134,164

5% loss
RDP 7.0: 00:37:09 | 2,209,593 | 37,865,293 | 135,901
RDP 7.0 + IPQ: 00:36:23 | 2,170,204 | 33,341,317 | 122,185

Retransmissions with RDP 7.0 (3% / 5%): 11.49% / 13.73%
Retransmissions with RDP 7.0 + IPQ (3% / 5%): 1.27% / 2.38%
Net Reduction with IPQ (3% / 5%): 88.91% / 82.67%

TCP, RemoteFX, and IPQ

Impact of Latency and Loss on TCP Throughput, including RemoteFX and IPQ Throughput Test Results

July 2010


The IPQ Benefits Hierarchy

The benefits of IPQ are most often described in terms of a remarkable improvement in the user experience of real-time network applications. There is, however, an inter-related hierarchy of benefits associated with the use of IPQ.

STEP 1 is accomplished as a core IPQ capability and actually enables all remaining steps in the benefits hierarchy. All steps can add significant value to the enterprise in terms of enhanced productivity and reduced costs but the benefits implied at STEP 2 are the most profound and the most complex. This brief backgrounder describes the relationship between packet loss and IP network performance and quality.

Packet Loss and UDP

IP networks support two transport protocols: UDP and TCP. UDP is a 'best efforts' protocol that sends packets once and only once. There are no retransmissions. If packets are lost, it is up to the application that expects the lost data to recover, mask, or otherwise compensate for that loss. Real-time, interactive network applications like video conferencing implement UDP not because packet loss is not a problem – it is a big problem for these applications – but because the alternative, TCP, adds latency, and latency is not acceptable in a real-time interactive application environment.

Unlike UDP, TCP is a guaranteed delivery transport protocol. TCP is the right choice for applications that must receive all packets and for which the time it takes to receive them is of secondary importance. Email and web browsing are good examples of these types of applications.

STEP 4: IMPROVE USER EXPERIENCE
   ↑
STEP 3: IMPROVE NETWORK APPLICATION PERFORMANCE
   ↑
STEP 2: IMPROVE NETWORK PERFORMANCE AND QUALITY
   ↑
STEP 1: REDUCE PACKET LOSS


Packet Loss and TCP

In order to guarantee the delivery of all packets, TCP implements an acknowledgement and retransmission scheme. The source of the packets expects an acknowledgement for every packet that successfully reaches the destination. When packets are lost, i.e., when the destination fails to acknowledge receipt, the TCP source retransmits the lost packets. But it also assumes that the packet loss is an indication that the link cannot support the data rate, and it reduces the number of packets it sends on the retransmission. In a lossy network environment, the net effect of this scheme is a reduction of the so-called good throughput, or "goodput". Goodput refers to the total packets successfully received. Simple throughput, by contrast, refers to the total packets sent, whether those packets reach the destination or not. (In simple terms, goodput is equal to throughput minus retransmissions.) This reduction in goodput is sometimes called "TCP clamping".

Latency and TCP

Network latency has a similar effect on TCP throughput. Everything else being equal, the length of time required to send a packet and receive the acknowledgement defines the maximum data rate. When the source and destination are in close physical proximity and network latency is low, acknowledgements are received quickly and the next sequence of packets can be transmitted quickly. However, if distance and high latency separate the source and destination, then acknowledgements take longer and the next transmission of packets will be delayed. The net effect is, again, "TCP clamping" – a reduction in goodput.

When packet loss and latency are both present, they interact to cause even greater reductions in TCP throughput. The TCP Goodput Base Case chart summarizes the combined impact of packet loss and latency on network performance and quality.


The starting point in this chart is a 100 Mbps TCP session and a near-perfect network: a network with zero packet loss and no more than 5 milliseconds of latency. Over such a network there will be almost no TCP clamping, and a full 100 Mbps of goodput will be achieved.

If, however, the network suffers 100 ms of latency (as is common for a cross-country connection) and 1% packet loss (as is common on the public Internet), that same TCP session will be severely clamped and goodput will be reduced to a mere 1.16 Mbps. To frame the impact of this reduction in network performance, consider a 1 GB file transfer. Over that degraded network, the 1 GB transfer would take about 2 hours to complete. With protection against packet loss, however, goodput could be increased to 5.24 Mbps and the same 1 GB transfer would complete in about 25 minutes.
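The transfer-time framing above is straightforward arithmetic, sketched here as a sanity check (assuming 1 GB = 2³⁰ bytes and sustained rates of exactly 1.16 and 5.24 Mbps):

```python
# Time to move 1 GB at the two goodput ceilings quoted in the report.
GB_BITS = 2**30 * 8                 # 1 GB expressed in bits

unprotected_s = GB_BITS / 1.16e6    # clamped session: 100 ms latency, 1% loss
protected_s   = GB_BITS / 5.24e6    # same latency, loss protected

print(f"unprotected: {unprotected_s / 3600:.1f} h")
print(f"protected:   {protected_s / 60:.0f} min")
```

This lands at roughly 2 hours versus under half an hour, consistent with the figures quoted above.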

By reducing packet loss, and thereby improving network performance, the performance of the network application (in this case, the file transfer) is greatly improved and this in turn makes for a greatly improved user experience.

Another important variable contributing to TCP network performance is packet size. TCP achieves the greatest throughput (and therefore, in theory, the best goodput) with large packets. Multimedia streams over TCP (e.g., RemoteFX) generally include both audio and video channels. Audio packets are much smaller than video packets and, when combined, tend to pull the average packet size – and with it the maximum TCP throughput potential – downward. This is consistent with our RemoteFX test results shown below.
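The interplay of latency, loss, and packet size described above is captured by the well-known Mathis approximation for TCP throughput, rate ≈ MSS / (RTT · √p). Note: the report does not state how its chart values were produced, so treating them as Mathis-model ceilings is our assumption; the sketch below simply checks that the 100 ms / 1% loss cell is consistent with it, and shows how a smaller average segment size (the mixed audio/video case) lowers the ceiling proportionally:

```python
import math

# Mathis approximation: TCP throughput <= MSS / (RTT * sqrt(p)).
# MSS in the numerator is why smaller average packets reduce throughput.
def mathis_mbps(mss_bytes: int, rtt_s: float, loss_fraction: float) -> float:
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_fraction)) / 1e6

full  = mathis_mbps(1460, 0.100, 0.01)  # full-size segments, 100 ms RTT, 1% loss
mixed = mathis_mbps(700, 0.100, 0.01)   # assumed smaller average segment (audio+video)

print(round(full, 2), round(mixed, 2))  # full lands near the chart's 1.16 Mbps cell
```

The 700-byte average segment is an illustrative assumption; halving the average segment size halves the throughput ceiling, which matches the "averaging downward" effect described above.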

TCP Goodput Base Case
Goodput Maximums Under Varied Latency and Packet Loss Conditions
Control: 100 Mbps of bandwidth under perfect network conditions

TCP maximum goodput bandwidth (Mbps), by latency (rows, msecs) and packet loss (columns):

Latency (ms) | No Loss | .01% Loss | 1% Loss | 2% Loss | 5% Loss | 10% Loss
5            | 100.00  | 73.87     | 23.36   | 16.51   | 10.41   | 7.38
10           | 52.42   | 36.93     | 11.68   | 8.25    | 5.22    | 3.69
25           | 20.97   | 14.77     | 4.67    | 3.30    | 2.08    | 1.47
50           | 10.48   | 7.38      | 2.33    | 1.65    | 1.04    | 0.73
75           | 6.99    | 4.92      | 1.55    | 1.10    | 0.69    | 0.49
100          | 5.24    | 3.69      | 1.16    | 0.82    | 0.52    | 0.36
125          | 4.19    | 2.95      | 0.93    | 0.66    | 0.41    | 0.29
150          | 3.49    | 2.46      | 0.77    | 0.55    | 0.34    | 0.24
175          | 2.99    | 2.11      | 0.66    | 0.47    | 0.29    | 0.21
200          | 2.62    | 1.84      | 0.58    | 0.41    | 0.26    | 0.18
250          | 2.09    | 1.47      | 0.46    | 0.33    | 0.20    | 0.14
300          | 1.74    | 1.23      | 0.38    | 0.27    | 0.17    | 0.12
350          | 1.49    | 1.05      | 0.33    | 0.23    | 0.14    | 0.10
400          | 1.31    | 0.92      | 0.29    | 0.20    | 0.13    | 0.09
500          | 1.04    | 0.73      | 0.23    | 0.16    | 0.10    | 0.07

The original chart annotated the latency axis with typical unprotected network types (LAN, Metro/State, X-Country, WAN, Global, and Satellite at 500 ms) and the loss axis with LAN, Internet, and WiFi loss levels.


This chart shows the measured maximum TCP goodput when displaying HD video in a RemoteFX virtual desktop over varying network conditions.

The control for this series of test measurements was established by running the HD video over perfect network conditions: zero latency and zero packet loss. The measured TCP goodput with RemoteFX over a perfect network was 27 Mbps. (This control point is shown in green text in the upper left hand corner of the chart.) Measurements were then taken after adding latency in amounts that represent typical local, metro, cross-country, and global IP network conditions. The same measurements were then taken with packet loss added in increments that are representative of typical LAN, public Internet, and WiFi network conditions. The entire set of measurements was then repeated with IPQ protection against the packet loss on the network.

As the chart shows, when 30 milliseconds of latency and 1% packet loss were added in order to emulate a typical, unprotected metro network connection over the public Internet, the RemoteFX goodput was reduced to 3.4 Mbps, a data rate that is unlikely to deliver a "best" user experience of HD video. By contrast, under the same conditions but with IPQ protection against the packet loss, the RemoteFX goodput nearly doubled to 6.2 Mbps, a data rate that easily supports a best user experience of HD video.

RemoteFX Test Results
Goodput Maximums Under Varied Latency and Packet Loss Conditions
Control: HD video generating 27 Mbps of bandwidth under perfect network conditions (see notes)

RemoteFX maximum goodput bandwidth (Mbps); unprotected network on the left, with IPQ protection on the right:

Latency (ms) | No Loss | 1% Loss | 2% Loss | 5% Loss | 10% Loss | IPQ 1% | IPQ 2% | IPQ 5% | IPQ 10%
0            | 27.00   | 21.70   | 18.00   | 4.00    | 1.20     | 27.00  | 27.00  | 27.00  | 21.60
5            | 21.00   | 13.40   | 8.40    | 3.40    | 1.10     | 20.70  | 19.20  | 18.60  | 12.70
10           | 14.60   | 7.50    | 5.10    | 2.60    | 1.00     | 13.80  | 13.60  | 12.80  | 8.00
15           | 11.00   | 5.60    | 4.30    | 2.30    | 0.80     | 10.60  | 10.50  | 9.20   | 6.00
20           | 8.70    | 5.00    | 3.30    | 1.80    | 0.75     | 8.30   | 8.20   | 7.40   | 4.50
25           | 7.30    | 4.00    | 2.80    | 1.30    | 0.70     | 7.00   | 6.90   | 5.70   | 4.10
30           | 6.20    | 3.40    | 2.30    | 1.10    | 0.60     | 6.20   | 6.10   | 5.40   | 3.40
35           | 5.60    | 2.80    | 2.00    | 1.10    | 0.60     | 5.40   | 5.30   | 4.30   | 3.10
40           | 4.80    | 2.50    | 1.80    | 1.00    | 0.50     | 4.80   | 4.60   | 4.10   | 2.50
45           | 4.30    | 2.30    | 1.20    | 1.00    | 0.48     | 4.20   | 4.10   | 3.60   | 2.30
50           | 4.00    | 2.30    | 1.50    | 0.90    | 0.44     | 4.00   | 3.80   | 3.10   | 2.10
75           | 2.50    | 1.30    | 1.00    | 0.62    | 0.33     | 2.50   | 2.50   | 2.20   | 1.30
100          | 2.00    | 1.00    | 0.80    | 0.45    | 0.30     | 2.00   | 2.00   | 1.80   | 1.00
125          | 1.60    | 0.86    | 0.73    | 0.40    | 0.26     | 1.60   | 1.60   | 1.50   | 0.80
150          | 1.40    | 0.65    | 0.56    | 0.32    | 0.20     | 1.30   | 1.40   | 1.10   | 0.70
175          | 1.20    | 0.60    | 0.36    | 0.30    | 0.20     | 1.20   | 1.20   | 1.00   | 0.60

Notes
• Linux TC used for network emulation and impairments
• Windows 7 RemoteFX desktop at 1920 by 1080
• Sample video: wildlife.wmv, 1280 by 720, 29 fps (distributed with Windows 7)
• Windows Media Player playback at 100%

The original chart annotated the latency axis with typical network types (LAN, Metro/State, X-Country, WAN, Global) and the loss columns with LAN, Internet, and WiFi loss levels.

To emphasize the potential benefits of IPQ with respect to network performance and quality, the chart uses colored backgrounds to highlight the range of network goodput values that will support the best user experience (green background), a good user experience (yellow background), and a poor user experience (pink background) of HD video running over RemoteFX. With IPQ protection, as shown on the right hand side of the chart, the range of network latency and loss conditions that can still support best and good user experience of HD video is far greater than the same range without protection against packet loss.
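The green/yellow/pink banding could be applied programmatically to such measurements. In this sketch the numeric thresholds are illustrative assumptions; the report only establishes that 6.2 Mbps easily supported a best experience while 3.4 Mbps likely did not:

```python
# Classify RemoteFX HD-video goodput into the chart's UX bands.
# Thresholds (6.0 and 4.0 Mbps) are assumed for illustration, not
# values stated in the report.
def ux_band(goodput_mbps: float) -> str:
    if goodput_mbps >= 6.0:
        return "best"   # green region
    if goodput_mbps >= 4.0:
        return "good"   # yellow region
    return "poor"       # pink region

# The report's 30 ms / 1% loss example: with and without IPQ.
print(ux_band(6.2), ux_band(3.4))
```

Sweeping such a classifier over the latency/loss grid reproduces the point made above: protection against loss greatly widens the region of conditions that still yield a best or good experience.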

IPeak Networks has concluded that by adding IPQ protection against packet loss, the RemoteFX value proposition can be expanded to include not only the LAN but also the WAN. While competing protocols such as VMware's PCoIP do claim to support the WAN, they still require a high-quality, low-loss network – that is, an MPLS WAN. By adding IPQ protection against packet loss, RemoteFX can become the only remote display protocol that supports virtualized desktops running over the public Internet – the true WAN. The net effect would be to grow the addressable market for RDS to include deployments for nomadic sales, for workshifting, for the home consumer, and for those global markets where readily accessible IP networks are of particularly low quality and the excessively high cost of better-quality (i.e., MPLS) network services is a significant obstacle to the adoption of virtualization technologies.

IPQ Use Case Matrix

Microsoft Footprint

July 2010

IPQ Use Case Matrix – MS Footprint


ADVANTAGES (User Experience): Performance, Quality, Speed

Categories and Microsoft products:
• Virtualization: RDS (VDI with RemoteFX and RDP; traditional SBC, TS); Hyper-V (SQL, Exchange, App-V, Streaming, Sharepoint); Azure (Cloud/OTT; WAP private)
• Video Streaming: Windows 7 Phone (3G/4G, tether, home WiFi); Silverlight; Xbox Live; Windows Media Center
• Collaboration: OCS
• Gaming: Xbox Live; speculation: virtualized gaming

Use cases and pains:
1. Nomadic salesman, workshifting — Poor user experience over broadband (USB lag; slow, buffered videos). MPLS networks are too expensive for mass multimedia deployment and do not reach home workers.
2. Turning VDI multimedia on — Multimedia in VDI requires high-quality, high-performance bandwidth and is therefore often turned off.
3. Enable B2B reach — MPLS networks are too rigid and expensive to use for temporary and/or unplanned B2B communications.
4. ROBO (remote office/branch office) — MPLS networks are too rigid and expensive to support an economical rollout of VDI to small and remote offices.
5. Untethered VDI over WiFi — Effective WiFi goodput is inadequate to deliver an acceptable untethered VDI user experience.
6. Desktop as a Service (DaaS) — Poor user experience over broadband fronthaul (USB lag, slow videos).
7. Desktop as a Service (DaaS) — Costly on-premise data backend connection to the off-premise hosted DaaS apps cloud (backhaul).
8. Disaster recovery/replication and backup — DC replication/backup takes too long.
9. Off-premise clouds — The distance between data centers, or from the cloud edge to users, is constrained by latency and packet loss.
10. FUTURE: streamed OSs and apps (Windows 8) — Slow streaming (particularly the first time).
11. FUTURE: Type 1 hypervisor sync with cloud — Slow synchronizing.
12. Typing, windows, keyboard, and mouse performance — Poor user experience over broadband (USB lag).
13. Large data movements — Any Hyper-V-based application that moves large amounts of data over TCP is negatively affected by latency and loss.
14. Gaming, MMOG (e.g., Halo) — Unresponsive and inaccurate real-time gaming.
15. Virtualized gaming (e.g., Onlive.com) — Unresponsive and inaccurate real-time gaming.
16. In-home media distribution over WiFi (Windows Media Center PC to Xbox, STB, TV) — WiFi loss rates reduce throughput and cause excessive buffering.
17. OTT or TV Anywhere live events — Live events streamed in HD suffer repeated rebuffering and codec downspeeding for a degraded user experience.
18. Content distribution networks (CDN) and P2P — CDNs experience problems traversing partner networks; no QoS exists from the CDN edge to the consumer, particularly for live sporting events.
19. Live video (video chat, ESPN) — Video chat is greatly affected by loss: visual artifacts, badly unsynced audio, dropped calls, freeze frames.
20. Video chat — Video chat is greatly affected by loss: visual artifacts, badly unsynced audio, dropped calls, freeze frames.

User experience advantages:
• Reduce: lag (mouse, keyboard)
• Improve: multimedia bit rate
• Avoid: buffering/rebuffering; video artefacts; audio-video sync errors; dropped calls
• Accelerate: perceived processing times


ADVANTAGES (WAN Cost and Performance): Cost, Speed, Reach, Visibility

Categories and Microsoft products:
• Virtualization: RDS (VDI with RemoteFX and RDP; traditional SBC, TS); Hyper-V (SQL, Exchange, App-V, Streaming, Sharepoint); Azure (Cloud/OTT; WAP private)
• Video Streaming: Windows 7 Phone (3G/4G, tether, home WiFi); Silverlight; Xbox Live; Windows Media Center
• Collaboration: OCS
• Gaming: Xbox Live; speculation: virtualized gaming

Use cases and pains:
1. Nomadic salesman, workshifting — Poor user experience over broadband (USB lag; slow, buffered videos). MPLS networks are too expensive for mass multimedia deployment and do not reach home workers.
2. Turning VDI multimedia on — Multimedia in VDI requires high-quality, high-performance bandwidth and is therefore often turned off.
3. Enable B2B reach — MPLS networks are too rigid and expensive to use for temporary and/or unplanned B2B communications.
4. ROBO (remote office/branch office) — MPLS networks are too rigid and expensive to support an economical rollout of VDI to small and remote offices.
5. Untethered VDI over WiFi and mobile — WiFi goodput is inadequate to deliver an acceptable untethered VDI user experience.
6. Desktop as a Service (DaaS) — Poor user experience over broadband fronthaul (USB lag, slow videos).
7. Desktop as a Service (DaaS) — Costly on-premise data backend connection to the off-premise hosted DaaS apps cloud (backhaul).
8. Disaster recovery/replication and backup — DC replication/backup takes too long.
9. FUTURE: streamed OSs and apps (Windows 8) — Slow streaming (particularly the first time).
10. FUTURE: Type 1 hypervisor sync with cloud — Slow synchronizing.
11. Typing, windows, keyboard, and mouse performance — Poor user experience over broadband (USB lag).
12. Large data movements — Any Hyper-V-based application that moves large amounts of data over TCP is negatively affected by latency and loss.
13. Gaming, MMOG (e.g., Halo) — Unresponsive and inaccurate real-time gaming.
14. Virtualized gaming (e.g., Onlive.com) — Unresponsive and inaccurate real-time gaming.
15. In-home media distribution over WiFi to Xbox, STB, TV — WiFi loss rates reduce throughput and damage media playback.
16. OTT or TV Anywhere live events — Live events streamed in HD suffer repeated rebuffering and codec downspeeding for a degraded user experience.
17. Content distribution networks (CDN) and P2P — CDNs experience problems traversing partner networks; no QoS exists from the CDN edge to the consumer, particularly for live sporting events.
18. Live video (video chat, ESPN) — Video chat is greatly affected by loss: visual artifacts, badly unsynced audio, dropped calls, freeze frames.
19. Video chat — Video chat is greatly affected by loss: visual artifacts, badly unsynced audio, dropped calls, freeze frames.

WAN cost and performance advantages:
• Reduce MPLS network costs: augment or replace MPLS with broadband; relax MPLS SLAs
• Accelerate: actual process cycle time
• Increase: user reach and application reach
• Deliver: real-time network telemetry


IPQ Network Quality Technology Benefits RemoteFX and Other Microsoft Products

1. User Experience: Performance & Quality

The user experience is of paramount importance. User expectations of application performance are very high and failure to meet those expectations can become a difficult obstacle in the path toward ubiquitous technology adoption.

Packet loss causes reduced TCP throughput, which in turn triggers side-effects such as CODEC downspeeding, buffering delays, and audio quality issues. The user experience of TCP-based protocols and applications that rely on large video flows (e.g., RDS, RDP, Hyper-V Streaming, Xbox Live MMOG, Xbox Live Streaming [Netflix], and Windows Media Center over home WiFi) suffers badly under these circumstances.

Hyper-V applications such as SQL Server, Exchange, App-V, and SharePoint running over RDS are susceptible to UI lag caused by packet loss. Users experience UI lag as delayed and out-of-sequence mouse and keyboard inputs, and as delayed window operations.

Packet loss can have a severe impact on UDP-based network applications (e.g., Windows 7 Phone Video Chat, Silverlight, and OCS Video Chat), producing video artefacts, pixelation, comet-tails, and freeze-ups. (For the impact of packet loss on the UDP-based PCoIP protocol implemented in VMware View, see the follow-on IPQ-for-PCoIP testing.)

2. User Experience: Speed

The upcoming Windows 8 Application and OS streaming (per Softricity) and offline VDI capabilities (per Kidaro) will be required to support large data flows. As described above, packet loss will lead to reduced TCP throughput which will result in extended operation completion times for these applications.

3. WAN Cost and Performance: Cost

Enterprises considering the adoption of technologies and applications that consume large amounts of bandwidth (e.g., RDS, Azure, Windows Azure Appliances, OCS, SQL Server, Exchange, Sharepoint) must be prepared for disproportionately high network operating costs. An investment bank in Singapore deploying 4,000 VMware View desktops over the corporate MPLS WAN was forced to disable the bandwidth-hungry support for multimedia on those desktops: the network costs for multimedia support were far too high. By contrast, IPeak Networks recently ran a demonstration for a large bank that uses Cisco TelePresence to speed communications and decisions between branches across its operations throughout South East Asia. As shown in the ROI calculations below, IPeak Networks enabled a potential 17X reduction in the bank's TelePresence network operating costs by simply replacing the 8 Mbps MPLS links with much more affordable 8 Mbps DSL links protected by IPQ.


Enterprises unwilling or unable to abandon MPLS can still realize significant savings by relaxing the SLA to a "best effort" class and adding IPQ protection to maintain the quality of the network services. IPeak Networks' experience shows that this approach can return a 50% reduction in MPLS network operating costs.

In the emerging Desktop-as-a-Service use case, there is typically a requirement to connect back-end enterprise data to off-premises clouds. IPeak Networks' experience indicates that the associated MPLS costs can add as much as 25% to the monthly cost of each desktop.

By using IPQ to reduce network operating costs, enterprises are free to opt for wider deployments of more fully featured RDS implementations including RemoteFX.

4. WAN Cost and Performance: Speed

MPLS cannot be used to connect to public cloud services like Amazon EC2 and Microsoft Azure; the low-quality public Internet is the only way to reach these services. Enterprises needing to transfer large files to and from the cloud will experience extended operation completion times.


Hyper-V applications that move large amounts of data over the public Internet without protection against packet loss will experience extended operation completion times.

Backup, replication and disaster recovery applications and services that require Internet connections between data centers and clouds will experience prohibitively long operation completion times.

5. WAN Cost and Performance: Reach

IPQ can greatly extend the reach of applications like RDS, Azure, Hyper-V applications, and OCS. By effectively adding the quality these applications need to the ubiquitous Internet, enterprises can extend their deployments well beyond the reach of metro-constrained MPLS networks.

IPQ can greatly extend the reach of consumer-oriented applications such as Xbox Live Streaming, Xbox Live MMOG gaming, Silverlight, and Windows Media Center by adding the quality that these applications need to the public Internet.

6. WAN Cost and Performance: Visibility

IPQ offers a uniquely powerful real-time network telemetry capability. IPQ monitors, measures, and stores actual network conditions in an SNMP MIB, which can be used to properly apportion responsibility between network performance and quality on the one hand and application performance and quality on the other.