PVS and MCS Webinar - Technical Deep Dive


DESCRIPTION

This webinar covers the current state of MCS and PVS. We'll look at how MCS and PVS work differently on hypervisors like ESXi and Hyper-V, examine newer target platforms such as Windows Server 2012 R2 to see whether PVS or MCS behave differently there, and finally dive into the new VHDX-based PVS wC option and why you should be using it for all your workloads. Presented by Nick Rintalan.

TRANSCRIPT

Page 1: PVS and MCS Webinar - Technical Deep Dive

PVS vs. MCS… PVS & MCS!

Nick Rintalan

Technical Deep Dive

Lead Architect, Americas Consulting, Citrix Consulting

August 5, 2014

Page 2: PVS and MCS Webinar - Technical Deep Dive


Agenda

Myth Busting

The New PVS wC Option

Detailed Performance Testing & Results

Key Takeaways

Q & A

Page 3: PVS and MCS Webinar - Technical Deep Dive

Myth Busting!

Page 4: PVS and MCS Webinar - Technical Deep Dive


Myth #1 – PVS is Dead!

PVS is alive and well

Not only are we (now) enhancing the product and implementing new features like RAM cache with overflow to disk, but it has a healthy roadmap

Why the Change of Heart?
• We realized PVS represents a HUGE competitive advantage versus VMware
• We realized our large PVS customers need a longer “runway”
• We actually had a key ASLR bug in our legacy wC modes we had to address

Page 5: PVS and MCS Webinar - Technical Deep Dive


Myth #2 – MCS Cannot Scale

Most people that say this don’t really know what they are talking about

And the few that do might quote “1.6x IOPS compared to PVS”

The 1.6x number was taken WAY out of context a few years back (it took into account boot and logon IOPS, too)

Reality: MCS generates about 1.2x IOPS compared to PVS in the steady state
• 8% more writes and 13% more reads, to be exact
• We have a million technologies to handle those additional reads!

But performance (and IOPS) is only one aspect you need to consider when deciding between PVS and MCS…
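To put that steady-state delta in rough terms, here is a hypothetical back-of-the-envelope calculation; the ~1.2x multiplier is the figure from this slide, while the desktop count and per-user baseline are assumptions for illustration only:

# Hypothetical sizing illustration. The ~1.2x steady-state multiplier comes from the
# slide above; the desktop count and per-user PVS baseline are assumed values.
desktops = 500                      # assumed environment size
pvs_iops_per_user = 10.0            # assumed steady-state PVS IOPS per user

pvs_total = desktops * pvs_iops_per_user
mcs_total = pvs_total * 1.2         # MCS ~= 1.2x PVS IOPS in steady state
print(f"PVS: {pvs_total:,.0f} IOPS  |  MCS: {mcs_total:,.0f} IOPS "
      f"(+{mcs_total - pvs_total:,.0f} IOPS for storage to absorb)")

Whether that extra 20% or so matters depends on what sits underneath: a read-caching tier can absorb the additional reads, while the additional writes still land on the array.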

Page 6: PVS and MCS Webinar - Technical Deep Dive


Myth #3 – MCS (or Composer) is Simple on a Large Scale

MCS (or any technology utilizing linked-clone’ish technology) still leaves a bit to be desired from an operations and management perspective today
• Significant time required when updating a large number of VDIs (or rolling back)
• Controlled promotion model
• Support for things like vMotion
• Some scripting may be required to replicate parent disks efficiently, etc.

MCS is Simple/Easy
• I’d agree as long as it is somewhat small’ish (less than 1k VDIs or 5k XA users)
• But at real scale, MCS is arguably more complex than PVS
• How do you deploy MCS or Composer to thousands of desktops residing on hundreds of LUNs, multiple datastores and instances of vCenter, for example?
• This is where PVS really shines today

Page 7: PVS and MCS Webinar - Technical Deep Dive


Myth #4 – PVS is Complex

Make no mistake, the insane scalability that PVS provides doesn’t come absolutely “free”, so there is some truth to this statement

BUT, have you noticed what we’ve done over the last few years to address this?
• vDisk Versioning
• Native TFTP Load Balancing via NetScaler 10.1+
• We are big endorsers of virtualizing PVS (even on that vSphere thing)
• We have simplified sizing the wC file and we also endorse thin provisioning these days
  - RAM Cache w/ overflow to disk (and thin provision the “overflow” disk = super easy)

Page 8: PVS and MCS Webinar - Technical Deep Dive


Myth #5 – PVS Can Cause Outages

So can humans!

And if architected correctly, using a pod architecture, PVS cannot and should not take down your entire environment

Make sure every layer is resilient and fault tolerant
• Don’t forget about Offline Database Support and SQL HA technologies (mirroring)

We still recommend multiple PVS farms with isolated SQL infrastructure for our largest customers – not really for scalability or technical reasons, but to minimize the failure domain

Page 9: PVS and MCS Webinar - Technical Deep Dive


Myth #6 – XenServer is dead and MCS only works with IntelliCache

Just like PVS, XenServer is also alive and well
• We just shifted our focus a bit
• Contrary to popular belief, we are still actively developing it

We are implementing hypervisor-level RAM-based read caching in XS.next
• Think “IntelliCache 2.0” (no disks or SSDs required this time!)
• The new in-memory read caching feature and the old IC feature can even be combined!

Did you know that MCS already works and is supported with CSV Caching in Hyper-V today?

Did you know that MCS also works with CBRC?
• We even have customers using it in production! (Just don’t ask for official support)

Page 10: PVS and MCS Webinar - Technical Deep Dive

The new PVS wC Option

aka “The Death of IOPS”

Page 11: PVS and MCS Webinar - Technical Deep Dive


RAM Cache with Overflow to Disk – Details

First and foremost, this RAM Caching is NOT the same as the old PVS RAM Cache feature
• This one uses non-paged pool memory and we no longer manage internal cache lists, etc. (let Windows do it – it is pretty good at this stuff as it turns out!)
• Actually compared the old vs. new RAM caching and found about 5x improvement in throughput

Pretty simple concept: leverage memory first, then gracefully spill over to disk
• VHDX-based as opposed to all other “legacy” wC modes, which are VHD-based
  - vdiskdif.vhdx vs. .vdiskcache
• Requires PVS 7.1+ and Win7/2008R2+ targets
• Also supports TRIM operations (shrink/delete!)

Page 12: PVS and MCS Webinar - Technical Deep Dive


RAM Cache with Overflow to Disk – Details Cont’d

The VHDX spec uses 2 MB chunks or block sizes, so that is how you’ll see the wC grow (in 2 MB chunks)

The wC file will initially be larger than the legacy wC file, but over time, it will not be significantly larger as data will “backfill” into those 2 MB reserved blocks

This new wC option has nothing to do with “intermediate buffering” – totally replaces it

This new wC option is where we want all our customers to move ASAP, for not only performance reasons but stability reasons (ASLR)
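As a quick illustration of that growth pattern, here is a hypothetical back-of-the-envelope sketch; the 2 MB block size comes from the slide, while the amount of write-cache data is an arbitrary example:

# Illustrative only: the VHDX-based wC allocates space in whole 2 MB blocks, so the
# file size is the dirty data rounded up to full blocks (VHDX metadata ignored here).
import math

BLOCK_MB = 2                       # VHDX block size per the slide
dirty_data_mb = 37                 # arbitrary example amount of unique write-cache data

blocks = math.ceil(dirty_data_mb / BLOCK_MB)
vhdx_wc_mb = blocks * BLOCK_MB     # 19 blocks -> 38 MB on disk
legacy_wc_mb = dirty_data_mb       # legacy .vdiskcache grows roughly with the data itself

print(f"VHDX wC: {vhdx_wc_mb} MB in {blocks} x 2 MB blocks (vs ~{legacy_wc_mb} MB legacy)")
print("Later 4K writes backfill the already-reserved blocks before new blocks are added")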

Page 13: PVS and MCS Webinar - Technical Deep Dive


Why it works so well with only a little RAM

A small amount of RAM will give a BIG boost!

All writes (mostly random 4K) first hit memory

They get realigned and put into 2 MB memory blocks in Non-Paged Pool

If they must flush to disk, they get written as 2 MB sequential VHDX blocks
• We convert all random 4K write IO into 2 MB sequential write IO

Since Non-Paged Pool and VHDX are used, we support TRIM operations
• Non-Paged Pool memory can be reduced and the VHDX can shrink!
• This is very different from all our old/legacy VHD-based wC options
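A minimal sketch of that write path, assuming a toy eviction policy and a made-up workload (this is not Citrix driver code, just the coalescing idea in miniature):

# Toy model of "RAM first, overflow to disk": random 4K guest writes are grouped into
# 2 MB blocks held in RAM; once the RAM budget is exhausted, whole 2 MB blocks spill
# sequentially to the overflow file (the real eviction policy is not described in the
# deck, so an arbitrary one is used here).
import random

BLOCK = 2 * 1024 * 1024            # VHDX block size: 2 MB
PAGE = 4 * 1024                    # typical random guest write: 4 KB

class WriteCache:
    def __init__(self, ram_budget_mb):
        self.max_ram_blocks = ram_budget_mb * 1024 * 1024 // BLOCK
        self.ram_blocks = {}       # block index -> set of dirty 4K pages in that block
        self.spilled_blocks = 0    # 2 MB blocks appended sequentially to the overflow VHDX

    def write_4k(self, offset):
        idx = offset // BLOCK      # realign the 4K write into its 2 MB block
        self.ram_blocks.setdefault(idx, set()).add((offset % BLOCK) // PAGE)
        while len(self.ram_blocks) > self.max_ram_blocks:
            self.ram_blocks.popitem()          # evict one whole block: one sequential 2 MB write
            self.spilled_blocks += 1

wc = WriteCache(ram_budget_mb=256)
for _ in range(200_000):                       # ~800 MB of random 4K writes over a 10 GB range
    wc.write_4k(random.randrange(0, 10 * 1024**3 // PAGE) * PAGE)
print(f"blocks resident in RAM: {len(wc.ram_blocks)}, 2 MB blocks spilled: {wc.spilled_blocks}")

The point the slide is making is the conversion itself: whatever does spill hits the disk as large sequential writes rather than small random ones, which is a far friendlier pattern for spindles and shared storage alike.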

Page 14: PVS and MCS Webinar - Technical Deep Dive

Performance Results

Page 15: PVS and MCS Webinar - Technical Deep Dive


Our First Field Test (XA workloads w/ vSphere)

Used IOMETER to compare legacy wC options and new wC option
• #1 – “line test” (i.e. no PVS)
• #2 and #3 – new wC option
• #4 – legacy RAM cache option
• #5 – legacy disk cache option (which 90% of our customers use today!!!)

Page 16: PVS and MCS Webinar - Technical Deep Dive


Our Second Field Test (XD workloads w/ vSphere and Hyper-V)

Win7 on Hyper-V 2012 R2 with 256 MB buffer size (with bloated profile): [results chart]

Win7 on vSphere 5.5 with 256 MB buffer size (with bloated profile): [results chart]

Page 17: PVS and MCS Webinar - Technical Deep Dive


And Even More Testing from our Solutions Lab – LoginVSI 4.0

Variables
• Product version
• Hypervisor
• Image delivery
• Workload
• Policy

Hardware
• HP DL380p G8
• (2) Intel Xeon E5-2697
• 384 GB RAM
• (16) 300 GB 15,000 RPM spindles in RAID 10

Page 18: PVS and MCS Webinar - Technical Deep Dive


[Chart: VSI Max (XenApp 7.5 - LoginVSI 4), y-axis 0-250, across twelve test configurations varying OS (2008R2/2012R2), test type (UX/Scale), workload (Light/Medium), image delivery (MCS, PVS Disk, PVS RAM), hypervisor (Hyper-V/vSphere) and product version (6.5/7.5)]

Page 19: PVS and MCS Webinar - Technical Deep Dive


PVS vs MCS – Notable XenApp 7.5 Results

[Chart: VSI Max on Hyper-V 2012R2 and vSphere 5.5 for MCS, PVS (Disk) and PVS (RAM with Overflow), y-axis 0-250]

Imaging platform does NOT impact single server scalability

Page 20: PVS and MCS Webinar - Technical Deep Dive


PVS vs MCS – Notable XenDesktop 7.5 Results

[Chart: VSI Max on Hyper-V 2012R2 and vSphere 5.5 for MCS, PVS (Disk) and PVS (RAM with Overflow), y-axis 0-90]

Imaging platform does NOT impact single server scalability

Page 21: PVS and MCS Webinar - Technical Deep Dive


MCS vs PVS (Disk) vs PVS (RAM with Overflow) – Notable XenDesktop 7.5 Results

[Chart: IOPS per user on Hyper-V and vSphere for MCS, PVS (Disk) and PVS (RAM with Overflow), y-axis 0-10]

PVS (RAM with Overflow): less than 0.1 IOPS per user with a 512 MB RAM Cache!

Page 22: PVS and MCS Webinar - Technical Deep Dive


PVS (RAM with Overflow) 512 MB vs 256 MB – Notable XenDesktop 7.5 Results

[Chart: IOPS per user with a 512 MB vs 256 MB RAM cache, y-axis 0-1]

• 512 MB RAM = 0.09 IOPS
• 256 MB RAM = 0.45 IOPS
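Scaled up, those per-user figures stay tiny at the host level; the sketch below uses the per-user IOPS values from the results above, while the desktops-per-host density is an assumed figure:

# Illustration only: per-user IOPS values are the test results above;
# the desktops-per-host density is an assumption, not a test result.
desktops_per_host = 100
iops_per_user = {"512 MB buffer": 0.09, "256 MB buffer": 0.45}

for cfg, iops in iops_per_user.items():
    print(f"{cfg}: ~{desktops_per_host * iops:.0f} steady-state host IOPS")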

Page 23: PVS and MCS Webinar - Technical Deep Dive


Peak IOPS – Notable XenApp 7.5 Results

[Chart: Total Host IOPS (100 users on host), PhysicalDisk -- Disk Transfers/sec -- _Total over the test run, y-axis 0-180 IOPS]

Peak = 155 IOPS

Page 24: PVS and MCS Webinar - Technical Deep Dive

Key Takeaways & Wrap-Up

Page 25: PVS and MCS Webinar - Technical Deep Dive


Key Takeaways

Performance/Scalability is just one element to weigh when deciding between MCS and PVS
• Do NOT forget about manageability and operational readiness
• How PROVEN is the solution?
• How COMPLEX is the solution?
• Do you have the ABILITY & SKILLSET to manage the solution?
• Will it work at REAL SCALE with thousands of devices?

The new VHDX-based PVS 7.x write cache option is the best thing we have given away for FREE since Secure Gateway (IMHO)

It doesn’t require a ton of extra memory/RAM – a small buffer will go a long way

Page 26: PVS and MCS Webinar - Technical Deep Dive


Key Takeaways – Cont’d

For XD workloads, start with 256-512 MB buffer per VM

For XA workloads, start with 2-4 GB buffer per VM
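Those starting points translate into host RAM overhead roughly as follows; this is a hedged sketch in which the buffer sizes come from the guidance above but the per-host VM densities are assumptions:

# Illustration only: buffer sizes are the starting points above; the per-host
# VM counts are assumed densities, not test results.
xd_vms_per_host, xd_buffer_mb = 100, 512   # XenDesktop: 256-512 MB per VM (upper end used)
xa_vms_per_host, xa_buffer_gb = 8, 4       # XenApp: 2-4 GB per VM (upper end used)

xd_overhead_gb = xd_vms_per_host * xd_buffer_mb / 1024
xa_overhead_gb = xa_vms_per_host * xa_buffer_gb
print(f"XenDesktop host: ~{xd_overhead_gb:.0f} GB of RAM set aside for wC buffers")
print(f"XenApp host: ~{xa_overhead_gb:.0f} GB of RAM set aside for wC buffers")

Against a few hundred GB of host RAM, that is usually a far cheaper trade than buying IOPS.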

If you are considering vSAN, buying SSDs or a niche storage array, STOP immediately what you’re doing, test this feature and then have a beer to celebrate

We just put IOPS ON NOTICE!
• http://blogs.citrix.com/2014/07/22/citrix-puts-storage-on-notice/
• Now all you really have to worry about are the IOPS associated with things like the pagefile and event logs

Page 27: PVS and MCS Webinar - Technical Deep Dive


WORK BETTER. LIVE BETTER.