Reference Architecture

SimpliVity OmniStack for Citrix XenDesktop 7.6

www.SimpliVity.com



Table of Contents

1. Executive Summary
   Audience
2. Introduction
   SimpliVity Hyperconvergence: Simplifying VDI
      Superior user experience through unmatched VDI performance
      Linear scalability from pilot to production with cost-effective VDI deployments
      Enterprise-grade data protection and resiliency for VDI workloads
3. Technology Overview
4. Solution Overview
   Citrix XenApp and XenDesktop Technology Overview
   Citrix XenDesktop Components
      Citrix Receiver
      StoreFront
      Delivery Controller
      Virtual Delivery Agent (VDA)
      NetScaler Gateway
      Director EdgeSight
      Studio
   Provisioning methods
      Citrix Provisioning Services 7.6
      Machine Creation Services
   Desktop types
5. Solution Architecture
   Management Infrastructure Design
   Desktop Infrastructure Design
      2000 Office Worker Block
6. Login VSI
   Testing Methodology
   Office Worker Workload Definition
   Test Environment
   Results
7. Summary/Conclusion
8. References and Additional Resources
9. Appendix
   Design Guidelines
      SimpliVity OmniStack Design Guidelines
      Citrix XenDesktop 7.6 Design Guidelines
      Supporting Infrastructure Design Guidelines


1. Executive Summary

Virtual Desktop Infrastructure (VDI) initiatives are a top priority for many IT organizations, driven in part by the promise of a flexible, mobile computing experience for end users, and consolidated management for IT. Organizations are looking to VDI solutions like Citrix XenDesktop to reduce software licensing, distribution and administration expenses, and to improve security and compliance.

Too often, however, VDI deployments are plagued by sluggish and unpredictable desktop performance and higher than expected costs. IT organizations are often forced to make compromises and tradeoffs between desktop performance, availability, and costs.

SimpliVity’s market-leading hyperconverged infrastructure platform is an ideal solution for addressing unique VDI challenges. It provides the superior end-user experience organizations require, without sacrificing economics or resiliency. SimpliVity provides:

• Simplified deployment with hyperconverged, x86 building blocks.

• Ability to start small and scale out in affordable increments—from pilot to production.

• Highest density of desktops per node in the hyperconverged infrastructure category.

• Independently validated, unmatched VDI performance for a superb end-user experience.

• Deployment of full-clone desktops with the same data efficiency as linked clones.

• Enterprise-class data protection and resiliency.

This Reference Architecture provides evidence of these capabilities and showcases third-party-validated Login VSI performance testing. It provides a reference configuration for implementing Citrix XenDesktop 7.6 Hosted and Hosted Shared Desktops on SimpliVity hyperconverged infrastructure, and describes tests performed by SimpliVity to validate and measure the operation and performance of the recommended solution.

The performance testing demonstrates SimpliVity’s ability to consistently deliver a high quality end-user experience in VDI deployments as the environment scales. Highlights include:

1. Performance at scale: In Login VSI testing, consistently low latency of less than 2000ms average response was observed for both Hosted and Hosted Shared Desktop implementations.

2. Data optimization at scale: Native always-on inline deduplication and compression provided a data reduction rate of above 20:1.

3. 2000 user sessions (800 Hosted Desktops and 1200 Hosted Shared Desktop sessions using PVS) running on only 10 OmniStack nodes, including resilient N+1 design.
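The density arithmetic behind the third highlight can be sketched as follows. This is an illustrative calculation only, not a SimpliVity sizing tool; the function name and the convention of reserving one node for N+1 failover are assumptions:

```python
# Illustrative sketch of the N+1 density arithmetic for the 2000-session
# building block above. Not a SimpliVity tool; names are assumptions.

def sessions_per_node(total_sessions: int, nodes: int, spares: int = 1) -> float:
    """Average sessions each active node carries, holding `spares` nodes'
    worth of capacity in reserve so a node failure does not overcommit the rest."""
    active = nodes - spares
    if active <= 0:
        raise ValueError("need at least one active node")
    return total_sessions / active

# 800 Hosted Desktops + 1200 Hosted Shared sessions on 10 nodes, N+1:
print(round(sessions_per_node(800 + 1200, nodes=10)))  # 222 sessions per active node
```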

Audience

This document is intended for IT planners, managers, and administrators; channel partner engineers; professional services personnel; and other IT professionals who plan to deploy the SimpliVity hyperconverged infrastructure solution to support Citrix XenDesktop 7.6.

2. Introduction

SimpliVity Hyperconverged Infrastructure: Simplifying VDI

Many businesses are constrained by legacy IT infrastructure that isn’t well suited for VDI initiatives. Siloed data centers, composed of independent compute, storage, network and data protection platforms with distinct administrative interfaces, are inherently inefficient, cumbersome and costly. Each platform requires support, maintenance, licensing, power and cooling—not to mention a set of dedicated resources capable of administering and maintaining the elements. Rolling out a new application like VDI is a manually intensive, time-consuming proposition involving a number of different technology platforms, management interfaces, and operations teams. Expanding system capacity can take days or even weeks, and require complex provisioning and administration. Troubleshooting problems and performing routine data backup, replication and recovery tasks can be just as inefficient.

While grappling with this complexity, organizations also need to address challenges that are unique to VDI, including:

1. Difficulty sizing VDI workloads, due to the inherent randomness and unpredictability of user behavior.

2. Periodic spikes in demand, such as “login storms” and “boot storms” that may significantly degrade performance if not properly handled.

3. Loss of user productivity or revenue in the event of an outage.

SimpliVity addresses each of these challenges by providing a scalable, building block-style approach to deploying infrastructure for VDI, offering predictable cost, and delivering a high-performing desktop experience with continued availability.


Superior user experience through unmatched VDI performance

The SimpliVity solution enables high performance at very high desktop density. It absorbs VDI login storms, delivering 1,000 logins in 1,000 seconds – nearly 3x faster than the standard Login VSI benchmark provisioning speed and unparalleled in the hyperconverged infrastructure solution market.

Linear scalability from pilot to production with cost-effective VDI deployments

SimpliVity’s scale-out architecture minimizes initial capital outlays and tightly aligns investments with business requirements; SimpliVity building blocks are added incrementally, providing a massively scalable pool of shared resources.

Enterprise-grade data protection and resiliency for VDI workloads

SimpliVity provides built-in backup and disaster recovery capabilities for the entire virtual desktop infrastructure. The solution also ensures resilient, highly available desktop operations with the ability to withstand node failures with no loss of desktops and minimal increase in latency.

3. Technology Overview

SimpliVity’s hyperconverged infrastructure solution is designed from the ground up to meet the increased performance, scalability and agility demands of today’s data-intensive, highly virtualized IT environments. The SimpliVity solution transforms IT by virtualizing data and incorporating all IT infrastructure and services below the hypervisor into compact x86 building blocks. With 3x total cost of ownership (TCO) reduction, SimpliVity delivers the best of both worlds: the enterprise-class performance, protection and resiliency that today’s organizations require, with the cloud economics businesses demand.

Designed to work with any hypervisor or industry-standard x86 server platform, the SimpliVity solution provides a single, shared resource pool across the entire IT stack, eliminating point products and inefficient siloed IT architectures. The solution is distinguished from other converged infrastructure solutions by three unique attributes: accelerated data efficiency, built-in data protection functionality and global unified management capabilities.

• Accelerated Data Efficiency: Performs inline data deduplication, compression and optimization on all data at inception across all phases of the data lifecycle, all handled with fine data granularity of just 4KB-8KB. On average, SimpliVity customers achieve 40:1 data efficiency while simultaneously increasing application performance.

• Built-In Data Protection: Includes native data protection functionality, enabling business continuity and disaster recovery for critical applications and data, while eliminating the need for special-purpose backup and recovery hardware or software. The solution’s inherent data efficiencies minimize I/O and WAN traffic, reducing backup and restore times from hours to minutes, while obviating the need for special-purpose WAN optimization products.

• Global Unified Management: A VM-centric approach to management eliminates manually intensive, error-prone administrative tasks. System administrators are no longer required to manage LUNs and volumes; instead, they can manage all resources and workloads centrally, using familiar interfaces such as VMware vCenter Server.
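The deduplication concept behind the data-efficiency claims above can be illustrated with a minimal sketch. The chunk size, SHA-256 hash, and in-memory store are assumptions chosen for illustration; the actual OmniStack implementation performs this inline in hardware at 4KB-8KB granularity:

```python
# Illustrative sketch of content-addressed block deduplication: identical
# chunks are stored once, keyed by their hash. Assumptions for illustration
# only; not the OmniStack implementation.
import hashlib

CHUNK = 8192  # 8KB granularity, matching the upper bound cited above

def dedupe(data: bytes, store: dict) -> list:
    """Split data into chunks; store each unique chunk once under its
    SHA-256 digest. Returns the list of digests referencing the data."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)   # write only if not already present
        refs.append(key)
    return refs

store = {}
golden = b"\x00" * CHUNK             # e.g. an OS block identical in every clone
refs = dedupe(golden * 100, store)   # 100 logical copies of the same block
print(len(refs), len(store))         # 100 references, 1 stored chunk (100:1)
```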

The SimpliVity solution includes its OmniStack software and related technologies, packaged on popular x86 platforms—either on 2U servers marketed as SimpliVity OmniCube, or with partner systems from Cisco or Lenovo, marketed as OmniStack with Cisco UCS and OmniStack with Lenovo System x, respectively.

An individual OmniStack node includes:

• A compact hardware platform - a 2U industry-standard virtualized x86 platform containing compute, memory, performance-optimized SSDs and capacity-optimized HDDs protected in hardware RAID configurations, and 10GbE network interfaces.

• A hypervisor such as VMware vSphere/ESXi.

• OmniStack virtual controller software running on the hypervisor.

• An OmniStack Accelerator Card – a special-purpose PCIe card with an FPGA, flash, and DRAM, protected with super capacitors; the accelerator card offloads CPU-intensive functions such as data compression, deduplication and optimization from the x86 processors.


4. Solution Overview

The solution outlined in this document provides guidance for implementing SimpliVity hyperconverged infrastructure to enable a single VDI building block supporting 2,000 office workers. This architecture can be used to scale up to many thousands of users by replicating the building block as outlined below.

This solution leverages SimpliVity hyperconverged infrastructure as the foundational element of the design. SimpliVity OmniStack nodes are combined together, forming a pool of shared compute (CPU and memory), storage, and storage network resources. VMware vSphere and Citrix XenDesktop provide a high-performance VDI environment that is highly available and highly scalable.

The building block includes:

• SimpliVity OmniStack nodes with Haswell-based Intel Xeon E5-2697 CPUs for desktop workloads

• SimpliVity OmniStack nodes with Haswell-based Intel Xeon E5-2680 CPUs for management workloads

• 199GB – 455GB usable memory per OmniStack node

• 2TB datastores for all workloads

• 10GbE networking

• Windows 7 SP1 for Hosted Desktops

• Windows Server 2012 R2 for Hosted Shared Desktops and server workloads

• N+1 design for management workloads and infrastructure where possible

[Figure: Building block overview. The SimpliVity design: two 5-node SimpliVity OmniStack clusters running vSphere 5.5 U2, hosting the infrastructure services (vCenter, Active Directory, SQL, StoreFront, Delivery Controllers, Provisioning Services), with paired NetScalers in the DMZ serving external users. The legacy comparison stack: (4) servers and VMware, a storage switch, (2) HA shared storage arrays, an SSD array, backup and dedupe appliances, WAN optimization, a cloud gateway, storage caching, and data protection applications. Callouts: one building block, 3x TCO savings, global unified management, operational efficiency; enterprise capabilities with cloud economics.]


Citrix XenApp and XenDesktop Technology Overview

Citrix XenApp and XenDesktop are application and desktop virtualization solutions that control virtual machines, applications, licensing, and security while providing anywhere access for any device. Citrix FlexCast Management Architecture (FMA) is a unified architecture that integrates XenApp and XenDesktop to centralize management. Different types of workers across the enterprise need different types of desktops. Some require simplicity and standardization, and others require high performance and personalization. XenDesktop can meet these requirements in a single solution using Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.

XenApp and XenDesktop allow:

• End users to run applications and desktops independently of the device’s operating system and interface.

• Administrators to manage the network and provide or restrict access from selected devices or from all devices.

• Administrators to manage an entire network from a single data center.

Citrix XenDesktop Components

Key components of the Citrix FlexCast Management Architecture:

[Figure: FlexCast Management Architecture. Citrix Receiver connects from any location through NetScaler and StoreFront to the Delivery Controller, which brokers HDX connections to server-hosted desktops and Windows apps; Director/EdgeSight and Studio provide monitoring and management.]

Citrix Receiver

Users access their applications and desktops through Citrix Receiver, a universal client that runs on virtually any device or operating system, including iOS and Android in addition to Windows, Mac® and Linux®. This software client supplies the connection to the virtual machine via TCP port 80 or 443, and communicates with StoreFront using the StoreFront Service API.

StoreFront

The interface that authenticates users, manages applications and desktops, and hosts the application store. StoreFront communicates with the Delivery Controller using XML.

Delivery Controller

The central management component of a XenApp or XenDesktop Site that consists of services that manage resources, applications, and desktops; and optimize and balance the loads of user connections.

Virtual Delivery Agent (VDA)

An agent installed on machines running Windows Server or Windows desktop operating systems that makes those machines and the resources they host available to users. A VDA installed on a Windows Server OS allows the machine to host multiple connections for multiple users. Users connect on one of the following ports:

• TCP port 80 or port 443 if SSL is enabled

• TCP port 2598, if Citrix Gateway Protocol (CGP) is enabled, which enables session reliability

• TCP port 1494 if CGP is disabled or if the user is connecting with a legacy client
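The port rules above can be summarized in a small sketch. This is illustrative only; the actual negotiation is handled by Citrix Receiver and the VDA, and the function names are assumptions:

```python
# Sketch of the VDA port rules listed above (illustrative assumption,
# not Citrix code).

def vda_session_port(cgp_enabled: bool) -> int:
    """Port carrying session traffic: CGP (session reliability) or legacy ICA."""
    return 2598 if cgp_enabled else 1494

def broker_port(ssl_enabled: bool) -> int:
    """Port for the initial HTTP(S) connection."""
    return 443 if ssl_enabled else 80

print(vda_session_port(True), vda_session_port(False), broker_port(True))  # 2598 1494 443
```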

NetScaler Gateway

A data-access solution that provides secure access inside or outside the LAN’s firewall with additional credentials.

Director EdgeSight

Director provides helpdesk staff with real-time trend and diagnostic information on users, applications and desktops to assist with troubleshooting. It is a web-based tool that gives administrators access to real-time data from the Broker agent, historical data from the Site database, and HDX data from NetScaler for troubleshooting and support.


Studio

A management console that allows administrators to configure and manage Sites, and gives access to real-time data from the Broker agent. Studio communicates with the Controller on TCP port 80.

Provisioning methods

XenDesktop 7.6 Feature Pack 2 includes two integrated provisioning solutions, Provisioning Services and Machine Creation Services, each providing different benefits to suit different business needs.

Citrix Provisioning Services 7.6

Traditional image solutions cost time and money to set up, update, support and decommission on each computer. Citrix Provisioning Services is based on software-streaming technology. This technology allows computers to be provisioned and reprovisioned in real time from a single shared disk image (vDisk). In doing so, administrators can eliminate the need to manage and patch individual systems. Instead, all image management is done on the master image vDisk. This centralized management enables organizations to reduce operational and storage costs.

After PVS components are installed and configured, a vDisk is created from a device’s hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device. The devices that use the vDisks are called target devices. Citrix PVS streams a single shared disk image (vDisk) to individual machines. The write cache includes data written by the target device.

The streamed vDisk is in read-only format, and target devices cannot change the image, so consistency is ensured. Any patches, updates and other configuration changes can be delivered to the end devices in real time when they reboot. Because Citrix PVS is part of Citrix XenDesktop, desktop administrators can use PVS’s streaming technology to simplify, consolidate, and reduce the costs of both physical and virtual desktop delivery.

When using SimpliVity OmniStack, the best practice is to create a local partition on each PVS server so that each PVS server has a local vDisk copy. This improves performance by caching vDisk contents in PVS system cache memory.

vDisks can be assigned to a single target device in private-image mode, or to multiple target devices in standard-image mode.

vDisk attributes:

• Read only during streaming

• Shared by many VMs, simplifying updates

Write cache attributes:

• One VM, one write cache file

• Write cache file is empty after VM reboots

• Recommended storage protocol: NFS for write cache

• The write cache file size is 4GB–10GB for Hosted Desktops and 20GB–60GB for Hosted Shared Desktops; if PvDisk is used in the deployment, the write cache size will be smaller.


Provisioning Server RAM Cache

A RAM write cache option (cache on device RAM with overflow on hard disk) is available in Provisioning Server 7.6. Write cache can seamlessly overflow to a differencing disk should RAM cache become full. At the PVS console, select vDisk properties and choose Cache type as “Cache in device RAM with overflow on hard disk.” This will use hypervisor RAM first and then use hard disk. Choose the RAM size based on OS type.

RAM Cache sizing consideration:

• 256MB for Hosted Windows 7 32-bit

• 512MB for Hosted Windows 7 64-bit

• 2GB–4GB for Hosted Shared Windows Server 2012 R2
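The sizing guidance above (RAM cache and overflow write cache) can be captured in a small lookup sketch. The figures are taken from this section; the helper itself is an illustrative assumption, not a Citrix sizing tool:

```python
# PVS cache sizing guidance from this section, in machine-readable form.
# Illustrative sketch only; key names and the helper are assumptions.

RAM_CACHE_MB = {
    "hosted-win7-32": 256,
    "hosted-win7-64": 512,
    "hosted-shared-2012r2": 4096,  # 2GB-4GB range; upper bound shown here
}

WRITE_CACHE_GB = {                 # overflow-disk size ranges (low, high)
    "hosted": (4, 10),
    "hosted-shared": (20, 60),     # smaller if PvDisk is deployed
}

def overflow_disk_gb(kind: str, conservative: bool = True) -> int:
    """Pick the high end of the documented range unless told otherwise."""
    low, high = WRITE_CACHE_GB[kind]
    return high if conservative else low

print(overflow_disk_gb("hosted"))  # 10
```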

[Figure: PVS file layout. The Windows OS master’s streamed vDisk is delivered over separate PVS streams to Virtual Desktops 1–3 on OmniStack. The streamed vDisk is what the user sees as drive C:\; each desktop’s write cache is a visible file on another disk, typically D:\.]


Machine Creation Services

An image delivery technology, integrated within XenDesktop, that utilizes hypervisor APIs to create a unique, read-only, thin-provisioned clone of a master image where all writes are stored within a differencing disk.

When you provision desktops using MCS, a master image is copied to each datastore. This master image copy uses a hypervisor snapshot clone. Within minutes of the master image copy process, MCS creates a differencing disk and an identity disk for each VM. The differencing disk is the same size as the master image in order to host the session data. The identity disk is normally 16MB and is hidden by default; it holds machine identity information such as host name and password.
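A back-of-the-envelope view of the logical footprint follows from that layout: the master copy on the datastore, plus one master-sized differencing disk and a ~16MB identity disk per VM. This is an illustrative assumption, not an MCS formula, and describes space before SimpliVity deduplication and compression reduce the physical footprint:

```python
# Illustrative MCS logical-space sketch based on the layout above.
# Assumption for illustration only; not an MCS formula.

ID_DISK_GB = 16 / 1024  # identity disk, normally 16MB

def mcs_logical_gb(vm_count: int, master_gb: float) -> float:
    """Master copy on the datastore plus per-VM differencing + identity disks."""
    per_vm = master_gb + ID_DISK_GB
    return master_gb + vm_count * per_vm

print(round(mcs_logical_gb(100, 40), 1))  # 100 desktops, 40GB master -> 4041.6
```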


Desktop types

• Hosted Shared Desktops (XenApp/RDSH): Many users to one server. Users get a desktop interface, which can look like Windows 7, but it is a published desktop on one XenApp server. Every user shares the XenApp server, and restrictions and redirections can be configured so that users have less impact on each other. These are inexpensive, locked-down Windows virtual desktops hosted from Windows server operating systems. They are well suited for users, such as call center employees, who perform a standard set of tasks.

• Hosted Virtual Desktops (XenDesktop/VDI): A Windows 7/XP desktop running as a virtual machine to which a single user connects remotely. One user’s desktop is not impacted by another user’s desktop configuration. Think of this as one user to one desktop. There are many flavors of the hosted virtual desktop model (existing, installed, pooled, dedicated and streamed), but they are all located within the data center. These virtual desktops each run a Microsoft Windows desktop operating system rather than running in a shared, server-based environment. They can provide users with their own desktops that they can fully personalize.

For more information, see the Citrix XenDesktop 7.6 documentation.

[Figure: MCS file layout. The Windows OS master on OmniStack is connected by a VHD chain to each of Virtual Desktops 1–3; each desktop has a differencing disk (what the user sees as drive C:\) and an ID disk (hidden from the user’s view).]

[Figure: Desktop types. XenDesktop/VDI: pooled Hosted Virtual Desktops on Windows 7, with one user session and OS per virtual machine. XenApp/RDSH: Hosted Shared Desktops on Windows Server 2012, with many user sessions sharing a single Remote Desktop Session Host.]


5. Solution Architecture

Management Infrastructure Design

This section details the OmniStack environment dedicated to running the management workloads required to support 2000 users in a Citrix XenDesktop implementation. A separate, dedicated OmniStack environment is used for the XenDesktop Hosted Desktops and Hosted Shared Desktops and is detailed in Desktop Infrastructure Design. The management workloads considered in this document are detailed in the table below.

Workload                                | Version       | vCPUs | RAM  | Disk  | OS
----------------------------------------|---------------|-------|------|-------|------------------------
vCenter Server – Desktop                | 5.5 Update 2e | 8     | 32GB | 100GB | Windows Server 2012 R2
vCenter Server – Mgmt                   | 5.5 Update 2e | 4     | 16GB | 100GB | Windows Server 2012 R2
Microsoft SQL Server                    | 2012 SP1      | 4     | 8GB  | 100GB | Windows Server 2012 R2
AD DC/DHCP/DNS x 2                      | N/A           | 2     | 4GB  | 40GB  | Windows Server 2012 R2
Citrix XenDesktop Controller Server x 2 | 7.6           | 4     | 8GB  | 40GB  | Windows Server 2012 R2
Citrix XenDesktop StoreFront Server x 2 | 7.6           | 4     | 4GB  | 40GB  | Windows Server 2012 R2
Citrix Provisioning Server x 3          | 7.6           | 4     | 8GB  | 500GB | Windows Server 2012 R2
Citrix XenDesktop Licensing Server      | 7.6           | 4     | 4GB  | 40GB  | Windows Server 2012 R2
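Summing the table gives a quick sanity check for the two-node N+1 management cluster: the combined footprint must fit within a single node (199GB usable memory per node, per the hardware list later in this section). The specs below are copied from the table; the check itself is an illustrative assumption, not a SimpliVity sizing tool:

```python
# N+1 sanity check for the management workloads tabled above.
# Illustrative sketch only; specs copied from the table.

MGMT_VMS = [  # (name, instances, vCPUs, RAM_GB)
    ("vCenter Server - Desktop",     1, 8, 32),
    ("vCenter Server - Mgmt",        1, 4, 16),
    ("Microsoft SQL Server",         1, 4, 8),
    ("AD DC/DHCP/DNS",               2, 2, 4),
    ("XenDesktop Controller Server", 2, 4, 8),
    ("XenDesktop StoreFront Server", 2, 4, 4),
    ("Provisioning Server",          3, 4, 8),
    ("XenDesktop Licensing Server",  1, 4, 4),
]

total_vcpu = sum(n * cpu for _, n, cpu, _ in MGMT_VMS)
total_ram = sum(n * ram for _, n, _, ram in MGMT_VMS)
print(total_vcpu, total_ram)  # 52 vCPUs, 116GB RAM -> within 199GB usable per node
```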

vSphere Design

Attribute | Value | Rationale
----------|-------|----------
Number of vCenter Servers | 1 | A vCenter Server instance will support 2000 virtual desktops.
Number of vSphere Clusters | 1 | Given the number of OmniStack systems required to support the given workload, there is no need to split hosts into separate vSphere Clusters.
Number of vSphere Datacenters | 1 | A single vSphere Cluster means only a single vSphere Datacenter is required.
vSphere HA Configuration | 1. HA enabled. 2. Admission Control enabled. 3. Percentage of cluster resources reserved: 50%. 4. Isolation Response: Leave Powered On. | 1. Enabled to restart VMs in the event of an ESXi host failure. 2. Ensures VM resources will not become exhausted in the case of a host failure. 3. Set to the percentage of the cluster a single host represents. 4. Ensures a host isolation event does not needlessly power off desktops.
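The 50% admission-control figure follows directly from the stated rationale: it is the share of cluster capacity that one host represents. A minimal sketch of that arithmetic (illustrative; the function name is an assumption):

```python
# Admission-control reservation as the share of capacity one (or more)
# failed hosts would remove. Illustrative sketch of the stated rationale.

def ha_reserved_pct(hosts: int, tolerated_failures: int = 1) -> int:
    """Percentage of cluster resources to reserve for HA failover."""
    return round(100 * tolerated_failures / hosts)

print(ha_reserved_pct(2))   # 50 -> matches the 2-node management cluster
print(ha_reserved_pct(10))  # 10 -> what a 10-node cluster would reserve
```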


Attribute | Value | Rationale
----------|-------|----------
vSphere HA – Advanced Settings | das.vmmemoryminmb: 9137MB; das.vmcpuminmhz: 1000MHz | Both are set to averages of the workloads in the cluster. This sets the percentage of cluster resources in the HA calculation to that of an average VM.

ESXi – Advanced Settings

Setting | Value | Rationale
--------|-------|----------
SunRPC.MaxConnPerIP | 256 (max) | Avoid hitting the NFS connection limit
NFS.MaxVolumes | 256 (max) | Increase the number of NFS volumes per host
NFS.MaxQueueDepth | 256 | Performance consideration
NFS.SendBufferSize | 512 | Performance consideration
NFS.ReceiveBufferSize | 512 | Performance consideration
Net.TcpipHeapSize | 32 | Performance consideration
Net.TcpipHeapMax | 512 | Performance consideration
Misc.APDHandlingEnable | 1 | Turn on All Paths Down handling in ESXi
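For scripted rollout, the same settings can be kept in one machine-readable map; applying them is typically done host-by-host with `esxcli system settings advanced set` or PowerCLI's `Set-AdvancedSetting`. The snippet below only collects the values recommended above and is an illustrative sketch, not an official tool:

```python
# ESXi advanced settings recommended above, collected for scripted
# application (illustrative sketch; apply via esxcli or PowerCLI).

ESXI_ADVANCED = {
    "SunRPC.MaxConnPerIP": 256,
    "NFS.MaxVolumes": 256,
    "NFS.MaxQueueDepth": 256,
    "NFS.SendBufferSize": 512,
    "NFS.ReceiveBufferSize": 512,
    "Net.TcpipHeapSize": 32,
    "Net.TcpipHeapMax": 512,
    "Misc.APDHandlingEnable": 1,
}

for key, value in ESXI_ADVANCED.items():
    # One line per setting, ready to paste into a change record.
    print(f"{key} = {value}")
```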

These workloads are visually represented in the figure below:

• (2) OmniStack Integrated Solution with Cisco UCS C240 M4

• Intel Xeon E5-2680 v3 (Haswell 12-core, 2 sockets per server)

• 199GB usable memory each

• 2TB datastores x2

• 10GbE interconnect between systems (no 10GbE switch required, but may be used)

OmniStack Servers – To support the management workloads outlined in this document, a 2-host vSphere Cluster, composed of OmniStack Integrated Solution with Cisco UCS C240 M4, is recommended. Unlike other HCI vendors, SimpliVity fully supports a 2-host cluster in its minimum configuration. Using OmniStack from SimpliVity allows you to start small, with only the infrastructure you need, and scale out as your VDI environment grows.

vCenter Servers – All roles were installed onto a single virtual machine, including the vCenter Server Service, vCenter Single Sign-On (SSO), Inventory Service, and Update Manager. No CPU or memory pressure was observed during testing, so dedicating servers for each service was unnecessary. If an embedded database server had been utilized in this infrastructure, it would be advisable to have dedicated servers for the SSO and Inventory services to avoid resource contention.

The vCenter Server appliance was not used in these tests.

[Figure: Management cluster layout. The Management Datacenter/Cluster, under the Management vCenter Server, hosts 2 Desktop Controllers, 2 StoreFront Servers, 1 Licensing Server, 3 Provisioning Servers, 2 Active Directory servers, 2 MS SQL Servers, and the Management and Desktop vCenter Servers, on 2x 2TB NFS OmniStack datastores backed by 4x C240-M4SX (16 cores at 2.6GHz and 256GB usable RAM each).]


The vCenter Server Appliance is a perfectly acceptable alternative to the Windows version of vCenter Server, with the caveat that any Windows-based features will require a stand-alone Windows server to support them. Please see kb.vmware.com/kb/2005086 for more details.

XenDesktop Delivery Controllers – A single Delivery Controller supports up to 5000 users. Two Delivery Controllers were deployed in an N+1 configuration for high availability.

XenDesktop StoreFront Servers – A single StoreFront Server supports up to 10000 users. Two StoreFront Servers were deployed in an N+1 configuration for high availability.

XenDesktop License Server – Only a single License Server is required.

Provisioning Servers – A single PVS server can support approximately 500 virtual machines. Given our environment of 800 Hosted Desktop VMs and 40 Hosted Shared Desktop VMs, three PVS servers were deployed in an N+1 configuration for high availability.

For the vDisk Store, local disk was leveraged on the PVS servers. This was done to support vDisk RAM cache.

Infrastructure Services (Domain Controllers/DNS/DHCP) – These services were all co-located on the same virtual machines. While no CPU or memory pressure was observed during testing, in-depth Active Directory design and recommendations are outside the scope of this document. Please see msdn.microsoft.com/en-us/library/bb727085.aspx for more information and best practices.

When PVS PXE boot is used, DHCP options 66 and 67 must be configured to enable TFTP boot. Please see docs.citrix.com/en-us/provisioning/7-6.html for more details.

Microsoft SQL Server – All supporting databases for this reference design were run on a single virtual machine. These databases are referenced in the table below.

Database                        Authentication            Size     Recovery Mode
Desktop vCenter Server          Windows authentication    5GB      Simple
Management vCenter Server       Windows authentication    1GB      Simple
Desktop Update Manager          SQL authentication        100MB    Simple
Management Update Manager       SQL authentication        100MB    Simple
XenDesktop Delivery Controller  Windows authentication    Default  Simple
Provisioning Server             Windows authentication    Default  Simple

Sizing – Compute, Storage, and Network resources for each infrastructure VM were selected using Citrix best practices as a baseline and modified based on their observed performance on the OmniStack systems.

SimpliVity Arbiter Placement – Please refer to the SimpliVity OmniCube Deployment Guide for further guidance.

vStorage API for Array Integration (VAAI) – VAAI is a vSphere API that allows storage vendors to offload some common storage tasks from ESXi to the storage itself. The VAAI plugin for OmniStack is installed during deployment, so no manual intervention is required.

Datastores – A single datastore per server is recommended to ensure even SimpliVity storage distribution across cluster members. This is less important in a two-OmniStack-server configuration; however, following this best practice will ensure a smooth transition to a 3+ node OmniStack environment, should the environment grow over time. This best practice has been proven to deliver better storage performance and is highly encouraged.


Networking – The following best practices were utilized in the vSphere networking design:

• Segregate OVC networking from ESXi host and virtual machine network traffic

• Leverage 10GbE where possible for OVC and virtual machine network traffic

These best practices offer the highest network performance to VMs running on OmniStack 3.0. With this in mind, a single vSphere Standard Switch is deployed for management traffic, and a single vSphere Distributed Switch is deployed for the remaining traffic, including:

• Virtual Machines

• SimpliVity Federation

• SimpliVity Storage

• vMotion

vSphere Standard Switch Configuration

Parameter                      Setting
Load balancing                 Route based on Port ID
Failover detection             Link status only
Notify switches                Enabled
Failback                       No
Failover order                 Active/Active
Security                       Promiscuous Mode – Reject; MAC Address Changes – Reject; Forged Transmits – Reject
Traffic Shaping                Disabled
Maximum MTU                    1500
Number of Ports                128
Number of Uplinks              2
Network Adapters               1GbE NICs on each host
VMkernel Adapters/VM Networks  vmk0 – ESXi Management – Active/Active – MTU 1500; VM – vCenter Server – Active/Active – MTU 1500


vSphere Distributed Switch Configuration

Parameter                   Setting
Load balancing              Route based on physical NIC load
Failover detection          Link status only
Notify switches             Enabled
Failback                    No
Failover order              Active/Active
Security                    Promiscuous Mode – Reject; MAC Address Changes – Reject; Forged Transmits – Reject
Traffic Shaping             Disabled
Maximum MTU                 9000
Number of Ports             4096
Number of Uplinks           2
Network Adapters            10GbE NICs on each host
Network I/O Control         Disabled
VMkernel ports/VM Networks  vmk1 – vMotion; vmk2 – Storage; vMotion – Active/Standby – MTU 9000; Federation – Standby/Active – MTU 9000; Storage – Standby/Active – MTU 9000; Management VMs – Active/Active – MTU 9000
Port Binding                Static

[Figure: vSphere networking design for the Management or Resource Cluster. vSwitch0 carries the Management VLAN (A) over vmnic0 and vmnic1; dvSwitch0 carries the VM Networks (VLANs D/E), vMotion (VLAN B), SVT Federation (VLAN C), and SVT Storage (VLAN C) port groups over dvUplink1 and dvUplink2, with active (A) and standby (S) uplink assignments per port group. Uplinks connect to redundant physical switches (Switch1 and Switch2).]


Desktop Infrastructure Design

This desktop block is sized to support either 2000 users provisioned by Citrix Provisioning Server (800 users on Hosted Desktops and 1200 users on Hosted Shared Desktops) or 1400 users provisioned by Citrix Machine Creation Services (600 users on Hosted Desktops and 800 users on Hosted Shared Desktops). The sizing of the infrastructure supporting this desktop block depends on the workload profile defined for the use case that block supports. In this case, we defined a single block supporting 2000 office workers.

2000 Office Worker Block

The desktop block for 2000 office workers consists of two 5-node vSphere Clusters contained within separate vSphere Datacenter objects (a 5+5 Federation). This configuration has been tested and validated to support the workload as defined, including the N+1 design. Results of these tests are presented later in this document.

Office Worker Virtual Machine Configuration – Hosted Desktops

Attribute                         Specification
Operating System                  Windows 7 SP1 64-bit
Virtual Hardware                  VM virtual hardware version 10
VMware Tools                      Latest
Number of vCPUs                   1
Memory – including PVS RAM cache  2048MB
PVS RAM cache size                512MB
Virtual Disk – VMDK               25GB
NTFS Cluster Alignment            8KB
SCSI Controller                   VMware Paravirtual
Virtual Floppy Drive              Removed
Virtual CD/DVD Drive              Removed
NIC vendor and model              VMXNET3
Number of ports/NIC x speed       1x 10 Gigabit Ethernet
OS Page file                      1.5GB starting and max
Number deployed                   800


Office Worker Virtual Machine Configuration – Hosted Shared Desktops

Attribute                         Specification
Operating System                  Windows Server 2012 R2
Virtual Hardware                  VM virtual hardware version 10
VMware Tools                      Latest
Number of vCPUs                   6
Memory – including PVS RAM cache  22528MB
PVS RAM cache size                2048MB
Virtual Disk – VMDK               60GB
NTFS Cluster Alignment            8KB
SCSI Controller                   VMware Paravirtual
Virtual Floppy Drive              Removed
Virtual CD/DVD Drive              Removed
NIC vendor and model              VMXNET3
Number of ports/NIC x speed       1x 10 Gigabit Ethernet
OS Page file                      20GB starting and max
Number deployed                   40
Users per server                  30

The following infrastructure was used to support these workloads:

• (10) OmniStack Integrated Solution with Cisco UCS C240 M4

• Intel Xeon E5-2697 v3 (Haswell 14-core, 2 sockets per server)

• 455GB usable memory per OmniStack system for Hosted Desktops

• 327GB usable memory per OmniStack system for Hosted Shared Desktops

• 2TB datastores x10

• 10GbE networking

SimpliVity Federation and vSphere Cluster/Datacenter Sizing – The decision was made to split the workload into multiple vSphere Clusters, with the 800 Hosted Desktop workloads in one vSphere Cluster and the 1200 Hosted Shared Desktop workloads in the other.

To support multiple vSphere Datacenters in a single vCenter Server, both Datacenters must belong to a single SimpliVity Federation, as a vCenter Server supports a single Federation. A Federation can span multiple vCenter Server instances, but that configuration is outside the scope of this document.


Note: This solution architecture was designed based on a standard workload size. When sizing a production environment, proper assessment and use case definition should be done to accurately size the environment.

vSphere Design

Number of vCenter Servers – 1. Rationale: a vCenter Server instance will support 2000 users.

Number of vSphere Clusters – 2. Rationale: given the number of OmniStack systems required to support the given workload, the decision was made to split out into separate vSphere Clusters.

Number of vSphere Datacenters – 2. Rationale: with OmniStack 3.0, the fault domain is at the vSphere Datacenter level. Desktops will not cross back and forth between vSphere Clusters, so each vSphere Cluster should have its own vSphere Datacenter.

vSphere HA Configuration – (1) HA enabled; (2) Admission Control enabled; (3) percentage of cluster resources reserved: 20%; (4) Isolation Response: Leave Powered On. Rationale: (1) enabled to restart VMs in the event of an ESXi host failure; (2) ensures VM resources will not become exhausted in the case of a host failure; (3) set to the percentage of the cluster a single host represents; (4) ensures a host isolation event does not needlessly power off desktops.

vSphere HA – Advanced Settings (Hosted Desktop Cluster) – das.vmmemoryminmb: 2048MB; das.vmcpuminmhz: 300MHz. Rationale: both are set to averages of the workloads in the cluster, so the HA calculation uses the resources of an average VM.

vSphere HA – Advanced Settings (Hosted Shared Desktop Cluster) – das.vmmemoryminmb: 22528MB; das.vmcpuminmhz: 2000MHz. Rationale: both are set to averages of the workloads in the cluster, so the HA calculation uses the resources of an average VM.

Reservations and Limits – Full memory reservation for all desktop workloads. Rationale: ensures all desktop workloads have access to memory resources and avoids creation of VMkernel swap files on storage.
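The 20% admission-control reservation corresponds to the share of the cluster a single host represents; a minimal sketch of that calculation (the function name is ours):

```python
def ha_reserved_percentage(hosts_in_cluster: int, tolerated_failures: int = 1) -> float:
    """Percentage of cluster resources HA should reserve so the cluster
    can absorb the failure of `tolerated_failures` hosts (N+1 by default)."""
    return 100.0 * tolerated_failures / hosts_in_cluster

# 5-node cluster, tolerate one host failure
print(ha_reserved_percentage(5))  # → 20.0
```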

[Figure: Desktop infrastructure design. A Desktop vCenter Server manages two vSphere Datacenters/Clusters. Desktop Datacenter/Cluster1 hosts the PVS-provisioned Hosted Desktops (800 office workers): a template, a master image, and per-VM write cache files on 5x 1TB OmniStack datastores across 5x C240-M4SX hosts (2x Intel 14-core 2.6GHz, 384GB RAM each). Desktop Datacenter/Cluster2 hosts the PVS-provisioned Hosted Shared Desktops (1200 office workers): a template, the PVS vDisk, and per-VM write cache files on 5x 1TB OmniStack datastores across identical hosts.]


ESXi – Advanced Settings

Setting                 Value      Rationale
SunRPC.MaxConnPerIP     256 (max)  Avoid hitting NFS connection limit
NFS.MaxVolumes          256 (max)  Increase number of NFS volumes per host
NFS.MaxQueueDepth       256        Performance consideration
NFS.SendBufferSize      512        Performance consideration
NFS.ReceiveBufferSize   512        Performance consideration
Net.TcpipHeapSize       32         Performance consideration
Net.TcpipHeapMax        512        Performance consideration
Misc.APDHandlingEnable  1          Turn on All Paths Down handling in ESXi
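One way to apply these settings consistently across hosts is to script them. A hedged sketch that simply emits the corresponding `esxcli` commands for each setting in the table (the option paths follow ESXi's `/Section/Option` convention; run the output on each host, or adapt it to your configuration-management tooling):

```python
# Advanced settings from the table above, keyed by ESXi option path.
ADVANCED_SETTINGS = {
    "/SunRPC/MaxConnPerIP": 256,
    "/NFS/MaxVolumes": 256,
    "/NFS/MaxQueueDepth": 256,
    "/NFS/SendBufferSize": 512,
    "/NFS/ReceiveBufferSize": 512,
    "/Net/TcpipHeapSize": 32,
    "/Net/TcpipHeapMax": 512,
    "/Misc/APDHandlingEnable": 1,
}

def esxcli_commands(settings: dict) -> list:
    """Render one `esxcli system settings advanced set` command per option."""
    return [
        f"esxcli system settings advanced set -o {option} -i {value}"
        for option, value in settings.items()
    ]

for cmd in esxcli_commands(ADVANCED_SETTINGS):
    print(cmd)
```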

OmniStack Servers – Two 5-host vSphere Clusters comprised of OmniStack Integrated Solution with Cisco UCS C240 M4 systems support the Office Worker desktop workload.

The following design patterns were observed:

• Limit physical CPU to virtual CPU oversubscription

• Do not overcommit memory

Limit physical CPU to virtual CPU oversubscription

The table below shows the steps to calculate usable physical CPU:

Each node usable CPU – 28 - 4 (OVC) = 24. Rationale: each node has 28 physical CPU cores, and one OVC takes 4 of them.

Total 4-node usable CPU – 24 x 4 nodes = 96. Rationale: multiply usable desktop CPU by the number of nodes; N+1 is accounted for by using 4 nodes instead of 5.

Total desktop vCPU requirement – Hosted Desktops: 1 x 800 VMs = 800 vCPU; Hosted Shared Desktops: 6 x 40 VMs = 240 vCPU. Rationale: each Hosted Desktop VM has a single vCPU, and each Hosted Shared Desktop VM has 6 vCPUs.

Check CPU oversubscription – Hosted Desktops: 800 vCPU / 96 pCPU = 8.33 vCPU:pCPU; Hosted Shared Desktops: 240 vCPU / 96 pCPU = 2.5 vCPU:pCPU. Rationale: in both cases, the oversubscription ratio remains within acceptable limits for desktop workloads.
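The steps above can be sketched as a quick calculation (a minimal illustration with our own function name; core counts and OVC overhead are from this document):

```python
def vcpu_to_pcpu_ratio(vms: int, vcpus_per_vm: int,
                       nodes: int, cores_per_node: int = 28,
                       ovc_cores: int = 4) -> float:
    """vCPU:pCPU ratio on the usable cores of an N+1 cluster
    (one node held in reserve, OVC cores subtracted per node)."""
    usable_nodes = nodes - 1                       # N+1: size for one host failure
    usable_cores = (cores_per_node - ovc_cores) * usable_nodes
    return (vms * vcpus_per_vm) / usable_cores

print(round(vcpu_to_pcpu_ratio(800, 1, 5), 2))    # Hosted Desktops → 8.33
print(round(vcpu_to_pcpu_ratio(40, 6, 5), 2))     # Hosted Shared Desktops → 2.5
```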


Do not overcommit memory

In this configuration, each OmniStack system has 384GB or 512GB of available physical memory. We used 384GB systems for Hosted Shared Desktops and 512GB systems for Hosted Desktops. The table below shows the steps to calculate usable physical memory:

Each node usable memory – 384GB - 57GB (OVC) = 327GB; 512GB - 57GB (OVC) = 455GB. Rationale: each node has 384GB or 512GB of physical memory, and one OVC takes 57GB.

Total 4-node usable memory – Hosted Shared Desktops: 327GB x 4 nodes = 1308GB; Hosted Desktops: 455GB x 4 nodes = 1820GB. Rationale: multiply usable desktop memory by the number of nodes; N+1 is accounted for by using 4 nodes instead of 5.

Total desktop memory requirement – Hosted Shared Desktops: 20GB x 40 VMs = 800GB; Hosted Desktops: 2GB x 800 VMs = 1600GB. Rationale: each Hosted Desktop VM has 2GB of memory, and each Hosted Shared Desktop VM has 20GB.

Check memory overcommitment – Hosted Shared Desktops: 1308GB per cluster - 800GB required = 508GB spare capacity; Hosted Desktops: 1820GB per cluster - 1600GB required = 220GB spare capacity. Rationale: in both cases, there is no memory overcommitment.

NOTE: For our testing, we used OmniStack systems with 384GB of memory, so there was some memory overcommitment with Hosted Desktops. We also did not set memory reservations for desktop workloads. We did not notice any adverse effects (e.g., VMkernel swap to disk); however, we recommend 512GB per OmniStack system for Hosted Desktops in this case to avoid memory overcommitment.
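The same N+1 arithmetic applies to memory; a small sketch (illustrative names; per-node and OVC figures from the tables above):

```python
def spare_memory_gb(vms: int, gb_per_vm: int, nodes: int,
                    gb_per_node: int, ovc_gb: int = 57) -> int:
    """Spare memory (GB) in an N+1 cluster after reserving the OVC footprint.
    A negative value indicates memory overcommitment."""
    usable = (gb_per_node - ovc_gb) * (nodes - 1)   # one node held in reserve
    return usable - vms * gb_per_vm

print(spare_memory_gb(800, 2, 5, 512))   # Hosted Desktops → 220
print(spare_memory_gb(40, 20, 5, 384))   # Hosted Shared Desktops → 508
```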

vStorage API for Array Integration (VAAI) – VAAI is a vSphere API that allows storage vendors to offload some common storage tasks from ESXi to the storage itself. The VAAI plugin for OmniStack is installed during deployment, so no manual intervention is required.

Datastores – A number of SimpliVity datastores equal to the number of OmniStack systems in each vSphere Cluster was deployed. In this 5+5 Federation configuration, five SimpliVity datastores were created for each vSphere Cluster. This more evenly distributes storage load across the OmniStack systems in the vSphere Cluster and increases the likelihood that any given desktop has locality with its VMDK disk.

Each datastore contains a virtual machine template and write cache files for every virtual machine. The write cache file contains all disk writes of a target device when using a write-protected vDisk (Standard Image).

Networking – The following design patterns were observed in the design of the vSphere networking for the solution:

• Segregate OVC networking from ESXi host and virtual machine network traffic

• Leverage 10GbE where possible for OVC and virtual machine network traffic


With those ideals in mind, a single vSphere Standard Switch is deployed for management traffic, and a single vSphere Distributed Switch is deployed for the remaining network needs, including:

• Virtual Machines

• SimpliVity Federation

• SimpliVity Storage

• vMotion

vSphere Standard Switch Configuration

Parameter                      Setting
Load balancing                 Route based on Port ID
Failover detection             Link status only
Notify switches                Enabled
Failback                       No
Failover order                 Active/Active
Security                       Promiscuous Mode – Reject; MAC Address Changes – Reject; Forged Transmits – Reject
Traffic Shaping                Disabled
Maximum MTU                    1500
Number of Ports                128
Number of Uplinks              2
Network Adapters               1GbE NICs on each host
VMkernel Adapters/VM Networks  vmk0 – ESXi Management – Active/Active – MTU 1500

vSphere Distributed Switch Configuration

Parameter           Setting
Load balancing      Route based on physical NIC load
Failover detection  Link status only
Notify switches     Enabled
Failback            No
Failover order      Active/Active
Security            Promiscuous Mode – Reject; MAC Address Changes – Reject; Forged Transmits – Reject
Traffic Shaping     Disabled
Maximum MTU         9000
Number of Ports     4096
Number of Uplinks   2


Network Adapters            10GbE NICs on each host
Network I/O Control         Disabled
VMkernel ports/VM Networks  vmk1 – vMotion; vmk2 – Storage; vMotion – Active/Standby – MTU 9000; Federation – Standby/Active – MTU 9000; Storage – Standby/Active – MTU 9000; Desktop VMs – Active/Active – MTU 9000
Port Binding                Static, except Desktop VMs, which use ephemeral binding

5. Login VSI

All performance testing documented here utilized the Login VSI (http://www.loginvsi.com) benchmarking tool. Login VSI is the industry-standard load-testing solution for centralized virtualized desktop environments. When used for benchmarking, the software measures the total response time of several specific user operations performed within a desktop workload in a scripted loop. The baseline is the measurement of the response time of specific operations performed in the desktop workload, measured in milliseconds (ms).

There are two values in particular that are important to note: VSIbase and VSImax.

• VSIbase: A score reflecting the response time of specific operations performed in the desktop workload when there is little or no stress on the system. A low baseline indicates a better user experience, resulting in applications responding faster in the environment.

• VSImax: The maximum number of desktop sessions attainable on the host before experiencing degradation in host and desktop performance.

SimpliVity used Login VSI 4.1.4 to perform the tests. The VMs were balanced across the servers, maintaining a consistent number of VMs on each node. For the Login VSImax test, one Login VSI launcher was used per 500 desktops, configured to launch a new session every 2.88 seconds. All Hosted and Hosted Shared Desktops were powered on, registered, and idle prior to starting the test sessions.
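With sessions started at a fixed interval, the total logon window is easy to estimate. A hedged sketch (this assumes each launcher starts its sessions sequentially at the configured interval, which the document does not state explicitly; the function name is ours):

```python
def logon_window_minutes(sessions: int, interval_s: float = 2.88,
                         launchers: int = 1) -> float:
    """Minutes needed to start all sessions, with launchers working in parallel."""
    sessions_per_launcher = -(-sessions // launchers)   # ceiling division
    return sessions_per_launcher * interval_s / 60

# 1200 Hosted Shared Desktop sessions across 3 launchers (one per 500 desktops)
print(round(logon_window_minutes(1200, launchers=3), 1))  # → 19.2
```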

Testing Methodology

For the tests, SimpliVity used the new Login VSI Office Worker workload, which simulates the following applications found in almost every environment:

Login VSI Workload Applications

• Microsoft Word 2010

• Microsoft Excel 2010

• Microsoft PowerPoint 2010

• Microsoft Outlook 2010

• Internet Explorer

• Mind Map

• Flash Player

• Doro PDF Printer

• Photo Viewer

All tests are executed in Login VSI's Direct Desktop Mode. Since no specific remoting protocol is used, the results are relevant regardless of protocol. In Direct Desktop Mode, all sessions are started as console sessions. The big advantage is that comparisons are not influenced by changes at the remoting-protocol level; as a result, the results are a "pure" comparison of the tests in a VDI context.


Office Worker Workload Definition (from http://www.loginvsi.com/documentation/Changes_old_and_new_workloads)

The Office Worker is a new workload based on the Knowledge Worker (previously Medium) workload. The main goal of the Office Worker workload is to be deployed in environments that use only 1 vCPU per VM. Overall, the Office Worker workload has lower resource usage than the Knowledge Worker workload.

Test Environment

The test environment is as described in the Solution Architecture section of this document. That includes both the management infrastructure design for the management workloads (with the addition of Login VSI launchers) and the desktop infrastructure design for the desktop workloads tested.

Results

The following results are representative of multiple Login VSI 4.1 runs for office worker users on the infrastructure described above.

Provisioned by Citrix Provisioning Server

1200 Hosted Shared Desktop sessions – Citrix PVS using cache in device RAM with overflow disk

VSIbase for the environment was 549ms, and VSImax was not reached in any run. VSImax average was 1147ms, and VSImax threshold was 1550ms.


800 Hosted Desktops deployed with Citrix PVS using cache in device RAM with overflow disk

VSIbase for the environment was 842ms, and VSImax was not reached in any run. VSImax average was 1630ms, and VSImax threshold was 1842ms.

Provisioned by Machine Creation Services

800 Hosted Shared Desktop sessions deployed with Citrix MCS

VSIbase for the environment was 586ms, and VSImax was hit at 840 sessions. VSImax average was 1481ms, and VSImax threshold was 1856ms.


600 Hosted Desktops deployed with Citrix MCS

VSIbase for the environment was 847ms, and VSImax was not reached in any run. VSImax average was 1145ms, and VSImax threshold was 1847ms.

Data Efficiency

One of the key components of SimpliVity hyperconverged infrastructure is data efficiency. By using inline deduplication and compression to optimize data before it hits the disk, SimpliVity reduces I/O and space usage and leaves as much CPU as possible available to run business applications. The results for our small-scale (600-1200 users per vSphere Datacenter) testing were above 20:1 data efficiency.
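A data-efficiency ratio translates directly into effective capacity; a minimal sketch of the arithmetic (the function name is illustrative):

```python
def effective_capacity_tb(physical_tb: float, efficiency_ratio: float) -> float:
    """Logical capacity implied by a deduplication/compression ratio."""
    return physical_tb * efficiency_ratio

# A 2TB OmniStack datastore at the observed 20:1 efficiency
print(effective_capacity_tb(2, 20))  # → 40
```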


6. Summary/Conclusion

This Reference Architecture provides guidance to organizations implementing Citrix XenDesktop 7.6 on SimpliVity hyperconverged infrastructure, and describes tests performed by SimpliVity to validate and measure the operation and performance of the recommended solution, including third-party-validated performance testing with Login VSI, the industry-standard benchmarking tool for virtualized workloads.

Login VSI Office Worker test results showed that two 5-node SimpliVity OmniStack clusters can support 2000 XenDesktop and XenApp seats with the PVS RAM cache option and 1400 seats with MCS deployment. In Login VSI testing, consistently low latency of less than 2000ms average response was observed for both Hosted and Hosted Shared Desktop implementations. Native always-on inline deduplication and compression provided a data reduction rate above 20:1.

PVS RAM cache is critical in eliminating I/O on OmniStack storage and lowering storage latency. MCS tests showed lighter I/O footprints compared to PVS without RAM cache.

Utilizing SimpliVity OmniStack hyperconverged infrastructure dramatically simplifies IT systems management. OmniStack's Data Virtualization Platform delivers industry-leading data efficiency, global unified management, and built-in data protection. For VDI environments, SimpliVity provides an unmatched user experience without compromising desktop density or resiliency.

7. References and Additional Resources

Performance Whitepaper: VDI without Compromise with SimpliVity OmniStack and Citrix XenDesktop 7.6

ESG Lab Review Preview: SimpliVity Hyperconverged Infrastructure for VDI Environments

Citrix Product Documentation - XenDesktop 7.6 Long Term Service Release

Citrix Product Documentation - XenApp 7.6 Feature Pack 2 Blueprint

Citrix Product Documentation - XenDesktop 7.6 Feature Pack 2 Blueprint

Citrix Product Documentation - Provisioning Services 7.6

Minimum requirements for the VMware vCenter Server 5.x Appliance (2005086)

Best Practice Active Directory Design for Managing Windows Networks

Login VSI - Changes old and new workloads


8. Appendix

Design Guidelines

With SimpliVity OmniStack, you can start small with as few as two OmniStack nodes (for storage HA) and grow the environment as needed. This provides the flexibility of starting with a small-scale proof of concept and growing to large-scale production without guessing the workload and purchasing up front.

The following section covers the OmniStack, XenDesktop, infrastructure and network design guidelines for Citrix Hosted and Hosted Shared Desktop deployments on SimpliVity OmniStack.

SimpliVity OmniStack Design Guidelines

Minimum Size – 2 OmniStack nodes. Rationale: minimum size requirement with storage HA; a cluster of a single OmniStack node is possible without storage HA.

Scale Approach – Use modular blocks. Rationale: scale out from proof of concept to thousands of desktops.

Scale Unit – Nodes, then clusters. Rationale: granular scaling to precisely meet capacity demands; scale in node increments up to vSphere Cluster and Datacenter maximums.

Federation Maximum – 32 OmniStack nodes. Rationale: combine compute nodes to offload CPU and memory usage.

Cluster Size – Up to 5 OmniStack nodes per vSphere Cluster. Rationale: OmniStack version 3.0 supports up to 5 nodes in a single vSphere Cluster for desktop workloads.

Management Infrastructure – Dedicated OmniStack infrastructure is recommended for all XenDesktop deployments. Rationale: separation of management workloads from desktops is key to ensuring performance, manageability, and security of both.


Citrix XenDesktop 7.6 Design Guidelines

Citrix XenDesktop

Desktop Delivery Controller – N+1. Rationale: one Citrix XenDesktop Delivery Controller can support up to 5000 users; use an N+1 configuration for high availability.

StoreFront Server – N+1. Rationale: the StoreFront server provides users a list of resources, and one StoreFront server can support 10,000 connections. Redundant StoreFront servers should be deployed to provide N+1 high availability.

License Server – 1. Rationale: a single License Server was used because the environment will continue to function in a 30-day grace period if the license server is offline.

SQL Database – 2 + witness. Rationale: SQL Server is a key resource in XenDesktop, so availability is paramount; the best practice is to use SQL mirroring with a witness.

Citrix Provisioning Services

PVS Server – N+1. Rationale: one PVS server can support around 500 VMs, and N+1 availability is required for production deployments. Best practice is to use PVS RAM cache for the vDisk to reduce hard disk reads, thereby improving performance. The server RAM size is 2GB + (number of vDisks x 2GB).

vDisk Store – PVS local disk. Rationale: to enable PVS vDisk RAM cache, PVS local disk is recommended.

Write Cache – Cache in device RAM with overflow to disk. Rationale: using host RAM almost eliminates I/O and has become the new best practice. RAM cache size recommendations for the office worker workload: Hosted Desktop with Windows 7 32-bit: 256MB; Hosted Desktop with Windows 7 64-bit: 512MB; Hosted Shared Desktop with Windows Server 2012 R2: 2GB. The best practice is to defragment the vDisk.

Citrix NetScaler (Optional)

NetScaler – N+1. Rationale: NetScaler provides load balancing, including global site load balancing (required for active/active multisite configurations), and disaster recovery capabilities between sites.
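The PVS server RAM rule of thumb can be expressed as a quick calculation (a sketch under the assumption that the 2GB term applies per actively streamed vDisk; the function name is ours):

```python
def pvs_server_ram_gb(num_vdisks: int, base_gb: int = 2, per_vdisk_gb: int = 2) -> int:
    """Rule-of-thumb PVS server RAM: base allowance plus cache per vDisk."""
    return base_gb + num_vdisks * per_vdisk_gb

# Two vDisks (one Hosted Desktop image, one Hosted Shared Desktop image)
print(pvs_server_ram_gb(2))  # → 6
```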

Supporting Infrastructure Design Guidelines

DNS – N+1. Rationale: high availability for DNS.

DHCP – N+1. Rationale: high availability and load balancing for DHCP. DHCP is required for PXE boot with PVS, with options 66 and 67 configured; check the Citrix PVS 7.6 product documentation for details.

File Services – N+1. Rationale: the best practice is to tune the Windows volume where the profiles are stored to use an 8KB cluster size, rather than the default.

For more information, visit: www.simplivity.com

© 2016, SimpliVity. All rights reserved. Information described herein is furnished for informational use only and is subject to change without notice. SimpliVity, the SimpliVity logo, OmniCube, OmniStack, and Data Virtualization Platform are trademarks or registered trademarks of SimpliVity Corporation in the United States and certain other countries. All other trademarks are the property of their respective owners.

J966-Citrix-RA-EN-0716