
Proven Solution Guide

EMC Confidential

EMC Infrastructure for Virtual Desktops

Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4


Copyright © 2010 EMC Corporation. All rights reserved. Published September, 2010

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated. All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute. No warranty of system performance or price/performance is expressed or implied in this document.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part number: H8059



Table of Contents

Chapter 1: About this Document ............................................................................................................... 4

Overview ............................................................................ 4
Audience and purpose ................................................................ 4
Scope ............................................................................... 5
Technology solutions ................................................................ 6
Virtual Desktop Infrastructure ...................................................... 6
Reference architecture .............................................................. 8
Validated environment profile ....................................................... 9
Prerequisites and supporting documentation ......................................... 10
Terminology ........................................................................ 11

Chapter 2: Virtual Desktop Infrastructure .......................................... 12
Overview ........................................................................... 12
XenDesktop VDI ..................................................................... 12
VMware infrastructure .............................................................. 13
Windows infrastructure ............................................................. 14
Conclusion ......................................................................... 15

Chapter 3: Storage Design .......................................................... 16
Overview ........................................................................... 16
Concepts ........................................................................... 16
Storage design layout .............................................................. 16
File system layout ................................................................. 17
Capacity planning .................................................................. 19
Best practices ..................................................................... 19

Chapter 4: Network Design .......................................................... 21
Overview ........................................................................... 21
Considerations ..................................................................... 21
Network layout ..................................................................... 22
Virtual LANs ....................................................................... 22
High availability network .......................................................... 23

Chapter 5: Installation and Configuration .......................................... 26
Overview ........................................................................... 26
Task 1: Set up and configure the NFS datastore ..................................... 27
Task 2: Install and configure Desktop Delivery Controller .......................... 29
Task 3: Install and configure Provisioning Server .................................. 32
Task 4: Configure and provision the master virtual machine template ................ 44
Task 5: Deploy virtual desktops .................................................... 46

Chapter 6: Testing and Validation .................................................. 52
Overview ........................................................................... 52
Testing overview ................................................................... 52
Testing tools ...................................................................... 52
Test results ....................................................................... 55
Result analysis of Desktop Delivery Controller ..................................... 56
Result analysis of Provisioning Server ............................................. 59
Result analysis of the vCenter Server .............................................. 62
Result analysis of SQL Server ...................................................... 64
Result analysis of ESX servers ..................................................... 67
Result analysis of Celerra unified storage ......................................... 70
Login storm scenario ............................................................... 78
Test summary ....................................................................... 80


Chapter 1: About this Document

Overview

Introduction

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with an insight into the challenges currently facing its customers. This document summarizes a series of best practices that were discovered, validated, or otherwise encountered during the validation of the EMC Infrastructure for Virtual Desktops Enabled by EMC® Celerra® Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 solution.

Use case definition

A use case reflects a defined set of tests that validates the reference architecture for a customer environment. This validated architecture can then be used as a reference point for a Proven Solution.

Contents

The chapter includes the following topics:

Topic                                                      See Page
Overview ........................................................ 4
Audience and purpose ............................................ 4
Scope ........................................................... 5
Technology solutions ............................................ 6
Virtual Desktop Infrastructure .................................. 6
Reference architecture .......................................... 8
Validated environment profile ................................... 9
Prerequisites and supporting documentation ..................... 10
Terminology .................................................... 11

Audience and purpose

Audience

The intended audience for the Proven Solution Guide is:
• Internal EMC personnel
• EMC partners
• Customers


Purpose

The purpose of this solution is to:
• Develop a suggested Citrix XenDesktop 4 VDI for 1,000 users in the context of EMC Celerra unified storage and VMware vSphere virtualization platforms.
• Test and document the user response time and the performance of the associated servers.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. It can also be used by other EMC organizations (for example, the technical services or sales organization) as the basis for producing documentation for technical services or a sales kit.

Scope

Scope

This document describes the architecture of an EMC solution built at EMC's Global Solutions Labs. This solution is engineered to enable customers to:
• Implement a Citrix XenDesktop 4 VDI solution in their environment after considering the storage configuration, design, sizing, and software.
• Reduce operational costs with VDI, when compared to existing desktop solutions.
• Deliver the highest service level agreement (SLA) at the lowest cost per application workload.
• Provide VDI with the flexibility of a solution that scales up to meet the requirements of large enterprises and still offers a simple footprint for midsize organizations.

This solution provides information to:
• Create a well-performing storage design for a Citrix XenDesktop 4 VDI on a VMware vSphere virtualization platform for 1,000 desktop users on an EMC Celerra NS-120 unified storage system.
• Document the performance in the validated environment and suggest methods to improve the performance of the Citrix XenDesktop 4 solution.

Not in scope

Testing XenDesktop 4 VDI for a workload other than a typical office user workload was outside the scope of this testing.


Technology solutions

Business challenges for midsize enterprises

With limited resources and increasing demands, today's businesses must address the following challenges:
• Consolidate desktops across the enterprise
• Ensure information access, availability, and continuity
• Maximize server and storage utilization and deliver high desktop performance
• Manage upgrades and migrations quickly and easily
• Reduce the demands on limited IT resources and budgets
• Reduce the complexity of selecting the right technology

Solution for midsize enterprises

The EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 solution establishes a configuration of validated hardware and software that permits easy and repeatable deployment of virtual desktops using the storage provided by Celerra NS-120. This Proven Solution Guide describes the deployment and validation of Citrix XenDesktop 4 VDI on Celerra NS-120 in a manner that provides performance, recoverability, and protection.

Virtual Desktop Infrastructure

Introduction

A virtual desktop infrastructure (VDI) is used to run desktop operating systems and applications inside virtual machines that reside on servers running a virtualization hypervisor. The desktop operating systems inside the virtual machines are referred to as virtual desktops. Users access the virtual desktops and applications from a desktop PC client or a thin client by using a remote display protocol. The applications and storage are centrally managed.

Citrix XenDesktop

Citrix XenDesktop is one of the leading desktop virtualization products. It enables fully personalized desktops for each user with all the security and simplicity of centralized management. XenDesktop simplifies desktop management: with centralized management, adding, updating, and removing applications are simple tasks. Users have instant access to applications through HDX™ technology, a set of capabilities that delivers a high-definition user experience over any network, including low-bandwidth and high-latency wide area network (WAN) connections. XenDesktop can instantly deliver every type of virtual desktop, each specifically tailored to meet the performance and flexibility requirements of individual users.


Components of Citrix XenDesktop VDI

This solution validated a XenDesktop 4 VDI deployment for high availability and simulated the workload of 1,000 real-world users. The VDI was built using the following components:
• Citrix DDC to broker and manage virtual desktops.
• Citrix Provisioning Services to provision the desktop operating system (OS).
• EMC unified storage to store the virtual desktops.
• VMware ESX and vCenter Server as the server virtualization infrastructure.
• Windows infrastructure to support services such as Active Directory, Dynamic Host Configuration Protocol (DHCP), Domain Name System (DNS), and SQL Server.


Reference architecture

Corresponding reference architecture

This use case has a corresponding reference architecture document that is available on EMC Powerlink® and EMC.com. Refer to EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 — Reference Architecture for details.

Reference architecture diagram

The following diagram shows the overall physical architecture of the solution.


Validated environment profile

Environment profile and test results

The solution was validated with the following environment profile.

Number of virtual desktops: 1,000
Size of each virtual desktop: 3 GB (thin provisioned)
Number of building blocks: 10
Number of virtual desktops per building block: 100
Number of NFS datastores per building block: 1
Number of XenDesktop Provisioning Services Servers: 2
Number of XenDesktop Desktop Delivery Controllers: 2
NFS datastore (RAID type, physical drive size, and speed): RAID 10, 450 GB, 15k rpm, FC disks
Storage to host the golden images, TFTP boot area, and ISO images (RAID type, physical drive size, and speed): RAID 5, 450 GB, 15k rpm, FC disks

Hardware resources

Chapter 6: Testing and Validation on page 52 provides more information on the performance results.

The following table lists the hardware used to validate the solution.

Equipment: EMC Celerra NS-120
Quantity: 1
Configuration:
• Two Data Movers (active/standby)
• Two disk-array enclosures (DAEs) with 15 FC 450 GB 15k 2/4 Gb disks
Notes: NFS datastore storage and Trivial File Transfer Protocol (TFTP) server

Equipment: HP ProLiant DL380 G5
Quantity: 3
Configuration:
• Memory: 20 GB RAM
• CPU: Two 3.0 GHz quad-core processors
• Storage: One 67 GB disk
• NIC: Two Broadcom NetXtreme II BCM 1000 BaseT adapters
Notes: ESX servers to host virtual machines for vCenter Server, Active Directory, DHCP, DNS, DDC, PVS, and SQL Server

Equipment: Dell PowerEdge R710
Quantity: 16
Configuration:
• Memory: 32 GB RAM
• CPU: Two 2.6 GHz quad-core processors
• Storage: One 67 GB disk
• NIC: Four Broadcom NetXtreme II BCM 1000 BaseT adapters
Notes: ESX servers to host 1,000 virtual desktops


Software resources

The following table lists the software used to validate the solution.

Celerra NS-120 (Celerra shared storage, file systems)
• NAS or Data Access in Real Time (DART): Release 5.6.48-701
• CLARiiON® FLARE®: Release 28 (4.28.000.5.504)
• Celerra plug-in for VMware: Version 1.1.9

XenDesktop desktop virtualization
• Citrix XenDesktop: Version 4 Platinum Edition
• Citrix Desktop Delivery Controller Server: Version 4.0
• Citrix Provisioning Services Server: Version 5.1.2.2972
• Microsoft SQL Server: Version 2005 Enterprise Edition (64-bit)

VMware vSphere
• ESX server: ESX 4.0.0 (Build 208167)
• vCenter Server: 4.0.0 (Build 208111)
• OS for vCenter Server: Microsoft Windows Server 2003 R2 Enterprise Edition

Virtual desktops or virtual machines (one vCPU and 512 MB RAM)
• OS: Microsoft Windows XP Pro Edition
• Microsoft Office 2007: Version 12
• Internet Explorer: 6.0.2900.5512
• Adobe Reader: 9.1
• Adobe Flash Player: 10
• Bullzip PDF Printer: 6.0.0.865

Prerequisites and supporting documentation

Technology

It is assumed that the reader has a general knowledge of the following products:
• EMC Celerra unified storage
• Citrix XenDesktop
• VMware vSphere


Supporting documents

The following documents, located on Powerlink, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access to the following content, contact your EMC representative.
• EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 — Reference Architecture
• Configuring Citrix XenDesktop 3.0 with Provisioning Server using EMC Celerra — Build Document
• Celerra Plug-in for VMware Solution Guide

Third-party documents

Product documentation is available on the Citrix and VMware websites.
• Citrix Product Documentation Library for XenDesktop
• VMware vSphere 4.0 Documentation

Terminology

Introduction

This section defines the terms used in this document.

Desktop Delivery Controller (DDC): As a part of the Citrix XenDesktop virtual desktop delivery system, this controller authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops.

Citrix Provisioning Services Server (PVS): As a part of the Citrix XenDesktop virtual desktop delivery system, this service creates and de-provisions virtual desktops from a single desktop image on demand, optimizes storage utilization, and provides a pristine virtual desktop to each user every time they log on.

PVS vDisk: A vDisk exists as a disk image file on a Provisioning Server or on a shared storage device. vDisk images are configured to be in Private, Standard, or Difference Disk mode. Private mode gives exclusive read-write access to a single desktop, while a vDisk in Standard or Difference Disk mode is shared with read-only permission among multiple desktops.

PVS write cache: Any writes made to the desktop operating system are redirected to a temporary area called the write cache. The write cache can exist as a temporary file on a Provisioning Server, in the virtual desktop's memory, or on the virtual desktop's hard drive.

EMC Celerra plug-in for VMware: A VMware vCenter plug-in designed to simplify the storage administration of the EMC Celerra network-attached storage (NAS) platform. The plug-in enables VMware administrators to provision new NFS datastores directly from the vCenter Server. When provisioning storage on a cluster, folder, or data center, the plug-in automatically provisions the storage for all ESX hosts within the selected object.

LoginVSI: A third-party benchmarking tool, developed by Login Consultants, that simulates a real-world VDI workload using an AutoIT script and determines maximum system capacity based on user response time.


Chapter 2: Virtual Desktop Infrastructure

Overview

Introduction

The VDI design layout instructions described in this chapter apply to the specific components used during the development of this solution.

Contents

This chapter contains the following topics:

Topic                                                      See Page
Overview ....................................................... 12
XenDesktop VDI ................................................. 12
VMware infrastructure .......................................... 13
Windows infrastructure ......................................... 14
Conclusion ..................................................... 15

XenDesktop VDI

Introduction

Citrix XenDesktop 4 is a desktop virtualization system that centralizes and delivers Microsoft Windows XP, 7, or Vista virtual desktops to users located anywhere without any performance impact. XenDesktop 4 simplifies desktop management by using a single image to deliver personalized desktops to users and enables administrators to manage service levels with built-in desktop performance monitoring. The open architecture of XenDesktop 4 offers choice and flexibility of virtualization platform and user device.

Deploying a XenDesktop farm

This VDI solution is deployed using a dual-server model in a XenDesktop 4 farm with high availability, which provides a working deployment on a minimal number of computers. As the farm grows, additional controllers and components can be added to the farm seamlessly. The essential elements of a XenDesktop 4 farm are:
• Desktop Delivery Controller
• Citrix Licensing
• Provisioning Server

Apart from these Citrix elements, the following components are required for a XenDesktop 4 farm:
• Microsoft SQL Server to hold the configuration information and administrator account information
• Active Directory
• DNS Server
• PXE boot and TFTP servers


Desktop Delivery Controller

The Desktop Delivery Controller (DDC) authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops. It controls the state of the desktops, starting and stopping them based on demand and administrative configuration. DDC also includes the User Profile Manager to manage user personalization settings in virtualized or physical Windows environments. The Citrix licensing service is also installed on the Desktop Delivery Controller.

Provisioning Server

The Provisioning Server creates and de-provisions virtual desktops from a single desktop image on demand, optimizes storage utilization, and provides a pristine virtual desktop to each user every time they log on. Desktop provisioning also simplifies desktop image management, increases flexibility, and reduces the number of points of management for both applications and desktops.

High availability of XenDesktop components

In this solution, two DDCs and two Provisioning Servers were used to provide high availability as well as load balancing. With 1,000 virtual desktops in this solution, each Desktop Delivery Controller manages 500 virtual desktops, and each Provisioning Server likewise manages 500 virtual desktops. If a DDC or Provisioning Server goes offline, the other DDC or Provisioning Server takes over the virtual desktops of the offline server and manages all 1,000 virtual desktops.

VMware infrastructure

Introduction

This Citrix XenDesktop 4 VDI solution is implemented on a VMware vSphere 4 virtual infrastructure. This enables organizations to leverage their existing investment in a VMware infrastructure.

VMware vSphere

VMware vSphere 4 is the industry's first cloud operating system, transforming IT infrastructures into a private cloud, a collection of internal clouds federated on demand to external clouds, and delivering IT infrastructure as a service. vSphere 4 supports the 64-bit VMkernel and the service console. The new service console version is derived from a recent release of a leading enterprise Linux vendor. The following elements of VMware vSphere were used in this solution:
• VMware ESX 4 server
• VMware vCenter Server
• VMware NFS datastore


VMware ESX server

The VMware ESX server is the main building block of the VMware infrastructure. It provides a platform for multiple virtual machines that share the same hardware resources (including processor, memory, storage, and networking resources) and that can perform all the functions of a physical machine. This maximizes hardware utilization and minimizes capital and operating costs. In this solution, all XenDesktop components reside as virtual machines on the VMware ESX 4 servers.

VMware vCenter Server

VMware vCenter Server provides a scalable and extensible platform that forms the foundation for virtualization management. VMware vCenter Server centrally manages VMware vSphere environments.

VMware NFS datastore

The ESX server can access a designated NFS volume located on a Celerra unified storage platform, mount the volume, and use it for its storage needs in this solution.

Windows infrastructure

Introduction

Microsoft Windows infrastructure is used in this solution to provide the following services to virtual desktops and XenDesktop elements:
• Active Directory Service
• DNS
• DHCP Service
• SQL Server

Domain controller

The Windows domain controller contains the Active Directory service, which provides the means to manage the identities and relationships of virtual desktops and other components in this VDI environment. Active Directory is also used by DDC to enable XenDesktop components to communicate securely.

DNS Server

DNS is the backbone of Active Directory and the primary name resolution mechanism of Windows servers. Domain Controllers dynamically register information about themselves and about Active Directory in the DNS Server. In this solution, DNS Server is installed on the domain controller.


DHCP server

The DHCP server provides the IP address, boot server name, and boot file name for the virtual desktops. The IP range of the DHCP scope is configured to allocate IP addresses for 1,000 virtual desktop machines. Because the virtual desktop virtual machines PXE boot from a bootstrap image before loading the master desktop image supplied by Citrix Provisioning Server, DHCP options 66 and 67 are configured to redirect the virtual desktops to retrieve the bootstrap image from a TFTP server that is hosted on EMC Celerra.
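As an illustrative sketch only (not part of the validated build steps), options 66 and 67 can be set at the scope level on a Windows DHCP server with netsh; the scope address, TFTP server IP, and bootstrap file name below are placeholders to be replaced with site-specific values:

netsh dhcp server scope 10.10.0.0 set optionvalue 066 STRING "10.10.1.50"
netsh dhcp server scope 10.10.0.0 set optionvalue 067 STRING "ARDBP32.BIN"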

SQL Server

Microsoft SQL Server is a relational database management system (RDBMS) from Microsoft. In this solution, Microsoft SQL Server 2005 hosts the databases required by Citrix Provisioning Server and DDC. It can also host the databases required by a VMware vCenter Server. Microsoft SQL Server 2005 Enterprise Edition (64-bit) is used in this solution. Although Microsoft SQL Server 2005 Express Edition is free, lightweight, and can satisfy the database requirement for a very small virtual desktop farm, it is not recommended for a production environment because it is limited to one CPU, 1 GB of addressable RAM, and a maximum database size of 4 GB.

Conclusion

Conclusion

This XenDesktop 4 VDI implementation for 1,000 virtual desktops is configured in a desktop farm that contains two Desktop Delivery Controllers and two Provisioning Servers for high availability, using the existing VMware virtual infrastructure and Windows servers that provide networking and database services.


Chapter 3: Storage Design

Overview

Introduction

The storage design layout instructions described in this chapter apply to the specific components of this solution.

Contents

This chapter contains the following topics:

Topic                                                      See Page
Overview ....................................................... 16
Concepts ....................................................... 16
Storage design layout .......................................... 16
File system layout ............................................. 17
Capacity planning .............................................. 19
Best practices ................................................. 19

Concepts

Introduction

The Celerra unified storage system is used for most of the storage needs of this solution. The Celerra unified storage system is a multiprotocol system that provides access to data through a variety of file access protocols, including the NFS protocol. NFS is a client/server distributed file service that provides file sharing in network environments. When a Celerra is configured as an NFS server, the file systems are mounted on a Data Mover and a path to that file system is exported. Exported file systems are then available across the network and are mounted as NFS datastores on ESX servers that host the virtual desktops.
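As a hedged illustration of the export/mount pairing described above (the export path, datastore name, and IP addresses are placeholders, not values from the validated environment), a mounted Celerra file system can be exported from the Data Mover and then added to an ESX host as an NFS datastore from the service console:

# Export the file system to the ESX VMkernel addresses (colon-separated host list)
$ server_export server_2 -Protocol nfs -option root=10.10.2.11:10.10.2.12,access=10.10.2.11:10.10.2.12 /vdi_ds01

# Add the export to an ESX host as an NFS datastore (-o is the NFS server, -s the exported path)
$ esxcfg-nas -a -o 10.10.2.50 -s /vdi_ds01 vdi_ds01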

Storage design layout

Building block approach

This VDI solution is validated using a building block approach, which allows administrators to methodically provision additional blocks of storage as the number of desktop users continues to scale up. A building block is defined as two spindles on a 1+1 Celerra RAID 10 group. Each of these building blocks is designed to accommodate up to 100 virtual desktop users. The validation test uses up to 10 building blocks to support 1,000 virtual desktops.


Disk layout for 10 building blocks

The following figure shows the disk layout for 10 RAID 1/0 building blocks on two shelves using user-defined storage pools.

The NS-120 can be populated with up to eight disk shelves of 15 disk drives each. This validated solution uses two shelves of 450 GB 15k FC drives. Ten RAID 10 groups are used to store the virtual desktops. A single 4+1 RAID 5 group (RG 0) is used to store the golden image of the virtual desktops, the TFTP boot image, and other support files.

File system layout

File system and NFS export

According to the standard NAS template, two LUNs are created per RAID group and each LUN is owned by a different storage processor (SP) for load balancing. These LUNs are represented as disk volumes (dvols) in the Celerra, as shown in the earlier figure. For each building block, a file system is created over a metavolume that concatenates the two dvols from the same RAID group. This file system is exported as an NFS share to the VMware ESX 4 servers and used as an NFS datastore. The following table shows the dvol selection for each of the file systems created. To ensure SP load balancing, the order of dvol numbers alternates between the file systems.

File system                 dvols
Golden image                d17
TFTP boot                   d29
Virtual desktop groups      d18,d30 (concatenated)
                            d31,d19 (concatenated)
                            d20,d32 (concatenated)
                            d33,d21 (concatenated)
                            d22,d34 (concatenated)
                            d35,d23 (concatenated)
                            d24,d36 (concatenated)
                            d37,d25 (concatenated)
                            d26,d38 (concatenated)
                            d39,d27 (concatenated)

The EMC Celerra plug-in for VMware, which integrates with the vCenter GUI, streamlines the creation of the file system, NFS export, and datastore. Celerra Plug-in for VMware Solution Guide on Powerlink provides more details on this plug-in.
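For reference, a rough sketch of the equivalent manual Celerra CLI for one building block is shown below; the metavolume, file system, and mount point names are illustrative (only the dvol pair d18,d30 comes from the table above), and the plug-in described above automates the same steps:

# Concatenate the two dvols of a building block into a metavolume
$ nas_volume -name mtv_vdi01 -create -Meta d18,d30
# Create a file system on the metavolume and mount it on the Data Mover
$ nas_fs -name vdi_fs01 -create mtv_vdi01
$ server_mountpoint server_2 -create /vdi_fs01
$ server_mount server_2 -option rw,uncached,noprefetch vdi_fs01 /vdi_fs01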

NFS datastore usage

The Celerra unified storage platform is used to store the following:
• Virtual desktop virtual machines
• Citrix Provisioning Services vDisk

Virtual desktop virtual machines

The virtual desktops are deployed as virtual machines that are hosted on ESX 4 servers. Each desktop virtual machine has its own folder that contains .vmdk, .vmx, .vswp, and other files that are stored in the NFS datastore. In this proven solution, each building block is configured with one NFS datastore that accommodates up to 100 virtual desktops. There are a total of 10 NFS datastores that support up to 1,000 desktops.

Citrix Provisioning Services vDisk

The master desktop image is stored in a Citrix Provisioning Services vDisk, which corresponds to a virtual hard disk (VHD) file that resides on a local drive (NTFS formatted) of the Provisioning Servers. Because the Provisioning Servers are virtualized as virtual machines, the local drive that holds the master image is in fact a VMDK file that resides on an NFS datastore, whose file system is created from a 4+1 RAID 5 group (RG 0 as shown in the Disk layout for 10 building blocks on page 17) on the Celerra. The following figure shows the storage layers of the vDisk.

The NTFS file system is made read-only when the master image is finalized and ready to be sealed. The read-only file system enables concurrent access by multiple Provisioning Servers without the need for a clustered file system to handle locking issues.

[Figure: vDisk storage layers: Celerra NFS file system, NFS datastore, VMDK, read-only NTFS, VHD file, vDisk, desktop OS image]


TFTP server

All virtual desktops PXE boot from a bootstrap image when they are powered up. This bootstrap image is stored on a file system that is also created from RAID group 0 on the Celerra. The image is then made available through the Celerra TFTP server.
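The exact TFTP configuration commands are not reproduced in this guide. As an assumption-laden sketch only, the Celerra server_tftp facility can point the TFTP service at the boot area and start it (the flag names may differ by DART release, and the path below is a placeholder; verify against the DART 5.6 man pages before use):

$ server_tftp server_2 -set path=/tftpboot readaccess=all
$ server_tftp server_2 -service -start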

Capacity planning

Building block of 100 virtual desktops

Storage design layout on page 16 indicates that this validated solution uses a building block approach. Each building block consists of two spindles on a 1+1 RAID 10 group that is designed to accommodate up to 100 virtual desktop users. The Celerra unified storage NS-120 uses 450 GB FC 15k rpm spindles. As mentioned in Disk layout for 10 building blocks on page 17, each RAID group produces two dvols of about 201.3 GB each. The file system formed by concatenating these two dvols provides a storage space of about 402 GB.

This 402 GB of storage space, exported as NFS storage on an ESX server, is adequate for 100 virtual desktops of 3 GB each, where the 3 GB is thin provisioned and used as Provisioning Services' write cache storage, a temporary area to save changes made to the virtual desktops. Virtual desktops typically consume several hundred megabytes of write cache. Care should be taken not to overflow the write cache area that each desktop is allocated. Otherwise, users may experience disk errors when performing write operations.

In addition to virtual disk storage, each virtual desktop or virtual machine requires virtual swap space (.vswp file) at the ESX level. Because each virtual machine is allocated 512 MB of memory without ESX memory reservation, 100 virtual desktops require 50 GB (100 x 512 MB) out of the 402 GB.
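Summarizing the arithmetic above for a single building block (all figures are taken from this section):

  2 dvols x 201.3 GB           ≈ 402 GB usable file system per building block
  100 desktops x 512 MB .vswp  ≈ 50 GB of ESX virtual swap space
  402 GB - 50 GB               ≈ 352 GB remaining, roughly 3.5 GB per desktop,
                                 which covers the 3 GB thin-provisioned virtual
                                 disk and write cache for each of the 100 desktops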

Thin provisioning

The virtual hard disk provided to each virtual desktop, carved out of the NFS datastore, is thin provisioned. This enables users to control storage costs, achieve a higher level of utilization, and eliminate both storage waste and the need to dedicate capacity up front.

Best practices

Celerra Data Mover parameter set up

EMC recommends turning off file system read prefetching and enabling the uncached option for a random I/O workload such as a virtual desktop workload. To set the noprefetch and uncached options for a file system, type:

server_mount <movername> -option <options>,uncached,noprefetch <fs_name> <mount_point>

For example:

server_mount server_2 -option rw,uncached,noprefetch ufs1 /ufs1


Disk drives

The general recommendations for disk drives are:
• Drives with higher revolutions per minute (rpm) provide higher overall random-access throughput and shorter response times than drives with slower rpm. For optimum performance, higher-rpm drives are recommended for the file systems that store the virtual desktops.
• FC drives are recommended over Serial Advanced Technology Attachment (SATA) drives because FC drives provide better performance.
• Enterprise Flash Drives (EFDs) offer advantages in performance, efficiency, power, space, and cooling, but they increase the cost of the solution significantly. As the cost of the technology declines, EFDs may become a practical option for solutions such as this one.

RAID 10 compared to RAID 5

The I/O loads generated by virtual desktops are characterized as small, random, and write-intensive. A workload is considered write-intensive when more than 30 percent of its I/O consists of random writes. For such a workload, RAID 10 offers better performance than RAID 5 because of the write penalty that RAID 5 incurs when parity is calculated for every write operation. Because RAID 10 does not calculate parity, it does not suffer a similar penalty when writing data.
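As a rule-of-thumb illustration (a general sizing heuristic, not a measurement from this solution), the back-end disk I/O for a host workload of R reads and W writes per second can be estimated as R + 2W for RAID 10 (each write lands on two mirrored disks) and R + 4W for RAID 5 (each write requires reading the data and parity, then writing both). For example, a hypothetical 70/30 read/write workload of 1,000 host IOPS translates to roughly 700 + 2 x 300 = 1,300 back-end IOPS on RAID 10 versus 700 + 4 x 300 = 1,900 on RAID 5.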

Roaming profiles and folder redirection

Local user profiles are not recommended in a VDI environment because a performance penalty is incurred when a new local profile is created each time a user logs in to a new desktop image. Roaming profiles and folder redirection, on the other hand, allow user data to be stored centrally on a network location that can reside on a Celerra CIFS share. This reduces the performance hit during user logon while allowing user data to roam with the profiles. Profile management tools such as Citrix User Profile Manager, and third-party tools such as AppSense Environment Manager, provide more advanced and granular features to manage various user profile scenarios. Refer to User Profiles for XenApp and XenDesktop on the Citrix website for further details.


Chapter 4: Network Design

Overview

Introduction

This chapter describes the network design of Citrix XenDesktop 4 in the VDI solution.

Contents

This chapter contains the following topics:

Topic                                                      See Page
Overview ....................................................... 21
Considerations ................................................. 21
Network layout ................................................. 22
Virtual LANs ................................................... 22
High availability network ...................................... 23

Considerations

Physical design considerations

EMC recommends that the switches support gigabit Ethernet (GbE) connections and Link Aggregation Control Protocol (LACP), and that the ports on the switches support copper-based media.

Logical design considerations

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. The IP scheme for the virtual desktop network must be designed in such a way that there are enough IP addresses available in one or more subnets for the DHCP server to assign them to each virtual desktop.
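As an illustration (the validated environment's actual addressing is not restated here), a single /24 subnet provides only 254 usable addresses, far short of 1,000 desktops; a /22 subnet provides 1,022 usable addresses, which covers the desktops but leaves little headroom for infrastructure servers, so a larger subnet or multiple DHCP scopes may be preferable.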

Link aggregation

The Celerra unified storage provides network high availability or redundancy by using link aggregation. This is one of the methods to deal with the problem of link or switch failure. Link aggregation is a high availability feature that enables multiple active Ethernet connections to appear as a single link with a single MAC address and potentially multiple IP addresses. In this solution, link aggregation applied on Celerra combines two GbE ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All traffic is distributed across the active links.


Network layout

Network layout for the validated scenario

The network layout implements the following physical connections:
• GbE with TCP/IP provides network connectivity.
• NFS provides file system semantics for NFS datastores.
• Virtual desktop machines run on VMware ESX servers that are connected to the production network.
• ESX VMkernel ports reside on the storage network to access the Data Mover network ports when mounting NFS datastores.
• Dedicated network switches and VLANs are used to segregate the production and storage networks.

Virtual LANs

Production VLAN

The production VLAN is used for end users to access virtual desktops, Citrix XenDesktop components, and associated infrastructure servers such as DNS, Active Directory, and DHCP. Virtual desktops also use this VLAN to access a Celerra TFTP server for the PXE boot image.

Storage VLAN

The storage VLAN provides connectivity between the ESX servers and storage. It is used for NFS communication between the VMkernel ports and the Celerra Data Mover network ports. NIC teaming on ESX, along with link aggregation on the Data Mover, provides load balancing and failover capabilities.

Other considerations

In addition to VLANs, separate redundant network switches for storage can be used. It is recommended that these switches support GbE connections, jumbo frames, and port channeling.


High availability network

Link aggregation on Data Mover

LACP is enabled with two GbE ports available on the Data Mover. To configure link aggregation that uses two ports of the Ethernet NIC on server_2, type:

$ server_sysconfig server_2 -virtual -name <Device Name> -create trk -option "device=cge0,cge2 protocol=lacp"

To verify that the ports are channeled correctly, type:

$ server_sysconfig server_2 -virtual -info lacp0
server_2 :
*** Trunk lacp0: Link is Up ***
*** Trunk lacp0: Timeout is Short ***
*** Trunk lacp0: Statistical Load Balancing is IP ***
Device   Local Grp   Remote Grp   Link   LACP   Duplex   Speed
--------------------------------------------------------------
cge0     10000       1280         Up     Up     Full     1000 Mbs
cge2     10000       1280         Up     Up     Full     1000 Mbs

The remote group number for both cge ports must match and the LACP status must be "Up". Confirm that the appropriate speed and duplex are established as expected.

NIC teaming on the ESX server

NIC teaming is configured to provide highly available network connectivity to the ESX server. To add a second NIC adapter to the vSwitch, complete the following steps:

Step  Action
1     Log in to vCenter Server.
2     Edit the vSwitch properties from the ESX server's Configuration page.
3     Select the Network Adapters tab.
4     Click Add to add the available NIC adapter to the vSwitch.
5     Select the NIC Teaming tab and, for the vSwitch, select Route based on ip hash from the Load Balancing list box.


Increase the number of vSwitch virtual ports

A vSwitch, by default, is configured with 24 virtual ports, which may not be sufficient in a VDI environment. On the ESX servers that host the virtual desktops, each port is consumed by a virtual desktop. Therefore, set the number of ports based on the number of virtual desktops that will run on each ESX server. Note: Reboot the ESX server for the changes to take effect.

If an ESX server goes down or needs to be placed in maintenance mode, other ESX servers within the cluster must accommodate additional virtual desktops that are migrated from the ESX server that goes offline. One must take into account the worst-case scenario when determining the maximum number of virtual ports per vSwitch. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP from the DHCP server.
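As an illustrative worst-case calculation (per-host desktop counts are not specified in this section): with 1,000 desktops spread across the 16 desktop ESX servers listed in Chapter 1, each host runs roughly 63 desktops in steady state; if two hosts are placed in maintenance mode, the remaining 14 hosts must absorb about 72 desktops each. Choosing the next selectable vSwitch port count above that figure (for example, 120 ports) leaves headroom so that migrated desktops can still obtain virtual ports and DHCP addresses.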


Chapter 5: Installation and Configuration

Overview

Introduction

This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenarios. It is not intended to be a comprehensive step-by-step installation guide and highlights only configurations that pertain to the validated solution.

Scope

The installation and configuration instructions presented in this chapter apply to the specific revision levels of components used during the development of this solution. Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components planned for the solution.

Contents

This chapter contains the following topics:

Topic                                                               See Page
Overview ................................................................ 26
Task 1: Set up and configure the NFS datastore .......................... 27
Task 2: Install and configure Desktop Delivery Controller ............... 29
Task 3: Install and configure Provisioning Server ....................... 32
Task 4: Configure and provision the master virtual machine template ..... 44
Task 5: Deploy virtual desktops ......................................... 46


Task 1: Set up and configure the NFS datastore

ESX advanced parameter to support maximum number of NFS exports

The ESX server can mount up to eight NFS datastores by default. Because this VDI solution uses 10 datastores, the ESX advanced parameter that controls the maximum number of NFS volumes must be increased accordingly.
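The figure referenced in the original document is not reproduced here. On ESX 4 the relevant advanced setting is NFS.MaxVolumes, which defaults to 8; a hedged sketch of changing it from the service console follows (the value 32 is illustrative, and the same setting can also be changed under Advanced Settings in the vSphere Client):

# Show the current limit, then raise it; any value of 10 or more covers this design.
$ esxcfg-advcfg -g /NFS/MaxVolumes
$ esxcfg-advcfg -s 32 /NFS/MaxVolumes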

EMC Celerra plug-in for VMware

The EMC Celerra plug-in for VMware is a VMware vCenter plug-in that is designed to simplify the storage administration of the EMC Celerra NAS platform. The plug-in enables VMware administrators to provision new NFS datastores directly from the vCenter Server. One of the advantages of using the Celerra plug-in for VMware is that if the storage is provisioned on a cluster, folder, or data center, then all ESX hosts within the selected object will mount the newly created Celerra NFS export. To provision the storage, complete the following steps:

Step  Action
1     Download the EMC Celerra plug-in from Powerlink and install it on the machine that is used to run the vSphere Client.
2     Launch the vSphere Client and connect to the vCenter Server.
3     In the left navigation pane, right-click an ESX server in the cluster and select EMC Celerra > Provision Storage. The Provision Storage dialog box appears.

Celerra Plug-in for VMware Solution Guide on Powerlink provides more details on this plug-in.


Task 2: Install and configure Desktop Delivery Controller

Database server

Microsoft SQL Server 2005 Enterprise Edition is installed on a dedicated Windows Server 2003 virtual machine to host the databases that store the configurations for the three components — Desktop Delivery Controller, Provisioning Server, and vCenter Server. Consider the following options when configuring SQL Server:
• Configure Windows Authentication Mode as the SQL Server authentication mode. Provide a custom SQL Server instance name or use the default instance name.
• Provide this SQL Server name and instance name as the Database Server options while installing the Provisioning Server.
• If SQL Server is used as the database server for vCenter Server, run the scripts provided by VMware to create the local and remote databases. An ODBC connection also needs to be configured between the vCenter Server and SQL Server. The vCenter Server Installation Guide on the VMware website provides more information on configuring the SQL database for vCenter Server.

Note: The Provisioning Server installation CD comes with Microsoft SQL Server 2005 Express Edition by default. However, databases on Express Edition may not offer the scalability required for Provisioning Server, Desktop Delivery Controller, and VMware vCenter Server.

Install Desktop Delivery Controller

On the virtual machine designated as the first Desktop Delivery Controller, install the following components from the Citrix DDC installation CD (or ISO):
• Citrix Desktop Delivery Controller
• Citrix Management Console
• Citrix License Server

Select Create new farm when prompted by the Create or Join a Farm dialog box during the installation. Select Use an existing Database Server and specify the Microsoft SQL Server 2005 server and instance name in the Optional Server Configuration dialog box of the installation wizard.

Configure additional Desktop Delivery Controllers

To install additional Desktop Delivery Controllers, select the Citrix Desktop Delivery Controller component from the installation CD (or ISO). Select Join existing Farm and type the name of the first DDC in the Type the name of the first controller in the farm box.


Throttle commands to VMware vCenter

By default, the DDC Pool Management service attempts to start 10 percent of the desktop pool size concurrently. It may be necessary to throttle the number of concurrent requests sent to the vCenter Server so as not to overwhelm the VMware infrastructure. To modify the number of concurrent requests, edit the following configuration on each DDC:

1. Open the C:\Program Files (x86)\Citrix\VmManagement\CdsPoolMgr.exe.config file using a text editor such as Notepad.

2. Add a line with the MaximumTransitionRate parameter and set the value to the required number of concurrent requests. A value of 20 is used in this solution.

   <?xml version="1.0" encoding="utf-8" ?>
   <configuration>
     <appSettings>
       <add key="LogToCdf" value="1"/>
       <add key="MaximumTransitionRate" value="20"/>
     </appSettings>
   </configuration>

3. After saving the file, restart either the DDC or the Citrix Pool Management Service for the change to take effect.

Virtual desktop idle pool settings

DDC manages the number of idle virtual desktops based on the time of day and automatically optimizes the idle pool settings of a desktop group based on the number of virtual desktops in the group. These default idle pool settings should be adjusted according to customer requirements so that virtual machines are powered on in advance, which avoids a boot storm scenario. During the validation testing, the idle desktop count is set to match the number of desktops in the group to ensure that all desktops are powered on in a steady state and ready for client connections immediately. To change the idle pool settings after a desktop group is created:

1. Navigate to Start > All Programs > Citrix > Management Consoles > Delivery Services Console on the DDC.
2. In the left pane, navigate to Citrix Resources > Desktop Delivery Controller > [XenDesktopFarmName] > Desktop Groups.
3. Right-click the desktop group name and select Properties.
4. Select Idle Pool Settings in the left pane under the Advanced option.
5. In the Idle Desktop Count section in the right pane, modify the number of desktops to be powered on during Business hours, Peak time, and Out of hours. You can optionally redefine business days and hours per your business requirements.
6. Click OK to save the settings and close the window.


Task 3: Install and configure Provisioning Server

Install Provisioning Server

Unlike the Citrix Desktop Delivery Controller, the installation of Provisioning Server is identical for the first Provisioning Server in the desktop farm and for any additional Provisioning Servers installed in the farm. The Provisioning Services Configuration Wizard is run after the installation of the Provisioning Services software. The configuration options differ for the first and the additional (secondary) Provisioning Servers. The following steps highlight the configuration wizard options customized for this solution.

Provisioning Server – DHCP services

Since the DHCP services run on a dedicated DHCP server, select The service that runs on another computer for DHCP services when configuring the DHCP services in the configuration wizard.


Provisioning Server – PXE services

The Provisioning Server is not used as a PXE server because DHCP services are hosted elsewhere. Select The service that runs on another computer for PXE services when configuring the PXE services in the configuration wizard.


Provisioning Server – Farm configuration

In the Farm Configuration page of the Configuration Wizard, select Create farm to configure the first Provisioning Server or Join existing farm to configure additional Provisioning Servers. With either option, the wizard prompts for a SQL Server and its instance name. The first Provisioning Server uses these inputs to create a database that stores the configuration details of the Provisioning Server farm. Additional Provisioning Servers use these inputs to retrieve information about the existing farm from the database.


Provisioning Server – User account

Because the master desktop vDisk is stored on a local drive of each Provisioning Server, select Local system account (Use with SAN) as the user account to run the stream and soap services in the Provisioning Servers.


Provisioning Server – Stream services

Ensure that the appropriate network card is selected for the stream services while configuring the Provisioning Servers. Leave the management services communications and soap server ports unchanged.

Provisioning Server – TFTP

Because the TFTP server is hosted on the Celerra, clear Use the Provisioning Services TFTP service when configuring the TFTP option and bootstrap location settings in the configuration wizard.


Inbound communication

Each Provisioning Server maintains a range of User Datagram Protocol (UDP) ports to manage all inbound communications from virtual desktops. The default port range of 21 ports with 8 threads per port may not support the large number of virtual desktops in this validated solution. The total number of threads supported by a Provisioning Server is calculated as:

Total threads = Number of UDP ports x Threads per port x Number of network adapters

Ideally, there should be one thread dedicated to each desktop session. The number of UDP ports is increased to 64 (port range of 6910 to 6973) and the threads per port are increased to 10 on each Provisioning Server (PVS), yielding 64 x 10 x 1 NIC = 640 threads per server, so that the two load-balanced PVS servers together accommodate up to 1,000 desktops. The number of UDP ports can be modified on the Network tab of the Server Properties dialog box (as shown in the following figure). The Server Properties dialog box appears when you double-click a Provisioning Server in the Provisioning Services Console. The threads per port parameter can be modified by using the Advanced option.


By default, the Citrix PVS two-stage boot service uses port 6969. Because this solution does not require this service, the two-stage boot service is disabled to avoid a conflict, which allows the UDP port range to extend up to 6973. It is a best practice to maintain the same server properties among PVS servers; in particular, all servers must have the same port range configured.

Sharing the SCSI bus

Normally, the VMware ESX server enforces file locking and does not allow two virtual machines to access the same virtual disk (VMDK) at the same time. However, in this validated solution, the PVS virtual machines share the virtual disk containing the master vDisk. Since the PVS virtual machines run on separate ESX servers, SCSI Bus Sharing is set to Physical to enable access to the same virtual disk.
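For reference, the effect of this setting is also visible in the .vmx configuration file of each PVS virtual machine. The following lines are an illustrative sketch only; the controller number (scsi1), device type, and VMDK file name are assumptions, and the exact entries written by the vSphere Client may differ:

   scsi1.present = "true"
   scsi1.virtualDev = "lsilogic"
   scsi1.sharedBus = "physical"
   scsi1:0.present = "true"
   scsi1:0.fileName = "pvs_vdisk_store.vmdk"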


Thick provisioning of virtual disk on Provisioning Server

Because SCSI bus sharing is incompatible with VMware thin provisioned virtual disks, all virtual disks attached to the PVS virtual machines must be "thick". Otherwise, ESX servers will not allow the virtual machines to power up. If the virtual disk attached to the PVS virtual machine was previously thin provisioned, use the vmkfstools command to convert the thin provisioned virtual disk to thick. The following command inflates a thin provisioned virtual hard disk named vDesktop1.vmdk:

vmkfstools -j vDesktop1.vmdk
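As an illustrative sketch of running the conversion from the ESX service console, the datastore and folder names below are assumptions, and comparing the provisioned size (ls) with the allocated size (du) is only a rough way to confirm that the flat file is fully allocated afterwards:

   cd /vmfs/volumes/<datastore>/<pvs_vm_folder>
   vmkfstools -j vDesktop1.vmdk
   # provisioned and allocated sizes should now roughly match
   ls -lh vDesktop1-flat.vmdk
   du -h vDesktop1-flat.vmdk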

Disk align virtual disk

For better performance, it is recommended to align the virtual disk of the Provisioning Server and other virtual machines. For Windows 2003 virtual machines, disk alignment is done by using the diskpart.exe tool. Select the appropriate disk at the DISKPART prompt and type the following command to align the partition with a 1024 KB offset:

DISKPART> create partition primary align=1024
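As an illustration only, a complete alignment session might look like the following; the disk number and drive letter are assumptions and must be adapted to the actual environment:

   C:\> diskpart
   DISKPART> list disk
   DISKPART> select disk 1
   DISKPART> create partition primary align=1024
   DISKPART> assign letter=E
   DISKPART> exit
   C:\> format E: /FS:NTFS /Q

The quick NTFS format matches the formatting step used later for the virtual desktop template.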


Citrix XenConvert is used when the golden image of the master virtual machine is cloned to the master vDisk. Provisioning Services 5.1 contains XenConvert 2.0.x, which fixes the partition offset at 252 KB and therefore causes disk misalignment. XenConvert 2.1 and later versions contain an option to specify the desired offset to align the disk correctly. Locate the XenConvert.ini file in the same location as the XenConvert executable. To set the offset to 1024 KB, add the following section and value to the file:

[parameters]
PartitionOffsetBase=1048576

To specify the offset in this way, upgrade XenConvert to the latest version.

vDisk access mode

After the golden image of the master virtual machine is cloned to the master vDisk, the Access Mode must be changed from “Private Image” to “Standard Image” to enable virtual desktops to share the common vDisk. Thereafter, the vDisk becomes read-only. Virtual desktop changes are redirected to a write cache area. In this solution testing, the write cache type is set to “Cache on device’s HD” to ensure that each virtual desktop uses its own VMDK to store the write cache.


Read only NTFS volume with vDisk

Modifying the master vDisk access mode to "Standard Image" changes the underlying VHD file to write-protected because the golden image is "sealed". As a result, the NTFS volume that hosts the vDisk can be made read-only so that it can be shared across the other Provisioning Servers without the need for a cluster file system to handle file locking. The volume is set to read-only by using the diskpart command. From the command prompt, run diskpart, select the target volume, and type:

DISKPART> attributes volume set readonly

After the read-only attribute is set successfully, the NTFS volume must be remounted for the flag to take effect. Because PVS runs as a virtual machine, this can be done by removing and re-adding the virtual disk from the Virtual Machine Properties screen; adding and removing a virtual disk while the virtual machine is powered on is supported only in vSphere 4. A full diskpart session is sketched below.
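The following is an illustrative diskpart session; the volume number is an assumption and must be replaced with the volume that actually hosts the vDisk:

   C:\> diskpart
   DISKPART> list volume
   DISKPART> select volume 2
   DISKPART> attributes volume set readonly
   DISKPART> exit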

Configure a bootstrap file

The bootstrap file required for the virtual desktops to PXE boot is updated using the Configure Bootstrap option. This option is available in the Provisioning Services Console (Farm > Sites > Site-name > Servers). The Configure Bootstrap dialog box is shown in the following figure.

After the new PVS is added to the server farm, the bootstrap image must be updated to reflect the IP addresses used for all PVS servers that provide streaming services in a round-robin fashion. The list of PVS servers can be obtained by either clicking Read Servers from Database or by manually adding the server information by clicking Add.


After modifying the configuration, click OK to update the ARDBP32.BIN bootstrap file, which is located at C:\Documents and Settings\All Users\Application Data\Citrix\Provisioning Services\Tftpboot. Navigate to the folder and examine the timestamp of the bootstrap file to ensure that it was updated on the intended Provisioning Server.
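As a quick check, the timestamp can also be inspected from a command prompt on the Provisioning Server, for example:

   dir "C:\Documents and Settings\All Users\Application Data\Citrix\Provisioning Services\Tftpboot\ARDBP32.BIN"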

Copy the bootstrap file to the TFTP server on Celerra

In addition to serving as an NFS server, EMC Celerra unified storage is used as a TFTP server that provides a bootstrap image when the virtual desktops PXE boot. To configure the Celerra TFTP server, complete the following steps (a consolidated example follows the steps):

1. Enable the TFTP service by using the following command syntax:

   server_tftp <movername> -service -start

2. Set the TFTP working directory and enable read/write access for file transfer by using the following command syntax. It is assumed that the path name references a file system created in RAID group 0, as shown in Disk layout for 10 building blocks on page 17.

   server_tftp <movername> -set -path <pathname> -readaccess all -writeaccess all

3. Use a TFTP client of your choice to upload the ARDBP32.BIN bootstrap file from C:\Documents and Settings\All Users\Application Data\Citrix\Provisioning Services\Tftpboot on the Provisioning Server to the Celerra TFTP server.

4. Set the TFTP working directory access to read-only to prevent accidental modification of the bootstrap file:

   server_tftp <movername> -set -path <pathname> -writeaccess none
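As a consolidated illustration, the sequence on a Data Mover named server_2 with a TFTP working directory of /tftp might look like the following; the Data Mover name and path are assumptions:

   $ server_tftp server_2 -service -start
   $ server_tftp server_2 -set -path /tftp -readaccess all -writeaccess all
   # upload ARDBP32.BIN to the Celerra with a TFTP client, then lock the directory
   $ server_tftp server_2 -set -path /tftp -writeaccess none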


Configure boot options 66 and 67 on the DHCP server

In order for the virtual desktops to PXE boot successfully from the bootstrap image supplied by the Provisioning Servers, the DHCP server must have boot options 66 and 67 configured. To configure boot options 66 and 67 on the DHCP server, complete the following steps (a command-line alternative is sketched after the steps):

1. On the Microsoft DHCP server, select Scope Options.
2. Select 066 Boot Server Host Name.
3. Enter the IP address of the Data Mover configured as the TFTP server in the String value box.
4. Similarly, enable 067 Bootfile Name and enter ARDBP32.BIN in the String value box. The ARDBP32.BIN bootstrap image is loaded on a virtual desktop before the vDisk image is streamed from the Provisioning Server.
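The same options can also be set from a command prompt on the DHCP server by using netsh. This is an illustrative sketch only; the scope address and Data Mover IP address are assumptions:

   netsh dhcp server scope 10.0.0.0 set optionvalue 066 STRING "10.0.0.50"
   netsh dhcp server scope 10.0.0.0 set optionvalue 067 STRING "ARDBP32.BIN"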


Task 4: Configure and provision the master virtual machine template

Create a virtual machine template for virtual desktops

To create a virtual machine template for the virtual desktops, create a virtual machine using the Create New Virtual Machine wizard, edit the settings, and convert it into a template. This validated solution uses Microsoft Windows XP (32-bit edition) as the virtual desktop guest operating system. Ensure that the virtual machine is allocated one vCPU, 512 MB of RAM, a 3 GB "thin" virtual hard disk, and a network adapter that uses the vmxnet2 driver. The virtual hard disk on the new virtual machine is neither aligned nor formatted. To align and format the virtual hard disk, complete the following steps:

1. Edit the virtual machine settings.
2. Remove the hard disk by clicking Remove.
3. Attach the hard disk to another Windows machine.
4. Align the hard disk by using the diskpar or diskpart utility.
5. Quick format the hard disk as an NTFS volume.
6. Remove the hard disk from the proxy Windows machine.
7. Attach the hard disk back to the new virtual machine by clicking Add, as shown in the figure.

Once these modifications are made, convert the virtual machine into a virtual machine template.


Note: The virtual desktop virtual machines will be created in the same datastore where the virtual hard disk of the template machine resides. If the virtual desktops are distributed among multiple datastores, it is easier to clone the virtual machine template such that there is a template in each datastore.

“New hardware found” message

When the virtual hard disk is attached to the master vDisk as a write-cache drive for the first time, Windows will detect the drive as new hardware and prompt for a reboot as soon as a virtual desktop session begins. To avoid such a reboot, attach the virtual hard disk to the master virtual machine before its image is cloned to the vDisk such that the vDisk image contains the disk signature that will be recognized when the virtual desktops are started.


Task 5: Deploy virtual desktops

Appropriate access to vCenter SDK

DDC and the XenDesktop Setup Wizard require appropriate access to communicate with the SDK of the VMware vCenter Server. This is achieved by one of the following methods, depending on the security requirements.

HTTPS access to vCenter SDK

1. On the VMware vCenter Server, replace the default SSL certificate. The Replacing vCenter Server Certificates paper on the VMware website provides more details on how to replace the default SSL certificate.
2. Open an MMC and the Certificates snap-in on the Desktop Delivery Controllers and Provisioning Servers.
3. Select Certificates > Trusted Root Certification Authorities > Certificates and import the trusted root certificate for the SSL certificate created in step 1.

HTTP access to vCenter SDK

1. Log in to the vCenter Server and open the C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\proxy.xml file.

2. Navigate to the tag where serverNamespace is /sdk. Do not modify the /sdkTunnel properties.

   <e id="5">
     <_type>vim.ProxyService.LocalServiceSpec</_type>
     <accessMode>httpAndHttps</accessMode>
     <port>8085</port>
     <serverNamespace>/sdk</serverNamespace>
   </e>

3. Change accessMode to httpAndHttps. Alternatively, set accessMode to httpOnly to disable HTTPS.

4. Save the file and restart the vmware-hostd process by using the following command. You may have to reboot the vCenter Server if the SDK is inaccessible after restarting the process:

   service mgmt-vmware restart
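Note that service mgmt-vmware restart is the command used on the service console of an ESX host. If proxy.xml is edited on the Windows-based vCenter Server (as the file path above suggests), an alternative, offered here only as an assumption and not taken from this guide, is to restart the VirtualCenter Windows service instead:

   net stop "VMware VirtualCenter Server"
   net start "VMware VirtualCenter Server"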


XenDesktop Setup Wizard

The XenDesktop Setup Wizard installed on the Provisioning Server simplifies virtual desktop deployment and can rapidly provision a large number of desktops. To run this wizard, complete the following steps:

1. Select Start > All Programs > Citrix > Administration Tools > XenDesktop Setup Wizard on the Provisioning Server. The Welcome to XenDesktop Setup Wizard page appears.
2. Click Next. The Desktop Farm page appears.
3. Select the relevant farm name from the Desktop farm list, which shows the available farms.
4. Click Next. Before proceeding to the Hosting Infrastructure page, complete the steps described in Appropriate access to vCenter SDK on page 46.
5. On the Hosting Infrastructure page, select VMware virtualization as the hosting infrastructure. Type the URL of the vCenter Server SDK and click Next.


Note: You will be prompted to specify the user credentials for the VMware vCenter Server.

6. On the Virtual Machine Template page, select the virtual machine template that you want to use as a template for the virtual desktops. These virtual machine templates are retrieved from the vCenter Server.
7. Click Next. The Virtual Disk (vDisk) page appears.
8. Select the vDisk from which the virtual desktops will be created. Only vDisks in standard mode appear. As shown in the following figure, the list of existing device collections contains only the device collections that belong to the same site as the vDisk.


9. Click Next. The Virtual Desktops page appears.
10. Enter the following and click Next:
    • The number of desktops to create.
    • The common name to use for all the desktops.
    • The start number used to enumerate the newly created desktops. This sequence number is appended to the common name to form each virtual desktop name.

    The Organizational Unit Location page appears.

11. Select the OU to which the desktops will be added and click Next.


The Desktop Group page appears.

12. Specify the group of the Desktop Delivery Services to which the desktops will be added, and click Next. The Desktop Creation page appears.
13. Ensure that the details are correct and then click Next to create the desktops.


The Summary page appears.

Note: Clicking Next starts an irreversible process of creating desktops, which also includes creating computer objects in Active Directory.


Chapter 6: Testing and Validation

Overview

Introduction

This solution for Citrix XenDesktop 4 on EMC Celerra explores several configurations that can be used to implement a 1,000-user environment using EMC Celerra.

Contents

This section contains the following topics:

Topic                                               See Page
Overview                                                  52
Testing overview                                          52
Testing tools                                             52
Test results                                              55
Result analysis of Desktop Delivery Controller            56
Result analysis of Provisioning Server                    59
Result analysis of the vCenter Server                     62
Result analysis of SQL Server                             64
Result analysis of ESX servers                            67
Result analysis of Celerra unified storage                70
Login storm scenario                                      78
Test summary                                              80

Testing overview

Introduction

This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing was to characterize the end-to-end solution and component subsystem response under a reasonable load for Citrix XenDesktop 4 with Celerra NS-120 over NFS.

Testing tools

Introduction

To apply a reasonable real-world user workload, a third-party benchmarking tool, LoginVSI from Login Consultants, was used. LoginVSI simulates a VDI workload by using an AutoIT script within each desktop session to automate the execution of generic applications such as Microsoft Office 2007, Internet Explorer, Acrobat Reader, Notepad, and other third-party software.


LoginVSI – Test methodology

Virtual Session Index (VSI) provides guidance to gauge the maximum number of users a desktop environment can support. The LoginVSI workload can be categorized as light, medium, heavy, or custom. Medium is the only workload that is available in both the VSI Express (free) edition and the Pro edition. The VSI Pro edition and the medium workload were chosen for the testing. The medium workload has the following characteristics:

• Emulates a medium knowledge worker using Office, Internet Explorer, and PDF.
• Once a session is started, the medium workload repeats every 12 minutes.
• The response time is measured every 2 minutes during each loop.
• The medium workload opens up to five applications simultaneously.
• The type rate is 160 ms for each character.
• The medium workload in VSI 2.0 is approximately 35 percent more resource-intensive than in VSI 1.0.
• Approximately 2 minutes of idle time is included to simulate real-world users.

Each loop of the medium workload opens and uses:

• Outlook 2007: Browse 10 messages.
• Internet Explorer: One instance is left open (BBC.co.uk). One instance browses Wired.com, Lonelyplanet.com, and the heavy flash application gettheglass.com (not used with the MediumNoFlash workload).
• Word 2007: One instance to measure response time and one instance to review and edit the document.
• Bullzip PDF Printer and Acrobat Reader: The Word document is printed and the PDF is reviewed.
• Excel 2007: A very large randomized sheet is opened.
• PowerPoint 2007: A presentation is reviewed and edited.
• 7-zip: Using the command line version, the output of the session is zipped.

The current LoginVSI version is 2.1.2. This version has a gating metric called VSImax that measures the response time of five operations:

1. Maximizing Microsoft Word.
2. Starting the File Open dialog box.
3. Starting the Search and Replace dialog box.
4. Starting the Print dialog box.
5. Starting Notepad.

The LoginVSI workload is gradually increased by starting desktop sessions one after another at a specified interval. Although the interval can be customized, the default interval of 1 second is used during the testing. The desktop infrastructure is considered saturated when the average response time of three consecutive users crosses the 2,000 ms threshold. The administrator guide available at www.loginconsultants.com provides more information on the LoginVSI tool.

LoginVSI launcher

A LoginVSI launcher is a Windows system that launches desktop sessions on target virtual desktop machines. There are two types of launchers — master and slave. There is only one master in a given test bed and there can be many slave launchers as required. Launchers coordinate the start of the sessions using a common CIFS share. In this validated testing, the share is created on a Celerra file system that resides in the 4+1 RAID 5 group as shown in Disk layout for 10 building blocks on page 17.


The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM when the GDI limit has not been tuned (the default). With the GDI limit tuned, this limit extends to 60 sessions per two-core machine. In this validated testing, 1,000 desktop sessions were launched from 24 launcher virtual machines, resulting in 41 or 42 sessions established per launcher. Each launcher virtual machine was allocated two vCPUs and 4 GB of RAM, and no system bottlenecks were encountered.


Test results

Result summary

The following graph shows the response time compared to the number of active desktop sessions, as generated by the LoginVSI launchers. It shows that the average response time increases marginally as the user count increases. Throughout the test run, the average response time stays below 300 ms, which leaves plenty of headroom below the 2,000 ms gating metric. The maximum response time increases nominally as the user count increases, with some spikes; however, it never exceeds 3,000 ms.



Result analysis of Desktop Delivery Controller

Introduction

Since the two DDCs are load balanced to host 1,000 desktops, their performance counters are comparable. As a result, only the statistics for the first DDC are reported in the following sections.

CPU utilization

The average percentage processor time is recorded at 8.42 percent, with occasional spikes that reach as high as 65 percent. The percentage processor time is reported as the average across the two vCPUs.


Memory utilization

Each DDC virtual machine was configured with a RAM of 4 GB. The memory utilization fluctuates between 1 GB and 2.2 GB. The average utilization is around 1.5 GB, consuming less than half of the available memory.

Disk throughput

The Windows operating system and the XenDesktop software were installed on a local drive for each DDC. As seen in the following graph, despite a couple of spikes occurring at the end of the test run, the average disk throughput is about 28 KB/s.


Network throughput

Each DDC virtual machine was configured with a gigabit adapter that uses the vmxnet2 driver to manage the virtual desktops. An average transfer rate of 443 KB/s translates to 3.5 Mb/s. A surge of 758 KB/s (or 6 Mb/s) was measured at the end of the test run due to concurrent users logging off.


Result analysis of Provisioning Server

Introduction

Since the two PVSs are load balanced to host 1,000 desktops, their performance counters are comparable. As a result, this section covers only the statistics for the first PVS.

CPU utilization

Four vCPUs were configured for each PVS server in anticipation of the intense network activity required to communicate with 1,000 desktops. As the following graph shows, this is more than is needed; a virtual machine with two vCPUs would suffice.


Memory utilization

Each PVS virtual machine was configured with a RAM of 4 GB. The memory utilization remains steady in the range of 1.3 GB to 1.8 GB.

Disk throughput

The following graph shows the disk throughput measured for the physical disk that stores the master vDisk. Since PVS servers cache the vDisk data blocks in memory, the initial read activity is observed at 4 MB/s. Negligible disk activity is observed thereafter.


Network throughput

Each PVS virtual machine was configured with a gigabit adapter that uses the vmxnet2 driver to stream the vDisk image to the virtual desktops. The average network throughput is recorded at 4 MB/s (or 32 Mb/s). The maximum network throughput remains below 30 MB/s (or 240 Mb/s) even during the burst of activity caused by concurrent user logoff towards the end of the run.


Result analysis of the vCenter Server

Introduction

The vCenter Server maintains two clusters of ESX servers. Each cluster contains 500 desktop virtual machines that are hosted on eight ESX servers.

CPU utilization

The vCenter Server virtual machine is configured with two vCPUs. The average CPU utilization is less than 4 percent throughout the test, with periodic surges that peak at 81 percent.


Memory utilization

A RAM of 6 GB was allocated to the vCenter Server virtual machine. Committed bytes never exceeded 2.55 GB. The amount of allocated memory could have been scaled down to 4 GB.

Disk throughput

Windows operating system and vCenter Server software were installed on a local drive. There is minimal disk I/O activity as seen in the following graph.


Network throughput

The vCenter Server was configured with a gigabit adapter that uses the vmxnet2 driver. The majority of network activity comes from the DDCs that manipulate and detect the state of each virtual desktop. The average network throughput is measured at 17.6 KB/s (or 141 Kbps). Logoff activity towards the end of the run triggers a spike of 782 KB/s (or 6.3 Mbps).

Result analysis of SQL Server

Introduction

Three databases were created on SQL Server, which is the central repository of the DDC, PVS, and vCenter Server configurations. The database for the vCenter Server grows to 5.3 GB, the largest of the three databases. The DDC and PVS databases require merely 10 MB and 5 MB, respectively.

CPU utilization

The SQL Server virtual machine was configured with two vCPUs. The average CPU utilization is less than 2 percent throughout the test, with periodic surges that peak at 65 percent.


Memory utilization

A RAM of 6 GB was allocated to the SQL Server virtual machine. Committed bytes never exceeded 3.5 GB. The amount of allocated memory could have been scaled down by 1 GB.


Disk throughput

Windows operating system and SQL server software were installed on a local drive. The average disk throughput is below 392 KB/s while the maximum throughput is recorded around 45 MB/s.

Network throughput

The SQL server was configured with a gigabit adapter that uses the vmxnet2 driver. The average network throughput is measured at 14.5 KB/s (or 116 Kbps). The maximum throughput is recorded at 458 KB/s (or 3.7 Mbps).


Result analysis of ESX servers

Introduction

A thousand desktop virtual machines are spread among 16 ESX servers. Prior to testing, each ESX server hosts 61 to 62 virtual machines that are distributed evenly by using VMware Distributed Resource Scheduler (DRS) automation. The DRS automation level is set to manual during the test run to avoid unpredictable workload overhead caused by virtual machine migration. Because each ESX server hosts almost the same number of virtual machines, the esxtop performance counters are sampled from one of the 16 servers.

CPU utilization

Each of the 16 ESX servers has eight 2.6 GHz Intel Nehalem CPU cores. Each ESX server hosts up to 62 desktop virtual machines, yielding a ratio of 7.75 VMs per core. As the workload increases when more desktops become active during the test, CPU utilization grows steadily and reaches a maximum of 100 percent towards the end of the test run, when sessions begin to log off simultaneously and trigger a surge of CPU consumption.


Memory utilization

Each of the 16 ESX servers has 32 GB of memory installed. In theory, 62 virtual machines with 512 MB of RAM each add up to a total of 31 GB. The memory utilization barely exceeds 29 GB (32 GB minus 3 GB of free memory) because the ESX memory deduplication technology (transparent page sharing) is used.

Disk throughput

Each of the 16 ESX servers is configured with one internal hard disk. There is only nominal disk I/O targeted at the internal drive, as the majority of I/Os are redirected to the NFS datastores.


Network throughput

Each of the 16 ESX servers is configured with NIC teaming across two gigabit adapters to provide high availability. The following graph shows that the network utilization continues to increase as desktop sessions are ramped up. Despite the steady increase, the maximum throughput of 50 Mb/s remains well below the physical limit of the aggregated gigabit network bandwidth.


Result analysis of Celerra unified storage

Celerra Data Mover stats

The Celerra command server_stats with the following syntax was used to collect the performance data of the Data Mover every 30 seconds.

$ /nas/bin/server_stats <server_name> -summary basic,caches -table net,dvol,fsvol -interval 30 -format csv -titles once -terminationsummary yes

The following table provides some of the significant Data Mover statistics that were collected:

Measurement parameter        Average value
Network input                20,694 KB/s (20.2 MB/s)
Network output               1,589 KB/s (1.6 MB/s)
Dvol read                    575 KB/s (0.6 MB/s)
Dvol write                   22,339 KB/s (21.8 MB/s)
Buffer cache hit rate        98%
CPU utilization              12%

Data Mover CPU utilization

Although the Data Mover CPU utilization increases gradually with the test workload, it remains below 30 percent until the end of the test run, when the logoff storm triggers a spike of 55 percent.


Data Mover disk throughput

The following graph shows the trend of the disk throughput measured on the Data Mover. Its pattern mimics the CPU utilization trend: the disk throughput gradually increases and reaches a maximum of 67 MB/s.

Storage array CPU utilization

The CLARiiON Analyzer GUI was used to collect performance data about the storage array at 60-second intervals. The following figure shows the CPU utilization at the storage processor (SP) level. LUN ownership for the 10 building blocks that store the virtual desktops is balanced across the two SPs. However, because SP A also owns the LUNs that store the golden vDisk image and the CIFS file system that contains the roaming user profiles and LoginVSI results, additional CPU cycles are incurred on SP A, causing its maximum to reach nearly 60 percent, while SP B utilization reaches a maximum of only 48 percent.


Storage array total bandwidth

The storage array can easily handle the I/O bandwidth that the test workload generates, with less than 30 MB/s of I/O bandwidth observed for each SP.


Storage array total throughput

The maximum aggregated throughput at the SP level is recorded at 6,452 IOPS (4,145 + 2,307) towards the end of the test run. This includes all I/O activities for this storage array. The throughput attributable to the virtual desktops alone is reported at the LUN level below.


Storage array response time

The SP response time throughout the test run is less than 1 millisecond — an acceptable response time that suggests that the storage processor is not a bottleneck.


Most active LUN utilization

The following four graphs show the performance statistics for the busiest LUN among the 10 building blocks used to store the virtual desktops. As shown in the following figure, the maximum utilization for the most active LUN never exceeds 50 percent.


Most active LUN bandwidth

The maximum LUN bandwidth is measured at 5 MB/s for the most active LUN during the test. The storage array can easily handle the bandwidth requirement.


Most active LUN throughput

The maximum throughput measured for the most active LUN is slightly above 500 IOPS. Because the storage array write-cache absorbs some of the front-end IOPS before it writes to the physical disks, the LUN throughput can exceed the theoretical limit of what two 15k drives can yield in a building block.


Most active LUN response time

The response time of the most active LUN is around 1 millisecond throughout the test run, which suggests that there is no bottleneck at the LUN level.

Login storm scenario

Introduction

One of the areas of most concern in a VDI implementation is the what-if scenario of login and boot storms. Given that the DDC has an option to adjust the idle desktop count, it is recommended to tune this parameter to power up enough virtual desktops ahead of business opening or peak hours and thereby alleviate a boot storm scenario. The impact of a login storm, on the other hand, may be minimized by keeping desktop users logged in as long as possible; however, this is beyond the control of the desktop administrators. The following section examines the worst-case scenario in which logins occur in rapid succession.

Login timing

To simulate a login storm, 500 desktops are initially powered up into steady state by setting the idle desktop count to 500. The login time of each session is then measured by starting a LoginVSI test that establishes the sessions with a custom interval of five seconds. The 500 sessions are logged in within approximately 42 minutes (500 x 5 s = 2,500 s), a period that models the burst of login activity that takes place in the opening hour of a production environment.


The LoginVSI tool has a built-in login timer that measures from the start of the logon script defined in the Active Directory group policy to the start of the LoginVSI workload for each session. Although it does not measure the total login time from an end-to-end user perspective, the measurement gives a good indication of how sessions will be affected in a login storm scenario. The following figure shows the trend of the login time in seconds as sessions are started in rapid succession. The average login time for 500 sessions is approximately 5 seconds. The maximum login time is recorded at 29 seconds with a little over 300 concurrent sessions, while the minimum login time is around 2 seconds. It is concluded that while some desktop users might experience a slightly longer login delay during a login storm, most users should receive their desktop sessions with a reasonable delay.


Test summary

Summary

The following conclusions can be drawn from the tests:

• RAID 10 is the preferred RAID type over RAID 5 in a XenDesktop 4 deployment due to the write-intensive nature of the PVS write cache area.
• The recommended number of 100 desktops per two-disk RAID 10 building block is based on the medium workload generated by LoginVSI. Individual sizing requirements must be calibrated based on both capacity planning and the workload characteristics of a production environment.
• The ratio of 7.75 virtual machines per CPU core measured in the test should be used as a guideline in sizing. Recall that the ESX CPU utilization approaches nearly 100 percent at this ratio, so it would be wise to scale back on this ratio or reduce each user's workload to reserve headroom for unforeseen peaks in activity such as a boot storm.
• Boot and login/logoff storms need to be taken into consideration when sizing a VDI implementation. While XenDesktop 4 has an option to cope with a boot storm, care should be taken to monitor the environment to minimize the impact of potential login/logoff storms.

