
Technical Report

Running Microsoft Enterprise Applications on VMware vSphere, NetApp Unified Storage, and Cisco Unified Fabric VMware and Microsoft Solution Engineering, NetApp June 2011 | TR-3785

EXECUTIVE SUMMARY This report applies to Microsoft® Exchange Server, SQL Server®, and SharePoint® Server mixed workload on VMware® vSphere™ 4, NetApp® unified storage (FC, iSCSI, and NFS), and Cisco Nexus® unified fabric.


TABLE OF CONTENTS

1 INTRODUCTION
2 SOLUTION SUMMARY
3 FIBRE CHANNEL SOLUTION DESIGN
   3.1 HIGH-LEVEL SOLUTION ARCHITECTURE
   3.2 SOLUTION HARDWARE AND SOFTWARE REQUIREMENTS
   3.3 SOLUTION ARCHITECTURE DETAILS
   3.4 STORAGE SIZING
   3.5 BACKUP AND RESTORE ARCHITECTURE
4 FIBRE CHANNEL SOLUTION VALIDATION
   4.1 STORAGE EFFICIENCY
   4.2 PERFORMANCE VALIDATION
   4.3 VMWARE VMOTION, HA, AND DRS VALIDATION
   4.4 BACKUP AND RESTORE VALIDATION
5 NETWORK FILE SYSTEM SOLUTION DESIGN
   5.1 HIGH-LEVEL SOLUTION ARCHITECTURE
   5.2 SOLUTION HARDWARE AND SOFTWARE REQUIREMENTS
   5.3 SOLUTION ARCHITECTURE DETAILS
   5.4 STORAGE SIZING
   5.5 BACKUP AND RESTORE ARCHITECTURE
6 NFS SOLUTION VALIDATION
   6.1 STORAGE EFFICIENCY
   6.2 PERFORMANCE VALIDATION
   6.3 VMWARE VMOTION, HA, AND DRS VALIDATION
   6.4 BACKUP AND RESTORE VALIDATION
7 ISCSI SOLUTION DESIGN
   7.1 HIGH-LEVEL SOLUTION ARCHITECTURE
   7.2 SOLUTION HARDWARE AND SOFTWARE REQUIREMENTS
   7.3 ISCSI SOLUTION ARCHITECTURE DETAILS
   7.4 BACKUP AND RESTORE ARCHITECTURE
8 ISCSI SOLUTION VALIDATION
   8.1 STORAGE EFFICIENCY
   8.2 PERFORMANCE VALIDATION
   8.3 VMWARE VMOTION, HA, AND DRS VALIDATION
   8.4 BACKUP AND RESTORE VALIDATION
9 SUMMARY


10 ACKNOWLEDGEMENTS
11 FEEDBACK

LIST OF TABLES

Table 1) Exchange, SQL Server, and SharePoint VM configuration.
Table 2) Hardware configuration.
Table 3) Software configuration.
Table 4) Exchange 2010 backup test results.
Table 5) Exchange 2010 restore test results.
Table 6) SQL Server 2008 R2 backup test results.
Table 7) SQL Server 2008 R2 restore test results.
Table 8) SharePoint backup test results.
Table 9) SharePoint restore test results.
Table 10) Hardware configuration.
Table 11) Software configuration.
Table 12) SQL Server 2008 R2 backup test results.
Table 13) SQL Server 2008 R2 restore test results.
Table 14) SharePoint backup test results.
Table 15) SharePoint restore test results.
Table 16) Hardware configuration.
Table 17) Software components.
Table 18) Exchange Server 2010 backup test results.
Table 19) Exchange Server 2010 restore test results.
Table 20) SQL Server 2008 R2 backup test results.
Table 21) SQL Server 2008 R2 restore test results.
Table 22) SharePoint backup test results.
Table 23) SharePoint restore test results.

LIST OF FIGURES

Figure 1) High-level solution architecture.
Figure 2) ESXi host network architecture.
Figure 3) Storage network architecture.
Figure 4) NetApp storage aggregate layout.
Figure 5) NetApp storage volume layout.
Figure 6) Microsoft Exchange Server 2010 datastore layout.
Figure 7) Microsoft Office SharePoint Server 2010 datastore layout.


Figure 8) Microsoft Office SharePoint Server 2010 datastore layout.
Figure 9) Microsoft SQL Server 2008 R2 datastore layout.
Figure 10) Microsoft SQL Server 2008 R2 datastore layout.
Figure 11) High-level backup architecture.
Figure 12) NetApp System Manager screenshot showing multiple levels of storage efficiency.
Figure 13) High-level solution architecture.
Figure 14) ESXi host network architecture.
Figure 15) Storage network architecture.
Figure 16) NetApp storage aggregate layout.
Figure 17) NetApp storage volume layout.
Figure 18) Microsoft Office SharePoint Server 2010 datastore layout.
Figure 19) Microsoft Office SharePoint Server 2010 datastore layout.
Figure 20) Microsoft SQL Server 2008 R2 datastore layout.
Figure 21) Microsoft SQL Server 2008 R2 datastore layout.
Figure 22) High-level backup architecture.
Figure 23) NetApp System Manager screenshot showing multiple levels of storage efficiency.
Figure 24) High-level solution architecture.
Figure 25) ESXi host network architecture.
Figure 26) Storage network architecture.
Figure 27) NetApp storage aggregate layout.
Figure 28) NetApp volume layout.
Figure 29) Microsoft Exchange Server 2010 datastore layout.
Figure 30) Microsoft Office SharePoint Server 2010 datastore layout.
Figure 31) Microsoft Office SharePoint Server 2010 datastore layout.
Figure 32) Microsoft SQL Server 2008 R2 datastore layout.
Figure 33) Microsoft SQL Server 2008 R2 datastore layout.
Figure 34) High-level backup architecture.
Figure 35) NetApp System Manager screenshot showing multiple levels of storage efficiency.


1 INTRODUCTION

As customers move toward their goal of 100% virtualized data centers, they increasingly look for ways to bring the benefits of VMware virtualization to their mission-critical Microsoft applications. Customers planning a new deployment, performing an upgrade, or working toward a fully virtualized data center have an ideal opportunity to transition to a VMware vSphere virtual infrastructure built on NetApp unified storage.

This document provides guidance on how to design and architect a scalable Microsoft applications mixed workload solution using a highly available VMware vSphere 4 virtual infrastructure and NetApp unified storage. It highlights the flexibility of leveraging either a Fibre Channel (FC) protocol-based storage solution or an IP-based solution (that is, iSCSI and Network File System [NFS]) for hosting virtual machines. It also describes the NetApp backup and recovery solution for the Microsoft applications. The FC, iSCSI, and NFS-based solutions are applicable to all enterprise types (large, midsize, and SMB) and can be scaled up or down based on business requirements. Some key benefits of the overall solution are:

• Reduced costs with VMware virtualization. For many organizations, upgrading to newer Microsoft server applications without virtualization can mean investing in more server hardware for an application that has already become excessively costly to run. VMware virtualization can unlock the full power of the hardware by running multiple workloads on each system. This can provide a cost-effective solution and potentially higher ROI when compared to deployments without virtualization.

• Advanced NetApp unified and efficient storage solutions. Customers can deploy Microsoft Exchange, SQL Server, and SharePoint on storage solutions that leverage existing networking infrastructure such as FC, iSCSI, and NFS, which can offer a very cost-effective approach. NetApp FAS and V-Series storage arrays have been fully tested and certified for use in FC and IP-based VMware environments. Also, by leveraging NetApp storage efficiency and intelligent caching capabilities across all the protocols, customers can save significantly on their storage investment without tradeoffs.

• High availability. A platform enabled by VMware can provide high availability (HA) for Microsoft server applications without the need for clustering at the virtual machine (VM) level. Virtual machines are no longer tied to the underlying server hardware and can be moved across servers at any time with VMware VMotion®. VMware HA provides server hardware fault tolerance for every VM and offers greater levels of availability over solutions designed to protect just the server.

• Advanced backup and recovery solutions. The NetApp backup and recovery solution is built using integrated VMware, Microsoft, and NetApp technologies for advanced, application-aware data protection. Deduplication-aware remote replication for disaster recovery with NetApp SnapMirror® provides an end-to-end data protection solution.

The primary Microsoft applications virtualized are:

• Microsoft Exchange Server 2010
• Microsoft Office SharePoint Server 2010
• Microsoft SQL Server 2008 R2

The key highlights of this solution are:

• Microsoft applications virtualization with VMware vSphere 4

• Storage efficiency with NetApp primary storage deduplication and thin provisioning without any negative tradeoffs

• Scalability and ease of management with NetApp deduplication-aware FC, iSCSI, and NFS datastores, VMware vSphere 4, NetApp unified storage, and a backup and granular recovery solution

• Efficient, deduplication-aware, application-consistent backup and recovery with NetApp SnapManager® for Virtual Infrastructure (SMVI), now part of NetApp Virtual Storage Console (VSC), SnapManager for Exchange (SME), SnapManager for SQL Server (SMSQL), SnapManager for Microsoft Office SharePoint Server (SMOSS), and integrated NetApp SnapMirror remote replication for the FC and iSCSI-based solutions

• Efficient, deduplication-aware, application-consistent backup and recovery with VMware snapshots leveraging VMware VSS, NetApp SMVI, and integrated NetApp SnapMirror remote replication for the NFS-based solution

For more information about the best practices followed in this architecture, see these guides:

• TR-3749: NetApp and VMware vSphere 4 Storage Best Practices

• TR-3845: SnapManager 6.0 for Microsoft Exchange Best Practices Guide
• TR-3715: SnapManager for Microsoft Office SharePoint Server: Backup and Recovery Guide
• TR-3505: NetApp Deduplication for FAS and V-Series Best Practices Guide
• TR-3737: SMVI 2.0 Best Practices

2 SOLUTION SUMMARY

The solution showcases virtualizing Microsoft applications on VMware vSphere 4.1 virtual infrastructure and NetApp unified storage, achieving significant storage efficiency, performance, operational agility, and efficient, application-consistent, deduplication-aware data protection. Results of the testing demonstrate that the performance of Microsoft applications on VMware vSphere and NetApp storage in this solution is suitable for production environments and is well within Microsoft best practice recommendations.

Features of the VMware vSphere platform, including VMware VMotion, HA, and Distributed Resource Scheduler (DRS), were tested and demonstrated substantial increases in overall flexibility and availability. This guide describes the solution architecture and includes hands-on test results of typical administrative activities such as application backup and restore.

The workload virtualized in all the FC, iSCSI, and NFS-based solutions is as follows:

• Microsoft Exchange 2010. 3,000 heavy users with 250MB mailbox per user and 0.33 IOPS per user
• Microsoft SharePoint 2010. 3,000 users with 390MB space per user
• SQL Server 2008 R2. 3,000 users; 10 databases using OLTP, DSS, and mixed workloads such as CRM (sales and manufacturing)
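As a rough, back-of-envelope illustration (not part of the original test plan), the per-user figures above translate into the following aggregate capacity and IOPS estimates; the short Python sketch below uses only the numbers stated in this list.

```python
# Back-of-envelope translation of the tested workload into aggregate demand.
# The inputs come straight from the workload definition above; the calculation
# itself is illustrative and not part of the validated test plan.
exchange_users = 3000
exchange_mailbox_mb = 250
exchange_iops_per_user = 0.33

sharepoint_users = 3000
sharepoint_mb_per_user = 390

exchange_capacity_gb = exchange_users * exchange_mailbox_mb / 1024
exchange_iops = exchange_users * exchange_iops_per_user
sharepoint_capacity_gb = sharepoint_users * sharepoint_mb_per_user / 1024

print(f"Exchange: ~{exchange_capacity_gb:.0f}GB of mailbox data, ~{exchange_iops:.0f} IOPS")
print(f"SharePoint: ~{sharepoint_capacity_gb:.0f}GB of user content")
# Exchange: ~732GB of mailbox data, ~990 IOPS
# SharePoint: ~1143GB of user content
```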

Table 1 shows the virtual CPU and memory configuration for Exchange, SharePoint, and SQL Server VMs.

Table 1) Exchange, SQL Server, and SharePoint VM configuration.

Microsoft Application      Virtual Machine                              Virtual CPU    Memory (GB)
Exchange 2010              Two Exchange mailbox servers                 2              9.5
Exchange 2010              Two Exchange CAS servers, two hub servers    1              1
SQL Server 2008 R2         Two SQL Server instances                     4              4
SharePoint Server 2010     Two Web/query servers                        2              2
SharePoint Server 2010     One index server, one SQL Server             4              4

The solution also includes Microsoft IIS Web servers and test and development servers, along with the following solution-monitoring and management tools from NetApp, VMware, and Microsoft:

• NetApp DataFabric® Manager (Operations Manager)
• VMware vCenter™ 4.1
• Microsoft System Center Operations Manager (SCOM) and Windows® Server Update Services (WSUS)

All the NetApp FC, iSCSI, and NFS-based solutions are scalable and can be easily tailored for deployments of any size.

Note: Hardware requirements vary for different environments, based on specific workload requirements.

3 FIBRE CHANNEL SOLUTION DESIGN

3.1 HIGH-LEVEL SOLUTION ARCHITECTURE

As shown in Figure 1, the infrastructure used for this solution validation involved three VMware ESXi 4.1 hosts running the mixed Microsoft applications workload described in section 2, with a total of 30 virtual machines hosted on NetApp shared storage. The VM operating system, installed applications, databases, and logs are all hosted on NetApp-based FC datastores. High availability is achieved by using VMware HA, NetApp active-active controllers, and Cisco Nexus 5020 switches. The backup and recovery solution component includes application-consistent point-in-time NetApp Snapshot® copies with NetApp SME, SMSQL, SMOSS, VSC, and NetApp SnapMirror replication, as discussed earlier.

Figure 1) High-level solution architecture.

3.2 SOLUTION HARDWARE AND SOFTWARE REQUIREMENTS

HARDWARE RESOURCES

The equipment listed in Table 2 was used in this configuration validation.


Table 2) Hardware configuration.

Primary Storage: One NetApp FAS6040 with active-active storage controllers; six disk shelves, each disk 300GB/15K/FC. Minimum revision: Data ONTAP® 8.0.1.
Networking: Two Cisco Nexus 5020 switches; one dual-port 10Gb Ethernet NIC per FAS6040 controller; one dual-port 4Gb QLogic HBA per FAS6040 controller.
Backup Storage: One NetApp FAS3040HA cluster; two disk shelves, 28 disks (14 per shelf), each disk 1TB/7200RPM/SATA. Minimum revision: Data ONTAP 8.0.1.
Three ESXi Hosts: 32GB RAM per host; two quad-core Xeon® processors; one dual-port 10Gb Ethernet NIC; one dual-port 4Gb QLogic HBA.

SOFTWARE RESOURCES

The software components in Table 3 were used in the configuration validation.

Table 3) Software configuration.

Primary Storage: Data ONTAP 8.0.1; FCP, ASIS, FlexClone®, SnapMirror, SnapRestore®, NearStore®, SnapManager for Microsoft SQL Server, SnapManager for Exchange, SnapManager for Microsoft Office SharePoint Server, and SnapDrive® for Windows licenses.
Backup Storage: Data ONTAP 8.0.1; FC, ASIS, SnapMirror, and NearStore licenses.
NetApp Management Software: NetApp Virtual Storage Console (VSC) 2.0.1; NetApp System Manager 2.0; NetApp SnapManager for Exchange 6.0; NetApp Single Mailbox Recovery for Exchange 6.0; NetApp SnapManager for SQL Server 5.1; NetApp SnapManager for SharePoint 6.0; NetApp SnapDrive 6.3; NetApp DataFabric Manager 4.0.1.
VMware vSphere Infrastructure: ESXi hosts VMware ESXi 4.1.0 (build 260247); vCenter Server 4.1.0; vCenter database SQL Server 2005.
Applications: Virtual machine operating system Windows Server 2008 R2 x64, Enterprise Edition; Microsoft Exchange Server 2010, Enterprise Edition; Microsoft Office SharePoint Server 2010, Enterprise Edition; Microsoft SQL Server 2008 R2, Enterprise Edition.

3.3 SOLUTION ARCHITECTURE DETAILS

VIRTUAL MACHINE LAYOUT

The FC solution described in this section uses a total of 30 virtual machines. This configuration simulates a real-world customer environment with the supporting utility and test and dev servers in addition to the primary Microsoft application servers.

Microsoft Applications VMs

• Microsoft Exchange Server 2010. Total six VMs (two mailbox servers, two hub servers, two CAS servers)

• Microsoft Office SharePoint Server 2010. Total four VMs (two Web front end/query, one index, one SQL Server 2008 R2)

• Microsoft SQL Server 2008 R2. Two VMs
• Microsoft IIS. Four VMs

Test and Dev VMs

• Windows Server 2008 R2. Four VMs


Utility VMs

One Microsoft WSUS, one Microsoft SCOM, one Microsoft Exchange LoadGen Tool, four SharePoint test workstations, one NetApp DataFabric Manager, one VMware vCenter 4.1, one VMware vCenter 4.1 database.

All the NetApp recommended FCP settings described in NetApp TR-3749: NetApp and VMware vSphere Storage Best Practices were set using the NetApp VSC vCenter plugin, directly from the vCenter GUI.

NETWORK ARCHITECTURE

In this solution, the network was composed of two Cisco Nexus 5020 switches for managing both the FC back-end storage traffic and the IP-based VM network, VMotion, and SnapMirror remote replication traffic. Because the Cisco Nexus switches used in this configuration support virtual port channeling (vPC), they provide a high level of redundancy, fault tolerance, and security. With the vPC feature, scalable Layer 2 topologies can be deployed, reducing the dependence on Spanning Tree Protocol for redundancy and loop avoidance. High cross-sectional bandwidth is also attained by the feature's ability to use all available physical links that interconnect the devices.

On the Cisco Nexus network, make sure of the following configurations:

• Configure the FC ports.
• Set up a management VLAN for the service console, a public VLAN for the virtual machine network, and a private, nonroutable VLAN for VMotion.
• Use a 10Gb connection between the two Cisco Nexus 5020 switches.
• Enable a vPC between the two Cisco Nexus 5020 switches. To use this feature, Cisco® NX-OS Software Release 4.1(3)N1 for Cisco Nexus 5000 series switches must be installed on the Cisco Nexus 5020s.

Cisco Nexus 5020 switches support not only 4Gb FC and 10Gb IP but also 1Gb Ethernet modules. Therefore, other Cisco switches can be used in conjunction with the Cisco Nexus 5020s to further scale out the virtualization and storage network.

ESXi Host Network Architecture

Figure 2 shows the virtual network layout for each ESXi host. Each ESXi host has two 10Gb Ethernet ports configured into different port groups as shown in the figure.


Figure 2) ESXi host network architecture.

Storage Network Layout

Figure 3 shows the FC storage network layout for the ESXi host connectivity with the NetApp storage controller over Cisco Nexus 5020 switches. For a complete plug-and-play FC multipathing solution, ALUA was enabled on the NetApp storage array for the ESXi cluster initiator group, and the Round Robin Path Selection Policy (PSP) was configured on the ESXi hosts (per the detailed instructions in TR-3749: NetApp and VMware vSphere Storage Best Practices). Also, all the other FC configuration best practices highlighted in TR-3749 were followed.


Figure 3) Storage network architecture.

STORAGE ARCHITECTURE

NetApp Storage Aggregate Layout

Figure 4 shows the NetApp storage aggregate layout for hosting the different data components for every VM. NetApp aggregates provide a large virtualized pool of storage capacity and disk IOPS to be used on demand by all the virtual machines hosted in the aggregate. This is comparable to VMware virtualization, where CPU and memory resources are pooled and leveraged on demand.


Figure 4) NetApp storage aggregate layout.

The aggregate sizing is based on the storage requirements for all the applications to meet the storage capacity, performance, and Snapshot backup requirement of an assumed workload. When sizing for your environment, consult with your NetApp SE about the exact storage configuration based on your individual requirements.

Note: In this solution, all of the aggregates hosting volumes required for SharePoint and Exchange VMs are hosted on one storage controller, and the aggregates hosting volumes for SQL Server are hosted on the second controller. This design decision was made with VMware vCenter Site Recovery Manager in mind, which we plan to add in a future release of this guide. VMware vCenter Site Recovery Manager requires all datastores hosting data for a VM to be on the same storage controller.

NetApp Storage Volume Layout

Figure 5 shows the NetApp storage volume layout for hosting the different data components for every VM.


Note: All of the volumes were thin provisioned to use the capacity on demand.

Each virtual machine had a 32GB C: drive (minimum requirements for Windows Server 2008 R2 per the VMware Guest Operating System Installation Guide) with the vmdk hosted on the VMFS datastore.

The Microsoft applications are deployed as follows:

• Case 1. The application server (Exchange, SQL Server, and SharePoint) database and log drives are hosted on FC-based raw device mapping (RDM) LUNs, directly created and connected inside the guest VMs using NetApp SnapDrive 6.3 software. This provides the flexibility of leveraging the NetApp and Microsoft application-integrated SnapManager products to achieve granular, automated backup and recovery.

• Case 2. The application server (SQL Server and SharePoint) database and log drives are hosted on FC VMFS datastores. Virtual machine disks (VMDKs) are created in the FC VMFS datastores using vCenter Server; this approach requires NetApp SnapDrive 6.3 and Virtual Storage Console 2.0.1 or later. This provides the flexibility of leveraging the NetApp and Microsoft application-integrated SnapManager products to achieve granular, automated backup and recovery.

Figure 5) NetApp storage volume layout.

Microsoft Exchange Server 2010 Datastore Layout

• Case 1. Figure 6 shows the datastore layout for the different data components of Microsoft Exchange Server 2010. The temporary data VM swap file (.vswp) has been separated out. This reduces the daily Snapshot copy change rate and facilitates faster completion of nightly primary storage deduplication operations. The database and transaction log FC RDM LUNs are hosted on separate volumes from the Windows OS and Exchange binaries, as shown in Figure 5.


Figure 6) Microsoft Exchange Server 2010 datastore layout.

Microsoft Office SharePoint Server 2010 Datastore Layout

• Case 1. Figure 7 shows the datastore layout for the different data components of Microsoft Office SharePoint Server 2010. The database and log files are hosted on separate FC RDM LUNs on a separate volume from the Windows OS and SharePoint binaries, as shown in Figure 5.

• Case 2. Figure 8 shows the datastore layout for the different data components of Microsoft Office SharePoint Server 2010. The database and log virtual machine disks (VMDKs) are hosted on separate FC VMFS datastores from the Windows OS and SharePoint binaries, as shown in Figure 5.


Figure 7) Microsoft Office SharePoint Server 2010 datastore layout.


Figure 8) Microsoft Office SharePoint Server 2010 datastore layout.

Microsoft SQL Server 2008 R2 Datastore Layout

• Case 1. Figure 9 shows the datastore layout for the different data components of Microsoft SQL Server 2008 R2. The SQL database and log files are hosted on FC RDM LUNs on a separate volume from the Windows OS and SQL Server binaries, as shown in Figure 5.

• Case 2. Figure 10 shows the datastore layout for the different data components of Microsoft SQL Server 2008 R2. The SQL database and log virtual machine disks (VMDKs) are hosted on separate FC VMFS datastores from the Windows OS and SQL Server binaries, as shown in Figure 5.


Figure 9) Microsoft SQL Server 2008 R2 datastore layout.


Figure 10) Microsoft SQL Server 2008 R2 datastore layout.

3.4 STORAGE SIZING

This section describes the storage sizing. These numbers vary from environment to environment, and you should consult your NetApp systems engineer about the exact sizing for your environment.

• Microsoft SQL Server 2008 R2. The SQL Server workload was divided into 10 separate databases using both OLTP and DSS workloads. The databases required 2.2TB of disk space, and the transaction logs required 4.5GB of disk space.

• Microsoft Office SharePoint Server 2010. SharePoint used 3.1TB of disk space with 80GB databases across multiple site collections on SQL Server. Also, 1GB of disk space was allocated for the transaction logs.

• Microsoft Exchange 2010. Exchange used 1.4TB of disk space for the databases and 800GB of disk space for the transaction logs.

• Datastores hosting VM C: drives (OS and application binaries and VM pagefile): (size of the VM C: drive + VM pagefile (1.5 to 3 times the VM memory) + 15% free space for VMware snapshots and non-vmdk files) * number of VMs in the datastore; a minimal sizing sketch of this formula follows this list.
   − Application VM datastore on controller A, hosting 14 VMs: ~600GB
   − Application VM datastore on controller B, hosting six VMs: ~260GB
   − Utility VM datastore on controller B, hosting eight VMs: ~340GB
   − VMware vCenter VM datastore on controller B, hosting two VMs: ~100GB
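The sizing rule above can be expressed as a small helper function. The following Python sketch is only an illustration of that formula; the example inputs (a 32GB C: drive, 4GB of VM memory, a 1.5x pagefile multiplier, and 14 VMs) are assumptions chosen for demonstration, not the exact per-VM values of the validated configuration.

```python
# Minimal sketch of the datastore sizing rule stated above:
# (C: drive size + pagefile (1.5 to 3 times VM memory) + 15% free space) * number of VMs.
def vm_datastore_size_gb(c_drive_gb, vm_memory_gb, num_vms,
                         pagefile_factor=1.5, free_space_pct=0.15):
    """Estimate the size of a datastore hosting VM C: drives and pagefiles."""
    per_vm = c_drive_gb + vm_memory_gb * pagefile_factor
    per_vm *= 1 + free_space_pct  # headroom for VMware snapshots and non-vmdk files
    return per_vm * num_vms

# Illustrative inputs only: 14 application VMs, 32GB C: drives, 4GB of memory each.
print(f"~{vm_datastore_size_gb(32, 4, 14):.0f}GB")  # ~612GB, in line with the ~600GB figure above
```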

Note: These storage requirements are before considering NetApp deduplication and thin provisioning. The NetApp deduplication and storage efficiency potential savings are discussed in section 4. The deduplication savings vary from environment to environment and application to application. For more information, see NetApp TR-3505.

3.5 BACKUP AND RESTORE ARCHITECTURE

BACKUP ARCHITECTURE DETAILS

One backup policy is used by NetApp VSC for backing up the VMFS datastores hosting the vmdk files with the OS and application binaries for the VMs.

For obtaining application-consistent backups for the Exchange, SQL Server, and SharePoint VMs, NetApp SnapManager for Exchange, SQL Server, and SharePoint were leveraged to perform scheduled backups of the transaction logs and databases and to initiate SnapMirror updates. The SnapManager products also provide granular recovery for these Microsoft applications. It is highly recommended that the VSC and application-specific SnapManager backups be scheduled so that they occur at different times.

Figure 11) High-level backup architecture.


RESTORE ARCHITECTURE DETAILS

Restores from Local Snapshot Backups

NetApp SnapManager products allow individual application-level granular recovery. Full VM-level recovery (OS, application binaries, and application data) is achieved by using both NetApp VSC and application-specific SnapManager restore functionality.

Restores from Remote SnapMirror Backups

1. Restore the VM OS and application binaries using VSC (a scripted sketch of steps a through c appears after this procedure).
   a. Quiesce and break the SnapMirror relationship.
   b. Set up a SnapMirror relationship back from the destination storage system on the remote site to the source storage system at the primary site.
   c. Quiesce and break the new SnapMirror relationship again.
   d. Mount the datastores on the ESXi host.
   e. Register the VMs from the restored datastore.

Note: Make sure that all the VM hard drives point to the correct vmdk files on the restored datastores.

2. Restore the application data using SnapManager.
   a. Invoke SnapRestore from within the application-specific SnapManager product inside the guest VM, as done in a physical environment.
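Steps 1a through 1c are storage-controller operations and can be scripted. The Python sketch below is a hypothetical illustration that drives Data ONTAP 7-Mode snapmirror commands over SSH; the controller names, volume name, and credential handling are placeholders and are not part of the validated configuration.

```python
# Hypothetical sketch of steps 1a-1c: reverse a SnapMirror relationship so the remote
# copy can be replicated back to the primary site before the datastore is mounted.
# Assumes Data ONTAP 7-Mode CLI over SSH; all names and credentials are placeholders.
import paramiko

def ontap_cmd(controller, command, user="root", password="changeme"):
    """Run one Data ONTAP CLI command on a controller over SSH and return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(controller, username=user, password=password)
    try:
        _, stdout, stderr = client.exec_command(command)
        out, err = stdout.read().decode(), stderr.read().decode()
        if err.strip():
            raise RuntimeError(f"{controller}: {command!r} failed: {err.strip()}")
        return out
    finally:
        client.close()

dr = "dr-fas3040"              # placeholder: remote-site (destination) controller
prod = "prod-fas6040"          # placeholder: primary-site (source) controller
vol = "fc_vm_os_datastore"     # placeholder: volume backing the VM OS datastore

# Step 1a: quiesce and break the existing relationship on the remote controller.
ontap_cmd(dr, f"snapmirror quiesce {vol}")
ontap_cmd(dr, f"snapmirror break {vol}")

# Step 1b: resynchronize in the reverse direction (remote site back to the primary site).
# The -f flag suppresses the interactive confirmation prompt.
ontap_cmd(prod, f"snapmirror resync -f -S {dr}:{vol} {vol}")

# Step 1c: once the reverse transfer completes (check 'snapmirror status'), quiesce and
# break the new relationship so the primary volume becomes writable again.
ontap_cmd(prod, f"snapmirror quiesce {vol}")
ontap_cmd(prod, f"snapmirror break {vol}")
```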

4 FIBRE CHANNEL SOLUTION VALIDATION

4.1 STORAGE EFFICIENCY

NetApp thin provisioning capabilities were enabled on all the datastores on the primary storage, and deduplication was enabled on the datastores hosting the VM OS and application binaries. The deduplication schedule was set to run once every night. Figure 12 shows a screenshot of NetApp System Manager showcasing 90% savings for a datastore hosting the OS and application binaries for four VMs (one SQL Server 2008 R2, one index server, and two Web/search servers). Similar storage savings were observed for other datastores hosting the OS and application binaries for the Exchange, SharePoint, test and dev, utility, and vCenter VMs.

As you scale out to virtualize your entire data center with hundreds to thousands of VMs, the storage efficiency will be even higher. Also note that NetApp’s intelligent caching capabilities (built natively in Data ONTAP and Flash Cache cards) strongly complement NetApp storage efficiency capabilities.

Storage savings for the application-specific data drives, such as SharePoint, Exchange, and SQL databases, vary from application to application and environment to environment. For savings specific to each application, see NetApp TR-3505.


Figure 12) NetApp System Manager screenshot showing multiple levels of storage efficiency.

4.2 PERFORMANCE VALIDATION

The storage configuration described in this guide was validated by configuring the environment and conducting performance tests using the application-specific tools described in this section. The tests were performed individually for SQL Server, SharePoint, and Exchange and also by running all these applications at the same time. The test results discussed in this section validate that the architecture is capable of handling the mixed workload described earlier.

MICROSOFT EXCHANGE 2010

The Microsoft Exchange Load Generation Tool was used to simulate a 3,000-heavy-user mail profile with 250MB per mailbox. Several eight-hour duration load tests were performed, both with and without NetApp deduplication enabled on the VM C: drives hosting the operating system and Exchange binaries.

VM Disk I/O Latency

For all the test cycles, the read and write latencies were well within the Microsoft recommendation listed here: technet.microsoft.com/en-us/library/aa995945.aspx.

VM CPU and Memory Utilization

Each Exchange mailbox server was configured with 9.5GB RAM [2GB + (1,500 users per mailbox server * 5MB)] and two virtual CPUs. For the entire eight-hour test cycle, there were no CPU or memory bottlenecks on the VMs or the ESXi host.
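The bracketed memory rule above can be sanity-checked in a couple of lines; this is just a restatement of the arithmetic already given, not an additional sizing recommendation.

```python
# Sanity check of the mailbox server memory rule quoted above:
# 2GB base allowance plus 5MB per user, with 1,500 users per mailbox server.
users_per_mailbox_server = 1500
base_gb = 2
per_user_mb = 5

ram_gb = base_gb + users_per_mailbox_server * per_user_mb / 1024
print(f"~{ram_gb:.1f}GB per mailbox server")  # ~9.3GB, provisioned as 9.5GB in this solution
```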


NetApp Storage Utilization Summary

For the entire eight-hour test cycle, the NetApp FAS6040 storage controller had more than enough capability to handle the workload for the 3,000-user Exchange environment that was tested. Also, there were no I/O bottlenecks on the storage array.

The NetApp Flash Cache offers significant performance benefits in an Exchange environment. For more information, see TR-3867: Using Flash Cache for Exchange 2010.

Also, see the following VMware white paper, which compares the performance of a 16,000-heavy-user Exchange environment across all the storage protocols (FC, iSCSI, NFS) on NetApp storage: VMware vSphere 4: Exchange Server on NFS, iSCSI, and Fibre Channel.

MICROSOFT SQL SERVER 2008 R2

The Microsoft SQLIOSim utility was used to stress test the storage architecture described earlier. Several load tests were performed, both with and without deduplication enabled on the VM C: drives hosting the operating system and SQL Server binaries.

VM Disk I/O Latency

For all the tests, the read and write latencies for the database files were well within the Microsoft recommendations.

VM CPU and Memory Utilization

Each SQL Server VM was configured with 4GB RAM and four virtual CPUs. For the entire duration of the test cycle, there were no CPU or memory bottlenecks on the VMs.

NetApp Storage Utilization Summary

For the entire duration of the test cycles, the NetApp FAS6040 storage controller had sufficient capability to handle the test workload for the SQL environment. Also, there were no I/O bottlenecks on the storage array.

MICROSOFT SHAREPOINT SERVER 2010

AvePoint SharePoint Test Environment Creator and Usage Simulator tools were used to populate and stress test the SharePoint environment. The user workload tested was 25% Web access, 25% list access, 25% list creation, and 25% document creation. Several two-hour load tests were performed with 25% of the users (750 out of the total 3,000 users) online at any point in time. Tests were conducted both with and without data deduplication enabled on the VM C: drives hosting the operating system and SharePoint binaries.

VM Disk I/O Latency

For all the tests, the read and write latencies for the database files were well within the Microsoft recommendations.

VM CPU and Memory Utilization

The Web servers were configured with 2GB RAM and two virtual CPUs, and the index and database servers were configured with 4GB RAM and four virtual CPUs. For the entire duration of the test cycles, there were no CPU or memory bottlenecks on any of the VMs.

NetApp Storage Utilization Summary

For the entire duration of the test cycles, the NetApp FAS6040 storage controller had sufficient capability to handle the test workload for the SharePoint environment. Also, there were no I/O bottlenecks on the storage array.

As mentioned earlier, the load tests for different applications were also conducted all at the same time. There were no performance bottlenecks on the storage controllers, network, ESXi hosts, or VMs.


4.3 VMWARE VMOTION, HA, AND DRS VALIDATION

During the load tests for different applications and also when all the applications were load tested at the same time, VMs were successfully migrated between different ESXi hosts using VMotion without any issues. Also, VMware HA and DRS were tested without any issues, demonstrating a high level of solution availability and resource utilization.

4.4 BACKUP AND RESTORE VALIDATION

MICROSOFT EXCHANGE SERVER 2010

Table 4 and Table 5 show the results of the Exchange 2010 backup and restore testing at different levels of granularity.

Table 4) Exchange 2010 backup test results.

Backup Level                Local Snapshot Backup    SnapMirror Remote Replication
Entire VM
Individual storage group

Table 5) Exchange 2010 restore test results.

Restore Level                         Restore from Local Snapshot Backup    Restore from SnapMirror Remote Replication
Entire VM
Individual storage group
Individual mailbox recovery (SMBR)

MICROSOFT SQL SERVER 2008 R2

Table 6 and Table 7 show the results of the SQL Server 2008 R2 backup and restore testing at different levels of granularity.

Table 6) SQL Server 2008 R2 backup test results.

Backup Level           Local Snapshot Backup    SnapMirror Remote Replication
Entire VM
Individual database

Table 7) SQL Server 2008 R2 restore test results.

Restore Level                   Restore from Local Snapshot Backup    Restore from SnapMirror Remote Replication
Entire VM
Individual database
Individual transaction level

MICROSOFT SHAREPOINT 2010

Table 8 and Table 9 show the results of the backup and restore testing for Microsoft Office SharePoint Server 2010 at different levels of granularity.

Table 8) SharePoint backup test results.

Backup Level             Local Snapshot Backup    SnapMirror Remote Replication
Entire SharePoint farm
Individual VMs

Table 9) SharePoint restore test results.

Restore Level             Restore from Local Snapshot Backup    Restore from SnapMirror Remote Replication
Entire SharePoint site
Item level

5 NETWORK FILE SYSTEM SOLUTION DESIGN

5.1 HIGH-LEVEL SOLUTION ARCHITECTURE

As shown in Figure 13, the infrastructure used for this solution validation involved three VMware ESXi 4.1 hosts running the mixed Microsoft applications workload described in section 2, with a total of 30 virtual machines hosted on NetApp shared storage. The VM operating system, installed applications, databases, and logs are all hosted on NetApp-based NFS datastores. Solution high availability is achieved by using VMware HA, NetApp active-active controllers, and Cisco Nexus 5020 switches. The backup and recovery solution component includes application-consistent point-in-time NetApp Snapshot copies with NetApp VSC and SnapMirror replication to a secondary site.


Figure 13) High-level solution architecture.

5.2 SOLUTION HARDWARE AND SOFTWARE REQUIREMENTS

HARDWARE RESOURCES

The following equipment was used in this configuration validation.

Table 10) Hardware configuration.

Primary Storage: One NetApp FAS6040HA cluster; six disk shelves, each disk 300GB/15K/FC. Minimum revision: Data ONTAP 8.0.1.
Networking: Two Cisco Nexus 5020 switches; one dual-port 10Gb Ethernet NIC per FAS6040 controller.
Backup Storage: One NetApp FAS3040HA cluster; two disk shelves, 28 disks (14 per shelf), each disk 1TB/7200RPM/SATA. Minimum revision: Data ONTAP 8.0.1.
Three ESXi Hosts: 32GB RAM per host; two quad-core Xeon processors; one dual-port 10Gb Ethernet NIC.

SOFTWARE RESOURCES

The following software components were used in the configuration validation.

Table 11) Software configuration.

Primary Storage: Data ONTAP 8.0.1; NFS, ASIS, FlexClone, SnapMirror, SnapRestore, and NearStore licenses.
Backup Storage: Data ONTAP 8.0.1; NFS, ASIS, SnapMirror, and NearStore licenses.
NetApp Management Software: NetApp Virtual Storage Console (VSC) 2.0.1; NetApp System Manager 2.0; NetApp DataFabric Manager 4.0.1; NetApp SnapManager for SQL Server 5.1; NetApp SnapManager for SharePoint 6.0; NetApp SnapDrive 6.3.
VMware vSphere Infrastructure: ESXi hosts VMware ESXi 4.1.0 (build 260247); vCenter Server 4.1.0; vCenter database SQL Server 2005.
Applications: Virtual machine operating system Windows Server 2008 R2 x64, Enterprise Edition; Microsoft Office SharePoint Server 2010, Enterprise Edition; Microsoft SQL Server 2008 R2, Enterprise Edition.


5.3 SOLUTION ARCHITECTURE DETAILS

VIRTUAL MACHINE LAYOUT

The solution described in this section uses a total of 30 virtual machines. This configuration simulates a real-world customer environment with the supporting utility and test and dev servers in addition to the primary Microsoft application servers.

Microsoft Applications VMs

• Microsoft Office SharePoint Server 2010. Total of four VMs (two Web front end/query, one index, one SQL Server 2008 R2)

• Microsoft SQL Server 2008 R2. Two VMs
• Microsoft IIS. Four VMs

Test and Dev VMs

• Windows Server 2008 R2. Four VMs.

Utility VMs

One Microsoft WSUS, one Microsoft SCOM, four SharePoint test workstations, one NetApp DataFabric Manager, one VMware vCenter 4.1, one VMware vCenter 4.1 database.

Note: All of the NetApp recommended NFS settings described in NetApp TR-3749: NetApp and VMware vSphere Storage Best Practices were set using the NetApp VSC vCenter plugin, directly from the vCenter GUI.

NETWORK ARCHITECTURE

In this solution, the network was composed of two Cisco Nexus 5020 switches. Because the Cisco Nexus switches used in this configuration support virtual port channeling (vPC), logical separation of the storage network from the rest of the network is achieved while still providing a high level of redundancy, fault tolerance, and security. With the vPC feature, scalable Layer 2 topologies can be deployed, reducing the dependence on Spanning Tree Protocol for redundancy and loop avoidance. High cross-sectional bandwidth is also attained by the feature's ability to use all available physical links that interconnect the devices.

On the Cisco Nexus network, make sure of the following configurations:

• Set up a management VLAN for the management network, a public VLAN for the virtual machine network, and a private, nonroutable VLAN for VMotion.
• Use a 10Gb connection between the two Cisco Nexus 5020 switches.
• Enable a vPC between the two Cisco Nexus 5020 switches. To use this feature, Cisco NX-OS Software Release 4.1(3)N1 for Cisco Nexus 5000 series switches must be installed on the Cisco Nexus 5020s.

While the Cisco Nexus 5020 switches are 10Gb, they do support 1Gb modules. Therefore, other Cisco switches can be used in conjunction with the Cisco Nexus 5020s in order to further scale out a virtualization and storage network.

ESXi Host Network Architecture

Figure 14 shows the virtual network layout for each ESXi host. Each ESXi host has two 10Gb Ethernet ports configured into different port groups as shown in the figure.


Figure 14) ESXi host network architecture.

Storage Network Layout

Figure 15 shows the storage network layout for the ESXi host connectivity with the NetApp storage controller over Cisco Nexus 5020 switches. Make sure to configure a nonroutable VLAN for the NFS storage traffic that passes between the NetApp storage controllers and the vSphere hosts. With this setup, the NFS traffic is kept completely contained, and security is more tightly controlled.

Also, it is important to have at least two physical Ethernet switches for proper network redundancy in your VMware environment.


Figure 15) Storage network architecture.

STORAGE ARCHITECTURE

NetApp Storage Aggregate Layout

Figure 16 shows the NetApp storage aggregate layout for hosting the different data components for every VM. NetApp aggregates provide a large virtualized pool of storage capacity and disk IOPS to be used on demand by all the virtual machines hosted in the aggregate. This can be compared to VMware virtualization, where CPU and memory resources are pooled and leveraged on demand.


Figure 16) NetApp storage aggregate layout.

The aggregate sizing is based on the storage requirements for all the applications to meet the storage capacity, performance, and Snapshot backup requirement of an assumed workload. When sizing for your environment, consult with your NetApp SE about the exact storage configuration based on your individual requirements.

Note: In this solution, all of the aggregates hosting volumes required for SharePoint VMs are hosted on one storage controller, and the aggregates hosting volumes for SQL Server are hosted on the second controller. This design decision was made with VMware vCenter Site Recovery Manager in mind, which we plan to add in a future release of this guide. VMware vCenter Site Recovery Manager requires all datastores hosting data for a VM to be on the same storage controller.

NetApp Storage Volume Layout

Figure 17 shows the NetApp storage volume layout for hosting the different data components for every VM. Each virtual machine had a 40GB C: drive on the NFS datastore, which contains the operating system binaries and the VM's pagefile.


Note: All of the volumes were thin provisioned to use the capacity on demand.

Each virtual machine had a 32GB C: drive (minimum requirements for Windows Server 2008 R2 per the VMware Guest Operating System Installation Guide) with the vmdk hosted on the NFS datastore.

The Microsoft applications are deployed as follows:

• Case 1. The application server (SQL Server and SharePoint) database and log drives are hosted on iSCSI-based raw device mapping (RDM) LUNs, directly created and connected inside the guest VMs using NetApp SnapDrive 6.3 software. This provides the flexibility of leveraging the NetApp and Microsoft application-integrated SnapManager products to achieve granular, automated backup and recovery.

• Case 2. The application server (SQL Server and SharePoint) database and log drives are hosted on NFS datastores. Virtual machine disks (VMDKs) are created in the NFS datastores using vCenter Server; this approach requires NetApp SnapDrive 6.3 and Virtual Storage Console 2.0.1 or later. This provides the flexibility of leveraging the NetApp and Microsoft application-integrated SnapManager products to achieve granular, automated backup and recovery.

Figure 17) NetApp storage volume layout.

Microsoft Office SharePoint Server 2010 Datastore Layout

• Case 1. Figure 18 shows the datastore layout for the different data components of Microsoft Office SharePoint Server 2010. The database and log files are hosted on separate iSCSI RDM LUNs from the Windows OS and SharePoint binaries. This datastore is hosted on a dedicated aggregate, as shown in Figure 16.


Figure 18) Microsoft Office SharePoint Server 2010 datastore layout.

• Case 2. Figure 19 shows the datastore layout for the different data components of Microsoft Office SharePoint Server 2010. The database and log virtual machine disks (VMDKs) are hosted on separate NFS datastores from the Windows OS and SharePoint binaries, as shown in Figure 17.


Figure 19) Microsoft Office SharePoint Server 2010 datastore layout.

Microsoft SQL Server 2008 R2 Datastore Layout

• Case 1. Figure 20 shows the datastore layout for the different data components of Microsoft SQL Server 2008 R2. The SQL database and log files are hosted on separate iSCSI RDM LUNs from the Windows OS and SQL Server binaries. This datastore is hosted on a dedicated aggregate, as shown in Figure 16.


Figure 20) Microsoft SQL Server 2008 R2 datastore layout.

• Case 2. Figure 21 shows the datastore layout for the different data components of Microsoft SQL Server 2008 R2. The SQL database and log virtual machine disks (VMDKs) are hosted on separate NFS datastores from the Windows OS and SQL Server binaries, as shown in Figure 5.


Figure 21) Microsoft SQL Server 2008 R2 datastore layout.

5.4 STORAGE SIZING

This section describes the storage sizing. These numbers vary from environment to environment, so you should consult your NetApp systems engineer about the exact sizing for your environment.

• Microsoft SQL Server 2008 R2. The SQL Server workload was divided into 10 separate databases using both OLTP and DSS workloads. The databases required 2.2TB of disk space, and the transaction logs required 4.5GB of disk space.

• Microsoft Office SharePoint Server 2010. SharePoint used 3.1TB of disk space with 80GB databases across multiple site collections on the SQL Server. Also, 1GB of disk space was allocated for the transaction logs.


• Datastores hosting VM C: drives (OS and application binaries and VM pagefile): (size of the VM C: drive + VM pagefile (1.5 to 3 times) + 15% free space for VMware snapshots and non-vmdk files) * number of VMs in the datastore (a worked example follows the note below):
− Application VM datastore on controller A, hosting 14 VMs ~ 600GB
− Application VM datastore on controller B, hosting six VMs ~ 260GB
− Utility VM datastore on controller B, hosting eight VMs ~ 340GB
− VMware vCenter VM datastore on controller B, hosting two VMs ~ 100GB

Note: These storage requirements are before considering NetApp deduplication and thin provisioning. The potential NetApp deduplication and storage efficiency savings are discussed in section 6. The deduplication savings vary from environment to environment and application to application. For more information, see NetApp TR-3505.
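To make the sizing formula concrete, the following Python sketch applies one reading of the calculation (treating the 15% free space as a markup on the per-VM subtotal). It is an illustration only, not part of the validated solution; the per-VM memory value used for the pagefile and the helper name are assumptions.

```python
def vm_datastore_size_gb(c_drive_gb, vm_ram_gb, vm_count,
                         pagefile_factor=1.5, free_space=0.15):
    """Estimate datastore capacity for VM C: drives.

    Formula from this section: (C: drive size + VM pagefile + 15% free space
    for VMware snapshots and non-vmdk files) * number of VMs. The pagefile is
    assumed here to be 1.5 to 3 times the VM memory (pagefile_factor).
    """
    per_vm_gb = (c_drive_gb + vm_ram_gb * pagefile_factor) * (1 + free_space)
    return per_vm_gb * vm_count

# Example: the application VM datastore on controller A (14 VMs, 32GB C: drive,
# assuming ~4GB RAM per VM) lands close to the ~600GB figure listed above.
print(round(vm_datastore_size_gb(c_drive_gb=32, vm_ram_gb=4, vm_count=14)))  # ~612
```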

5.5 BACKUP AND RESTORE ARCHITECTURE

HIGH-LEVEL BACKUP ARCHITECTURE

One backup policy is used by NetApp VSC for backing up the NFS datastores hosting the vmdk files with the OS and application binaries for the VMs.

For obtaining application-consistent backups of the SQL Server and SharePoint VMs, NetApp SnapManager for SQL Server and SnapManager for SharePoint were leveraged to perform scheduled backups of the transaction logs and databases and also to initiate SnapMirror updates. The SnapManager products also enable granular recovery for these Microsoft applications. It is highly recommended that the VSC and application-specific SnapManager backups be scheduled so that they happen at different times.

Figure 22) High-level backup architecture.


RESTORE ARCHITECTURE DETAILS

Restores from Local Snapshot Backups

NetApp SnapManager products allow individual application-level granular recovery. Full VM-level recovery (OS, application binaries, and application data) is achieved by using both NetApp VSC and application-specific SnapManager restore functionality.

Restores from Remote SnapMirror Backups

For full environment-level restores from the SnapMirror backups on the remote site, follow this process:

1. Restore the VM OS and application binaries using VSC (a scripted sketch of substeps a through c follows this procedure).
   a. Quiesce and break the SnapMirror relationship.
   b. Set up a SnapMirror relationship back from the destination storage system on the remote site to the source storage system at the primary site.
   c. Quiesce and break the new SnapMirror relationship again.
   d. Mount the datastores on the ESXi host.
   e. Register the VMs from the restored datastore.

Note: Make sure that all the VM hard drives point to the correct vmdk files on the restored datastores.

2. Restore the application data using SnapManager.
   a. Invoke SnapRestore from within the application-specific SnapManager product inside the guest VM as done in a physical environment.
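The SnapMirror reversal in substeps 1a through 1c can be scripted. The following Python sketch is a minimal illustration only, assuming Data ONTAP 7-mode snapmirror commands driven over SSH; the controller host names, volume name, and ontap() helper are hypothetical, and it omits error handling as well as the VSC, datastore mount, and VM registration steps.

```python
import subprocess

# Hypothetical controller host names and volume name, for illustration only.
PRIMARY = "primary-fas6040"      # original source controller (the restore target)
REMOTE = "dr-fas3040"            # SnapMirror destination at the remote site
VOLUME = "nfs_os_binaries_vol"   # volume backing the NFS datastore being restored

def ontap(controller, command):
    """Run a Data ONTAP 7-mode command on a controller over SSH and return its output."""
    result = subprocess.run(["ssh", f"root@{controller}", command],
                            check=True, capture_output=True, text=True)
    return result.stdout

# Step 1a: quiesce and break the existing relationship on the remote destination.
ontap(REMOTE, f"snapmirror quiesce {VOLUME}")
ontap(REMOTE, f"snapmirror break {VOLUME}")

# Step 1b: resynchronize in the reverse direction so the data flows from the
# remote site back to the primary site. A production script would poll
# "snapmirror status" and wait for the transfer to complete; run interactively,
# snapmirror resync may also prompt for confirmation.
ontap(PRIMARY, f"snapmirror resync -S {REMOTE}:{VOLUME} {PRIMARY}:{VOLUME}")

# Step 1c: quiesce and break the new relationship on the primary so the
# restored volume becomes writable again.
ontap(PRIMARY, f"snapmirror quiesce {VOLUME}")
ontap(PRIMARY, f"snapmirror break {VOLUME}")
```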

6 NFS SOLUTION VALIDATION

6.1 STORAGE EFFICIENCY

NetApp thin provisioning was enabled on all the datastores on the primary storage, and deduplication was enabled on the datastores hosting the VM OS and application binaries. The deduplication schedule was set to run once every night. Figure 23 shows a screenshot of NetApp System Manager, showcasing 90% savings for a datastore hosting the OS and application binaries for four VMs (one SQL Server 2008 R2 server, one index server, and two Web/search servers). Similar storage savings were observed for other datastores on controller A and controller B hosting the OS and application binaries for the SQL Server, IIS, utility, and vCenter VMs.

As you scale out to virtualize your entire data center with hundreds to thousands of VMs, the storage efficiency can be even higher. Also note that NetApp's intelligent caching capabilities (built natively into Data ONTAP and Flash Cache cards) strongly complement NetApp's storage efficiency capabilities.


Figure 23) NetApp System Manager screenshot showing multiple levels of storage efficiency.

Storage savings for the application-specific data drives, such as the SharePoint and SQL Server databases, vary from application to application and environment to environment. For savings specific to each application, see NetApp TR-3505.
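To relate the savings percentage reported by System Manager to logical and physical space consumption, the short Python sketch below uses hypothetical numbers (not measured values from this validation); the function name is illustrative.

```python
def dedupe_savings_pct(logical_used_gb, physical_used_gb):
    """Deduplication savings as the percentage of logical data not stored physically."""
    return 100.0 * (1 - physical_used_gb / logical_used_gb)

# Hypothetical example: four VM C: drives with largely identical Windows and
# application binaries -- roughly 128GB of logical data stored in about 13GB.
print(f"{dedupe_savings_pct(128, 13):.0f}% savings")  # ~90%
```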

6.2 PERFORMANCE VALIDATION

The storage configuration described in this guide was validated by configuring the environment described earlier and then performing performance tests using the application-specific tools described in this section. The tests were performed individually for SQL Server and SharePoint Server and also by running all these applications at the same time. The test results discussed in this section validate that the architecture is capable of handling the mixed workload described earlier.

MICROSOFT SQL SERVER 2008 R2

The Microsoft SQLIOSim utility was used to stress test the storage architecture. Several load tests were performed, both with and without deduplication enabled on the VM C: drives hosting the operating system and SQL Server binaries.

VM Disk I/O Latency

For all the tests, the read and write latencies for the database files were well within the Microsoft recommendations.


VM CPU and Memory Utilization

Each SQL Server VM was configured with 4GB RAM and four virtual CPUs. For the entire duration of the test cycle, there were no CPU or memory bottlenecks on the VMs.

NetApp Storage Utilization Summary

For the entire duration of the test cycles, the NetApp FAS6040 storage controller had sufficient capability to handle the test workload for the SQL environment. Also, there were no I/O bottlenecks on the storage array.

MICROSOFT SHAREPOINT SERVER 2010

AvePoint SharePoint Test Environment Creator and Usage Simulator tools were used to populate and stress test the SharePoint environment described earlier. The user workload tested was 25% Web access, 25% list access, 25% list creation, and 25% doc creation. Several two-hour load tests were performed with 25% of the users (750 out of the total 3,000 users) online at any point in time. Tests were conducted both with and without data deduplication enabled on the VM C: drives hosting the operating system and SQL Server binaries.
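As a rough illustration of the concurrency figures above, the sketch below treats the workload mix as an even split of the online users across the four operation types; this is one simple reading for illustration only, and the variable names are assumptions.

```python
total_users = 3000
concurrency = 0.25                      # 25% of users online at any point in time
workload_mix = {"Web access": 0.25, "list access": 0.25,
                "list creation": 0.25, "doc creation": 0.25}

online_users = int(total_users * concurrency)        # 750 concurrent users
per_operation = {op: round(online_users * share) for op, share in workload_mix.items()}
print(online_users, per_operation)                   # 750 users, roughly 188 per operation
```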

VM Disk I/O Latency

For all the tests, the read and write latencies for the database files were well within the Microsoft recommendations.

VM CPU and Memory Utilization

The Web servers were configured with 2GB RAM and two virtual CPUs, and the index and database servers were configured with 4GB RAM and four virtual CPUs. For the entire duration of the test cycles, there were no CPU or memory bottlenecks on any of the VMs.

NetApp Storage Utilization Summary

For the entire duration of the test cycles, the NetApp FAS6040 storage controller had sufficient capability to handle the test workload for the SharePoint environment. Also, there were no I/O bottlenecks on the storage array.

As mentioned earlier, the load tests for different applications were also conducted simultaneously. There were no performance bottlenecks on the storage controllers, network, ESXi hosts, or VMs.

6.3 VMWARE VMOTION, HA, AND DRS VALIDATION

During the load tests for different applications and also when all the applications were load tested at the same time, VMs were successfully migrated between different ESXi hosts using VMotion without any issues. Also, VMware HA and DRS were tested without any issues, demonstrating a high level of solution availability and resource utilization.

6.4 BACKUP AND RESTORE VALIDATION

MICROSOFT SQL SERVER 2008 R2

Table 12 and Table 13 show the results of the SQL Server 2008 R2 backup and restore testing at different levels of granularity.


Table 12) SQL Server 2008 R2 backup test results.

Backup types: local Snapshot backup; SnapMirror remote replication.
Backup levels: entire VM; individual datastore.

Table 13) SQL Server 2008 R2 restore test results.

Restore sources: local Snapshot backup; SnapMirror remote replication.
Restore levels: entire VM; individual or multiple VM drives; individual database.

MICROSOFT SHAREPOINT 2010

Table 14 and Table 15 show the results of the SharePoint 2010 backup and restore testing.

Table 14) SharePoint backup test results.

Backup types: local Snapshot backup; SnapMirror remote replication.
Backup levels: entire SharePoint environment; individual VMs.

Table 15) SharePoint restore test results.

Restore sources: local Snapshot backup; SnapMirror remote replication.
Restore levels: entire SharePoint environment.

7 ISCSI SOLUTION DESIGN

7.1 HIGH-LEVEL SOLUTION ARCHITECTURE

The iSCSI solution validation involved two VMware ESXi hosts running the mixed Microsoft applications workload (described in section 2) with 30 virtual machines hosted on NetApp shared storage. The virtual machine operating system, installed applications, databases, and logs are hosted on NetApp iSCSI-based datastores. Solution high availability is achieved by using VMware HA, NetApp active-active controllers, and Cisco Nexus 5020 switches. The backup and recovery solution component includes application-consistent, point-in-time NetApp Snapshot copies with NetApp VSC, SME, SMSQL, and SMOSS, and NetApp SnapMirror replication.


Figure 24) High-level solution architecture.

7.2 SOLUTION HARDWARE AND SOFTWARE REQUIREMENTS

HARDWARE RESOURCES

Table 16 shows the equipment that was used in this configuration.

Table 16) Hardware configuration.

• Primary Storage: one NetApp FAS6040HA cluster running Data ONTAP 8.0.1; five disk shelves, each disk 300GB/15K/FC.

• Networking: two Cisco Nexus 5020 switches; one dual-port 10Gb Ethernet NIC per FAS6040 controller.

• Backup Storage: one NetApp FAS3040HA cluster running Data ONTAP 8.0.1; two disk shelves with 28 disks (14 per shelf), each disk 1TB/7200RPM/SATA.

• ESXi Hosts: two hosts, each with 64GB RAM, four quad-core Xeon processors, and one dual-port 10Gb Ethernet NIC.

SOFTWARE RESOURCES

Table 17 shows the software components that were used in the configuration.

Table 17) Software components.

• Primary Storage: Data ONTAP 8.0.1 with NFS, ASIS, FlexClone, SnapMirror, SnapRestore, NearStore, SnapManager for Microsoft SQL Server, SnapManager for Exchange, SnapManager for Microsoft Office SharePoint Server, and SnapDrive for Windows licenses.

• Backup Storage: Data ONTAP 8.0.1 with NFS, ASIS, SnapMirror, and NearStore licenses.

• NetApp Management Software: NetApp Virtual Storage Console (VSC) 2.0.1; NetApp System Manager 2.0; NetApp SnapManager for Exchange 6.0; NetApp Single Mailbox Recovery for Exchange 6.0; NetApp SnapManager for SQL Server 5.1; NetApp SnapManager for SharePoint 6.0; NetApp SnapDrive 6.3; NetApp DataFabric Manager (DFM) 4.0.1.

• VMware vSphere Infrastructure: ESXi hosts running VMware ESXi 4.1.0 (build 260247); vCenter Server 4.1.0; vCenter database on SQL Server 2005.

• Virtual Machine Operating System: Windows Server 2008 R2 x64, Enterprise Edition.

• Microsoft Applications: Microsoft Exchange 2010, Enterprise Edition; Microsoft SharePoint 2010, Enterprise Edition; SQL Server 2008 R2, Enterprise Edition.

7.3 ISCSI SOLUTION ARCHITECTURE DETAILS

VIRTUAL MACHINE LAYOUT

The solution contained a total of 30 virtual machines. The purpose of this configuration was to simulate a real-world customer environment, including the supporting utility and test and dev servers, in addition to the primary Microsoft application servers.

Microsoft Applications VMs

• Microsoft Exchange Server 2010. Total six VMs (two mailbox servers, two hub servers, two CAS servers)

• Microsoft Office SharePoint Server 2010. Total four VMs (two Web front end/query, one index, one SQL Server 2008 R2)

• Microsoft SQL Server 2008 R2. Two VMs

• Microsoft IIS. Four VMs

Test and Dev VMs

• Windows Server 2008 R2. Four VMs

Utility VMs

The utility VMs consisted of one Microsoft WSUS server, one Microsoft SCOM server, one Microsoft Exchange LoadGen tool server, four SharePoint test workstations, one NetApp DataFabric Manager server, one VMware vCenter 4.1 server, and one VMware vCenter 4.1 database server.

All the NetApp recommended iSCSI settings highlighted in TR-3749: NetApp and VMware vSphere Storage Best Practices were set directly from the vCenter GUI using the NetApp VSC vCenter plugin.

NETWORK ARCHITECTURE

In this solution, the network is composed of two Cisco Nexus 5020 switches. Since the Cisco Nexus switches used in this configuration support virtual port channeling (vPC), logical separation of the storage network from the rest of the network is achieved while at the same time providing a high level of redundancy, fault tolerance, and security. With the vPC feature, scalable Layer 2 topologies can be deployed, reducing the dependence on Spanning Tree Protocol for redundancy and loop avoidance. Also, high cross-sectional bandwidth is attained by the feature’s ability to use all available physical links that interconnect the devices.

On the Cisco Nexus network, make sure of the following configurations:

• Be sure to set up a management VLAN for the management network, a public VLAN for the virtual machine network, and a private, nonroutable VLAN for VMotion.

• Be sure to use a 10Gb connection between the two Cisco Nexus 5020 switches.

• Be sure to enable a vPC between the two Cisco Nexus 5020 switches. In order to use this feature, be sure to have Cisco NX-OS Software Release 4.1(3)N1 for Cisco Nexus 5000 series switches installed on your Cisco Nexus 5020.

While the Cisco Nexus 5020 switches are 10Gb, they do support 1Gb modules. Therefore, other Cisco switches can be used in conjunction with the Cisco Nexus 5020s in order to further scale out the virtualization and storage network.


ESXi Host Network Architecture

Figure 25 shows the virtual network layout for each ESXi host. Each ESXi host has two 10Gb Ethernet ports configured into different port groups, as shown. Note that there are two VMkernel storage ports because vSphere 4.1 supports multiple TCP sessions with iSCSI datastores. Enabling multiple TCP sessions with the ESX Round Robin path selection plug-in (PSP) allows iSCSI datastores to send I/O over every available path to the iSCSI target (the NetApp storage array).

Figure 25) ESXi host network architecture.

Storage Network Layout

Figure 26 shows the storage network layout for the ESXi host connectivity with the NetApp storage controllers over the Cisco Nexus 5020 switches. Make sure to configure a nonroutable VLAN that carries the iSCSI storage traffic between the NetApp storage controllers and the vSphere hosts. With this setup the iSCSI traffic is kept completely contained, and security is more tightly controlled.

Also, it is important to have at least two physical Ethernet switches for proper network redundancy in your VMware environment.


Figure 26) Storage network architecture.


STORAGE ARCHITECTURE

NetApp Storage Aggregate Layout

Figure 27 shows the NetApp storage aggregate layout for hosting all the data components for every VM.

Figure 27) NetApp storage aggregate layout.

The aggregate sizing is based on the disk requirements for all the applications to successfully meet their storage capacity and performance requirements.

Note: In this architecture, all the aggregates that host volumes required for Exchange VMs are on one storage controller, and the aggregates that host volumes for SharePoint and SQL Server are on the second controller. This consideration was made from the perspective of VMware vCenter Site Recovery Manager, which we plan to add in a future release of this guide. VMware vCenter Site Recovery Manager requires all datastores hosting data for a VM to be on the same storage controller.

NetApp Storage Volume Layout

Figure 28 shows the NetApp storage volume layout for hosting the different data components for every VM.

Figure 28) NetApp volume layout.

Each virtual machine had a 32GB C: drive (minimum requirements for Windows Server 2008 R2 per the VMware Guest Operating System Installation Guide) with the vmdk hosted on the VMFS datastore.

The Microsoft applications are deployed as follows:

• Case 1. The application server (Exchange, SQL Server, and SharePoint) database and log drives are hosted on iSCSI-based raw device mapping (RDM) LUNs, directly created and connected inside the guest VMs using NetApp SnapDrive 6.3 software. This provides the flexibility of leveraging the NetApp and Microsoft application-integrated SnapManager products to achieve granular, automated backup and recovery.

• Case 2. The application server (SQL Server and SharePoint) database and log drives are hosted on iSCSI-based VMFS datastores. The virtual machine disks (VMDKs) are created in the iSCSI VMFS datastores using vCenter Server, which requires NetApp SnapDrive 6.3 and Virtual Storage Console 2.0.1 or later. This provides the flexibility of leveraging the NetApp and Microsoft application-integrated SnapManager products to achieve granular, automated backup and recovery.

Microsoft Exchange Server 2010 Datastore Layout

• Case 1. Figure 29 shows the datastore layout for the different data components of Microsoft Exchange Server 2010. The temporary data VM swap file (.vswp) has been separated out. This reduces the daily Snapshot change rate and facilitates faster completion of nightly primary storage deduplication operations. The database and transaction log iSCSI RDM LUNs are hosted on a separate volume from the Windows OS and Exchange binaries, as shown in Figure 5.


Figure 29) Microsoft Exchange Server 2010 datastore layout.

Microsoft Office SharePoint Server 2010 Datastore Layout

• Case 1. Figure 30 shows the datastore layout for the different data components of Microsoft Office SharePoint Server 2010. The database and log files are hosted on separate iSCSI RDM LUNs on a separate volume from the Windows OS and SharePoint binaries, as shown in Figure 5.


Figure 30) Microsoft Office SharePoint Server 2010 datastore layout.

• Case 2. Figure 31 shows the datastore layout for the different data components of Microsoft Office SharePoint Server 2010. The database and log virtual machine disks (VMDKs) are hosted on separate iSCSI VMFS datastores from the Windows OS and SharePoint binaries, as shown in Figure 5.


Figure 31) Microsoft Office SharePoint Server 2010 datastore layout.

Microsoft SQL Server 2008 R2 Datastore Layout

• Case 1. Figure 32 shows the datastore layout for the different data components of Microsoft SQL Server 2008 R2. The SQL database and log files are hosted on iSCSI RDM LUNs on a separate volume from the Windows OS and SQL Server binaries, as shown in Figure 5.


Figure 32) Microsoft SQL Server 2008 R2 datastore layout.

• Case 2. Figure 33 shows the datastore layout for the different data components of Microsoft SQL Server 2008 R2. The SQL database and log virtual machine disks (VMDKs) are hosted on separate iSCSI VMFS datastores from the Windows OS and SQL Server binaries, as shown in Figure 5.


Figure 33) Microsoft SQL Server 2008 R2 datastore layout.

STORAGE SIZING

This section contains details about storage sizing. These numbers vary from environment to environment, so you should consult your NetApp systems engineer about the exact sizing for your environment.

• Microsoft SQL Server 2008 R2. The SQL Server workload was divided into 10 separate databases using both OLTP and DSS workloads. The databases required 2.2TB of disk space, and the transaction logs required 4.5GB of disk space.

• Microsoft Office SharePoint Server 2010. SharePoint used 3.1TB of disk space with 80GB databases across multiple site collections on the SQL Server. Also, 1GB of disk space was allocated for the transaction logs.

• Microsoft Exchange 2010: Exchange used 1.4TB of disk space for the databases and 800GB of disk space for the transaction logs.

• Datastores hosting VM C: drives (OS and application binaries): (size of the VM C: drive + VM pagefile (1.5 to 3 times) + 15% free space for non-vmdk files, VMware snapshots, and so on) * number of VMs in the datastore.


− Application VM datastore on controller A, hosting 14 VMs ~ 600GB
− Application VM datastore on controller B, hosting 6 VMs ~ 260GB
− Utility VM datastore on controller B, hosting 8 VMs ~ 340GB
− VMware vCenter VM datastore on controller B, hosting 2 VMs ~ 100GB

These storage requirements are before considering NetApp deduplication and thin provisioning. The NetApp deduplication and storage efficiency savings are highlighted later in the document. The deduplication savings vary from environment to environment and application to application. For more information, see NetApp TR-3505.
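Summing the per-application figures above gives a rough view of the primary capacity needed before deduplication and thin provisioning are applied. The sketch below is a back-of-the-envelope aid only; it assumes 1TB = 1024GB and rounds the result.

```python
# Approximate space requirements from this section, expressed in GB (1TB = 1024GB).
requirements_gb = {
    "SQL Server databases": 2.2 * 1024,
    "SQL Server transaction logs": 4.5,
    "SharePoint databases": 3.1 * 1024,
    "SharePoint transaction logs": 1,
    "Exchange databases": 1.4 * 1024,
    "Exchange transaction logs": 800,
    "VM C: drive datastores": 600 + 260 + 340 + 100,
}

total_tb = sum(requirements_gb.values()) / 1024
print(f"Total before deduplication and thin provisioning: ~{total_tb:.1f}TB")  # ~8.8TB
```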

7.4 BACKUP AND RESTORE ARCHITECTURE

BACKUP ARCHITECTURE DETAILS

One backup policy is used by NetApp VSC for backing up the VMFS datastores hosting the vmdk files with the OS and application binaries for the VMs.

For obtaining application-consistent backups of the Exchange, SQL Server, and SharePoint VMs, NetApp SnapManager for Exchange, SnapManager for SQL Server, and SnapManager for SharePoint were leveraged to perform scheduled backups of the transaction logs and databases and also to initiate SnapMirror updates. The SnapManager products also enable granular recovery for these Microsoft applications. It is highly recommended that the VSC and application-specific SnapManager backups be scheduled so that they happen at different times.

Figure 34) High-level backup architecture.


RESTORE ARCHITECTURE DETAILS

Restores from Local Snapshot Backups

NetApp SnapManager products allow individual application-level recovery. Full VM-level recovery (OS, application binaries, and application data) is achieved by using both the NetApp VSC and application-specific SnapManager restore functionality.

Restores from Remote SnapMirror Backups

For full environment-level restores from the SnapMirror backups on the remote site, follow this process:

1. Restore the VM OS and application binaries using VSC.
   a. Quiesce and break the SnapMirror relationships.
   b. Set up a SnapMirror relationship back from the destination storage system on the remote site to the source storage system at the primary site.
   c. Quiesce and break the new SnapMirror relationship again.
   d. Mount the datastores on the ESXi host.
   e. Register the VMs from the restored datastore.

Note: Make sure that all the VM hard disks point to the correct vmdk files on the restored datastores.

2. Restore the application data using SnapManager.
   a. Invoke SnapRestore from within the application-specific SnapManager product inside the guest VM as done in a physical environment.

8 ISCSI SOLUTION VALIDATION

8.1 STORAGE EFFICIENCY

NetApp thin provisioning capabilities were enabled on all the datastores on the primary storage, and deduplication was enabled on the datastores hosting the VM OS and application binaries. The deduplication schedule was set to run once every night. Figure 35 shows a screenshot of the NetApp System Manager, showcasing 92% savings for a datastore hosting OS and application binaries for four virtual machines. Similar storage savings were observed for other datastores hosting the OS and application binaries for the Exchange, SharePoint, test and dev, utility, and vCenter VMs.

Note that as you scale out to virtualize your entire data center with hundreds to thousands of VMs, the storage efficiency can be even higher; this should be considered when sizing the environment and can help reduce costs. Also note that NetApp intelligent caching capabilities (built natively into Data ONTAP and Flash Cache cards) strongly complement the NetApp storage efficiency capabilities.


Figure 35) NetApp System Manager screenshot showing multiple levels of storage efficiency.

Storage savings for the application-specific data drives (for example, the SharePoint and SQL Server databases) vary from application to application and environment to environment. For savings specific to each application, see NetApp TR-3505.

8.2 PERFORMANCE VALIDATION

The iSCSI storage configuration described in this guide was validated by configuring the environment described earlier and then running performance tests using the application-specific tools described in this section. The tests were performed individually for SQL Server, SharePoint, and Exchange and also by running all of these applications at the same time. The test results validate that the architecture is capable of handling the mixed workload.

MICROSOFT EXCHANGE 2010

The Microsoft Exchange Load Generation tool was used to simulate the 3,000-heavy-user environment with 250MB per mailbox. Several eight-hour load tests were performed, both with and without NetApp deduplication enabled on the VM C: drives hosting the operating system and Exchange binaries.

VM Disk I/O Latency

For all test cycles, the read and write latencies were well within the Microsoft recommendations mentioned here: technet.microsoft.com/en-us/library/aa995945.aspx.


VM CPU and Memory Utilization

Each Exchange mailbox server was configured with 9.5GB RAM [2GB + (1,500 users per mailbox server * 5MB)] and two virtual CPUs. For the entire eight-hour test cycle, there were no CPU or memory bottlenecks on the VMs or the ESXi host.
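The mailbox server memory value above follows the rule of thumb shown in brackets. The short sketch below simply restates that arithmetic; the function name is illustrative, and it uses decimal gigabytes to match the figure in the text.

```python
def mailbox_server_ram_gb(users_per_server, mb_per_user=5, base_gb=2):
    """Memory sizing used above: 2GB base plus 5MB per mailbox user."""
    return base_gb + users_per_server * mb_per_user / 1000  # decimal GB, as in the text

print(mailbox_server_ram_gb(1500))  # 9.5
```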

NetApp Storage Utilization Summary

For the entire eight-hour test cycle, the NetApp FAS6040 storage controller had more than enough capability to handle the workload for the 3,000-user Exchange environment that was tested. Also there were no I/O bottlenecks on the storage array.

The NetApp Flash Cache offers significant performance benefits in an Exchange environment. For more information, see TR-3867: Using Flash Cache for Exchange 2010.

Also, see this VMware white paper, which compares the performance of a 16,000-heavy-user Exchange environment across all the storage protocols (FC, iSCSI, NFS) on NetApp storage: VMware vSphere 4: Exchange Server on NFS, iSCSI, and Fibre Channel.

MICROSOFT SQL SERVER 2008 R2

The Microsoft SQLIOSim utility was used to stress test the storage architecture described earlier. Several load tests were performed, both with and without deduplication enabled on the VM C: drives hosting the operating system and SQL Server binaries.

VM Disk I/O Latency

For all the tests, the read and write latencies for the database files were well within the Microsoft recommendations.

VM CPU and Memory Utilization

Each SQL Server VM was configured with 4GB RAM and four virtual CPUs. For the entire duration of the test cycle, there were no CPU or memory bottlenecks on the VMs.

NetApp Storage Utilization Summary

For the entire duration of the test cycles, the NetApp FAS6040 storage controller had sufficient capability to handle the test workload for the SQL Server environment. Also, there were no I/O bottlenecks on the storage array.

MICROSOFT SHAREPOINT SERVER 2010

AvePoint SharePoint Test Environment Creator and Usage Simulator tools were used to populate and stress test the SharePoint environment described earlier. The user workload tested was 25% Web access, 25% list access, 25% list creation, and 25% doc creation. Several two-hour load tests were performed with 25% of the users (750 out of the total 3,000 users) online at any point in time. Tests were conducted both with and without data deduplication enabled on the VM C: drives hosting the operating system and SQL Server binaries.

VM Disk I/O Latency

For all the tests, the read and write latencies for the database files were well within the Microsoft recommendations.

VM CPU and Memory Utilization

The Web servers were configured with 2GB RAM and two virtual CPUs, and the index and database servers were configured with 4GB RAM and four virtual CPUs. For the entire duration of the test cycles, there were no CPU or memory bottlenecks on any of the VMs.


NetApp Storage Utilization Summary

For the entire duration of the test cycles, the NetApp FAS6040 storage controller had sufficient capability to handle the test workload for the SharePoint environment. Also, there were no I/O bottlenecks on the storage array.

As mentioned earlier, the load tests for different applications were also conducted all at the same time. There were no performance bottlenecks on the storage controllers, network, ESXi hosts, or VMs.

8.3 VMWARE VMOTION, HA, AND DRS VALIDATION

During the load tests for different applications, and also when all the applications were load tested at the same time, VMs were migrated between different ESXi hosts by using VMotion. No issues were observed. Also, VMware HA and DRS were tested without any issues, demonstrating a high level of solution availability and resource utilization.

8.4 BACKUP AND RESTORE VALIDATION

MICROSOFT EXCHANGE SERVER 2010

Table 18 and Table 19 show the results of the backup and restore testing for Exchange 2010 at different levels of granularity.

Table 18) Exchange Server 2010 backup test results.

Backup types: local Snapshot backup; SnapMirror remote backups.
Backup levels: entire VM; individual storage group.

Table 19) Exchange Server 2010 restore test results.

Restore sources: local Snapshot backup; SnapMirror remote replication.
Restore levels: entire VM; individual storage group; individual mailbox recovery (SMBR).

MICROSOFT SQL SERVER 2008 R2

Table 20 and Table 21 show the results of the backup and restore testing for SQL Server 2008 R2 at different levels of granularity.

Table 20) SQL Server 2008 R2 backup test results.

Backup types: local Snapshot backup; SnapMirror remote backups.
Backup levels: entire VM; individual database.


Table 21) SQL Server 2008 R2 restore test results.

Restore sources: local Snapshot backup; SnapMirror remote replication.
Restore levels: entire VM; individual database; individual transaction level.

MICROSOFT OFFICE SHAREPOINT SERVER 2010

Table 22 and Table 23 show the results of the backup and restore testing for Microsoft Office SharePoint Server 2010 at different levels of granularity.

Table 22) SharePoint backup test results.

Backup types: local Snapshot backup; SnapMirror remote backups.
Backup levels: entire SharePoint farm; individual VMs.

Table 23) SharePoint restore test results.

Restore sources: local Snapshot backup; SnapMirror remote replication.
Restore levels: entire SharePoint site; item level.

9 SUMMARY

Many customers successfully using virtualization today are considering VMware vSphere as the next-generation platform for their Microsoft environment. This report describes how a building-block approach can be used to design an environment with differing disk I/O workloads. This approach leverages a virtualized platform for flexibility and is easily scalable by adding building blocks at any time. Performance testing validated that Microsoft server applications perform extremely well when virtualized on VMware vSphere and NetApp. The integrated NetApp SnapManager tools for VMware and Microsoft backup and recovery provided granular, application-consistent, space-efficient backup and recovery for the entire environment.

VMware VMotion significantly increased solution flexibility for virtual servers and can seamlessly move heavily loaded application servers across ESX hosts with no loss of service. This proved that VMware VMotion can be a valuable tool for the Microsoft administrator, increasing flexibility and avoiding planned downtime.

VMware HA provides a robust solution for protecting every virtual server in the organization from server hardware failure. For customers who have increased availability requirements, VMware HA can be combined with other software clustering solutions such as VMware fault tolerance or Windows failover clustering.

This architecture represents an end-to-end solution for deploying Microsoft server applications on a next-generation platform built on VMware vSphere and NetApp unified FC and IP storage.

10 ACKNOWLEDGEMENTS

The following people contributed to the solution design, validation, and creation of this solution guide: Abhishek Basu, Sitakanta Chaudhury, Amrita Das, Soumen De, Abhinav Joshi, Peter Learmonth, Jack McLeod, John Parker, Vaughn Stewart, Wen Yu, and Rachel Zhu.

11 FEEDBACK

If you have questions or comments about this document, send email to [email protected].

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer’s responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

© 2011 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, DataFabric, Data ONTAP, FlexClone, NearStore, SnapDrive, SnapManager, SnapMirror, SnapRestore, and Snapshot are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Microsoft, SharePoint, SQL Server, and Windows are registered trademarks of Microsoft Corporation. VMware and VMotion are registered trademarks and VCenter and vSphere are trademarks of VMWare, Inc. Xeon is a registered trademark of Intel Corporation. Cisco and Cisco Nexus are registered trademarks of Cisco Systems. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-3785