
Page 1

James Walkenhorst, Solutions Architect

Alliances and Solutions Marketing, EMC Isilon Division

EMC ISILON

vSphere 5 Best Practices

Cormac Hogan, Technical Marketing Manager - Storage

Cloud Infrastructure, VMware

Page 2

Agenda

• “Best Practices” FAQ

• General Datastore Guidelines and Best Practices

• Overview of NFS Datastores on Isilon

• Isilon Best Practices for NFS Datastores

• VMware Best Practices for NFS Datastores

• Overview of iSCSI Datastores on Isilon

• Isilon Best Practices for iSCSI Datastores

• VMware Best Practices for iSCSI Datastores

• VMware Best Practices for Optimal Network and Storage Access

• Resources and Links

• Q&A

Page 3

EMC-Isilon Storage Best-Practices FAQs

Q. What is a Best Practice?

A. An approach, validated through specific testing scenarios or observed frequently in customers’ environments, that produces an optimal outcome to a particular technical challenge.

Q. Are Best Practices the same as Standards?

A. No. They simply represent an approach that is widely understood to produce the best possible outcome.

Q. What are the benefits of observing Best Practices?

A. Reduced risk, faster root-cause analysis of issues, faster response times from support organizations, and the best chance of achieving optimal performance.

Page 4

General Best Practices: For All Isilon-Based Datastores

• Use network segmentation (e.g. VLANs) to separate VM network traffic from VMkernel storage traffic
  o Best practice for optimal performance
  o For optimal security, use an isolated (or trusted) network for all storage traffic
• Test jumbo frame (MTU=9000) performance in your environment (see the example commands at the end of this slide)
  o Fully supported by both VMware and EMC
  o Overall performance results depend on multiple variables
  o Use whichever configuration produces the best overall performance

For optimal datastore performance and availability:

• Use 10Gb/s Ethernet connections if possible for best performance

• Use vSphere Storage I/O Control to manage VM storage utilization

• Use Network I/O Control to manage network bandwidth for storage traffic under heavy workloads
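As a sketch of how the jumbo frame recommendation above could be applied from the ESXi shell (these commands are not part of the original deck; vSwitch1 and vmk1 are placeholder names for the vSwitch and VMkernel port that carry storage traffic):

 esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000   # raise the vSwitch MTU
 esxcli network ip interface set --interface-name=vmk1 --mtu=9000         # raise the VMkernel port MTU

Every physical switch port in the path, and the Isilon interfaces, must also be configured for a 9000-byte MTU for jumbo frames to work end to end.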

Page 5

General Best Practices: For All Isilon-Based Datastores

For optimal datastore performance and availability (continued):

• Size your storage cluster first for performance, and then for capacity
• Minimize the number of network hops between vSphere hosts and Isilon storage
  o EMC recommends using the same subnet
  o Use the same switch, if possible
• Ensure redundant network links exist between vSphere hosts and Isilon nodes for all datastores
  o HA path configuration and administration differs for each datastore type
• Different workloads may require different storage configuration settings
  o Higher data protection levels vs. higher performance requirements
  o Analyze workload patterns for each application, if possible

Page 6

Jumbo Frames

 Jumbo frames were not supported for NAS and iSCSI traffic on ESX 3.x; they were limited to data networking only (Virtual Machines and the vMotion network).
 – Jumbo frames are fully supported for NFS and iSCSI traffic on ESX 4 & 5.

[Diagram: multiple iSCSI PDUs, each consisting of a BHS, CDB, and Data segment, carried in a single jumbo frame]

What do they do?

 Jumbo frames allow multiple PDUs to be combined into a single frame to improve throughput.
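A quick way to confirm that jumbo frames actually pass end to end (not in the original deck; the IP address is a placeholder for an Isilon storage interface) is a don't-fragment ping sized just under 9000 bytes from the ESXi shell:

 vmkping -d -s 8972 10.10.10.100

8972 bytes allows for the 20-byte IP header and 8-byte ICMP header; if this fails while a standard vmkping succeeds, an MTU mismatch exists somewhere in the path.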

Page 7

NFS Datastore Overview

• VM data stored on file-based NFS mount, accessed using standard NFS v3.0 protocol
• Datastore is automatically thin-provisioned

Advantages of NFS datastores:
• Rapid, simple storage provisioning
  o No individual LUNs to manage
  o Datastore is immediately available upon creation
  o Multiple exports to the same mount point, using multiple interfaces, increase throughput and performance

Page 8

NFS Datastore Overview

• VM data stored on file-based NFS mount, accessed using standard NFS v3.0 protocol
• Datastore is automatically thin-provisioned

Advantages of NFS datastores:
• Rapid, simple storage provisioning
• Higher storage utilization rates
  o File system space not restricted to the limitations of a single LUN
  o Larger storage pool for VMDK files to share

Page 9

NFS Datastore Overview

• VM data stored on file-based NFS mount, accessed using standard NFS v3.0 protocol
• Datastore is automatically thin-provisioned

Advantages of NFS datastores:
• Rapid, simple storage provisioning
• Higher storage utilization rates
• Simplified management
  o No need to balance space usage across LUNs
  o VMs can be balanced across datastores based solely on bandwidth usage

Page 10

NFS Best Practices: Optimal Configuration for High Availability

Network Redundancy Options
• Static Link Aggregation using 802.3ad LAG
  o Requires compatible switch and NIC hardware
  o Protects against NIC/path failures
  o Does not increase performance
• SmartConnect Dynamic IP Address Pools
  o Automatically assigns IP addresses to member interfaces on each node
  o Interface or node failure causes SmartConnect to reassign IP address(es) to remaining nodes in the cluster
  o Datastore mapping can be IP-address based, or use DNS round-robin

Page 11

NFS Best Practices: Optimal Configuration for Performance

Throughput limits of a single datastore
• Two TCP connections per datastore
  o One connection for NFS data flow
  o One connection for NFS control information
  o Datastore throughput is therefore limited by the bandwidth available to a single connection

Page 12

NFS Best Practices: Optimal Configuration for Performance (continued)

Creating multiple datastores increases throughput
• Best design uses a mesh topology
• Every vSphere host connects to every datastore
• VMs can be created on any datastore to balance the I/O workload between hosts and cluster nodes

Page 13

NFS Configuration Gotcha #1

 ESXi supports NFS, but more specifically:
 – NFS version 3 only, no support for v2 or v4.
 – Over TCP only, no support for UDP.

 The UI and ESXi logs will inform you if you attempt to use a version or protocol other than version 3 over TCP:

NasVsi: 107: Command: (mount) Server: (madpat) IP: (10.16.156.25) Path: (/cormac) Label: (demo) Options: (None)
WARNING: NFS: 1007: Server (10.16.156.25) does not support NFS Program (100003) Version (3) Protocol (TCP)

Page 14

NFS Configuration Gotcha #2

 Ensure that the admin who is mounting the NFS datastore on the ESXi host has appropriate permissions to do so.
 If an admin attempts to mount a datastore without the correct permissions, the mount may be successful, but the first attempt to deploy a VM will fail.

Page 15

Increasing the Maximum Number of NFS Mounts

 The default configuration only allows 8 NFS mounts per ESXi server.
 To enable more, start the vSphere Client, select the host from the inventory, and click Advanced Settings on the Configuration tab.
 In the Advanced Settings dialog box, Net.TcpipHeapSize needs to be adjusted if NFS.MaxVolumes is increased, or you may deplete the heap.

 Symptoms of running out of heap are documented here: http://kb.vmware.com/kb/1007332
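The same settings can also be changed from the ESXi shell; a hedged sketch (not from the original deck, and the values shown are examples only; follow the VMware KB guidance for your ESXi version):

 esxcli system settings advanced set -o /NFS/MaxVolumes -i 64      # allow up to 64 NFS mounts
 esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32   # initial TCP/IP heap size (MB)
 esxcli system settings advanced set -o /Net/TcpipHeapMax -i 128   # maximum TCP/IP heap size (MB)
 esxcli system settings advanced list -o /NFS/MaxVolumes           # verify the new value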

Page 16

NIC Teaming – Failover, not Load Balancing

 There is only one active connection between the ESXi server and a single storage target (mount point).
 This means that although there may be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide.
 To leverage more available bandwidth, there must be multiple connections from the ESXi server to the storage targets.
 One would need to configure multiple datastores, with each datastore using separate connections between the server and the storage, i.e. NFS shares presented on different IP addresses.
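A minimal sketch of that layout (not from the original deck; the IP addresses, export paths, and datastore names are placeholders): mounting two datastores through two different SmartConnect IP addresses gives each datastore its own TCP connections to the cluster:

 esxcli storage nfs add --host=10.10.10.101 --share=/ifs/nfs/ds01 --volume-name=isilon-ds01
 esxcli storage nfs add --host=10.10.10.102 --share=/ifs/nfs/ds02 --volume-name=isilon-ds02
 esxcli storage nfs list   # confirm both mounts are active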

Page 17

VLANs for Isolation & Security of NFS Traffic

 Storage traffic is transmitted as clear text across the LAN. Since ESXi 5.0 continues to use NFS v3, there is no built-in encryption mechanism for the traffic.
 A best practice would be to use trusted networks for NFS.
 – This may entail using separate physical switches or leveraging a private VLAN.

Page 18

iSCSI Datastore Overview

• iSCSI LUNs are constructed and treated as files within OneFS
• Mounted over an Ethernet network using iSCSI initiators
• EMC supports both thin and thick provisioning

Advantages of iSCSI datastores:
• Raw device mapping supported for VMs that require it
• May provide better throughput performance for some workload types
• iSCSI LUNs can be cloned for certain VM management scenarios

Page 19

iSCSI Best Practices: Optimal Configuration for High Availability

Network Redundancy Options
• Leverage vSphere multipath plug-ins instead of LAG or SmartConnect Advanced
• Use a dedicated IP pool for iSCSI target IP management and connectivity
  o Enables segmentation of traffic between iSCSI and NFS workloads across the cluster

Page 20

iSCSI Best Practices: Optimal Configuration for High Availability (continued)

Network Redundancy Options
• Create multiple VMkernel ports on vSphere hosts, each with a single active network interface, then use port binding to associate those port groups with the iSCSI initiator*
• Set the Path Selection Policy (PSP) to Fixed, and configure all hosts to use the same preferred path for each datastore

*Requires storage nodes and vSphere hosts to be on the same subnet
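A hedged command-line sketch of the two steps above (not part of the original deck; the adapter name, VMkernel ports, device ID, and path name are placeholders to be replaced with values from your own environment):

 esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1   # bind first VMkernel port to the software iSCSI adapter
 esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2   # bind second VMkernel port
 esxcli storage nmp device set --device=naa.600xxxxxxxxxxxxxxx --psp=VMW_PSP_FIXED
 esxcli storage nmp psp fixed deviceconfig set --device=naa.600xxxxxxxxxxxxxxx --path=vmhba33:C0:T1:L0

Repeat the preferred-path setting on every host so that all hosts use the same path for a given datastore, as recommended above.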

Page 21

iSCSI Best Practices: Optimal Configuration for Performance

• Highly randomized workloads within a large LUN may benefit from setting the LUN’s access pattern to Streaming within OneFS

• Consider using 2x mirroring protection to minimize parity calculation overhead on iSCSI write operations

• If multiple storage pools are used, create iSCSI LUNs on each pool and assign VM data to tiered pools based on each VM’s performance requirements

Page 22

iSCSI Gotcha #1 – Routing

 iSCSI traffic can be routed between an initiator and target only when iSCSI binding is not implemented.

If iSCSI binding is implemented, then you cannot route between an initiator and target; they must be on the same subnet.

This has been the cause of many Service Requests.

Page 23

iSCSI Gotcha #2 – Slow Boot on ESXi 5.0

 An issue was uncovered soon after ESXi 5.0 was released: hosts that used iSCSI, but where some initiators could not see all configured targets, exhibited slow boot times.
 This was due to every initiator trying to log in to every target, and retrying multiple times when it failed to do so.
 The symptoms of the slow boot and the patch resolution are described in this KB article: http://kb.vmware.com/kb/2007108

Page 24

Multipathing – Overview

 Pluggable Storage Architecture – PSA
 Native Multipathing Plugin – NMP
 Storage Array Type Plugin – SATP
 Path Selection Policy – PSP (follow vendor recommendations)

 Some vendors provide their own plugins to the PSA, e.g. EMC's PowerPath, or a third-party PSP.
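Not from the original deck, but these layers can be inspected on a host by listing the installed SATPs and PSPs and checking which ones each device is using:

 esxcli storage nmp satp list     # installed Storage Array Type Plugins and their default PSPs
 esxcli storage nmp psp list      # available Path Selection Policies
 esxcli storage nmp device list   # SATP and PSP currently applied to each device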

Page 25

iSCSI Multipathing – Best Practices

 iSCSI binding ties a VMkernel port to a physical adapter and allows the PSA to implement multipathing on VMware's Software iSCSI Adapter.
 If you use iSCSI binding, then you cannot route between initiator and target; they must co-exist on the same subnet.
 Do not create a NIC team when implementing iSCSI binding.
 We want customers to consider storage resiliency based on multiple paths to the storage, rather than basing it on the number of networks available to a single storage path.
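As a small verification sketch (not in the original deck; vmhba33 is a placeholder adapter name), the bound VMkernel ports and the resulting paths can be checked with:

 esxcli iscsi networkportal list --adapter=vmhba33   # VMkernel ports bound to the software iSCSI adapter
 esxcli storage core path list                       # paths per device, including their state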

Page 26

Gotcha – Improper Device Removal

 Improper removal of a physical device containing a VMFS volume or RDM could result in an APD (All Paths Down) state.
 Improvements have been made in ESX 4.x & 5.0.
 Follow the steps outlined in http://kb.vmware.com/kb/1015084 for ESX 4.x and http://kb.vmware.com/kb/2004605 for ESXi 5.0 when removing a datastore.

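For ESXi 5.0, the KB flow referenced above comes down to unmounting the datastore and detaching the device before the LUN is unpresented; a hedged sketch (the datastore label and device ID are placeholders, and the KB article remains the authoritative procedure):

 esxcli storage filesystem unmount --volume-label=isilon-iscsi-ds01           # unmount the VMFS datastore
 esxcli storage core device set --device=naa.600xxxxxxxxxxxxxxx --state=off   # detach the device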

Page 27

Storage I/O Control – Overview

• Storage I/O Control is supported with both block & NFS datastores

Monitors I/O latency to both block (iSCSI, FC, FCoE) datastores & NFS datastores at each ESXi host sharing a physical device.

When the average normalized latency exceeds a set threshold (30ms by default), the datastore is considered to be congested.

If congested, SIOC distributes available storage resources to virtual machines in proportion to their configured shares.

Used to determine migration needs with Storage DRS in ESXi 5.0

Troubleshooting Storage I/O Control: http://kb.vmware.com/kb/1022091

Page 28

Storage I/O Control Usage Scenario

 [Diagram, shown as "What you see" vs. "What you want to see": an online store, a data-mining workload, and Microsoft Exchange VMs sharing a single NFS/VMFS datastore, before and after SIOC shares are applied]

Page 29

Network I/O Control – Overview

 With converged networks, network traffic with different patterns and needs will merge together on the same network.

This may directly impact performance and predictability due to lack of isolation, scheduling & arbitration.

Network I/O Control can be used to prioritize different network traffic on the same pipe.

Network I/O Control also introduces the concept of user-defined network groups.

 – This is quite useful for Virtual Machines, as an administrator can select which VM or group of VMs has higher priority on the network.

Network I/O Control can be used on both NFS & iSCSI.

Page 30

Network I/O Control – Configuration UI

NFS traffic can be given a higher priority than other traffic if contention arises

Page 31

Resources: Links and Contact Information

– Everything VMware at EMC Community:

http://emc.com/vmwarecommunity

– VMware Storage blog:

http://blogs.vmware.com/vSphere/Storage/

– EMC’s Best Practices for vSphere 5 on Isilon storage white paper:

http://www.isilon.com/file-handler/1729/library-emc-isilon-storage-and-vmware-vsphere-5.pdf

– Additional questions

[email protected]

Thank you for attending!

Page 32

Q&A