Configuring LeftHand iSCSI and vSphere MPIO



Source: http://virtualy-anything.blogspot.com/2009/12/how-to-configure-vsphere-mpio-for-iscsi.html


Configuring LeftHand Networks iSCSI and VMware vSphere for MPIO

The following is an example of how to team two NICs for MPIO in vSphere, but you can use up to eight. The maximum throughput per Gigabit NIC is roughly 120 MB/s, so in theory you can aggregate close to 1 GB/s of throughput. This works particularly well with LeftHand storage, as it uses a single portal (a virtual IP) to service iSCSI connections.

1: Log in to vCenter and select Add Networking.
2: Under Connection Types, choose VMkernel.

3: Choose to create a new virtual switch (do not assign an adapter yet; the NICs are added in step 7).


4: Click Next and enter VMkernel-ISCSI-A as the network label.


5: Enter your VMkernel iSCSI IP address and subnet mask, then click Finish.


6: In the VI Client you will now see a virtual switch with one VMkernel iSCSI port. Go back to step 2 and repeat the process with a different IP address, naming the port VMkernel-ISCSI-B, so that you have two VMkernel iSCSI ports bound to the same switch.
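If you prefer the service console, the same switch and both ports can be created with the esxcfg commands that ship with classic ESX 4. A minimal sketch, assuming vSwitch1 is the new switch name and the 10.0.0.x addresses stand in for your iSCSI subnet:

esxcfg-vswitch -a vSwitch1                       # create the new virtual switch
esxcfg-vswitch -A VMkernel-ISCSI-A vSwitch1      # add the first port group
esxcfg-vmknic -a -i 10.0.0.1 -n 255.255.255.0 VMkernel-ISCSI-A
esxcfg-vswitch -A VMkernel-ISCSI-B vSwitch1      # add the second port group
esxcfg-vmknic -a -i 10.0.0.2 -n 255.255.255.0 VMkernel-ISCSI-B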


7: Now add the network cards to the switch. Go to Properties > Network Adapters and select the two adapters you have earmarked for iSCSI.


8: Click Next and leave the cards in their present order.
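The service-console equivalent, assuming vmnic2 and vmnic3 are the two adapters earmarked for iSCSI:

esxcfg-vswitch -L vmnic2 vSwitch1    # link the first physical uplink to the switch
esxcfg-vswitch -L vmnic3 vSwitch1    # link the second physical uplink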


9: In the VI Client you will now see the two cards connected, but not yet bound to the iSCSI stack.


10: You now need to put the cards in the correct order for failover. Open Properties on the virtual switch, select the Ports tab, highlight VMkernel-ISCSI-B, click Edit, and then choose the NIC Teaming tab.


11: You will be presented with the screen below. Check the Override switch failover order box and move vmnic3 to the Unused Adapters category.


12: Click Finish, go back to the vSwitch Properties screen, and repeat the process for VMkernel-ISCSI-A. In this instance you reverse the NIC failover order, so vmnic2 becomes the unused adapter.
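The end result is a 1:1 mapping of VMkernel port to physical NIC, which is what lets the iSCSI initiator drive a separate session down each uplink:

VMkernel-ISCSI-A: active vmnic3, unused vmnic2
VMkernel-ISCSI-B: active vmnic2, unused vmnic3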


13: You will now need to enable the software iSCSI adapter: go to Configuration > Storage Adapters, highlight the iSCSI software adapter, go to the General tab, click Configure, and check the Enabled box.
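If you are already in the service console, the software initiator can also be enabled there (esxcfg-swiscsi is part of classic ESX):

esxcfg-swiscsi -e    # enable the software iSCSI initiator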


14: You will now need to bind the iSCSI initiator to the VMkernel iSCSI ports. This needs to be done via the service console, so PuTTY to the service console of the vSphere host and su - to root.


15: Type esxcfg-vmknic -l to list the VMkernel NICs.

16: You now need to bind the VMkernel ports listed (vmk0 and vmk1) to the software iSCSI adapter. Type the following for the first NIC, press Return, then repeat for the second:

vmkiscsi-tool -V -a vmk1 vmhba33
vmkiscsi-tool -V -a vmk0 vmhba33
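Note that vmhba33 is the software iSCSI adapter name on this particular host and may differ on yours. You can confirm the name before binding by listing the storage adapters:

esxcfg-scsidevs -a    # the software iSCSI adapter appears as vmhbaNN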


17: If the binding was successful you will see a message that vmk1 or vmk0 is already bound.

18: Confirm both NICs are bound with the following command; if everything is OK you will see both vmk0 and vmk1 listed:

esxcli swiscsi nic list -d vmhba33


19: You now need to add the iSCSI target (the LeftHand virtual IP) from the VI Client so it will pick up the LUNs, and then run a VMFS rescan.
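The same step can be done from the service console. A sketch, with 10.0.0.10 standing in for your LeftHand virtual IP:

vmkiscsi-tool -D -a 10.0.0.10 vmhba33    # add the send-targets discovery address
esxcfg-rescan vmhba33                    # rescan the adapter for new LUNs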


20: Once the scan is finished, if you view the configuration for the iSCSI adapter you will see that the vSphere iSCSI initiator now has two active paths, but only one path has active I/O. This is because the default vSphere NMP path selection policy is Fixed. We need to change this to Round Robin so that it creates two active I/O paths, increasing performance and utilizing the LeftHand MPIO.
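You can confirm the current policy from the service console before making the change; each device is printed along with its Path Selection Policy (VMW_PSP_FIXED before the change, VMW_PSP_RR after):

esxcli nmp device list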


21: Highlight the Devices tab and click Manage Paths. You will then see the screen below. Change the path selection from Fixed to Round Robin and you will see two paths with I/O. This needs to be done for every LUN; the following one-liner can be run from the service console if the change needs to be applied to a large number of LUNs:

esxcli nmp device list | awk '/^naa/{print "esxcli nmp device setpolicy --device "$0" --psp VMW_PSP_RR" };'

This prints a list of commands that you can then cut and paste back into the SSH shell; each one runs when you press Return.
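If you would rather not cut and paste, a variant of the same one-liner that pipes the generated commands straight into a shell (same assumptions as above):

esxcli nmp device list | awk '/^naa/{print "esxcli nmp device setpolicy --device "$0" --psp VMW_PSP_RR"};' | sh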


22: VMware NMP Round Robin has an I/O limit of 1,000 commands per path before it moves data to the next active I/O path. This can cause issues whereby workloads that issue fewer than 1,000 commands at a time always take the same path, so you do not see the true benefit of active/active MPIO. We need to change the I/O limit on the path to one command to force the switch. If you run the following command with the LUN identifier copied from the VI Client, you can see that the I/O limit is 1000:

esxcli --server esx2td nmp roundrobin getconfig -d t10.F405E46494C4540077769497E447D276952705D24515B425

Here t10.F405E46494C4540077769497E447D276952705D24515B425 is the LUN identifier, which can be copied to the clipboard per LUN from the VI Client.


23: To set the I/O limit to 1, run the following command:

esxcli --server esx2td nmp roundrobin setconfig --device t10.F405E46494C4540077769497E447D276952705D24515B425 --iops 1 --type iops


24: Run the getconfig command again and you will now see the I/O limit is 1:

esxcli --server esx2td nmp roundrobin getconfig -d t10.F405E46494C4540077769497E447D276952705D24515B425


25: There is a known issue whereby, whenever you reboot the host, the IOPS value defaults to 1449662136. The workaround is to edit the rc.local file (a script that runs at the end of boot) to re-apply the IOPS limit on all LUNs. Enter this command; the only variable you may need to change is the naa.600 prefix, which pertains to the identifiers your array presents:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; do esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i; done
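One caveat with the loop above: the /vmfs/devices/disks listing also contains partition entries (the same naa name with a :partition suffix), which setconfig will reject. A sketch that filters them out, assuming the colon appears only in partition names:

for i in `ls /vmfs/devices/disks/ | grep naa.600 | grep -v ':'` ; do esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i; done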