NPIV VIO Presentation
TRANSCRIPT
NPIV and the IBM Virtual I/O Server (VIOS)
October 2008
NPIV Overview
►N_Port ID Virtualization (NPIV) is a fibre channel industry standard
method for virtualizing a physical fibre channel port.
►NPIV allows one F_Port to be associated with multiple N_Port IDs,
so a physical fibre channel HBA can be shared across multiple guest
operating systems in a virtual environment.
►On POWER, NPIV allows logical partitions (LPARs) to have
dedicated N_Port IDs, giving the OS a unique identity to the SAN,
just as if it had its own dedicated physical HBAs.
NPIV specifics
► PowerVM VIOS 2.1 – GA Nov 14
► NPIV support now has a planned GA of Dec 19
► Required software levels
– VIOS Fix Pack 20.1
– AIX 5.3 TL9 SP2
– AIX 6.1 TL2 SP2
– HMC 7.3.4
– FW Ex340_036
– Linux and IBM i planned for 2009
► Required HW
– POWER6 520, 550, 560, 570 only at this time; Blade planned for 2009
– 5735 PCIe 8Gb Fibre Channel Adapter
► Unique WWPN generation (allocated in pairs)
► Each virtual FC HBA has a unique and persistent identity
► Compatible with LPM (Live Partition Mobility)
► VIOS can support NPIV and vSCSI simultaneously
► Each physical NPIV-capable FC HBA will support 64 virtual ports
► HMC-managed and IVM-managed servers
Storage Virtualisation With NPIV
[Diagram: two panels contrasting the storage models.
– vSCSI: the VIO client sees generic SCSI disks over virtual SCSI adapters; the VIOS acts as the storage virtualiser in front of its physical FC, SCSI and SAS adapters, so the path code and devices differ from the SAN LUNs (IBM 4700, EMC 5000) and the VIOS admin is in charge.
– NPIV (VIOS 2.1): the VIO client sees the actual SAN LUNs (IBM 2105, EMC 5000) over virtual FC adapters; the VIOS runs in pass-through mode to an NPIV-enabled SAN and the SAN admin is back in charge.]
NPIV – What you need?
► New PCIe 8 Gbit Fibre Channel adapters (can run 2 or 4 Gbit)
► Entry SAN switch must be NPIV capable
► Disk sub-system does not need to be NPIV capable
► SAN fabric can be 2, 4 or 8 Gbit (not 1 Gbit)
► New EL340 firmware (disruptive)
► Client OS: AIX 5.3 TL09, AIX 6.1 TL02, SLES 10 SP2, RHEL 4.7 or RHEL 5.2
► VIOS 2.1
► HMC 7.3.4
► POWER6 only
► Supports SCSI-2 reserve/release and SCSI-3 persistent reserve
[Diagram: a VIO client with virtual FC adapters mapped through the VIOS physical FC adapters to SAN LUNs (IBM 4700, IBM 2105, EMC 5000).]
NPIV – What you do?
1. On the HMC 7.3.4, configure a Virtual FC Adapter
► Just like virtual SCSI
► On both the client and the server (Virtual I/O Server) partitions
► (An HMC command-line sketch follows below)
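The same adapter pair can also be defined from the HMC command line. A minimal sketch, assuming a managed system named POWER6-570, a VIOS partition vios1 (partition ID 1) and a client LPAR aix61_lpar (partition ID 2); all names, IDs and slot numbers are hypothetical, and the exact virtual_fc_adapters attribute format should be checked against your HMC release:
$ chsyscfg -r prof -m POWER6-570 -i \
  'name=default,lpar_name=aix61_lpar,"virtual_fc_adapters+=""4/client/1/vios1/14//1"""'
  # client VFC in slot 4, paired with server slot 14 in vios1; leaving the WWPN
  # field empty lets the hypervisor generate the WWPN pair
$ chsyscfg -r prof -m POWER6-570 -i \
  'name=default,lpar_name=vios1,"virtual_fc_adapters+=""14/server/2/aix61_lpar/4//0"""'
  # matching server VFC in the vios1 profile, pointing back at client slot 4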
NPIV – What you do?
2. Once created: LPAR Config → Manage Profiles → Edit → click the FC Adapter → Properties, and the WWPN is available.
(The same information can be read from the HMC command line; see the sketch below.)
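A minimal HMC command-line sketch, using the same hypothetical names as above, to list the virtual FC adapter definitions (including the generated WWPN pair) for the client profile:
$ lssyscfg -r prof -m POWER6-570 --filter "lpar_names=aix61_lpar" -F name,virtual_fc_adapters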
NPIV – What you do?
3. On the VIOS, connect the virtual FC adapter to the physical FC adapter
► With vfcmap
► lsmap -all -npiv
► lsnports → shows physical ports supporting NPIV
4. SAN zoning
► To allow the LPAR access to the LUN via the new WWPN
► Zone both WWPNs of the pair, and zone them on any Partition Mobility target as well
$ ioslevel
2.1.0.0
$ lsdev | grep FC
fcs0      Available   FC Adapter
fscsi0    Available   FC SCSI I/O Controller Protocol Device
vfchost0  Available   Virtual FC Server Adapter
$ vfcmap -vadapter vfchost0 -fcp fcs0
vfchost0 changed
$
(A client-side verification sketch follows.)
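Once the mapping and zoning are done, the new adapter can be verified from the AIX client LPAR. A minimal sketch; the device names (fcs0 and so on) are examples and will vary:
$ cfgmgr                                      # discover the new virtual FC adapter
$ lsdev -Cc adapter | grep fcs                # the VFC shows up as an fcsX device
$ lscfg -vl fcs0 | grep -i "network address"  # the WWPN presented to the SAN, for zoning / LUN masking
$ lsdev -Cc disk                              # after zoning and mapping, SAN LUNs appear as hdisks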
NPIV benefits
►NPIV allows storage administrators to use existing tools and techniques for storage management
►Solutions such as SAN managers, Copy Services, and backup/restore should work right out of the box
►Storage provisioning / ease of use
►Zoning / LUN masking
►Physical <-> virtual device compatibility
►Tape libraries
►SCSI-2 Reserve/Release and SCSI-3 Persistent Reserve
– clustered/distributed solutions
►Load balancing (active/active)
►Solutions enablement (HA, Oracle, …)
►Storage, multipathing, apps, monitoring, …
NPIV implementation
►Install the correct levels of VIOS, firmware, HMC, 8 Gb HBAs, and an NPIV-capable/enabled SAN and storage
►Virtual Fibre Channel adapters are created via the HMC
►The VIOS owns the server VFC; the client LPAR owns the client VFC
►Server and Client VFCs are mapped one-to-one with the
vfcmap command in the VIOS
► The POWER hypervisor generates WWPNs based on the range of names
available for use with the prefix in the vital product data on the managed
system.
► The hypervisor does not reuse the WWPNs that are assigned to the virtual
Fibre Channel client adapter on the client logical partition.
Things to consider
► A WWPN pair is generated EACH time you create a VFC. It is NEVER re-created or re-used, just like a real HBA.
► If you create a new VFC, you get a NEW pair of WWPNs.
► Save the partition profile with VFCs in it. Make a copy; don't delete a profile with a VFC in it.
► Make sure the partition profile is backed up for local and disaster recovery (see the sketch after this list)! Otherwise you'll have to create new VFCs and remap them during a recovery.
► The target storage subsystem must be zoned and visible from both source and destination systems for LPM to work.
► Active/passive storage controllers must BOTH be in the SAN zone for LPM to work.
► Do NOT include the VIOS physical 8 Gb adapter WWPNs in the zone.
► You should NOT see any NPIV LUNs in the VIOS.
► Load multi-path code in the client LPAR, NOT in the VIOS.
► Monitor VIOS CPU and memory; the NPIV impact is unclear to me at this time.
► No 'passthru' tunables in the VIOS.
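Profile data can be backed up from the HMC command line. A minimal sketch, assuming an HMC session against the hypothetical managed system POWER6-570; the backup file name is also just an example:
$ bkprofdata -m POWER6-570 -f npiv_profiles_backup   # save all partition profile data (including VFC definitions)
# rstprofdata restores the saved profile data during a recovery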
NPIV useful commands
► vfcmap -vadapter vfchostN -fcp fcsX
– maps the virtual FC adapter to the physical FC port
► vfcmap -vadapter vfchostN -fcp
– un-maps the virtual FC adapter from the physical FC port
► lsmap -all -npiv
– shows the mapping of virtual and physical adapters and current status
► lsmap -npiv -vadapter vfchostN
– shows the same for one VFC
► lsdev -dev vfchost*
– lists all available virtual Fibre Channel server adapters
► lsdev -dev fcs*
– lists all available physical Fibre Channel adapters
► lsdev -dev fcs* -vpd
– shows all physical FC adapter properties
► lsnports
– shows the NPIV readiness of the Fibre Channel adapter and the SAN switch
► lscfg -vl fcsX
– in the AIX client LPAR, shows virtual Fibre Channel properties
(A short usage sketch follows.)
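A minimal usage sketch on the VIOS tying these commands together; the adapter names are examples:
$ lsnports                              # fabric=1 means the port and the attached switch support NPIV
$ vfcmap -vadapter vfchost1 -fcp fcs1   # map a virtual FC server adapter to that port
$ lsmap -npiv -vadapter vfchost1        # status shows LOGGED_IN once the client adapter is active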
NPIV resources
►Redbooks:
SG24-7590-01 PowerVM Virtualization on IBM Power Systems (Volume 2):
Managing and Monitoring
SG24-7460-01 IBM PowerVM Live Partition Mobility
►VIOS latest info:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
Questions
BACKUP VIOS SLIDES
#5735 PCIe 8Gb Fibre Channel Adapter
► Supported on 520, 550, 560, 570, 575
► Dual-port adapter – each port provides a single initiator
– Automatically adjusts to SAN fabric speed: 8 Gbps, 4 Gbps or 2 Gbps
– LED on card indicates link speed
► Ports have LC-type connectors
– Cables are the responsibility of the customer
– Use multimode fibre optic cables with short-wave lasers:
● OM3 – multimode 50/125 micron fibre, 2000 MHz*km bandwidth: 2 Gb (0.5–500 m), 4 Gb (0.5–380 m), 8 Gb (0.5–150 m)
● OM2 – multimode 50/125 micron fibre, 500 MHz*km bandwidth: 2 Gb (0.5–300 m), 4 Gb (0.5–150 m), 8 Gb (0.5–50 m)
● OM1 – multimode 62.5/125 micron fibre, 200 MHz*km bandwidth: 2 Gb (0.5–150 m), 4 Gb (0.5–70 m), 8 Gb (0.5–21 m)
Virtual SCSI
► The client LPAR (i.e. virtual machine) is the SCSI initiator; the VIOS is the SCSI target
► The server LPAR owns the physical I/O resources
► The client LPAR sees standard SCSI devices and accesses LUNs via a virtual SCSI adapter
► The VIOS behaves as a standard storage subsystem
► The transport layer is the interpartition communication channel provided by PHYP (reliable message transport)
► SRP (SCSI RDMA Protocol)
► LRDMA (Logical Redirected DMA)
Virtual SCSI (continued)
► SCSI peripheral device types supported:
– Disk (backed by logical volume, physical volume, or file; see the mapping sketch after this list)
– Optical (backed by physical optical, or file)
► Adapter and device sharing
► Multiple I/O Servers per system, typically deployed in pairs
► VSCSI client support:
– AIX 5.3 or later
– Linux (SLES 9 or later, RHEL 3 U3 or later, RHEL 4 or later)
– IBM i
► Boot from VSCSI devices
► Multi-pathing for VSCSI devices
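On the VIOS, backing devices are mapped to a client's virtual SCSI adapter with mkvdev. A minimal sketch; the vhost0, hdisk10 and lv_client20 names are examples:
$ mkvdev -vdev hdisk10 -vadapter vhost0 -dev vtscsi_lpar1   # physical-volume-backed virtual disk
$ mkvdev -vdev lv_client20 -vadapter vhost0                 # logical-volume-backed virtual disk
$ lsmap -vadapter vhost0                                    # verify the new virtual target devices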
Basic vSCSI Client And Server Architecture Overview
[Diagram: one I/O Server owns the physical HBA and storage; several I/O clients each use a virtual client adapter that connects through the PHYP to a virtual server adapter in the I/O Server.]
vSCSI vs NPIV
The vSCSI model for sharing storage resources is that of a storage virtualizer. Heterogeneous storage is pooled by the VIOS into a homogeneous pool of block storage and then allocated to client LPARs in the form of generic SCSI LUNs. The VIOS performs SCSI emulation and acts as the SCSI target.
With NPIV, the VIOS's role is fundamentally different. The VIOS facilitates adapter sharing only; there is no device-level abstraction or emulation. Rather than a storage virtualizer, the VIOS serving NPIV is a passthru, providing an FCP connection from the client to the SAN.
[Diagram: with vSCSI, the VIO client sees generic SCSI disks served over SCSI by dual VIOS partitions whose FC HBAs attach to EMC and IBM 2105 LUNs in the SAN. With NPIV, the VIO client speaks FCP end-to-end and sees the EMC and IBM 2105 LUNs themselves through the dual VIOS partitions' FC HBAs.]
vSCSI
[Diagram: vSCSI software stack. In the AIX client, the LVM and a multipathing disk driver sit on a VSCSI HBA; requests cross the PHYP to a VSCSI target in each of two VIOS partitions, where a multipathing disk driver drives the physical fibre channel HBAs into the SAN.]
NPIV
[Diagram: NPIV software stack. In the AIX client, the LVM and a multipathing disk driver sit on a VFC HBA; traffic crosses the PHYP to a passthru module in each of two VIOS partitions, which maps many VFC HBAs directly onto the physical fibre channel HBAs into the SAN.]
NPIV – provisioning, managing, monitoring
[Diagram: several VIO clients, each with a vFC adapter pair and its own WWPN, connect through two NPIV-enabled VIOS partitions to an NPIV-enabled SAN. Against those WWPNs the SAN presents SVC, DS4000, DS6000, DS8000, HDS, EMC and NetApp storage as well as a tape library.]
Live Partition Mobility (LPM) and NPIV
[Diagram: on the source system, VIO clients with their WWPNs connect through NPIV-enabled VIOS partitions to an NPIV-enabled SAN; the destination system has a matching pair of NPIV-enabled VIOS partitions on the same SAN, so a mobile client keeps its storage identity when it moves.]
• WWPNs are allocated in pairs; both must be zoned so the client can also log in from the Partition Mobility target.
Heterogeneous multipathing
[Diagram: an AIX client LPAR reaches LUNs A, B, C and D over two paths: one through the VIOS#1 passthru module and its NPIV-capable fibre HBA, and one through a second fibre HBA, each path running via its own SAN switch to the same storage controller.]
VIOS block diagram (vSCSI and NPIV) – POWER Server
[Diagram: the VIOS block-virtualization layer (LVM, filesystems, multi-pathing) sits between the physical adapters (FC/NPIV, SCSI, iSCSI, SAS, USB, SATA) and the client LPARs. vSCSI devices (SCSI LUNs) can be backed by a file, a logical volume, a multipathing device, or a physical peripheral device (disk, optical, virtual tape). For NPIV, a passthru module presents NPIV ports directly to the LPARs.]
vSCSI basics – POWER Server
[Diagram: the VIOS vSCSI target performs SCSI emulation across the PHYP for client LPARs (AIX, Linux, or i5/OS). Physical storage can be Fibre Channel, iSCSI, SAS, SCSI, USB or SATA. Example physical-to-virtual (p2v) mapping devices:]
► File-backed disk storage pool (/var/vios/storagepools/pool_name)
– a1: /var/vios/storagepools/pool1/foo1
► Virtual optical media repository (/var/vios/VMLibrary); see the sketch after this list
– a2: /var/vios/VMLibrary/foo2.iso
► Logical volume storage pool (/dev/VG_name)
– b1: /dev/storagepool_VG/lv_client12
► Physical device backed devices (/dev)
– b2: /dev/hdisk10
– b3: /dev/lv_client20
– b4: /dev/powerpath0
– b5: /dev/cd0
– b6: /dev/sas0
► NPIV (/dev)
– c1: /dev/fscsi0 <-> WWPN
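The virtual media repository and file-backed optical devices shown above are created on the VIOS with mkrep, mkvopt, mkvdev and loadopt. A minimal sketch; the pool, file and adapter names are examples, and the flags should be checked against your VIOS 2.1 documentation:
$ mkrep -sp rootvg -size 10G                          # create the media repository (/var/vios/VMLibrary)
$ mkvopt -name foo2 -file /home/padmin/foo2.iso -ro   # import an ISO image into the repository
$ mkvdev -fbo -vadapter vhost0                        # create a file-backed optical device (e.g. vtopt0)
$ loadopt -vtd vtopt0 -disk foo2                      # load the ISO into the client's virtual optical drive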
Data flow using LRDMA for vSCSI devices
[Diagram: the vscsi initiator in the client and the vscsi target in the I/O server exchange control information through the PHYP; with Logical Redirected DMA, the physical adapter driver moves data directly between the PCI adapter and the client's data buffer, avoiding an extra copy through the I/O server.]
VSCSI redundancy using multipathing at the client
[Diagram: the AIX client runs MPIO with its disk driver over two vscsi initiators; each initiator connects through the PHYP to a vscsi target in a separate I/O Server, and both I/O Servers reach the same SAN LUNs, so either VIOS can be taken down without losing access. A client path-check sketch follows.]
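A minimal check on the AIX client that both vscsi paths are present; hdisk0 is just an example device:
$ lspath -l hdisk0                                    # expect one Enabled path per VIOS (e.g. via vscsi0 and vscsi1)
$ lsattr -El hdisk0 -a algorithm -a hcheck_interval   # vSCSI MPIO uses fail_over; hcheck_interval enables path health checks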
Direct attach fibre channel block diagram
[Diagram: AIX attaches to storage directly. The generic disk driver (SCSI initiator) sits on the fibre channel HBA device driver, which drives the physical FC HBA; data moves between the adapter and the data buffer with no VIOS in the path.]
NPIV block diagram
[Diagram: in the AIX client, the generic disk driver (SCSI initiator) sits on the VFC client adapter; the VIOS passthru module connects the VFC client across the PHYP to the fibre channel HBA device driver and physical FC HBA, and data flows between the adapter and the client's data buffer.]
Testing VIOS – System p/i Server
[Diagram: a System p/i (POWER5) server with AIX and Linux logical partitions on the POWER Hypervisor. LUNs A1–A8 on external storage (e.g. a DS8K) are presented to the LPARs either through their own physical fibre channel HBAs or as Virtual SCSI disks served by a VIOS that owns physical fibre channel HBAs.]
Available via the optional Advanced POWER Virtualization feature or POWER Hypervisor and VIOS features.