ds8870 ats configuration and best practices - zexpertenforum
TRANSCRIPT
© 2014 IBM Corporation
sTU17 - DS8870 ATS Configuration and Best Practices
Charlie Burger – Certified I/T Storage Specialist
19 May 2014
Technical Product Deep Dives - Accelerate Storage Webinars
The Free IBM Storage Technical Webinar Series continues in 2014... IBM technical experts cover a variety of storage topics. Audience: clients who either currently have IBM Storage products or are considering acquiring them. Business Partners and IBMers are also welcome.
How to sign up? To automatically receive announcements of the Accelerate with ATS Storage webinar series, Clients, Business Partners or IBMers can send an email to [email protected]. Schedules and archives are located in the Accelerate Blog:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/accelerate/?lang=en
Upcoming webinars:
– May 28th, 12:00 ET / 9:00 PT – Accelerate with ATS: OpenStack – Storage in the Open Cloud Ecosystem
– June 26th, 12:00 ET / 9:00 PT – Accelerate with ATS: DS8000 Announcement Update
– Aug 21st, 12:00 ET / 9:00 PT – Accelerate with ATS: XIV Announcement Update
In this customer seminar, we'll discuss real-world data replication and business continuity scenarios, including live demonstrations of market-leading IBM storage management tools. This 2-day class introduces IBM System Storage FlashCopy®, Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror and z/OS Global Mirror concepts, usage and benefits. Included is a discussion of the benefits and usage of IBM System Storage Easy Tier, an exciting solution for cost-effective implementation of Solid State Drives (SSDs). Advanced features of the DS8870 such as High Performance FICON, Parallel Access Volumes, and HyperSwap will also be discussed. The purpose of this client seminar is to assist IBM and Business Partner teams in progressing DS8870 opportunities closer to wins by having experts highlight the technical advantages of the business continuity and other advanced functions of the DS8870. If your client has older DS8870 equipment, competitively installed disk, or perhaps a footprint or two of DS8870s with the opportunity for additional DS8870s or to add advanced features or SSDs, consider nominating them for this seminar. Seating is limited to a maximum of 37 attendees.
Nominate your clients here:
https://www-950.ibm.com/events/wwe/grp/grp100.nsf/v16_events?openform&lp=nominations&locale=en_US
Workshop client nomination questions contact: Elizabeth Palmer [email protected]
Workshop content questions contact: Craig Gordon [email protected]
ATS DS8870 Advanced Functions Customer Workshop
DS8K AF Seminar
Atlanta 6/18-6/19
Topics
▪ DS8000 Performance Considerations
▪ DS8000 Subsystem Storage Hierarchy
▪ Defining Extent Pools on DS8000
▪ Logical Configuration Considerations
▪ Summary
▪ I/O Priority Manager
▪ Addendum
– Logical Configuration of CKD Volumes Using DS CLI
– DS8000 Volumes with z/OS
– References
DS8870 Performance Considerations
DS8870 Hardware performance considerations
▪ Front-end performance considerations
– I/O ports
– Host adapters (HAs)
– I/O enclosures
– Servers (processors and memory)
▪ Backend performance considerations
– Ranks (and array type)
– Device Adapter (DA) pairs
– Servers (processors and memory)
POWER Processors
▪ Based on POWER7+ server technology
– Two POWER7+ servers installed in the base frame
• Each POWER7+ server contains 2, 4, 8 or 16 cores
• POWER7+ processors operate at 4.228 GHz
– I/O connectivity is via PCI Express
[Diagram: processor socket population for the 2-core, 4-core, 8-core and 16-core configurations, showing active and inactive cores; unused sockets remain empty.]
DS8870 Memory considerations
▪ Processor memory
– System memory (“Cache”)
• Primarily affects read performance
– Persistent memory (“NVS”)
• Primarily affects write performance
• DS8000 persistent memory scales with processor memory size
• Approximately 1/32 of total system memory
Processor Memory (GB)   Persistent Memory
16/32                   1 GB
64                      2 GB
128                     4 GB
256                     8 GB
512                     16 GB
1024                    32 GB
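The scaling rule and table above can be sketched as follows. This is a rough approximation for illustration only, not the exact DS8870 firmware rule: NVS is about 1/32 of total system memory, with a 1 GB floor for the smallest configurations.

```python
# Approximate sketch (assumption, not the exact firmware rule):
# persistent memory (NVS) scales as ~1/32 of system memory, with a
# 1 GB floor for the 16/32 GB configurations.

def persistent_memory_gb(system_memory_gb):
    """Approximate NVS size in GB for a given system memory size in GB."""
    return max(1, system_memory_gb // 32)

# Reproduces the table: 64 -> 2, 128 -> 4, 256 -> 8, 512 -> 16, 1024 -> 32
```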
Array considerations
▪ RAID type
– RAID5 and RAID10 arrays perform equally for read
– RAID5 array performs better for sequential write
– RAID10 array performs better for random write
▪ Sparing
– Arrays without spares may mean better potential random performance
– Array capacity should be confirmed after array creation
– Array types (with spares and without spares) should be balanced across server0 and server1 (when ranks are assigned to extent pools)
– DS8000 – minimum of 4 spares per DA pair (64 or 128 disks) (single disk type)
Port Assignment Best Practices
▪ Isolate host connections from remote copy connections (MM, GM, zGM, GC and MGM) on a host adapter basis
▪ Isolate CKD host connections from FB host connections on a host adapter basis
▪ Don’t share a mainframe CHPID with tape and disk devices
– Keep large-block and small-block transfers isolated to different host adapter cards
▪ Always have symmetric pathing by connection type (i.e., use the same number of paths on all host adapters used by each connection type). For z/OS, all path groups should be symmetric (i.e., a uniform number of ports per HA); spread path groups as widely as possible across all CKD host adapters
▪ When possible, use the same number of host adapter connections (especially for System z) as the number of connections coming from the hosts
▪ Always have additional resources in case of failure
– Plan for HBA or even enclosure failure
Port Assignment Best Practices (continued)
▪ When using the 2 Gb and 4 Gb host adapters, avoid using adjacent ports, especially for different workloads, if you cannot separate by host adapter
▪ Consider using 8-port 8 Gb/second host adapters for connectivity; use 4-port 8 Gb/second host adapters to meet maximum bandwidth requirements
▪ Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload)
▪ Sharing different connection types within an I/O enclosure is encouraged
▪ When possible, isolate asynchronous from synchronous copy connections on a host adapter basis
▪ For replication (Metro Mirror and/or Global Mirror), assign even and odd LSS/LCUs to different ports; for example, odd-numbered ports for odd LSSs and even-numbered ports for even LSSs
▪ When utilizing multipathing, zone ports from different I/O enclosures to provide redundancy and balance (i.e., include a port from a host adapter in enclosure 0 and in enclosure 1)
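The even/odd replication-port guideline above can be sketched as a small helper. This is purely illustrative (not a DS CLI command); the function name and port numbering are assumptions.

```python
# Hypothetical sketch (not a DS CLI utility): pair replication LSSs with
# I/O ports of matching parity, so odd LSSs use odd-numbered ports and
# even LSSs use even-numbered ports.

def assign_replication_ports(lss_ids, port_numbers):
    """Map each LSS (hex string, e.g. '2A') to ports of the same parity."""
    plan = {}
    for lss in lss_ids:
        parity = int(lss, 16) % 2          # 0 = even LSS, 1 = odd LSS
        plan[lss] = [p for p in port_numbers if p % 2 == parity]
    return plan

plan = assign_replication_ports(["10", "11"], [0, 1, 2, 3])
# LSS 0x10 (even) -> ports [0, 2]; LSS 0x11 (odd) -> ports [1, 3]
```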
Port / Host Adapter Utilization Guidelines
Metric                                             Green    Amber     Red
FICON Host Adapter Utilization Percent             < 35%    35-60%    > 60%
FICON Port Utilization Percent                     < 35%    35-50%    > 50%
Fibre Host Adapter Utilization Percent             < 60%    60-80%    > 80%
Fibre Port Utilization Percent                     < 60%    60-80%    > 80%
Metro and Global Mirror Link Utilization Percent   < 60%    60-80%    > 80%
zGM (XRC) Link Utilization Percent                 < 60%    60-80%    > 80%
Best practice is to keep utilization below these levels
DS8870 Configuration options
Processors   System Memory (GB)   DA Pairs / Drives (max)   Host Adapters (max)   Expansion Frames

Business Class
2-core       16 / 32              1 / 144                   4                     NA
4-core       64                   2 / 240                   8                     NA
8-core       128 / 256            6 / 1,056                 16                    0-2
16-core      512 / 1024           6 / 1,056                 16                    0-2

Enterprise Class
2-core       16 / 32              2 / 144                   4                     NA
4-core       64                   4 / 240                   8                     NA
8-core       128                  8 / 1,056                 16                    0-2
8-core       256                  8 / 1,536                 16                    0-3
16-core      512 / 1024           8 / 1,536                 16                    0-3
DS8870 Subsystem Storage Hierarchy
Storage subsystem hierarchy
▪ Storage Complex
– One or multiple physical storage subsystems
– Central management point with Network Server
• DS8000™ Hardware Management Console (HMC)
▪ Storage Unit (storage facility)
– Single physical storage subsystem
▪ Storage Image
– Single logical storage subsystem
• Same as the physical subsystem for 2107 921/931, 922/932, 941/94E, 951/951E, 961/96E
▪ Server – Server0 and Server1
– Manage extent pools and volumes
• Even numbers managed by Server0
• Odd numbers managed by Server1
DS8870 Array site
▪ Logical grouping of disks
– Same capacity and speed
▪ Created and assigned to a DA pair by software during installation
▪ DS8000 array site
– 8 disks (DDMs)
Array
▪ DS8000 – one 8-DDM array = 1 array site
– RAID5
• 6+P
• 7+P
• Parity is striped across all disks in the array but consumes capacity equivalent to one disk
• RAID5 array performs better for sequential write
– RAID6
• 6+P+Q
• 5+P+Q+Spare
– RAID10
• 3+3
• 4+4
• RAID10 array performs better for random write
• RAID5 and RAID10 arrays perform equally for read
▪ Sparing
– Arrays without spares may mean better potential random performance
– Array capacity should be confirmed after array creation
– Array types (with spares and without spares) should be balanced across server0 and server1
– Minimum of 4 spares per DA pair (64 or 128 disks) (single disk type)
[Diagram: array layouts showing data (D), parity (P), second parity (Q) and spare (S) disks for RAID5 6+P+S, RAID5 7+P, RAID10 3+3+S+S, RAID10 4+4, RAID6 6+P+Q and RAID6 5+P+Q+S.]
Rank
▪ RAID array with CKD or FB storage type defined
▪ One-to-one relationship between an array and a rank
– One RAID array becomes one rank (DS8000 – 8 DDMs)
▪ A rank has no relationship to server0 or server1 until it has been assigned to an extent pool
– Rank ID (Rx) does not indicate a server association unless specifically configured to do so
– Ranks should be assigned to server0 and server1 extent pools in a balanced manner
• Ranks built on arrays containing spares should be balanced across server0 and server1 extent pools
• DS8000 ranks built on array sites associated with each Device Adapter should be balanced across server0 and server1 extent pools
▪ A rank has no relationship to Logical Subsystems (LSSs)
▪ Ranks are divided into ‘extents’
– Units of space for volume creation
– CKD rank extents are equivalent to a 3390 Model 1 (1113 cylinders, or 0.94 GB)
– FB rank extents are 1 GB
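The extent arithmetic above is easy to check with a back-of-the-envelope helper. This is an illustrative sketch (not a DS8000 tool): CKD extents are 1113 cylinders (one 3390 Model 1), so a volume's extent count is its cylinder count rounded up to the next multiple of 1113.

```python
import math

# Illustrative sketch of the CKD extent sizing above: extents are
# 1113 cylinders each (one 3390 Model 1 equivalent).

CKD_EXTENT_CYLS = 1113

def ckd_extents(cylinders):
    """Extents consumed by a CKD volume of the given cylinder count."""
    return math.ceil(cylinders / CKD_EXTENT_CYLS)

# A 3390 Model 3 (3,339 cylinders) is exactly 3 extents;
# a 30,051-cylinder custom volume is exactly 27 extents.
```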
[Diagram: a RAID5 7+P array forming a DS8870 CKD rank.]
Extent pool
▪ Logical grouping of extents from one or more ranks, from which volumes will be created
– Ranks are assigned to extent pools (one or more ranks per pool)
– An extent pool holds one storage type (CKD or FB)
– LUN size is not limited to rank size
• If more than one rank is assigned to the pool
▪ User-assigned to Server0 or Server1
– Extent Pool ID ‘Px’
• If x is even, the pool is assigned to Server0 and will support even-numbered LSSs
• If x is odd, the pool is assigned to Server1 and will support odd-numbered LSSs
Extent pool (continued)
▪ Easy Tier manages extents in the extent pool for performance
▪ “Rotate Extents” (“Storage Pool Striping”) was introduced with R3
– Preferred method of volume allocation, and the default with R6
– “Rotate Volumes” method is still available
– Explicitly specify the method to ensure you invoke the one you want
▪ An extent pool can have a user nickname
▪ Activity should be balanced across both servers
▪ Minimum of 2 extent pools is required to utilize system resources
▪ Maximum number of ranks in a single pool should be 1/2 the total number of ranks, to balance system resources
– Maximum of 8 ranks for an extent pool with track space efficient volumes backed by repositories
▪ Minimum of 4 ranks
Defining Extent Pools on a DS8870
Or Why Not?
Large extent pools are easier to manage and make life easier.
Considerations/Observations
▪ Without Easy Tier
– SATA isolated on DA 2
• Could result in contention or overrun
– Do not mix SATA and FC in the same extent pool
• 4 extent pools
– Do not do frequent full-box FlashCopy of volumes on FC to targets on SATA
• Better to use larger FC drives to spread the background copy over all ranks
▪ With Easy Tier
– A mix of SATA, FC and SSD in the same extent pool has benefits
Easy Tier
▪ Validate Easy Tier prerequisites
– Easy Tier no-charge feature (#7083) ordered and installed
• Required for data relocation, including auto-rebalance capabilities
• Not required for monitoring only
– DS8000 LIC versions provide different tiering capabilities
• Two tiers (R5.1/R6.1), three tiers (with R6.2), encryption support across three tiers (with R6.3 on DS8800)
– Space efficient considerations
• Extent Space Efficient (ESE) supported with R6.2 for Easy Tier, and with R6.3 for Copy Services (FB only)
▪ Determine the mode of implementation
– Automatic, Monitor, Manual
DS8000 Easy Tier Releases at a Glance
▪ Easy Tier V1 (DS8700 R5.1)
– Automated cross-tier performance management for SSD/HDD hybrid pools
– Manual mode management support for dynamic extent pool merge and dynamic volume relocation
▪ Easy Tier V2 (DS8700/DS8800 R6.1)
– Automated cross-tier performance or storage economics management for hybrid pools with any 2 tiers (SSD/ENT, SSD/NL or ENT/NL)
– Automated intra-tier performance management (auto-rebalance) in hybrid pools
– Manual mode management support for rank depopulation and optimized volume restriping within non-managed pools (manual volume rebalance)
▪ Easy Tier V3 (DS8800/DS8700 R6.2)
– Automated cross-tier performance and storage economics management for hybrid pools with 3 tiers (SSD/ENT/NL)
– Automated intra-tier performance management in both hybrid (multi-tier) and homogeneous (single-tier) pools (auto-rebalance)
– Thin Provisioning support for Extent Space Efficient (ESE) volumes
▪ Easy Tier V4 (DS8800/DS8700 R6.3 and DS8870 R7.0)
– Support for encryption capable environments
▪ Easy Tier V5 (DS8870 R7.1)
– Easy Tier Server
– Easy Tier Application
– Easy Tier Heat Map Transfer Utility
▪ Easy Tier reporting (DS8870 R7.2)
SSD Implementation Considerations
▪ Drives of different capacities and speeds cannot be intermixed in a storage enclosure pair
– True for SSDs as well
– RPQ available to support drive intermix
▪ DS8870 can have up to 48 SSDs per DA pair
– Maximum of 384 SSD drives spread over eight DA pairs
– Recommended maximum of 16 SSDs per DA pair for optimal SSD performance
– Performance of the DA pair can support an intermix of SSD/HDD
• Each DS8870 DA pair is capable of delivering 3200 MB/s for read and 2200 MB/s for write (see the DS8870 performance white paper)
▪ RAID5 is the only supported implementation for SSDs
– RAID10 is supported through an RPQ
▪ SSD drives are installed in default locations by manufacturing
– First storage enclosure pair on each device adapter pair
• Spreads the SSDs over as many DA pairs as possible to achieve an optimal price-to-performance ratio
▪ When adding (MES) SSDs to an existing configuration
– No technical need to physically move HDDs to isolate SSDs
Easy Tier considerations
▪ Easy Tier operates at a pool level
– FB and CKD are supported by Easy Tier
▪ Utilize as few extent pools as possible
– Minimum of two extent pools
▪ OK to mix 6+P and 7+P in the same pool
▪ Utilize Rotate Extents (storage pool striping)
▪ Recommend installing SSD and SATA/NL-SAS in groups of 16 to balance capacity across both pools
▪ All volumes in a hybrid/merged extent pool will have I/O monitoring active
– Any extents on these volumes determined to be active (or inactive) will be migrated to or from SSD and HDD storage
▪ If possible, use the sizing tools to determine the appropriate capacity
– FlashDA, STAT, Disk Magic, IBM i DB2 for i media preference, or the IBM i ASP balancer
– Or use typical I/O skew to determine the appropriate capacity
Easy Tier considerations (continued)
▪ Ensure rank capacity utilization leaves at least 10 free extents per rank
– Do not allocate 100% of the capacity in a managed extent pool to volumes, or Easy Tier management and extent migrations are effectively halted
– As long as free extents exist, Easy Tier will ensure balance across the ranks in the pool
▪ Exploit Easy Tier enhancements
– Two/three tier support, including encryption with R6.3 on DS8800
– Auto-rebalance
– Rank depopulation
– Backend DA and rank utilization statistics are utilized for the migration plan
DS8870 Logical Subsystems/Logical Control Units
▪ Similar to ESS:
– LCU is used for CKD and LSS for FB, but they are the same concept
– An LCU/LSS has a maximum of 256 addresses
– The LCU/LSS is the basis for Copy Services paths and consistency groups
– For open systems, LSSs do not directly affect application performance
– For System z, more LCUs will provide additional addresses for PAVs, which can improve performance
• Aliases (Parallel Access Volumes/PAVs) are shared within an LCU
• Each device, including an alias, has a Unit Control Block (UCB)
› Purpose is similar to open systems I/O queue depth
– Logical Subsystem ID ‘xy’
• x indicates the ‘address group’
› An address group is a pre-determined set of 16 LCUs/LSSs (x0-xF) of the same storage type (all CKD or all FB)
• y indicates the server assignment
› If y is even, the LSS is available with Server0 extent pools
› If y is odd, the LSS is available with Server1 extent pools
▪ DS8870:
– An LCU/LSS does not have a pre-determined relationship to a rank or DA pair
– Up to 255 LCUs/LSSs are available
– FB LSSs are automatically created during LUN creation
• e.g., creation of volume 1000 results in creation of LSS 10
– CKD LSSs are explicitly defined
• Allows specification of LCU/LSS type and SSID
Volume/LUN
▪ Created from extents in one extent pool
▪ Volumes/LUNs can be larger than the size of a rank (if multiple ranks are in one extent pool)
– DS8000 was introduced with a CKD max size of 64K cylinders, or 56 GB (with appropriate software support)
– DS8000 with R3.1 has a CKD max size of 262,668 cylinders, or 223 GB (with appropriate software support)
– FB max size is 2 TB
▪ Volumes/LUNs can be presented to the host server in cylinder, 100 MB or block granularity
– Space is allocated in 1 GB extents (FB) or 1113-cylinder extents (CKD)
▪ Volume ID
– User specifies a 4-digit hex volume ID which includes the address group, LCU/LSS and device ID: ‘xyzz’
› x = address group
› xy = LCU/LSS
» Even LCU/LSSs are available for Server0 extent pools
» Odd LCU/LSSs are available for Server1 extent pools
› zz = device ID
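The ‘xyzz’ breakdown above can be sketched as a small parser. This is an illustrative helper (not a DS8000 utility); the function name and return shape are assumptions, but the field decoding follows the rules just described.

```python
# Illustrative parser (assumption, not a DS8000 tool) for the 4-digit
# hex volume ID 'xyzz': x = address group, xy = LCU/LSS, zz = device ID.
# Even LSSs belong with Server0 extent pools, odd with Server1.

def parse_volume_id(vol_id):
    if len(vol_id) != 4 or not all(c in "0123456789abcdefABCDEF" for c in vol_id):
        raise ValueError("volume ID must be 4 hex digits, e.g. '2A10'")
    lss = int(vol_id[:2], 16)             # 'xy' -> LCU/LSS number
    return {
        "address_group": int(vol_id[0], 16),
        "lss": lss,
        "device": int(vol_id[2:], 16),
        "server": f"Server{lss % 2}",     # parity of y decides the server
    }

# parse_volume_id("2A10") -> LSS 0x2A (even) on Server0, device 0x10
```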
DS8870 “Rotate Volumes”
▪ A single volume is created from extents on one rank, if possible
▪ In a single-rank extent pool, multiple volumes will be created sequentially on the rank
▪ A single volume may ‘spill’ across ranks in the pool, or may be larger than the size of a single rank
▪ In a multiple-rank extent pool, the current implementation places multiple volumes on the rank with the most free extents
▪ Volumes may be dynamically deleted and their extents reused
▪ Possible performance degradation due to “hot spots” and limited resources per volume
DS8870 “Storage Pool Striping”
[Diagram: an extent pool with 3 ranks, showing a 7 GB volume’s extents 1-7 allocated round-robin across the ranks.]
• Preferred algorithm choice for volume creation with R3
• Naming
– Marketing material: “Storage Pool Striping”
– DS CLI & DS Storage Manager: “Rotate Extents”
• Volumes are created by allocating one extent at a time from the available ranks in an extent pool, in round-robin fashion
– A 7 GB volume in a 3-rank pool thus takes one extent from each rank in turn, wrapping around until seven extents are allocated
• CKD and Fixed Block
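The round-robin behavior described above (including the skip-full-ranks and next-volume-starts-on-the-next-rank rules from the following slide) can be sketched as follows. This is an illustrative model only, not the DS8870 microcode; rank names and capacities are hypothetical.

```python
# Illustrative sketch (not DS8870 microcode) of "Rotate Extents":
# extents are allocated round-robin across the ranks of an extent pool,
# skipping ranks with no free extents.

def rotate_extents(pool, volume_extents, start=0):
    """Allocate `volume_extents` extents round-robin over `pool`.

    `pool` maps rank name -> free extents. Returns the allocation order
    and the rotation index where the next volume should start.
    """
    ranks = list(pool)
    order = []
    i = start
    while len(order) < volume_extents:
        if all(pool[r] == 0 for r in ranks):
            raise RuntimeError("extent pool is full")
        r = ranks[i % len(ranks)]
        if pool[r] > 0:                # skip ranks that ran out of extents
            pool[r] -= 1
            order.append(r)
        i += 1
    return order, i % len(ranks)

pool = {"R0": 3, "R1": 3, "R2": 3}     # hypothetical 3-rank pool
order, nxt = rotate_extents(pool, 7)   # a 7-extent (7 GB FB) volume
# order -> R0,R1,R2,R0,R1,R2,R0; the next volume would start on R1
```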
Storage Pool Striping - Characteristics
▪ The next volume will be started from an extent on the next rank in the round-robin rotation
▪ If a rank runs out of extents, it is skipped
▪ Multiple volume allocations will not start on the same rank
– If many volumes are created with a single command, the volumes will not start on the same rank
▪ Supports new volume expansion capabilities
Storage Pool Striping – Considerations & Recommendations (1)
▪ Deleting volumes creates free extents for future volume allocation
▪ Volumes can be restriped (“reorg”) with Easy Tier manual mode
▪ If ranks are added to an extent pool, “reorg” the volumes using Easy Tier
▪ Do not mix striped and non-striped volumes in the same multi-rank extent pool
Storage Pool Striping - Advantages
▪ Technical advantages
– A method to distribute I/O load across multiple ranks
– DS8000 optimized for performance
– Far less special tuning is required for high-performance data placement, which reduces the storage administrator work needed to optimize performance
Logical Volume Considerations
▪ Volume/LUN size
– Volume size does not necessarily affect performance
– For open systems, for a given amount of capacity, choose a volume size small enough to allow volumes to be spread appropriately across all ranks available to an application workload
– For System z, larger volumes may require more aliases (PAVs)
▪ Volume/LUN placement
– Logical volume placement on ranks, DAs and servers (server0 and server1) has an effect on performance
• More drives per volume will improve performance
– Logical volumes for each application workload should be allocated according to isolation, resource-sharing and spreading principles
Volume Virtualization Overview
▪ Standard logical volumes
▪ Track Space Efficient (TSE) logical volumes
– Used with Space Efficient FlashCopy
▪ Extent Space Efficient (ESE) logical volumes
– Used with thin provisioning
Standard Logical Volume (LV)
[Diagram: a standard LV’s extents mapped to real extents on a real rank within an extent pool; each extent holds extent data plus extent metadata (M).]
▪ A standard LV consists of 1 to N real extents
▪ Each extent contains extent data and extent metadata
▪ Each LV extent is mapped to a real extent on a real rank
▪ All extents allocated to an LV come from one extent pool
▪ FB extent = 1024 MB extent data
– Less 4.5 MB extent metadata
▪ CKD extent = 1113 cylinders extent data
– Less 64 cylinders extent metadata
TSE and ESE
▪ TSE
– Storage allocated in track increments
– Storage obtained from a repository
• The repository uses extents obtained from the extent pool
– Intended to be used as FlashCopy targets
▪ ESE
– Storage allocated in extent increments
– Storage obtained directly from the extent pool
– Currently supports FB only
– Copy Services support for ESE volumes
• R6.2: FlashCopy
• R6.3: Metro Mirror, Global Copy and Metro/Global Mirror
Logical Configuration Considerations
Principles of DS Performance Optimization
▪ Allocation of logical volumes and host connections for an application workload
– Isolation
– Spreading
▪ These principles are described in detail in Chapter 4 (4.1-4.3) of
– IBM TotalStorage DS8000 Series: Performance Monitoring and Tuning, SG24-7146
Workload Isolation
▪ Dedicating a subset of hardware resources to one workload
– Ranks
– I/O ports
– …
▪ Logical volumes/LUNs and host connections for the workload are isolated to the dedicated resources
▪ Provides an increased probability of consistent response time for an important workload, but…
– Maximum potential performance is limited to the set of dedicated resources
– Contention is still possible for any resources which are not dedicated (e.g. processor, cache, persistent memory)
Workload Isolation (2)
▪ Can prevent less important workloads with high I/O demands from impacting more important workloads
– It may be acceptable for multiple less important, I/O-intensive workloads to contend with each other on a single set of shared resources (isolated from other workloads)
▪ A good approach if workload experience, analysis or modeling identifies:
– A workload which tends to consume 100% of the resources available
– A workload which is much more important than other workloads
– Conflicting I/O demands among workloads
▪ DA-level isolation may be appropriate for large-blocksize, heavy sequential workloads
Workload Spreading (2)
▪ Host connections
– Host connections for a workload are spread across:
• I/O ports
• Host adapters
• I/O enclosures
• Server0 and Server1
› Left side I/O enclosures and right side I/O enclosures
• New host connections are allocated on the least-used shared resources
– For optimal performance:
• Use multiple paths
› Configure ports on an even I/O bay and an odd I/O bay
• Do not use all of the ports on an HA (bandwidth)
• Do not mix PPRC links with host connections on the same HA, to avoid contention
– Use multipathing software
Easy DS8870 Configuration
▪ Extent pool planning decisions
– Utilize as few extent pools as possible
– Pairs of pools (1/2 for DS8000 Server0, 1/2 for DS8000 Server1, for balance)
– Separate pairs of pools for:
• z/OS (Count Key Data) and open systems (Fixed Block)
• Disk drive classes (type, size, speed), unless Easy Tier is being used
• RAID type (RAID5, RAID6, RAID10), unless Easy Tier is being used
• Standard (full provisioning) or Extent Space Efficient (thin provisioning) storage allocation method
• Easy Tier
▪ Volume planning decisions
– Storage allocation method
› Standard (full provisioning), Extent Space Efficient (thin provisioning) or Track Space Efficient (FlashCopy targets)
– Extent allocation method
› Rotate Extents recommended over Rotate Volumes, except for Oracle ASM, DB2 Balanced Configuration Unit, and small, hot volumes
– z/OS Logical Control Unit (LCU) and open systems Logical Subsystem ID – Copy Services consistency groups
– z/OS LCU and Subsystem ID – z/OS system definitions
Easy DS8870 Configuration (2)
▪ z/OS storage
1. Configure DS8000 I/O ports for z/OS access
2. Create extent pools
3. Create Logical Control Units (LCUs), Count Key Data (CKD) volumes and Parallel Access Volume (PAV) aliases
▪ Open systems storage
1. Configure DS8000 I/O ports for open systems access
2. Create extent pools
3. Create the DS8000 host definition and DS8000 volume group (LUN masking)
4. Create Fixed Block volumes
Storage Resource Summary
▪ Disk
– Individual DDMs
▪ Array sites
– Pre-determined grouping of DDMs of the same speed and capacity (8 DDMs for DS8000)
▪ Arrays
– One 8-DDM array site is used to construct one RAID array (RAID5, RAID6 or RAID10)
▪ Ranks
– One array forms one CKD or FB rank
– No fixed, pre-determined relation to an LSS
▪ Extent pools
– 1 or more ranks of a single storage type (CKD or FB)
– Assigned to Server0 or Server1
Storage Resource Summary (continued)
▪ Volumes or LUNs
– Made up of extents from one extent pool
– Minimum allocation is one extent: 1 GB (FB) or Mod 1 (CKD)
– Maximum size is 2 TB (FB); 223 GB (CKD)
• Can be larger than 1 rank if more than 1 rank is in the pool
– Associated with an LCU/LSS during configuration
• Available LSSs are determined by the extent pool’s server affinity
– Can be individually deleted
▪ Open Systems Volume Group
– Contains LUNs and host attachments (FB LUN masking)
– One host attachment (one port or port group) can be a member of only one volume group
– One volume can be a member of multiple volume groups
– Multiple hosts can be contained in a single volume group
[Diagram: AIX host ports and an iSeries host port group mapped to volume groups containing FB LUNs.]
Recommendations
▪ Use the GUI for configuration
▪ Create extent pools using storage pool striping with multiple ranks in the pool, and use Easy Tier
▪ Balance extent pools and ranks across servers
– An extent pool for each server
▪ Use a limited number of device types for ease of management
– Without Easy Tier, use separate pools for DDMs of different sizes
▪ Use large FC DDMs and large extent pools
▪ Use custom volumes that are an even multiple of extents (CKD: 1113-cylinder extents; FB: 1 GB extents)
– 3390 Mod 3
– 3390 Mod 9
– 30051 cylinders
– 60102 cylinders
▪ Use PAVs to allow concurrent access to base volumes for System z
– Preferably HyperPAV
• z/OS, z/VM and Linux on System z
I/O Priority Manager Considerations
DS8870 I/O Priority Manager
▪ Application-level Quality of Service (QoS)
– The objective is to protect high-importance workloads from poor response times caused by low-importance workloads saturating the disk resources
– Provides mechanisms to manage quality of service for I/O operations associated with critical workloads, and to give them priority over other I/O operations associated with non-critical workloads
– Adaptive, based on workload and contention for resources in the storage subsystem
▪ I/O Priority Manager provides volume-level support
▪ Monitoring functions do not require the LIC feature key
– Can monitor workload without activating the LIC feature key, and alert via SNMP
Automatic control of disruptive workload
[Diagram: an application host running App1 and App2 against DS8000 storage, with workload control applied to App2.]
▪ I/O Priority Manager detects critical App1 QoS impact
▪ App2 usage of critical DS8000 resources is controlled
▪ Controls on App2 restore App1 performance
Performance groups for FB volumes
▪ Performance groups are used to assign a numerical value to a performance policy
▪ 16 performance groups (0 through 15)
– PG1-PG5: used for volumes requiring higher performance
– PG6-PG10: used for volumes with medium performance requirements
– PG11-PG15: used for volumes with no performance requirements
– PG0: used for the default performance policy
FB Performance group mapping
Performance Group   Performance Policy   Priority (CLI or GUI)   QoS Target   Name
0                   1                    0                       0            Default
1-5                 2                    1                       70           Fixed Block High Priority
6-10                3                    5                       40           Fixed Block Medium Priority
11-15               4                    15                      0            Fixed Block Low Priority
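The FB mapping table above can be expressed as a simple lookup. This is an illustrative sketch of the table, not a DS8000 API; the function name and tuple layout are assumptions.

```python
# Illustrative lookup (assumption, not a DS8000 API) for the FB
# performance-group mapping table: returns (performance policy,
# CLI/GUI priority, QoS target, name) for groups 0-15.

FB_BANDS = [
    (range(0, 1),   (1, 0, 0, "Default")),
    (range(1, 6),   (2, 1, 70, "Fixed Block High Priority")),
    (range(6, 11),  (3, 5, 40, "Fixed Block Medium Priority")),
    (range(11, 16), (4, 15, 0, "Fixed Block Low Priority")),
]

def fb_performance_group(pg):
    for band, attrs in FB_BANDS:
        if pg in band:
            return attrs
    raise ValueError("FB performance groups are 0-15")

# fb_performance_group(7) -> (3, 5, 40, "Fixed Block Medium Priority")
```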
I/O Priority Manager and CKD volumes
▪ Without z/OS software support
– On ranks with contention, I/O to a volume is managed according to the performance group of the volume
– Volumes are assigned to a policy (similar to FB volumes)
▪ With z/OS software support
– Users assign application priorities via Workload Manager (WLM)
– z/OS assigns an “importance” value to each I/O based on WLM inputs
– z/OS assigns an “achievement” value to each I/O based on the prior history of I/O response times for I/O with the same “importance”, and based on WLM expectations for response time
– The importance and achievement values on an I/O associate it with a performance policy
– On ranks in contention, I/O is managed according to the I/O’s performance policy
Performance groups and policies
▪ CKD performance groups and performance policy mapping
– CKD performance groups are 0 and 19-31
– Performance groups 16-18 are not used, but are mapped to performance policy 0
▪ CKD performance policies
– Performance policy # = performance group # (e.g. PG19 -> PP19)
– 0 = no management (no delay added, no QoS target)
– 19-21 = high priority 1-3
– 22-25 = medium priority 1-4
– 26-31 = low priority 1-6
Performance groups and policies
• Recall that FB performance groups are 0-15
– Note: CKD and FB volumes are not placed on the same ranks, but they may share the same device adapters
• I/O with low priority is delayed up to the maximum allowed before I/O with medium priority is delayed
– Delay is added only on ranks with contention
– Delay is added only if there is higher-priority I/O that is not meeting its QoS target
– Delay is decreased if all higher-priority I/Os are exceeding their QoS targets
– Delays to lower-priority workloads do not exceed 200 ms
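The delay rules above can be paraphrased as decision logic. The sketch below is a conceptual illustration, not DS8870 firmware: the 200 ms cap and the contention/QoS conditions come from the slide, while the 10 ms adjustment step is an arbitrary assumption for readability.

```python
MAX_DELAY_MS = 200.0   # delays to lower-priority workloads never exceed 200 ms

def adjust_delay(current_delay_ms: float,
                 rank_in_contention: bool,
                 higher_prio_missing_qos: bool) -> float:
    """Conceptual sketch of I/O Priority Manager delay adjustment for a
    lower-priority workload on one rank (not the actual implementation)."""
    if not rank_in_contention:
        return 0.0                      # delay is only added on ranks with contention
    if higher_prio_missing_qos:
        # higher-priority I/O is missing its QoS target: throttle more, capped
        return min(current_delay_ms + 10.0, MAX_DELAY_MS)
    # all higher-priority I/O is meeting or exceeding QoS: back off
    return max(current_delay_ms - 10.0, 0.0)
```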
64
CKD performance group mapping

Performance   Performance   Priority        QoS
Group         Policy        (CLI or GUI)    Target   Name
0             1             0               0        Default
16            16            0               0        No Management
17            17            0               0        No Management
18            18            0               0        No Management
19            19            1               80       CKD High Priority 1
20            20            2               80       CKD High Priority 2
21            21            3               70       CKD High Priority 3
22            22            4               45       CKD Medium Priority 1
23            23            4               45       CKD Medium Priority 2
24            24            5               45       CKD Medium Priority 3
25            25            6               5        CKD Medium Priority 4
26            26            7               5        CKD Low Priority 1
27            27            8               5        CKD Low Priority 2
28            28            9               5        CKD Low Priority 3
29            29            10              5        CKD Low Priority 4
30            30            11              5        CKD Low Priority 5
31            31            12              5        CKD Low Priority 6
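The priority banding in the table above can be expressed as a small classifier. This Python sketch is illustrative only; it simply encodes the group ranges listed:

```python
def ckd_policy_class(perf_group: int) -> str:
    """Classify a CKD performance group per the mapping above (illustrative)."""
    if perf_group == 0 or 16 <= perf_group <= 18:
        return "no management"          # groups 0 and 16-18: no delay, no QoS target
    if 19 <= perf_group <= 21:
        return "high"                   # CKD High Priority 1-3
    if 22 <= perf_group <= 25:
        return "medium"                 # CKD Medium Priority 1-4
    if 26 <= perf_group <= 31:
        return "low"                    # CKD Low Priority 1-6
    raise ValueError(f"not a CKD performance group: {perf_group}")

print(ckd_policy_class(20))   # → high
```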
65
I/O Priority Manager and Easy Tier
• I/O Priority Manager attempts to make sure that the most important I/Os get serviced
– Throttles lower-priority workload, if necessary, to meet the service objectives
– Does not promote/demote any extents
– Does not look at rank utilization with an eye to rebalancing resources
• Easy Tier in auto mode attempts to relocate extents to the storage tier that is appropriate for the frequency of host access
– Promotes hot extents to ranks of a higher class
– Demotes cold extents to ranks of a lower class
– Relocates extents between similar ranks within a tier to distribute workload evenly and avoid overloading
66
DS8870 logical configuration summary
• DS8870 subsystem storage hierarchy
– Create extent pools using storage pool striping with multiple ranks in the pool
– Balance extent pools and ranks across servers
– Configure for shared vs. isolated workload environments
• Implementing Easy Tier
– Three-tier capabilities utilizing encryption or non-encryption drives
– Sizing guidelines for a three-tier solution
– Auto-rebalance extents to balance performance across all drives
• I/O Priority Manager
– Application-level Quality of Service
– zWLM support for end-to-end performance management on System z
67
68
Addendum
Logical Configuration of CKD Extent Pools and Volumes Using DS CLI
DS8000 Volumes with System z
References
DS8870 ATS Configuration and Best Practices
69
Logical Configuration of CKD Extent Pools and Volumes Using DS CLI
DS8870 ATS Configuration and Best Practices
DS8000 DS CLI
• Powerful tool for automating configuration tasks and collecting configuration information
• Same DS CLI for DS6000 and for ESS 800 Copy Services
• DS CLI commands can be saved as scripts, which significantly reduces the time to create, edit and verify their content
• Uses a syntax consistent with other IBM TotalStorage products, now and in the future
• All of the function available in the GUI is also available via the DS CLI
70
Supported DS CLI Platforms
The DS Command-Line Interface (CLI) can be installed on the following operating systems:
• AIX 5.1, 5.2, 5.3
• HP-UX 11i v1, v2
• HP Tru64 version 5.1, 5.1A
• Linux (Red Hat 3.0 Advanced Server (AS) and Enterprise Server (ES); SUSE Linux SLES 8, SLES 9, SUSE 8, SUSE 9)
• Novell NetWare 6.5
• OpenVMS 7.3-1, 7.3-2
• Sun Solaris 7, 8, 9
• Windows 2000, Windows Datacenter, and Windows 2003
71
CKD Logical Configuration Steps
• Creating CKD extent pools
• Creating arrays
• Creating and associating ranks with extent pools
• Creating logical control units
• Creating CKD volumes
• Creating CKD volume groups (system generated)
72
Creating CKD extent pools
• Minimum of 2 extent pools required
• Server0 extent pools will support even-numbered LSSs
• Server1 extent pools will support odd-numbered LSSs
• Consider creating additional extent pools for each of the following conditions:
– Each RAID type (5, 6 or 10)
– Each disk drive module (DDM) size
– Each CKD volume type (3380, 3390)
– Each logical control unit (LCU) address group

mkextpool -dev IBM.2107-75nnnnn -rankgrp 0 -stgtype ckd P0
mkextpool -dev IBM.2107-75nnnnn -rankgrp 1 -stgtype ckd P1
lsextpool -dev IBM.2107-75nnnnn -l
Remember from earlier?
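The pool-creation pattern shown here (even-numbered pool IDs on server 0, odd on server 1) is easy to script. This Python sketch only generates the dscli command strings for later execution; the serial number is a placeholder, and the helper itself is not part of the DS CLI:

```python
def mkextpool_cmds(serial: str, pools_per_server: int = 1) -> list:
    """Build mkextpool commands like those above: CKD pools alternating
    between rank group 0 and rank group 1 (illustrative helper only)."""
    dev = f"IBM.2107-{serial}"
    cmds = []
    for i in range(2 * pools_per_server):
        rankgrp = i % 2   # pool ID parity matches the rank group / server
        cmds.append(f"mkextpool -dev {dev} -rankgrp {rankgrp} -stgtype ckd P{i}")
    return cmds

for cmd in mkextpool_cmds("75nnnnn"):
    print(cmd)
```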
73
Creating Arrays
• RAID array
– RAID5, RAID6 or RAID10
– Created from 1 array site on the DS8000
• Array site
– Logical grouping of disks
– Same capacity and speed

Remember from earlier?

Issue the lsarraysite command to find the unassigned array sites:
lsarraysite -dev IBM.2107-75nnnnn -state unassigned

Issue the mkarray command to create an array from each site with the status "unassigned":
mkarray -dev IBM.2107-75nnnnn -raidtype 5 -arsite A1
lsarray -dev IBM.2107-75nnnnn -l A1
74
Creating CKD Ranks
• RAID array with storage type defined
– CKD or FB
• One-to-one relationship between an array and a rank
– One RAID array becomes one rank
– DS8000: 8 DDMs
• Ranks have no pre-determined or fixed relation to Server0, Server1 or Logical Subsystems (LSSs)
• Ranks are divided into "extents"
– Units of space for volume creation
– CKD rank extents are equivalent to a 3390 Model 1: 1113 cylinders, or about 0.94 GB

Remember from earlier?

Issue the lsarray command to find unassigned arrays:
lsarray -dev IBM.2107-75nnnnn -state unassigned

Issue the mkrank command to assign a rank to rank group 0 or 1 according to the rank group number of the assigned extent pool ID:
mkrank -dev IBM.2107-75nnnnn -array a1 -stgtype ckd -extpool p1
lsrank -dev IBM.2107-75nnnnn -l
75
Creating CKD Logical Control Units (LCUs)
• Up to 255 LCUs are available
• An LCU has a maximum of 256 addresses
– Aliases (Parallel Access Volumes/PAVs) are shared within an LCU
• Even-numbered LCUs are associated with Server0 and odd-numbered LCUs with Server1
• An LCU does not have a pre-determined relationship to a rank/DA pair
• A set of 16 Logical Subsystems (LSSs) is called an Address Group
– LSS 00-0F, 10-1F, 20-2F, 30-3F, etc.
– Storage type for the entire Address Group (16 LSSs) is set to CKD or Fixed Block by the first LSS defined
• CKD LSSs (LCUs) are explicitly defined
– Allows specification of the LCU type and SSID

Remember from earlier?

Issue lsaddressgrp to find unassigned address groups:
lsaddressgrp -dev IBM.2107-75nnnnn
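The address-group and server-affinity rules above are simple arithmetic on the LSS/LCU number. This illustrative Python helper encodes them (it is not a DS CLI command):

```python
def lss_info(lss: int):
    """Address group and server affinity for an LSS/LCU ID (0x00-0xFE).
    Sixteen LSSs per address group; even LCUs go to Server0, odd to Server1."""
    if not 0 <= lss <= 0xFE:
        raise ValueError("LSS must be in the range 00-FE")
    address_group = lss >> 4   # high hex digit: 00-0F -> 0, 10-1F -> 1, ...
    server = lss & 1           # low bit: 0 -> Server0, 1 -> Server1
    return address_group, server

print(lss_info(0x21))   # → (2, 1): LCU 21 is in address group 2, on Server1
```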
76
Creating CKD LCUs (cont)
• Analyze the report to identify all of the address groups that are available to be defined. Use the following criteria:
– If the list is empty, all of the address groups are available to be defined.
– A defined address group with the storage type fb (fixed block) is not available to be defined.
– A defined address group with the storage type ckd and with fewer than 16 LSSs is available for LCU definition.
– If you are using an undefined address group to make new LCUs, select the lowest-numbered address group that is not defined.
– If you are defining a new LCU in an existing CKD address group, use the lslcu command to identify LCUs that are already defined in the target address group.

Issue the mklcu command to create an LCU:
dscli> mklcu -dev IBM.2107-75nnnnn -qty 16 -id 00 -ss 0010 -lcutype 3390-3
lslcu -dev IBM.2107-75nnnnn -l
77
Creating CKD Volumes
• p1: extent pool
• a1: array
• r1: rank
• 00-0F: LCUs

Remember from earlier?

View your list of CKD extent pool IDs and determine which extent pool IDs you want to use as the source for the CKD volumes to be created. You obtained this list when you first created your extent pools. If this list is not available, issue the lsextpool command to obtain the list of extent pool IDs.

Issue the mkckdvol command to make 128 base volumes and the mkaliasvol command to make the alias volumes for each LCU:
mkckdvol -dev IBM.2107-75nnnnn -extpool p1 -cap 3339 -name finance#d 0100-017F
mkaliasvol -dev IBM.2107-75nnnnn -base 0100-017F -order decrement -qty 2 01FF
lsckdvol -dev IBM.2107-75nnnnn -l
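For orientation, a four-digit hex volume ID such as 0143 encodes both the LCU and the unit address within it. This small illustrative helper (an assumption for exposition, not a DS CLI function) shows the split:

```python
def split_device_id(volume_id: str):
    """Split a 4-digit hex volume ID like '0143' into (LCU, unit address).
    The high byte selects the LCU, the low byte the address within it."""
    value = int(volume_id, 16)
    return value >> 8, value & 0xFF

lcu, ua = split_device_id("017F")          # last base volume of the range above
print(f"LCU {lcu:02X}, UA {ua:02X}")       # → LCU 01, UA 7F
```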
78
Initialize the CKD Volumes
• Use ICKDSF to initialize the newly configured CKD volumes
• There is no VTOC, IXVTOC, VVDS or volume label at this time
• To ensure that you only initialize volumes without a label, specify:

INIT UNITADDRESS(uadd) VOLID(volser) VFY(*NONE*) VTOC(n,n,nn) -
INDEX(n,n,nn)
79
DS8870 volumes with System z
• System z supports both CKD and FB volumes (FB for Linux on zSeries)
• FB volumes are FCP attached; CKD volumes are FICON attached
• The same 4 Gb 4-port FC Host Attachment feature supports either FCP or FICON
– Assigned at the port level (FCP or FICON; a single port can't be both)
– FICON Fastload: new method for adapter code load
• FICON Fastload used for FICON attach
– Compatible with FCP Fastload; allows intermixed use of ports on a host adapter
– Architected event; no long busy used
– Loss of light less than 1.5 seconds for adapter code load (only when adapter code is upgraded)
• Concurrent code load support
– Advise that all host attachments have (at least) two ports
– Preferably on two separate host adapters
• CKD is also supported by the ESCON Host Attachment feature
81
DS8870 CKD volumes
• CKD standard volumes
– 3380
– 3390M3
– 3390M9
• CKD custom volumes
– Minimum volume size specification is 1 cylinder
  • Minimum space allocation is 1 extent (1113 cylinders)
– Maximum volume size is 65,520 cylinders (56 GB) since the DS8000 was introduced
  • With z/OS 1.4 or higher software support
– Maximum volume size is 262,668 cylinders (223 GB) with R3.1
  • With z/OS 1.9 or higher software support
– Use a multiple of 1113 cylinders if possible
• Maximum number of CKD volumes is 64K per logical DS8000 *

* 4K limitation for ESCON access
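The capacity figures above follow from 3390 geometry: 15 tracks per cylinder at 56,664 bytes per track, about 849,960 bytes per cylinder. A quick check in Python:

```python
BYTES_PER_TRACK = 56_664                     # 3390 track capacity
TRACKS_PER_CYL = 15
BYTES_PER_CYL = BYTES_PER_TRACK * TRACKS_PER_CYL   # 849,960 bytes

def cyl_to_gb(cylinders: int) -> float:
    """Approximate decimal-GB capacity of a 3390 volume of the given size."""
    return cylinders * BYTES_PER_CYL / 1e9

print(round(cyl_to_gb(1_113), 2))    # → 0.95  (one extent; the ".94 GB" above truncates)
print(round(cyl_to_gb(65_520), 1))   # → 55.7  (pre-R3.1 maximum, quoted as "56 GB")
print(round(cyl_to_gb(262_668), 1))  # → 223.3 (R3.1 maximum, quoted as "223 GB")
```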
82
DS8870 z/OS HCD considerations
CNTLUNIT CUNUMBR=A000,PATH=(52,53,54,55),
UNITADD=((00,256)),LINK=(24,34,25,35),
CUADD=20,UNIT=2107,DESC='N150 LCU20'
CNTLUNIT CUNUMBR=A100,PATH=(52,53,54,55),
UNITADD=((00,256)),LINK=(24,34,25,35),
CUADD=21,UNIT=2107,DESC='N150 LCU21'
IODEVICE ADDRESS=((2000,128)),CUNUMBR=A000,
STADET=Y,UNIT=3390B
IODEVICE ADDRESS=((2080,128)),CUNUMBR=A000,
STADET=Y,UNIT=3390A
IODEVICE ADDRESS=((2100,128)),CUNUMBR=A100,
STADET=Y,UNIT=3390B
IODEVICE ADDRESS=((2180,128)),CUNUMBR=A100,
STADET=Y,UNIT=3390A

Examples are provided at the DS8000 Information Center; search with 'IOCP'.
• New device support for D/T2107
• DS8000 supports up to 16 Address Groups
– 64K logical volumes
– For IOCP and HCD, the CU addresses are hex 00-FE
– LCUs/LSSs do not have to be contiguous

Address Group 2: LCU 20 & 21
83
DS8870 z/OS HCD considerations: subchannel sets

Information provided in: z/OS V1R7.0 HCD Planning (GA22-7525-09)
• z9 (2094) processor / z/OS 1.7 only
• HCD implementation
– Initial implementation of SS1 requires a POR
– A Channel Subsystem (CSS) definition can contain Subchannel Sets 0 and 1
• 256 channels per CSS
• No changes to LSS definitions in ESS, DS6000, DS8000
• Assign IODEVICE bases to Set 0
• Assign IODEVICE aliases to Set 0 or 1
• Duplicate device numbers are possible, and even desirable
– No problem, provided they are in separate subchannel sets
• Flexible LSS structure

Multiple subchannel sets provide relief for the 64K-devices-per-LPAR limit. Changed DEVSERV QPAVS display:
DS QPAVS,E278,VOLUME
IEE459I 09.57.53 DEVSERV QPAVS 046
HOST SUBSYSTEM
CONFIGURATION CONFIGURATION
-------------- --------------------
UNIT UNIT UA
NUM. UA TYPE STATUS SSID ADDR. TYPE
----- -- ---- ------ ---- ---- ------------
0E278 78 BASE 3205 78 BASE
1E279 79 ALIAS-E278 3205 79 ALIAS-78
1E27A 7A ALIAS-E278 3205 7A ALIAS-78
0E27B 7B ALIAS-E278 3205 7B ALIAS-78
0E27C 7C ALIAS-E278 3205 7C ALIAS-78
0E27D 7D ALIAS-E278 3205 7D ALIAS-78
1E27E 7E ALIAS-E278 3205 7E ALIAS-78
1E27F 7F ALIAS-E278 3205 7F ALIAS-78
**** 8 DEVICE(S) MET THE SELECTION CRITERIA
84
Using larger volume sizes
• Benefits
– Fewer objects to define and manage
– Less processing for fewer I/O resources
  • CF CHPID, VARY PATH, VARY DEVICE
  • Channel path recovery, link recovery, reset event processing
  • CC3 processing
  • ENF signals
  • RMF, SMF
– Fewer physical resources: CHPIDs, switches, CU ports, fibers
– Each device consumes real storage:
  • 768 bytes of real storage for the UCB and related control blocks
  • 256 bytes of HSA
  • 1024 bytes/device x 64K devices = 64 MB
  • 31-bit common storage constraints
– EOV processing to switch to the next volume of a sequential data set significantly slows the access methods
• Considerations
– Data migration to larger devices may be challenging and time consuming
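The 64 MB figure above is just the per-device footprint multiplied by the device limit (taking, as the slide does, 768 + 256 = 1024 bytes per device):

```python
UCB_BYTES = 768        # UCB and related control blocks per device
HSA_BYTES = 256        # hardware system area per device
DEVICES = 64 * 1024    # 64K devices per LPAR

per_device = UCB_BYTES + HSA_BYTES             # 1024 bytes per device
total_mb = per_device * DEVICES // (1024 * 1024)
print(total_mb, "MB")                          # → 64 MB
```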
85
zSeries Parallel Access Volumes (PAVs)
• Additional addresses for a single device, for improved performance
• PAVs are shared within an LSS
– An LSS may be on multiple ranks
– Multiple LSSs may be on one rank
• Recommendations
– Use HyperPAV if possible
  • z/OS, z/VM, Linux for System z
– If not HyperPAV, use dynamic PAV for z/OS if possible
  • Requires a parallel sysplex and WLM
  • Requires WLM with dynamic PAV specified
  • Requires WLM specified in the device definition
86
DS8870 References
• DS8000 Information Center
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
– Tutorials
– Overviews
– Publications... and much more!
• GC35-0515 DS8000 Introduction & Planning
• GC26-7914 DS8000 Messages Reference
• SC26-7917 DS8000 Host Systems Attachment Guide
• SG24-6786 DS8000 Architecture & Implementation
• SC26-7916 DS8000 Command-Line Interface User's Guide
• SG24-8085 DS8870 Architecture and Implementation Guide
• DS8000 Code Bundle Information (Code Bundle, DS CLI, Storage Manager cross-reference)
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1002949&rs=555
• DS8000 Turbo Information (specs, white papers, etc.)
http://www-03.ibm.com/systems/storage/disk/ds8000/index.html
87
DS8000 References
• Techdocs
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
• PRS3574 IBM DS8000 + System z Synergy - March 2009
• WP101528 IBM System z & DS8000 Technology Synergy
• PRS3565 ATU - Storage Perf Mgmt with TPC
• TD104162 Open System Storage Performance Evaluation
• TD103689 Pulling TPC Performance Metrics for Archive and Analysis

Many more white papers, presentations and trifolds can be found on Techdocs!
88
Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
The following are trademarks or registered trademarks of other companies.
* All other products may be trademarks or registered trademarks of their respective companies.
Notes : Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.
For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:
*, AS/400®, e business(logo)®, DBE, ESCON, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/390, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®
Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market.
Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.
89