HDT for Mainframe Considerations: Simplified Tiered Storage
HITACHI DYNAMIC TIERING FOR MAINFRAME CONSIDERATIONS: SIMPLIFIED TIERED STORAGE
LARRY KORBUS, SENIOR DIRECTOR, STORAGE INFRASTRUCTURE SOFTWARE INTEGRATION, AND ARCHITECTURE MANAGER
JOHN HARKER, SENIOR PRODUCT MARKETING MANAGER
SEPTEMBER 5, 2012
WEBTECH EDUCATIONAL SERIES
HDT for Mainframe Considerations: Simplified Tiered Storage
Learn from the experts how Hitachi Dynamic Tiering for Mainframe (HDT)
complements existing mainframe storage provisioning processes while offering
the full benefits of dynamic tiering to simplify performance and capacity
optimization. By automatically spreading application data sets across large
numbers of physical disks, the software optimizes performance and throughput
and reduces performance management concerns. Existing SMS provisioning
processes can be aligned to different tiered storage pools.
By attending this webcast, you'll learn how to
• Determine if HDT is appropriate for your workload
• Configure HDT to complement DFSMS and improve flexibility, reducing
storage group complexities and overhead
• Use a mix of small quantities of SSDs with SAS storage to dramatically
improve mainframe storage performance and scalability
UPCOMING WEBTECHS
Mainframe series
VSP Mainframe and Dynamic Tiering Performance Considerations,
Sept 12, 9 a.m. PT, 12 p.m. ET
Mainframe Replication, Sept 19, 9 a.m. PT, 12 p.m. ET
Why Networked FICON Storage Is Better than Direct-Attached
Storage, Oct 3, 9 a.m. PT, 12 p.m. ET
Other
Storage Analytics, Sept 20, 9 a.m. PT, 12 p.m. ET
Maximize Availability and Uptime by Clustering your Physical Data
Centers Within Metro Distances, Oct 24, 9 a.m. PT, 12 p.m. ET
Check www.hds.com/webtech for
Links to the recording, the presentation and Q&A (available next week)
Schedule and registration for upcoming WebTech sessions
AGENDA
Hitachi Data Systems mainframe storage support
Hitachi Dynamic Provisioning (HDP) and Hitachi
Dynamic Tiering (HDT) refresher
Mainframe HDP and HDT differences
HDT mainframe implementation
Recommendations and recap
HITACHI MAINFRAME STORAGE SUPPORT
Leverage IBM compatibility with Hitachi value-add
Hardware interface: FICON, zHPF, FCP, MA, PAV, HyperPAV
Operating systems: z/OS + DFSMSdfp, Linux on System z, z/VM, z/VSE
Middleware and application level: IMS, CICS, DB2, MQSeries, etc.
Storage functions (IBM compatibility): Concurrent Copy, FlashCopy, Metro Mirror (PPRC), Global Copy (PPRC-XD), z/OS Global Mirror (XRC)
Solution level
‒ IBM portfolio: RCMF, HyperSwap Manager, GDPS
‒ Hitachi portfolio: BC Manager, Universal Replicator, FDM, FC UVM, HDP
Hitachi strategy: IBM compatibility (without host software), with development focus on Hitachi added-value solutions
MAINFRAME VISION AND DIRECTION
Hitachi is bringing storage virtualization technology to the mainframe space
HDP provides FCSE/DVE/EAV compatibility
HDP is the technology foundation for the next wave of mainframe virtualization
‒ HDP pool (SAS, UVM): utilize existing assets or low-cost external storage, better space efficiency, I/O load balancing, flexible volume allocation
‒ HDP pool + HDT (SSD, SAS, NLSAS, UVM): adds automatic dynamic tiering and effective use of high-speed SSD
‒ FCSE, EAV, and DVE support apply at every stage
HDP AND HDT REFRESHER
HITACHI DYNAMIC PROVISIONING SPECIFICATIONS: HDP POOL VOLUME
Emulation = 3390-V (mainframe)
Pool volume capacity = 8GB to 4TB
Maximum number of pool volumes = 1,024 per pool
HDP/HDT pool volumes are managed using pages
‒ Mainframe: 38MB per page
Pages are dispersed to equalize array-group coverage across the pool volumes in a pool (or tier) (V04)
Any RAID level; intermixing is allowed but not recommended
‒ Intermixing within an HDT tier is now allowed (V04)
Any HDD type; mixing is possible but not recommended
‒ External storage is also supported
Any parity group type
‒ RAID1 is supported
The CLPR assignment of pool volumes does not matter; the CLPR of the DP-VOL does
SN2 POOL CREATE SETTINGS
• Pool type
• Open/mainframe
• HDT
• Pool volumes
• Pool name
• Pool number
• Subscription limit (threshold)
• Warning and depletion utilization thresholds
• Auto or manual relocation
• Auto-cycle time
• Periodic or continuous monitoring mode
• Buffer percentages per tier
HITACHI DYNAMIC PROVISIONING SPECIFICATIONS: HDP DP-VOL (V-VOL)
Emulation = 3390-A in mainframe
DP-VOL capacity = up to 218GB (262,668 cylinders)
Maximum number = ~62K DP-VOLs
Maximum total capacity of all DP-VOLs = ~1.1PB (raised to ~4.5PB in V03)
All DP-VOLs are managed using page units of allocation
SN2 DP-VOL CREATE AND EDIT SETTINGS
• Open/mainframe
• HDP or HDT pool
• Size
• LDEV ID
• CLPR
• MP
• Tiering policy LEVEL or ALL
• New page assignment tier
• Relocation priority
• Tier relocation
An HDP page represents 38MB of contiguous tracks in the DP-VOL. The DP-VOL is essentially subdivided into 38MB areas, each of which maps to one page.
HITACHI DYNAMIC PROVISIONING: DP-VOL THIN PROVISIONING
The "white" areas of a DP-VOL have no page mapped, because no data has been written to those portions of the 3390; no pool capacity is assigned to those track ranges. This is thin provisioning. Replication license capacity does not count unmapped DP-VOL capacity.
HITACHI DYNAMIC PROVISIONING: DP-VOL WIDE STRIPING
Data from all DP-VOLs using the pool is widely distributed across the pool volumes. This is wide striping.
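Wide striping can be illustrated with a minimal sketch. The round-robin policy below is a simplification for illustration only; as noted earlier, the array actually disperses pages to equalize array-group coverage:

```python
# Minimal sketch of wide striping: new pages are spread across all pool
# volumes rather than filling one volume at a time. Round-robin placement
# is a simplification of the array's parity-group balancing.

from itertools import cycle

def stripe_pages(num_pages: int, pool_volumes: list) -> dict:
    """Assign pages round-robin and count pages per pool volume."""
    counts = {v: 0 for v in pool_volumes}
    for _, vol in zip(range(num_pages), cycle(pool_volumes)):
        counts[vol] += 1
    return counts

# 100 pages over 4 pool volumes land 25 on each, so every DP-VOL's I/O
# touches every spindle in the pool.
print(stripe_pages(100, ["PV1", "PV2", "PV3", "PV4"]))
```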
MAINFRAME HDP AND HDT DIFFERENCES
MAINFRAME HDP AND HDT: SIGNIFICANT DIFFERENCES FROM OPEN
Separate pools for mainframe, using 3390-V formatted pool volumes
DP-VOLs use only 3390-A emulation
ZPR (zero page reclaim) works differently
‒ It is based on all tracks in a page having no records
Page size is 38MB
Maximum system capacity for pools and DP-VOLs differs from Open
‒ About 5% less, due to the page size difference
Considerations around transitioning between 3390-x and 3390-A:
‒ HDP and HDT DP-VOLs must be 3390-A
‒ VSP supports 3390-A; older platforms do not
‒ Check 3390-A program product support
DYNAMIC PROVISIONING AND TIERING SPECIFICATIONS: REPLICATION COEXISTENCE
Copy combinations (OK = supported, NG = not supported; TC = TrueCopy, HUR = Hitachi Universal Replicator, SI = ShadowImage, VM = Volume Migration, FC = FlashCopy):

S-VOL -> T-VOL                      TC  HUR  SI  VM  FC
3390-3/9/L/M -> 3390-A normal       OK  NG   NG  NG  OK
3390-3/9/L/M -> 3390-A DP-volume    OK  NG   NG  NG  OK
3390-A normal -> 3390-3/9/L/M       NG  NG   NG  NG  OK
3390-A DP-volume -> 3390-3/9/L/M    NG  NG   NG  NG  OK
MAINFRAME DYNAMIC PROVISIONING AND TIERING SPECIFICATIONS: DVE
DVE function (Dynamic Volume Expansion, a feature of z/OS)
Function
• Request expansion of a DP-VOL from Hitachi Storage Navigator or RAID Manager while online
• Expands the virtual capacity of the DP-VOL
• VSP sends an asynchronous report (DSB=85) to the mainframe host, and the host refreshes the volume's VTOC (automatic in z/OS 1.11 and later; in z/OS 1.10 the operator must refresh the VTOC manually)
Expected effect
• Volume capacity can be expanded online without stopping host I/O
Prerequisites and restrictions
• The mainframe host must run an OS (z/OS) that recognizes online capacity expansion
• Expansion is disabled for volumes in use by array-based replication program products (P.P.); the pair must be deleted first
• Depending on V-VOL capacity, expansion may be disabled
• Capacity can be expanded up to 262,668 cylinders (the maximum DP-VOL capacity)
DYNAMIC PROVISIONING AND TIERING SPECIFICATIONS: PAGE RECLAMATION
"0" data page reclamation
Function
• "0" data page reclamation runs when one of the following is performed:
‒ "0" data page reclamation for an entire volume, requested from Storage Navigator or RAID Manager against the DP-VOL
‒ Rebalance and relocation, which can reclaim pages that are allocated to a DP-VOL but contain no valid record
‒ Program product (P.P.) sync "0" data page reclamation, which runs when an initial/update copy is performed in combination with a copy P.P.
• In mainframe environments, a "0 data page" is a page that contains no valid record
Expected effect
• After migrating an older model or normal volume to a DP-VOL, the capacity efficiency of the entire system can be improved
Restrictions
• When "0" data page reclamation for a page conflicts with host I/O, that page is not reclaimed
• The effect of "0" data page reclamation varies with the I/O pattern and with file delete/create activity
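The mainframe "0 data page" test can be sketched as follows. The track and record modeling here is purely illustrative; only the rule itself (a page is reclaimable when no track in it holds a valid record) comes from the slides:

```python
# Sketch of the mainframe "0 data page" rule: a page qualifies for
# reclamation when none of its tracks contains a valid record.
# Tracks are modeled as lists of record payloads, which is hypothetical.

def is_zero_page(tracks: list) -> bool:
    """A page is a '0 data page' if every track has no records."""
    return all(len(records) == 0 for records in tracks)

def reclaimable_pages(pages: dict) -> list:
    """Return page numbers whose tracks hold no valid records."""
    return [n for n, tracks in pages.items() if is_zero_page(tracks)]

pages = {
    0: [[b"rec1"], []],   # one track still has a record: keep
    1: [[], [], []],      # all tracks empty: reclaimable
}
print(reclaimable_pages(pages))  # [1]
```

Note this differs from the Open-systems test (all-zero data): on mainframe the page is judged by record presence, which is why a formatted-but-empty track range still reclaims.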
HDT MAINFRAME IMPLEMENTATION
SMS STORAGE GROUPS AND ACS ROUTINES IMPLEMENTATION
PROC STORGRP
SELECT
  WHEN (&STORCLAS = 'EXCEPTIONAL') SET &STORGRP = 'SSD'
  WHEN (&STORCLAS = 'NORMAL') SET &STORGRP = 'SAS10'
  WHEN (&STORCLAS = 'WORK') SET &STORGRP = 'SAS07'
  OTHERWISE SET &STORGRP = 'SAS10'  /* a guessed default */
END
A simple ACS example
- Direct some data sets to SSD, SAS, and nearline storage groups
- Pick a default for the unknown
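For desk-checking routing rules before coding them in DFSMS, the ACS SELECT logic above can be mirrored in a few lines of Python. This is a hypothetical helper, not a DFSMS interface; the class and group names are taken from the example:

```python
# Hypothetical Python rendering of the ACS SELECT logic, useful for
# desk-checking routing rules. Names mirror the slide's example.

ROUTING = {
    "EXCEPTIONAL": "SSD",
    "NORMAL": "SAS10",
    "WORK": "SAS07",
}
DEFAULT_STORGRP = "SAS10"  # the OTHERWISE clause (the guessed default)

def storage_group(storclas: str) -> str:
    """Mimic the ACS routine: map a storage class to a storage group."""
    return ROUTING.get(storclas, DEFAULT_STORGRP)

print(storage_group("EXCEPTIONAL"))  # SSD
print(storage_group("UNKNOWN"))      # SAS10
```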
SMS STORAGE GROUPS AND ACS ROUTINES WITHOUT HDT
Storage groups "SSD", "SAS10", and "SAS07", each composed of 3390-A volumes
3390 volumes are "fixed" to only one tier
Transitioning a volume to another tier requires HSM migration/recall
Stale data sets are treated the same as active ones until HSM migration
Performance problems need intervention to migrate data to a "higher" storage group
IMPLEMENT HDT DP-VOLS INTO MAINFRAME
A simple HDT ACS routine
- You can still have direct control over some select datasets, but you can
also have datasets use the full scope of tiering
PROC STORGRP
SELECT
  WHEN (&STORCLAS = 'EXCEPTIONAL') SET &STORGRP = 'SSD'
  WHEN (&STORCLAS = 'NORMAL') SET &STORGRP = 'SAS10'
  WHEN (&STORCLAS = 'WORK') SET &STORGRP = 'SAS07'
  OTHERWISE SET &STORGRP = 'HDTALL'
END
Storage groups "HDTALL", "SAS07", "SAS10", and "SSD"
HDT SMS STORAGE GROUPS AND ACS ROUTINES
An HDT pool with tiers 1-3 backs 3390-A storage groups set to tiering policies Level1, Level3, Level5, and "ALL"
You can still have the "old style" storage groups, but now HSM isn't needed to transition between tiers or to solve performance problems
Now the default storage group adjusts dynamically
HDT DP-VOLS INTO MAINFRAME
3390-A support
Define new SMS storage group(s), or add to existing ones, based on tier composition and policy (level) settings
Due to dynamic tiering, HDT may allow customers to consolidate existing storage groups
‒ One storage group over three tiers rather than two or three separate storage groups
HDT offers ongoing data placement at the page level
‒ ACS routines and HSM cannot achieve this dynamism or granularity
ACS routines can only route a file to a storage group at allocation (create, recall, restore) time
‒ HDT performs active, ongoing management within the storage group, based on the pool construction and the DP-VOL's assigned policy
IMPLEMENT HDT DP-VOLS INTO MAINFRAME HSM
HSM may be used for relocating data sets for performance or load balancing, but HDP/HDT is a better alternative:
‒ Wide striping is a better solution for load-balancing issues
‒ Dynamic tiering is better at placing more demanding blocks on higher tiers; HDT is automated and more responsive than HSM
‒ HSM must migrate and recall to move a data set between volumes, while HDT moves pages nondisruptively
‒ HDT automatically adjusts data placement as pool capacity and tiers change
Consider delaying HSM data set migrations if HDT tier-3 relocations satisfy customer needs
‒ However, HSM migration is still necessary to avoid x37 space abends
HDT is an alternative that can reduce some HSM operations
‒ (Obviously) HDP/HDT cannot eliminate all HSM migrations, recalls, backups, and restores
IMPLEMENT HDT DP-VOLS INTO MAINFRAME THIN SAVINGS
3390-A volumes sparsely allocated with a static set of files are the primary thin-savings candidates:
‒ Checkpoint volumes (isolating reserves)
‒ Legacy "short-stroked" volumes
‒ Dedicated application volumes with spare capacity
3390-A volumes with high data set turnover rates are not thin-savings candidates:
‒ Heavily managed HSM volumes: all tracks will have either live or residual data
‒ Tracks are redeployed quickly after being freed by deleting or migrating a file, so the effort to reclaim a page would be wasted
BEFORE HDT AND AFTER
Add physical capacity
‒ Before HDT: add 3390-x volumes into storage groups
‒ With HDT: add capacity into the pool
Balance use over new capacity
‒ Before HDT: manually use HSM migration/recall
‒ With HDT: no actions are needed
Direct specific applications to specific storage resources
‒ Before HDT: code ACS routines, then follow up with HSM migrations and recalls
‒ With HDT: set the 3390-A to an HDT policy; use the same ACS routines, but no HSM is needed
Address performance problems by moving data sets or volumes
‒ Before HDT: code ACS routines and use HSM migration/recall
‒ With HDT: HDT relocation has likely prevented the issue; otherwise use an HDT policy
Maintain SMS storage groups and ACS routines
‒ Before HDT: constant challenge to keep rules describing exceptions updated
‒ With HDT: fewer exceptions, since HDT keeps tiers properly populated
Demote data to lower tiers
‒ Before HDT: HSM moves data sets that haven't been opened for a while to an ML "tier"
‒ With HDT: HDT automatically moves pages that haven't been used
RECOMMENDATIONS AND RECAP
CACHE RECOMMENDATIONS
Total logical capacity assigned to the CLPR, number of Multi-Processor Packages (MPK), and recommended Cache Logical Partition (CLPR) size in GB:

Less than 3TB: 2 MPK, 12GB; 4 MPK, 22GB; 6 MPK, 32GB; 8 MPK, 42GB
Less than 11.5TB: 2 MPK, 16GB
Less than 100TB: 2-4 MPK, 24GB; 6 MPK, 32GB; 8 MPK, 42GB
Less than 182TB: 2-6 MPK, 32GB; 8 MPK, 42GB
Less than 218TB: 2-8 MPK, 42GB
Less than 254TB: 2-8 MPK, 48GB
Less than 290TB: 2-8 MPK, 56GB
Less than 326TB: 2-8 MPK, 64GB
More: 72GB
MINIMUM MULTI-PROCESSOR BLADES (MPB)
Less than 600TB total logical capacity: 4 MPBs (or more)
More than 600TB total logical capacity: 8 MPBs
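The cache sizing table above can be encoded as a simple lookup, which is handy when scripting capacity planning. This is a hedged sketch: the values are transcribed from the slides and should be verified against current Hitachi documentation, and the function name is hypothetical:

```python
# Hedged sketch of the CLPR cache sizing table: given total logical
# capacity (TB) assigned to the CLPR and the MPK count, return the
# recommended CLPR size in GB. Values transcribed from the slides.

def clpr_size_gb(capacity_tb: float, mpks: int) -> int:
    if capacity_tb < 3:
        return {2: 12, 4: 22, 6: 32, 8: 42}[mpks]
    if capacity_tb < 11.5 and mpks == 2:
        return 16
    if capacity_tb < 100:
        return 24 if mpks <= 4 else (32 if mpks == 6 else 42)
    if capacity_tb < 182:
        return 32 if mpks <= 6 else 42
    if capacity_tb < 218:
        return 42
    if capacity_tb < 254:
        return 48
    if capacity_tb < 290:
        return 56
    if capacity_tb < 326:
        return 64
    return 72  # "More"

print(clpr_size_gb(2, 4))    # 22
print(clpr_size_gb(150, 8))  # 42
```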
SYSTEM OPTION MODE (SOM) RECOMMENDATIONS
SOM # Use Description
729 ON For pool full (use OFF if VMware VAAI phase 2 is used)
734 ON Improve threshold SIM
749 OFF Rebalance and relocation should be used
755 OFF ZPR should be used
803 ON Use DRU if pool volume blocks
867 ON Format uses reclaim instead of writing zeroes
872 ON UVM improved sequential processing
896 ON Use background task to clean free pages
897 OFF Improve tiering at low tier range values
898 ON Works with SOM897
901 ON Only use if pool is active – more use of SSD if installed
904 OFF Full-speed relocation
917 ON Balance parity groups rather than pool volumes
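The SOM recommendations above can be kept as a small table and checked against an actual configuration. The SOM numbers and ON/OFF values come from the slides; the checking helper and the default-OFF assumption for unlisted SOMs are hypothetical:

```python
# Hedged sketch: encode the slides' SOM recommendations and flag
# deviations in an actual configuration. SOM numbers/values are from
# the slides; treating a missing SOM as "OFF" is an assumption.

RECOMMENDED_SOMS = {
    729: "ON", 734: "ON", 749: "OFF", 755: "OFF", 803: "ON",
    867: "ON", 872: "ON", 896: "ON", 897: "OFF", 898: "ON",
    901: "ON", 904: "OFF", 917: "ON",
}

def deviations(actual: dict) -> dict:
    """Return {SOM: (actual, recommended)} for settings that differ."""
    return {
        som: (actual.get(som, "OFF"), rec)
        for som, rec in RECOMMENDED_SOMS.items()
        if actual.get(som, "OFF") != rec
    }

system = dict(RECOMMENDED_SOMS)
system[729] = "OFF"  # deviate on one SOM for illustration
print(deviations(system))  # {729: ('OFF', 'ON')}
```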
WRAPPING UP: “STARTING POINT” RECOMMENDATIONS
VSP
• Add shared memory (cache)
• Install V04
• Use the recommended SOMs
Pool storage
• 1 tier if you can afford to study; otherwise 2 tiers with SSD and 10K; otherwise 3 tiers with at least 40% 10K
• All non-SSD tiers: at least 4 parity groups, preferably many more
• RAID6
• Size assuming no oversubscription; otherwise be very conservative
Pool settings
• Auto relocation, 8-hour cycle, continuous monitoring
• Buffer defaults
• Set utilization thresholds: warning 75%, depletion 90%
• Set maximum subscription to 100%
WRAPPING UP WITH RECOMMENDATIONS
DP-VOL
• Use rational sizes – do not increase over what makes
sense without thin provisioning
• Use tiering policy = ALL
• New page assignment tier = middle
• No relocation priority
• Enable tier relocation
• Distribute across MPs
Operational
• Collect the relocation log every third day (or less often)
• Monitor SIMs for thresholds
• After the pool is loaded and relocation has run for a few days, review and screen-print the tier properties display occasionally
IMPLEMENT HDP OR HDT INTO MAINFRAME
HDP and HDT are essentially the same products for open systems and mainframe, with few exceptions; the technical details are the same.
Check the white papers and training materials for HDP and HDT for open systems:
‒ Pool management
‒ Monitoring options
‒ Relocation (tier management)
‒ DP-VOL policies
Learn more at www.hds.com/
‒ Mainframe
‒ http://www.hds.com/solutions/infrastructure/mainframe/
‒ Dynamic Provisioning
‒ http://www.hds.com/products/storage-software/hitachi-dynamic-provisioning.html
‒ Dynamic Tiering
‒ http://www.hds.com/products/storage-software/hitachi-dynamic-tiering.html
You do need a separate license (included with HDT)
You do need separate mainframe HDP and/or HDT pools
QUESTIONS AND DISCUSSION
THANK YOU