Simplified z/OS DASD Migration of a Very Large Number of Volumes
by John Varendorff, Hitachi Data Systems

Copyright © 2013 by INNOVATION Data Processing and Hitachi Data Systems. All rights reserved.

John Varendorff, Hitachi Data Systems. SHARE Boston, 15 August 2013. Session Number 14069.

This paper is based on a presentation given at SHARE, Boston, in August 2013. The presenter, John Varendorff, in his sixth year with Hitachi Data Systems, has been continually involved with the IBM mainframe platform since 1981. He is currently the Technical Director, Mainframe Systems & Solutions, for the Asia Pacific geography, responsible for Enterprise Storage product management issues relating to mainframe storage solutions, and is also a Practice Manager for Hitachi Mainframe Services across the APAC Geo. The presentation is organized in three sections:

1. An introduction to Hitachi Data Systems Mainframe Services, which employs FDRPAS for non-disruptive mainframe DASD migration.
2. An introduction to new FDRPAS features supporting the non-disruptive and synchronized migration of very large numbers of z/OS DASD volumes.
3. A review of FDRPAS testing conducted by HDS in their Santa Clara, CA, Global Solutions Strategy and Development Tech Ops Complex Test Lab.

Trademarks and statements: IBM, System z, zEnterprise, z/OS, TPF, HyperPAV, zHPF, TSO, CICS, DB2, IMS, RACF, PPRC, XRC, HyperSwap and GDPS are trademarks or registered trademarks of International Business Machines Corporation. HDS, HDP-MF, HDT-MF, HTSM-MF, HUR and TrueCopy are trademarks or registered trademarks of Hitachi Data Systems Corporation. SRDF, GDDR and AutoSwap are trademarks or registered trademarks of EMC Corporation. FDR, FDRABR, FDRPAS, FDRMOVE, FDRERASE and FDRINSTANT are service marks, trademarks or registered trademarks of INNOVATION Data Processing Corporation. All other service marks, trademarks or registered trademarks are the property of their respective owners.

Hitachi maintains three highly experienced Mainframe Services teams, organized worldwide into distinct business geographies. Hitachi Mainframe Services teams, employing advanced practices and proprietary tools, and with access to an extensive knowledge base, help customers implement and optimize mainframe solutions, delivering firsthand best practices and guidance on day-to-day systems management. Hitachi Mainframe Services teams determine customers' mainframe disk storage capacity and performance needs through analysis of historic storage-centric and system-centric data. On the basis of their analysis of a customer's needs, they can recommend, install, enable and customize unique HDS DASD storage features that provide the highest performance while optimizing the use of mainframe storage resources, such as Hitachi Dynamic Provisioning for Mainframe (HDP-MF), Hitachi Dynamic Tiering for Mainframe (HDT-MF) and the newly announced Hitachi Tiered Storage Manager for Mainframe (HTSM-MF).

The Hitachi Mainframe Services North America team of 15 mainframe storage experts, all with 20+ years of IBM mainframe experience, has a long history of assisting with mainframe storage migration, providing detailed planning and delivery documents and using well-established migration methodologies that reflect its experience with host-based software migration technology, going as far back as 1997 with the introduction of TDMF from Amdahl and, more recently, since 2004, using FDRPAS from INNOVATION Data Processing for non-disruptive DASD migration across all three worldwide Geos. John Varendorff has himself used FDRPAS to provide non-disruptive data migration in numerous customer new-DASD installations and long-distance data center relocations. Because of his familiarity with FDRPAS, John also participated in the review of a beta release of FDRPAS Level 80 migrating a "large" number of volumes, conducted by HDS in their Santa Clara, CA, Test Lab.

FDRPAS V5.4 L80, with reductions in resource consumption and improvements in throughput performance, now supports concurrently swapping 15,000+ (source) volumes, a 50% increase beyond the prior limit of 10,000+ volumes.

Customers are under pressure... The challenge: facing explosive data growth (big data) and business units demanding 24 x 7 data availability (supporting cloud infrastructure), customers are finding little time to properly manage storage or even make a reliable backup copy of their data. Yet to support the demands of big data and cloud, they must upgrade to faster, higher capacity and ultimately less expensive disk storage while maintaining that 24 x 7 data availability. They have neither the time nor the desire to solve the dilemma. They need a solution that will allow them to gain the benefits of consolidating storage and bringing up new applications with peace of mind, knowing their systems will have access to faster, higher capacity, more reliable and ultimately less expensive disk storage while maintaining 24 x 7 data availability. Now there is a solution: customers can begin using newer, faster, lower cost and more reliable z/OS disk storage systems sooner, without any interruption to ongoing business, because FDRPAS™ from INNOVATION Data Processing allows them to non-disruptively replace any vendor's disk, any time.

Customers under pressure can meet the challenge... FDRPAS, from INNOVATION Data Processing, allows you to non-disruptively upgrade to faster, higher capacity and less expensive disk storage.

Solve the dilemma... FDRPAS lets you non-disruptively bring in the faster, higher capacity, more reliable and ultimately less expensive disk storage that will let you bring up new big data and cloud applications with peace of mind, knowing these systems will have the 24 x 7 availability and continuous business data protection they require.

You have the solution... Use FDRPAS to non-disruptively replace almost any number of any vendor's disks, any time, and begin using newer, faster, lower cost and more reliable z/OS disk storage sooner, without interruption to ongoing business.

"Anytime Empowerment." Customers install new z/OS DASD for a variety of reasons: to increase capacity, to increase performance, or both. However, the installation of new disk storage can be disruptive: though System z and zEnterprise processors allow non-disruptive "plug in" of new FICON disk hardware, and the z/OS operating system can bring new equipment online non-disruptively, business interruptions can still occur because traditional solutions for moving data from existing DASD volumes to new volumes are disruptive. Conventional dump/restore, traditional copy and even some hardware replication solutions require you to stop business applications that depend on data residing on your disk storage systems. FDRPAS (FDR Plug and Swap) is an "anytime" solution for moving data from old DASD to new disk hardware while maintaining continuous 24 x 7 x 365 accessibility, non-disruptively moving data on the fly while business operations continue non-stop.

FDRPAS moves a volume from an online source disk to an offline target:
•   Active volumes move non-disruptively to new disk devices.
•   Applications remain active; they do not need to be quiesced.
•   Jobs and users are unaware their data is moving to a different disk device.

The FDRPAS main job copies tracks from the online source to the offline target:
•   Data sets remain open; they do not need to be closed.
•   Data can be moved at any time, even during business hours.
•   All active tracks on the source disk are copied to the target disk.

A minimal job sketch follows.
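To make the mechanics concrete, here is a minimal sketch of a hand-coded SWAP job and a companion MONITOR task for a single volume. The job cards, the PASPROC procedure, the VOLSER PROD01 and the target unit 9001 are illustrative assumptions patterned on the GENSWAP-generated samples shown later in this paper; actual JCL and parameters should be taken from the FDRPAS User Manual.

//SWAP1    JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X
//SWAPC   EXEC PASPROC
//PAS.SYSIN DD *
* COPY, SYNCHRONIZE AND SWAP ONE SOURCE VOLUME TO AN OFFLINE TARGET
  SWAP TYPE=FULL
  MOUNT VOL=PROD01,SWAPUNIT=9001
/*

On every other LPAR sharing the volume, a MONITOR task watches for updates:

//MON1     JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X
//MONA    EXEC PASPROC
//PAS.SYSIN DD *
* WATCH THE UNIT BEING SWAPPED AND REPORT UPDATES MADE FROM THIS LPAR
  MONITOR TYPE=SWAP
  MOUNT SWAPUNIT=9001
/*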

FDRPAS MONITOR tasks on the sharing LPARs check for updates:
•   All updated tracks are copied or re-copied.
•   When the source and target disk volumes are in sync, I/O to the source disk is suspended for a final copy.

FDRPAS then uses a system swap service to redirect all I/O to the new device:
•   The online source is put offline on all LPARs.
•   The offline target is put online with the VOLSER of the source on all LPARs.

Voila! The volume is moved on all LPARs while in use!

What's new in FDRPAS V5.4 L80... An increase in the limit on the number of source volumes from 10,000 to 15,000+ allows concurrent and non-disruptive swapping of very large numbers of volumes. FDRPAS design improvements that reduce system resource requirements, and better integration with system services, make it more robust and provide the ability to process a larger number of volumes concurrently, for a substantial savings in time to complete migrations. Together these improvements give FDRPAS the ability to concurrently and non-disruptively migrate upwards of 15,000 source volumes, completing migrations transparently to ongoing workloads, in less time and with minimal impact on system resources.

Better Integration with System Services

No requirement to shut down HyperSwap or Basic HyperSwap: using the HyperSwap (Enable/Disable) and Basic HyperSwap (Block/Unblock) system services, FDRPAS reduces the length of time HyperSwap and Basic HyperSwap are suspended to only the instant of the actual system device swap.

FDRPAS can now complete a SWAP of a volume containing a common page data set even if the common page data set is updated by a page-out during the SWAP. Instead of failing when a page-out to common occurs, FDRPAS continues and recopies the data set on the next pass.

FDRPAS now recognizes a volume containing an active sysplex coupling data set and automatically serializes it in the same way as it does for JES spool and checkpoint volumes.

FDRPAS channel program improvements reduce I/O wait time and improve user response times, especially on very active volumes utilizing PAV during migrations.

FDRPAS, FDRMOVE and FDRERASE now all support 1 terabyte (TB) Extended Address Volumes (EAV). FDRMOVE makes consolidation of multiple smaller volumes onto larger volumes easier by allocating and moving VSAM and non-VSAM data sets to an EAV, including into the upper Extended Address Space (EAS).

More Robust and Easier to Use

A new MONITOR simulation facility (SIMSWAPMON) invokes real MONITOR tasks that perform extensive checking during a SWAP simulation: confirming that all necessary MONITOR tasks are responding, the integrity of the source volume VTOC and VVDS, that target volume sizes match their source volume sizes, and that target volumes are offline to all LPARs. Optionally, target volumes that are online but have no allocations can be automatically varied offline.

Dynamic Monitoring is a feature that significantly simplifies setting up MONITOR tasks in GRSplex and MIMplex environments by eliminating any need for MONITOR task MOUNT statements. SWAP tasks directly pass the addresses of the source volumes to the MONITOR tasks on each LPAR, allowing the MONITOR tasks to dynamically add the source volume addresses to the list of volumes they are to monitor.

SWAP task simulation, in GRSplex and MIMplex environments, can automatically start all the required MONITOR tasks. SWAP, SWAPDUMP and SIMSWAPMON tasks, if they detect an LPAR that has no MONITOR running, can submit the MONITOR task to that LPAR. Automatically starting all required MONITOR tasks eliminates the need for users to manually submit MONITOR tasks on each LPAR. Combining simulation with Dynamic Monitoring can ensure all the MONITOR tasks the SWAP needs are started on the appropriate LPARs. A simulation sketch follows.
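A minimal sketch of such a simulation, under the same illustrative assumptions as the earlier example (PASPROC, PROD01 and unit 9001 are placeholders, and the statement form assumes SIMSWAPMON is coded like a SWAP statement; verify the exact syntax in the FDRPAS User Manual):

//SIMSW    JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X
//SIMC    EXEC PASPROC
//PAS.SYSIN DD *
* SIMULATION ONLY: CHECKS MONITORS, VTOC/VVDS, SIZES AND TARGET STATUS,
* BUT MOVES NO DATA AND PERFORMS NO SWAP
  SIMSWAPMON TYPE=FULL,DYNMON=YES
  MOUNT VOL=PROD01,SWAPUNIT=9001
/*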

FDRPAS storage hardware feature support: users at one time had to disable many performance and data protection hardware features for the many hours it can take to copy and swap a large number of disk volumes, denying them the performance benefits, or leaving their installations exposed during that time to the risk of a disk system failure. FDRPAS allows remote data protection in all its forms (i.e. Basic HyperSwap, GDPS/HyperSwap, GDDR/AutoSwap) to remain active during copy and disk synchronization, disabling and re-enabling HyperSwap or another remote data protection feature just for the call to the z/OS system swap service. This enhancement dramatically reduces the time a site is at risk without HyperSwap or another remote data protection feature to only the brief time of the actual z/OS device swap. Additional performance enhancements support zHPF I/O and allow PAV access to remain active during copy and synchronization, reducing the time PAV is inactive to only the few brief seconds of the actual z/OS system device swap.

What's different swapping thousands of volumes on multiple LPARs...

When swapping large numbers of volumes, up to now you would:
•   Plan to maintain consistency, organizing volumes into migration groups
•   Create the appropriate JCL for the swap jobs and monitor tasks
•   Stop application updates
•   Shut down GDPS/HyperSwap, GDDR/AutoSwap, HUR, XRC etc., possibly for many hours or days
•   Submit the JCL for a large number of swap jobs and monitor tasks on all the appropriate LPARs

Now you can...

Use FDRPAS V5.4 L80 to SWAP large numbers of volumes:
•   Let FDRPAS dynamically generate and submit the SWAP jobs and MONITOR tasks on all the necessary LPARs, for up to 15,000+ source volumes
•   And if you need remote protection... keep it active while synchronizing source and target disks
•   Then automatically, and only briefly, suspend your GDPS/HyperSwap, GDDR/AutoSwap, etc.
•   And complete the SWAP

Resource Reductions and Performance Improvements

FDRPAS and FDRMOVE efficiency enhancements use less common storage, reduce CPU overhead, improve execution speed and boost throughput. An important FDRPAS MONITOR service enhancement relieves users of the responsibility of ensuring that MONITOR tasks have a sufficiently high dispatching priority to always respond in time to communication requests from the main SWAP task. Now MONITOR tasks that determine themselves to be in a poorly performing service class can reset themselves to a higher performing service class and avoid "None Responding Monitor" WTOR console messages.

Concurrently SWAP a Very Large Number of Volumes... In Much Less Time

Structural improvements allow FDRPAS SWAPDUMP operations to support concurrently swapping 15,000+ source volumes, a 50% increase over the prior limit of 10,000+ volumes. This is especially useful for, but not limited to, maintaining HyperSwap remote data protection in all its forms (i.e. Basic, GDPS, GDDR/AutoSwap, consistency groups) when concurrently migrating a very large number of volumes at a set point in time, as when relocating an entire data center to a new site (a sketch of the statement form follows).
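By analogy with the SWAP examples above, a SWAPDUMP request presumably takes the same general form; this sketch assumes that parallel syntax and reuses the same placeholder names (PASPROC, PROD01, unit 9001):

//SWAPD    JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X
//DUMPC   EXEC PASPROC
//PAS.SYSIN DD *
* COPY AND SYNCHRONIZE TO THE TARGET FOR A POINT-IN-TIME COPY,
* WITHOUT THE FINAL UCB SWAP OF A FULL SWAP OPERATION
  SWAPDUMP TYPE=FULL
  MOUNT VOL=PROD01,SWAPUNIT=9001
/*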

FDRPAS V5.4 L80 can generate and submit SWAP and MONITOR JCL. By combining automation enhancements, which reduce the clerical effort and eliminate the potential clerical errors typically associated with creating and submitting JCL and control statements, with the ability to maintain consistent HyperSwap remote data protection in all its forms during a non-disruptive, concurrent relocation of upwards of 15,000 source volumes at a single point in time, with minimal impact on system resources and ongoing workloads, FDRPAS allows users to undertake and successfully complete very large migrations in substantially less time.

Especially important when migrating large numbers of volumes, combining the new GENSWAP with SIMSWAP and SIMSWAPMON simulation allows users to validate their migration jobs before the actual SWAP. SIMSWAP validates SWAP parameters; SIMSWAPMON validates MONITOR parameters and that MONITORs can communicate with the SWAP job. Simulations can validate that source volumes are online and targets offline. They can identify when target devices do not exist, are not offline, or are the wrong type or size. Depending on the storage system, they can identify the CPU IDs of LPARs with access to the source volumes, verifying that all of the LPARs you expect have access and that there are no unexpected systems with access. It is highly recommended that users run simulations prior to a SWAP to validate the JCL and parameters of all their SWAP jobs and MONITOR tasks.

NEW FDRPAS GENSWAP Generates, Automates and Coordinates

GENSWAP can generate, given local JCL standards, source volume VOLSER mask ranges, a storage subsystem SSID or CUID, and ranges for the target devices, the hundreds of SWAP job and MONITOR task JCL streams necessary to SWAP up to 15,000 source volumes, 64 volumes per job; enough to migrate an entire data center. GENSWAP automation generates MONITOR target units to match the SWAP source volumes, balancing the workload by spreading SWAP jobs across one or more LPARs, while submitting the necessary MONITOR tasks on all connected LPARs.

Submitting SWAP jobs of up to 64 volumes per job, GENSWAP can apply the submit delay and maximum active task parameter values to spread jobs out in time, to avoid flooding the system with jobs and to allow the SWAP tasks it is submitting to the internal reader to start in a more orderly manner. GENSWAP can also employ MONITOR started tasks to conserve z/OS initiators.

GENSWAP coordinates job execution to control and maximize performance. Sorting the source volumes to be swapped by size and location, GENSWAP selects one volume of a given size from each separate SSID in rotation to avoid contention, and distributes SWAP jobs across LPARs to balance the workload. Additionally, by recognizing and identifying system volumes that require special consideration, i.e. volumes containing JES, page and sysplex couple data sets (CDS), GENSWAP can wait until the end of the migration and then serialize the submission of the SWAP jobs for those volumes.

Simple Large SWAP Key Control Statements

LARGESWAP=nnnn
An estimate that allows FDRPAS to reduce below-the-line CSA for large numbers of volumes by maintaining a CSA requirement for sets of about 1,500 concurrent swaps.

MOUNT VOL=...,SWAPUNIT=...(,CUID=...)(,SSID=...)
CUID=/SSID= limits the source volume(s) selected to those in the specified control unit or logical storage subsystem. NOTE: CUID and SSID are mutually exclusive. Specifying CUID= or SSID= with VOL=* allows the SWAP of an entire control unit, or a specific logical storage subsystem, with a single statement, to a target control unit defined like the source control unit with regard to UCB addresses and volume sizes. For example, to move all the volumes in CUID=12345 to a new control unit defined by UCB addresses 9***, specify:

MOUNT VOL=*,SWAPUNIT=9***,CUID=12345

PASJOB DD
Contains the templates for generating SWAP and MONITOR job JCL. The PASJOB DD statement is where the model definition for the SWAP jobs to be generated is built; it contains the common JCL and control statements (i.e. JOB card, routing info, control statements) for all the SWAP jobs and MONITOR tasks for submission on any LPAR in the sysplex. A combined sketch follows.
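Putting the statements above together, here is a sketch of a job that swaps an entire logical control unit in one request. The SWAP parameters mirror the GENSWAP-generated sample shown later in this paper; the job name, PASPROC procedure, CUID value and target unit mask are placeholders:

//CUSWAP   JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X
//SWAPC   EXEC PASPROC
//PAS.SYSIN DD *
* SWAP EVERY VOLUME IN LOGICAL SUBSYSTEM CUID=12345 TO TARGET UNITS 9***
  SWAP TYPE=FULL,MAXTASKS=64,LARGESWAP=6000
  MOUNT VOL=*,SWAPUNIT=9***,CUID=12345
/*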

Key Control Parameters for a SWAP of a Large Number of Volumes

When migrating a large number of volumes, it is important to run simulations to identify errors and mistakes in the SWAP/MONITOR JCL and control statements, as well as in the actual hardware configuration:

CHECKSOURCE= confirms the source volume VTOC and VVDS are error-free.
CHECKTARGET= confirms the target is empty.

Using parameters that allow FDRPAS to automate controlling the workflow is another important consideration when swapping a large number of volumes (a combined sketch follows this list):

MAXTASKS= allows up to 64 DASD volumes in a single swap job.
MAXACTIVESWAPS= limits the number of volumes that can be in "pass 1" across all jobs to the MAXTASKS= limit. This allows the submission of many FDRPAS jobs while still limiting the number of volumes that are actively copying, to prevent overburdening the system.
PACING= a delay between each write I/O, to minimize the overall impact of the SWAP.
SUBMITDELAY= a delay that GENSWAP waits between submitting SWAP jobs, to avoid flooding the system with jobs that might impact production work.
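A sketch showing these controls coded together on one large SWAP. MAXACTIVESWAPS=YES and MAXTASKS=64 mirror the generated sample later in this paper; the CHECKSOURCE=, CHECKTARGET= and PACING= values and the VOLSER mask are assumptions for illustration only and should be checked against the FDRPAS User Manual:

//BIGSWAP  JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X
//SWAPC   EXEC PASPROC
//PAS.SYSIN DD *
* VALIDATE WITH SIMSWAP/SIMSWAPMON FIRST, THEN RUN THE LIVE SWAP
* (CHECKSOURCE/CHECKTARGET/PACING VALUES BELOW ARE ILLUSTRATIVE)
  SWAP TYPE=FULL,MAXTASKS=64,MAXACTIVESWAPS=YES,
       CHECKSOURCE=YES,CHECKTARGET=YES,PACING=3
  MOUNT VOL=PR*,SWAPUNIT=9***
/*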

Sample of a SWAP job that GENSWAP generates

//PAS.PASJOB DD DATA,DLM=ZX
//*SWAPJOB
//PASSW&&1 JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1),
//         NOTIFY=&SYSUID
/*XEQ N1
//SWAPC   EXEC PASPROC
//*******************************************************************
//* PAS SWAP
//*******************************************************************
//PAS.SYSIN DD *
  SWAP TYPE=FULL,MAXTASKS=64,LARGERSIZE=OK,CONFIRMSWAP=YES,
       DYNMON=YES,ALLOWPAV=YES,LARGESWAP=6000,SWAPID=&,
       MAXACTIVESWAPS=YES
*EXCLUDE CPUID=0000000040  USE FDRXCPU
  MOUNT VOL=&&&&&&,SWAPUNIT=&&&&
/*

Sample of a MONITOR task that GENSWAP generates

//PAS.PASJOB DD DATA,DLM=ZY
//*CPUID=126ED62817
//PASMO&&2 JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1),
//         NOTIFY=&SYSUID
/*XEQ N2
//MONA    EXEC PASPROC
//*******************************************************************
//* MONITOR
//*******************************************************************
//PAS.SYSIN DD *
  MONITOR TYPE=SWAP,DURATION=2,SWAPID=&,ALLOWPAV=YES,LARGESWAP=6000
*
  MOUNT SWAPUNIT=&&&&
/*
//*CPUID=136ED62817
//PASMO&&3 JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1),
//         NOTIFY=&SYSUID
/*XEQ N3
//MONB    EXEC PASPROC
//*******************************************************************
//* MONITOR
//*******************************************************************
//PAS.SYSIN DD *
  MONITOR TYPE=SWAP,DURATION=2,SWAPID=&,ALLOWPAV=YES,LARGESWAP=6000
*
  MOUNT SWAPUNIT=&&&&
/*
ZY

FDRPAS can concurrently SWAP up to 15,000 GDPS volumes. Many large customer sites that depend on GDPS/HyperSwap protection to ensure business continuance cannot accept the risk of suspending HyperSwap for the length of time it would normally take to migrate a large number of disk volumes to new storage systems. FDRPAS can copy and synchronize volumes while they remain under HyperSwap protection, but z/OS will not allow a SWAP while a volume is enabled for HyperSwap. FDRPAS addresses this by providing facilities that can suspend HyperSwap for just the minimal time it takes the actual z/OS UCB SWAP to complete, and re-enable HyperSwap when the swap completes. Using the HYPERSW DISABLE command, which is much faster than the older HYPERSW OFF command, FDRPAS tells GDPS to suspend HyperSwap. Once the SWAP of all the volumes completes, using a HYPERSW ON command, FDRPAS tells GDPS it is okay, once again, to enable HyperSwap protection.

A simple job automates and coordinates the entire FDRPAS process. The five steps that coordinate with FDRPAS (a schematic sketch follows the list):
1. CONFIRM. Verifies all disks are synchronized and in a "ready to confirm" state.
2. DISABLE. The FDR Extended MCS Software Console (FDREMCS) issues a NetView MODIFY (F) command to disable GDPS/HyperSwap, monitors the command results and notifies FDRPAS when HyperSwap is inactive and it is OK to SWAP.
3. WAITTERM. Waits for the SWAP to terminate on all of the selected volumes.
4. RELABEL. (Optional) Gives the source volumes valid volume labels and brings them online to the GDPS K system(s).
5. ENABLE. Issues a NetView MODIFY (F) command to re-enable GDPS/HyperSwap and allow the GDPS K controlling system(s) to recognize the volume serial numbers on the target devices at their new unit addresses, and the new volume serial numbers of the old source volumes.
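A schematic sketch of such a coordination job is shown below. The step keywords come straight from the list above, but the one-statement-per-step layout and the PASPROC procedure are assumptions for illustration only; the FDRPAS User Manual defines the real syntax for GDPS coordination.

//GDPSCO   JOB (ACCT#),'STGADM1',CLASS=A,MSGCLASS=X
//COORD   EXEC PASPROC
//PAS.SYSIN DD *
* HYPOTHETICAL LAYOUT: ONE STATEMENT PER COORDINATION STEP
* 1. VERIFY ALL VOLUMES ARE SYNCHRONIZED AND READY TO CONFIRM
  CONFIRM
* 2. FDREMCS ISSUES THE NETVIEW MODIFY TO DISABLE GDPS/HYPERSWAP
  DISABLE
* 3. WAIT FOR THE SWAP TO TERMINATE ON ALL SELECTED VOLUMES
  WAITTERM
* 4. (OPTIONAL) RELABEL OLD SOURCE VOLUMES, BRING ONLINE TO K SYSTEMS
  RELABEL
* 5. RE-ENABLE GDPS/HYPERSWAP VIA NETVIEW MODIFY
  ENABLE
/*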

Some sample questions from the attendees at the presentation:

Q. Can FDRPAS do an entire box but exclude certain LCU(s)?
A. Yes. The FDRPAS User Manual explains the appropriate parameters to use, and samples are available.

Q. Any considerations for FDRPAS swapping ECS-enabled catalogs?
A. The FDRPAS User Manual explains that, at current z/OS maintenance levels, when a volume containing an ICF catalog enabled for Enhanced Catalog Sharing (ECS) is swapped, ECS sharing recognizes that the volume has moved to a new hardware address and disables that catalog. The manual documents how to avoid this using standard MODIFY CATALOG,ECSHR commands to remove catalogs from ECS before swapping them, and to re-enable them for ECS after the swap. A sketch of the command sequence follows.
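For illustration, the operator commands might look like the following, where UCAT.PROD1 is a hypothetical catalog name; verify the ECSHR keyword forms against the IBM catalog documentation for your z/OS level. Before the swap, remove the catalog from ECS:

F CATALOG,ECSHR(REMOVE,UCAT.PROD1)

After the swap completes, allow eligible catalogs to rejoin ECS:

F CATALOG,ECSHR(AUTOADD)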

Q. Can FDRPAS swap just the VTOC?
A. No, but FDR can do a DATA=NONE backup, so no data tracks are backed up. This backup consists only of the FDR control records necessary to allocate and update the characteristics of the output data sets in the VTOC during a restore.

Proposed evaluation of FDRPAS migrating a large number of volumes.

The objective of the evaluation was to corroborate the viability of HDS Mainframe Services employing FDRPAS in customer engagements requiring non-disruptive, concurrent migration of a very large number of z/OS disk volumes. The evaluation would simulate a customer's production environment by putting the volumes being swapped under an I/O load from applications running on multiple LPARs that were reading and updating data sets on those volumes. An evaluation would be made of the ease of use of FDRPAS, of what negative impact, if any, it had on overall system I/O response time, and of what potential, if any, it might have to consume an unacceptable amount of system CPU, memory or paging resources.

It was jointly decided that meeting the evaluation objectives would require using three z/OS systems sharing access to 10,000 volumes (5,000 source volumes and a like number of targets). The volumes would be a mix of sizes, from the largest EAV down through the standard 3390-3, and contain a sample of different data set types. As the aim was to evaluate impact on I/O performance and resource consumption, only about 5% to 10% of each individual volume was allocated with data sets, limiting the amount of data across all the source volumes to between 1.5 TB and 2.5 TB. This would keep data copy times down and allow individual tests to complete in under an hour, yet it would still allow the tests to demonstrate the impact that maintaining 10,000 volumes in a synchronized state would have on system resources and I/O throughput, as FDRPAS would still be migrating a substantial number of volumes that were all under a constant I/O load.

Test Configuration

INNOVATION provided a pre-GA copy of what would become FDRPAS V5.4 L80. Hitachi Data Systems conducted the evaluation and analysis in their Santa Clara, CA, Test Lab during April and May of 2013. The HDS test environment included:

Processor: z196-508 (4042 MIPS / 498 MSU / 8 CPUs)
•   Three LPARs using dedicated CPs
•   Load: PE01 (4 CPs and 8 GB); PE02 and PE03 (2 CPs and 4 GB each)

Storage: 2-chassis VSP (1024 x SAS 10K 300 GB HDD), single HDP pool
•   Hitachi Dynamic Provisioning (HDP) thin provisioning for virtual storage capacity

10,000+ volumes with HyperPAV (5,120 source and 5,120 target volumes)
•   Mix of EAV, 3390-54, 3390-27, 3390-9 and 3390-3
•   Large volumes exercise HyperPAV, with 2 data sets per volume of 503 cylinders each
•   Small volumes with one data set of 83 cylinders
•   Overall between 1.5 TB and 2.5 TB of data

I/O load generation: the Performance Associates PAI/O Driver (see note)
•   Create a heavy I/O load, i.e. 6,000 IOPS per LPAR for a total system load of 18,000 IOPS, skewed with the larger volumes having the highest I/O rates
•   HyperPAV and Multiple Allegiance for concurrent multi-LPAR access

Note: The results in this document were developed using Version 20.A.01 of the PAI/O Driver for z/OS from Performance Associates, Inc. Performance Associates did not perform this evaluation and consequently does not endorse the results.

FDRPAS SWAP with concurrent steady-state I/O load.

The SWAP (i.e. copy, synchronize and swap) ran with a heavy concurrent I/O load (6,000 IOPS per LPAR) based on a typical OLTP workload: 70% reads, writes after read, and an average of 73% cache hits. Under this I/O load, FDRPAS took 45 minutes to copy, synchronize and SWAP 5,120 source and target volumes. Since the volumes were no more than 5% to 10% allocated, only about 1.5 to 2.5 TB was actually copied. The constant updating of the source volumes meant the FDRPAS main SWAP job had to make multiple passes to synchronize the volumes to the point where it could issue the "Ready to SWAP" WTOR.

The load test also includes a 34-minute period after the "Ready" WTOR during which FDRPAS had to manage the 10,240 volumes and copy new updates to the target volumes while waiting for an OK-to-SWAP reply to the "Ready" WTOR.

CPU utilization peaked on the main SWAP job LPAR at about 40% (315/8 CPs) of the CEC, averaging about 30% (240/8 CPs) while copying updated tracks during the copy and test-delay phases. The MONITOR task LPARs under I/O load gradually rose to about 16% (130/8) of the CEC, staying there through the test-delay period and dropping to only 3.5% (30/8) during the swap and clean-up phase. Note that all these percentages also include the CPU overhead of the PAI/O Driver running on each LPAR.

The pre-GA FDRPAS code took 15 minutes to clean up, but by removing unnecessary wait time the GA FDRPAS code would cut this split and clean-up wall clock time to about 4.5 minutes (a 70% reduction).

Steady Load Test (PAI/O load is 6,000 IOPS per LPAR)

Analysis of the I/O rate in the load tests shows that the I/O activity on each of the LPARs, following the start of the PAI/O Driver, averaged about 16,000 IOPS. The PE01 LPAR, where the main SWAP job copies the content of the source volumes to the targets, peaked at about 21,000 IOPS, averaging about 19,000 IOPS during the copy and staying there through the test-delay phase. The PE02 and PE03 LPARs, where the MONITOR tasks were running, recorded only about 16,000 IOPS from the start of the PAI/O Driver through the FDRPAS data copy phase and the test-delay phase.

Details of Start Subchannel I/O response time for all LPARs during the load test.

Analysis of the main SWAP job LPAR during the load tests shows: the I/O response time in the data copy phase built to a peak of 12 ms, averaging about 10 ms during the first half of the copy phase, then fell off to an average of 8 ms for the entire second half of the copy phase. The I/O response time in the test-delay phase remained steady at about 6 ms, spiking briefly to a peak of about 38 ms at the start of the SWAP phase and spiking again to 24 ms midway through the SWAP phase, but on average remaining under 1 ms during the SWAP phase.

The PE02 and PE03 LPARs, with the MONITOR tasks, show during this load test: the I/O response time in the data copy phase averaged about 6 ms during the first half of the copy phase, built to a peak of 8 ms, then fell off to an average of 6 ms for the entire second half of the copy phase. The I/O response time in the test-delay phase remained steady at about 3 ms and, rather than spiking at the start of the SWAP phase and again midway through it, actually dipped, on average remaining just above 1 ms during the SWAP phase.

Closing Summary and Conclusion

FDRPAS... quickly and efficiently:
•   with no impact on paging
•   and minimal additional CPU utilization
•   imposing a slight increase in system I/O response time
•   in contention with a heavy I/O load

is able to copy, synchronize and SWAP 5,120 source and target volumes while the system maintains an average 5 ms I/O response time.

Post-test improvements in FDRPAS V5.4 L80...

•   Paging impact down to zero
    •   Found FDRPAS was zeroing pages out instead of deleting them.
    •   Change made to delete memory pages when discarding them.

•   Main SWAP job now has twice the concurrency
    •   SWAP could only do 32 tasks in one job during the benchmark tests.
    •   Change made to raise the max tasks in a single main SWAP job to 64.

•   New channel programs reduce PAV I/O wait times
    •   PAV contention was creating wait delays during the copy.
    •   Change made to raise the efficiency of FDRPAS EXCP channel programs.

•   Final copy time during swap phase cut by 80%
    •   A 15-second delay on individual SWAPs elongated the final copy and swap.
    •   CONFIRMSWAP delay reduced to 3 seconds.
