

ibm.com/redbooks

Front cover

IBM DS8870 Architecture and Implementation

Bertrand Dufrasne
Andre Candra
Scott Helmick
Jean Iyabi
Peter Kimmel
Abilio de Oliveira
Axel Westphal
Bruce Wilson

Dual IBM POWER7+ based controllers

All-flash drive configuration

Enhanced Business Class configuration option


International Technical Support Organization

IBM DS8870 Architecture and Implementation

February 2014

SG24-8085-02


© Copyright International Business Machines Corporation 2014. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Third Edition (February 2014)

This edition applies to the IBM DS8870 with Licensed Machine Code (LMC) 7.7.20.xx.xx (bundle version 87.20.xxx.xx) or later.

Note: Before using this information and the product it supports, read the information in “Notices” on page xi.


Contents

Notices . . . . . . . . . . xi
Trademarks . . . . . . . . . . xii

Preface . . . . . . . . . . xiii
Authors . . . . . . . . . . xiv
Now you can become a published author, too! . . . . . . . . . . xvi
Comments welcome . . . . . . . . . . xvi
Stay connected to IBM Redbooks . . . . . . . . . . xvi

Part 1. Concepts and architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Chapter 1. Introduction to the IBM DS8870 . . . . . . . . . . 3
1.1 Introduction to the DS8870 . . . . . . . . . . 4
1.1.1 Features of the DS8870 . . . . . . . . . . 5
1.2 DS8870 controller options and frames . . . . . . . . . . 9
1.3 DS8870 architecture and functions overview . . . . . . . . . . 10
1.3.1 Overall architecture and components . . . . . . . . . . 10
1.3.2 Storage capacity . . . . . . . . . . 14
1.3.3 Supported environments . . . . . . . . . . 14
1.3.4 Configuration flexibility . . . . . . . . . . 14
1.3.5 Copy Services functions . . . . . . . . . . 16
1.3.6 Resource Groups for copy services scope limiting . . . . . . . . . . 18
1.3.7 Service and setup . . . . . . . . . . 19
1.3.8 IBM certified secure data overwrite . . . . . . . . . . 19
1.3.9 Performance features . . . . . . . . . . 20
1.3.10 Sophisticated caching algorithms . . . . . . . . . . 20
1.3.11 Flash drives . . . . . . . . . . 21
1.3.12 Multipath Subsystem Device Driver . . . . . . . . . . 21
1.3.13 Performance for System z . . . . . . . . . . 22
1.3.14 Performance enhancements for IBM Power Systems . . . . . . . . . . 22

Chapter 2. IBM DS8870 models . . . . . . . . . . 25
2.1 IBM DS8870 . . . . . . . . . . 26
2.2 Model overview . . . . . . . . . . 26
2.3 DS8870 disk drive options . . . . . . . . . . 31
2.4 Additional licenses that are needed . . . . . . . . . . 34

Chapter 3. DS8870 hardware components and architecture . . . . . . . . . . 35
3.1 Frames: DS8870 . . . . . . . . . . 36
3.1.1 DS8870 Enterprise Class . . . . . . . . . . 36
3.1.2 DS8870 Business Class . . . . . . . . . . 36
3.1.3 DS8870 all-flash . . . . . . . . . . 37
3.1.4 Base frame: DS8870 . . . . . . . . . . 38
3.1.5 Expansion frames . . . . . . . . . . 38
3.1.6 Rack operator panel . . . . . . . . . . 39
3.2 DS8870 architecture overview . . . . . . . . . . 41
3.2.1 The IBM POWER7+ processor-based server . . . . . . . . . . 41
3.2.2 Peripheral Component Interconnect Express adapters . . . . . . . . . . 43
3.2.3 Storage facility processor complex . . . . . . . . . . 44


3.2.4 Processor memory . . . . . . . . . . 44
3.2.5 Flexible service processor and system power control network . . . . . . . . . . 45
3.3 I/O enclosures . . . . . . . . . . 46
3.3.1 DS8870 I/O enclosure . . . . . . . . . . 46
3.3.2 Host adapters . . . . . . . . . . 47
3.3.3 Device adapters . . . . . . . . . . 49
3.4 Disk subsystem . . . . . . . . . . 50
3.4.1 Disk drives . . . . . . . . . . 50
3.4.2 Storage enclosures . . . . . . . . . . 50
3.5 Power and cooling . . . . . . . . . . 55
3.6 Management console network . . . . . . . . . . 58

3.6.1 Ethernet switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Chapter 4. RAS on the IBM DS8870 . . . . . . . . . . 61
4.1 Names and terms for the DS8870 . . . . . . . . . . 62
4.2 DS8870 Processor Complex RAS features . . . . . . . . . . 63
4.2.1 POWER7+ Hypervisor . . . . . . . . . . 63
4.2.2 POWER7+ processor . . . . . . . . . . 64
4.2.3 AIX operating system . . . . . . . . . . 67
4.2.4 Cross cluster communication . . . . . . . . . . 67
4.2.5 Environmental monitoring . . . . . . . . . . 68
4.2.6 Resource deconfiguration . . . . . . . . . . 68
4.3 CEC failover and failback . . . . . . . . . . 69
4.3.1 Dual cluster operation and data protection . . . . . . . . . . 69
4.3.2 Failover . . . . . . . . . . 71
4.3.3 Failback . . . . . . . . . . 72
4.3.4 NVS and power outages . . . . . . . . . . 72
4.4 Data flow in DS8870 . . . . . . . . . . 74
4.4.1 I/O enclosures . . . . . . . . . . 74
4.4.2 Host connections . . . . . . . . . . 74
4.4.3 Metadata checks . . . . . . . . . . 78
4.5 RAS on the HMC . . . . . . . . . . 80
4.5.1 Microcode updates . . . . . . . . . . 80
4.5.2 Call home and remote support . . . . . . . . . . 81
4.6 RAS on the disk system . . . . . . . . . . 83
4.6.1 RAID configurations . . . . . . . . . . 83
4.6.2 Disk path redundancy . . . . . . . . . . 84
4.6.3 Predictive Failure Analysis . . . . . . . . . . 85
4.6.4 Disk scrubbing . . . . . . . . . . 85
4.6.5 Smart Rebuild . . . . . . . . . . 85
4.6.6 RAID 5 overview . . . . . . . . . . 86
4.6.7 RAID 6 overview . . . . . . . . . . 87
4.6.8 RAID 10 overview . . . . . . . . . . 89
4.6.9 Spare creation . . . . . . . . . . 90
4.7 RAS on the power subsystem . . . . . . . . . . 91
4.7.1 Components . . . . . . . . . . 92
4.7.2 Line power loss . . . . . . . . . . 94
4.7.3 Line power fluctuation . . . . . . . . . . 94
4.7.4 Power control . . . . . . . . . . 95
4.7.5 Unit emergency power off . . . . . . . . . . 96
4.8 Other features . . . . . . . . . . 96
4.8.1 Internal network . . . . . . . . . . 97
4.8.2 Remote support . . . . . . . . . . 97


4.8.3 Earthquake resistance . . . . . . . . . . 97
4.8.4 Secure data overwrite . . . . . . . . . . 98

Chapter 5. Virtualization concepts . . . . . . . . . . 99
5.1 Virtualization definition . . . . . . . . . . 100
5.2 The abstraction layers for disk virtualization . . . . . . . . . . 100
5.2.1 Array sites . . . . . . . . . . 102
5.2.2 Arrays . . . . . . . . . . 102
5.2.3 Ranks . . . . . . . . . . 104
5.2.4 Extent pools . . . . . . . . . . 106
5.2.5 Logical volumes . . . . . . . . . . 109
5.2.6 Space-efficient volumes . . . . . . . . . . 114
5.2.7 Allocation, deletion, and modification of LUNs and CKD volumes . . . . . . . . . . 119
5.2.8 Logical subsystem . . . . . . . . . . 123
5.2.9 Volume access . . . . . . . . . . 126
5.2.10 Virtualization hierarchy summary . . . . . . . . . . 128
5.3 Benefits of virtualization . . . . . . . . . . 129
5.4 zDAC: z/OS FICON Discovery and Auto-Configuration . . . . . . . . . . 130
5.5 EAV V2: Extended address volumes . . . . . . . . . . 132

Chapter 6. IBM DS8000 Copy Services overview . . . . . . . . . . 137
6.1 Copy Services . . . . . . . . . . 138
6.2 FlashCopy and FlashCopy Space Efficient . . . . . . . . . . 139
6.2.1 Basic concepts . . . . . . . . . . 139
6.2.2 Benefits and use . . . . . . . . . . 141
6.2.3 FlashCopy options . . . . . . . . . . 142
6.2.4 FlashCopy SE-specific options . . . . . . . . . . 143
6.2.5 Remote Pair FlashCopy . . . . . . . . . . 144
6.3 Remote Mirror and Copy . . . . . . . . . . 146
6.3.1 Metro Mirror . . . . . . . . . . 147
6.3.2 Global Copy . . . . . . . . . . 148
6.3.3 Global Mirror . . . . . . . . . . 148
6.3.4 Metro/Global Mirror . . . . . . . . . . 150
6.3.5 Multiple Global Mirror sessions . . . . . . . . . . 151
6.3.6 Thin provisioning enhancements on open environments . . . . . . . . . . 155
6.3.7 GM and MGM improvement because of collision avoidance . . . . . . . . . . 156
6.3.8 z/OS Global Mirror . . . . . . . . . . 156
6.3.9 z/OS Metro/Global Mirror . . . . . . . . . . 158
6.3.10 Summary of Remote Mirror and Copy function characteristics . . . . . . . . . . 158
6.3.11 Consistency group considerations . . . . . . . . . . 159
6.3.12 GDPS on z/OS environments . . . . . . . . . . 160
6.3.13 Tivoli Storage Productivity Center for Replication functionality . . . . . . . . . . 160

6.4 Resource Groups for Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

Chapter 7. Architectured for performance . . . . . . . . . . 163
7.1 DS8870 hardware: Performance characteristics . . . . . . . . . . 164
7.1.1 Vertical growth and scalability . . . . . . . . . . 164
7.1.2 DS8870 Fibre Channel switched interconnection at the back-end . . . . . . . . . . 166
7.1.3 Fibre Channel device adapter . . . . . . . . . . 167
7.1.4 POWER7+ and POWER7 . . . . . . . . . . 169
7.1.5 Eight-port and four-port host adapters . . . . . . . . . . 170
7.2 Software performance: Synergy items . . . . . . . . . . 171
7.2.1 Synergy with Power Systems . . . . . . . . . . 171
7.2.2 Synergy with System z . . . . . . . . . . 173


7.3 Performance considerations for disk drives . . . . . . . . . . 175
7.4 DS8000 superior caching algorithms . . . . . . . . . . 178
7.4.1 Sequential Adaptive Replacement Cache . . . . . . . . . . 178
7.4.2 Adaptive Multi-stream Prefetching . . . . . . . . . . 180
7.4.3 Intelligent Write Caching . . . . . . . . . . 181
7.5 Performance considerations for logical configuration . . . . . . . . . . 182
7.5.1 Workload characteristics . . . . . . . . . . 182
7.5.2 Data placement in the DS8000 . . . . . . . . . . 183
7.6 I/O Priority Manager . . . . . . . . . . 189
7.6.1 Performance policies for open systems . . . . . . . . . . 190
7.6.2 Performance policies for System z . . . . . . . . . . 190
7.7 IBM Easy Tier . . . . . . . . . . 191
7.7.1 Easy Tier generations . . . . . . . . . . 192
7.7.2 Easy Tier license . . . . . . . . . . 193
7.7.3 Easy Tier basic concepts . . . . . . . . . . 194
7.7.4 IBM Easy Tier operating modes . . . . . . . . . . 195
7.8 Performance and sizing considerations for open systems . . . . . . . . . . 198
7.8.1 Determining the number of paths to a LUN . . . . . . . . . . 198
7.8.2 Dynamic I/O load-balancing: Subsystem Device Driver . . . . . . . . . . 198
7.8.3 Automatic port queues . . . . . . . . . . 199
7.8.4 Determining where to attach the host . . . . . . . . . . 200
7.9 Performance and sizing considerations for System z . . . . . . . . . . 201
7.9.1 Host connections to System z servers . . . . . . . . . . 201
7.9.2 Parallel access volume . . . . . . . . . . 201
7.9.3 z/OS Workload Manager: Dynamic PAV tuning . . . . . . . . . . 203
7.9.4 HyperPAV . . . . . . . . . . 205
7.9.5 PAV in z/VM environments . . . . . . . . . . 207
7.9.6 Multiple Allegiance . . . . . . . . . . 208
7.9.7 I/O priority queuing . . . . . . . . . . 210
7.9.8 Performance considerations on Extended Distance FICON . . . . . . . . . . 211
7.9.9 High Performance FICON for z . . . . . . . . . . 212

Part 2. Planning and installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215

Chapter 8. DS8870 physical planning and installation . . . . . . . . . . 217
8.1 Considerations before installation . . . . . . . . . . 218
8.1.1 Who should be involved . . . . . . . . . . 219
8.1.2 What information is required . . . . . . . . . . 220
8.2 Planning for the physical installation . . . . . . . . . . 220
8.2.1 Delivery and staging area . . . . . . . . . . 220
8.2.2 Floor type and loading . . . . . . . . . . 221
8.2.3 Overhead cabling features . . . . . . . . . . 222
8.2.4 Room space and service clearance . . . . . . . . . . 223
8.2.5 Power requirements and operating environment . . . . . . . . . . 224
8.2.6 Host interface and cables . . . . . . . . . . 226
8.2.7 Host adapter Fibre Channel specifics for open environments . . . . . . . . . . 227
8.2.8 FICON specifics on z/OS environment . . . . . . . . . . 227
8.2.9 Best practice for host adapters . . . . . . . . . . 228
8.2.10 WWNN and WWPN determination . . . . . . . . . . 228
8.3 Network connectivity planning . . . . . . . . . . 231
8.3.1 Hardware Management Console and network access . . . . . . . . . . 231
8.3.2 IBM Tivoli Storage Productivity Center . . . . . . . . . . 232
8.3.3 DS command-line interface . . . . . . . . . . 232


8.3.4 Remote support connection . . . . . . . . . . 233
8.3.5 Remote power control . . . . . . . . . . 234
8.3.6 Storage area network connection . . . . . . . . . . 234
8.3.7 IBM Security Key Lifecycle Manager server for encryption . . . . . . . . . . 235
8.3.8 Lightweight Directory Access Protocol server for single sign-on . . . . . . . . . . 236
8.4 Remote Mirror and Copy connectivity . . . . . . . . . . 236
8.5 Disk capacity considerations . . . . . . . . . . 237
8.5.1 Disk sparing . . . . . . . . . . 237
8.5.2 Disk capacity . . . . . . . . . . 237
8.5.3 DS8000 solid-state drive considerations . . . . . . . . . . 239

Chapter 9. DS8870 HMC planning and setup . . . . . . . . . . 241
9.1 Hardware Management Console overview . . . . . . . . . . 242
9.1.1 Storage HMC hardware . . . . . . . . . . 242
9.1.2 Private Ethernet networks . . . . . . . . . . 244
9.2 Hardware Management Console software . . . . . . . . . . 244
9.2.1 DS Storage Manager GUI . . . . . . . . . . 245
9.2.2 DS command-line interface . . . . . . . . . . 245
9.2.3 DS Open application programming interface . . . . . . . . . . 245
9.2.4 Web-based user interface . . . . . . . . . . 245
9.3 HMC activities . . . . . . . . . . 247
9.3.1 HMC planning tasks . . . . . . . . . . 247
9.3.2 Planning for microcode upgrades . . . . . . . . . . 248
9.3.3 Time synchronization . . . . . . . . . . 248
9.3.4 Monitoring DS8870 with the HMC . . . . . . . . . . 249
9.3.5 Call home and remote support . . . . . . . . . . 249
9.4 HMC and IPv6 . . . . . . . . . . 250
9.5 HMC user management . . . . . . . . . . 252
9.6 External HMC . . . . . . . . . . 254
9.6.1 HMC redundancy benefits . . . . . . . . . . 254
9.7 Configuration worksheets . . . . . . . . . . 255
9.8 Configuration flow . . . . . . . . . . 256

Chapter 10. IBM System Storage DS8000 features and licensed functions . . . . . . . . . . 259
10.1 IBM System Storage DS8000 licensed functions . . . . . . . . . . 260
10.1.1 Licensing . . . . . . . . . . 261
10.1.2 Licensing: cost structure . . . . . . . . . . 264
10.2 Activating licensed functions . . . . . . . . . . 266
10.2.1 Obtaining DS8000 machine information . . . . . . . . . . 266
10.2.2 Obtaining activation codes . . . . . . . . . . 269
10.2.3 Applying activation codes by using the GUI . . . . . . . . . . 273
10.2.4 Applying activation codes by using the DS CLI . . . . . . . . . . 277
10.3 Licensed scope considerations . . . . . . . . . . 279
10.3.1 Why you have a choice . . . . . . . . . . 279
10.3.2 Using a feature for which you are not licensed . . . . . . . . . . 280
10.3.3 Changing the scope to All . . . . . . . . . . 281
10.3.4 Changing the scope from All to FB . . . . . . . . . . 282
10.3.5 Applying an insufficient license feature key . . . . . . . . . . 283
10.3.6 Calculating how much capacity is used for CKD or FB . . . . . . . . . . 283

Part 3. Storage configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

Chapter 11. Configuration flow . . . . . . . . . . 287
11.1 Configuration worksheets . . . . . . . . . . 288


11.2 Disk Encryption . . . . . . . . . . 288
11.3 Network security . . . . . . . . . . 289
11.4 Configuration flow . . . . . . . . . . 290

11.4.1 General storage configuration guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291

Chapter 12. Configuration by using the DS Storage Manager GUI . . . . . . . . . . 293

12.1 DS Storage Manager GUI overview . . . . . . . . . . 294
12.1.1 Accessing the DS GUI . . . . . . . . . . 294
12.1.2 DS GUI Overview window . . . . . . . . . . 299
12.2 User management for the DS GUI . . . . . . . . . . 302
12.3 Logical configuration introduction . . . . . . . . . . 306
12.4 Configuring DS8870 storage . . . . . . . . . . 307
12.4.1 Defining a storage complex . . . . . . . . . . 307
12.4.2 Creating arrays . . . . . . . . . . 310
12.4.3 Creating ranks . . . . . . . . . . 315
12.4.4 Creating extent pools . . . . . . . . . . 318
12.4.5 Configuring I/O ports . . . . . . . . . . 324
12.4.6 Configuring logical host systems . . . . . . . . . . 325
12.4.7 Creating fixed block volumes . . . . . . . . . . 329
12.4.8 Creating volume groups . . . . . . . . . . 333
12.4.9 Creating LCUs and CKD volumes . . . . . . . . . . 337
12.4.10 Additional tasks on LCUs and CKD volumes . . . . . . . . . . 344

12.5 Other DS GUI functions . . . . . . . . . . 346
12.5.1 Easy Tier . . . . . . . . . . 346
12.5.2 I/O Priority Manager . . . . . . . . . . 347
12.5.3 Checking the status of the DS8000 . . . . . . . . . . 349
12.5.4 Preview of the new DS8000 GUI . . . . . . . . . . 350

Chapter 13. Configuration with the DS command-line interface . . . . . . . . . . 353

13.1 DS command-line interface overview . . . . . . . . . . 354
13.1.1 Supported operating systems for the DS CLI . . . . . . . . . . 354
13.1.2 User accounts . . . . . . . . . . 355
13.1.3 User management by using the DS CLI . . . . . . . . . . 355
13.1.4 DS CLI profile . . . . . . . . . . 357
13.1.5 Configuring DS CLI to use a second HMC . . . . . . . . . . 359
13.1.6 Command structure . . . . . . . . . . 360
13.1.7 Using the DS CLI application . . . . . . . . . . 360
13.1.8 Return codes . . . . . . . . . . 364
13.1.9 User assistance . . . . . . . . . . 364
13.2 Configuring the I/O ports . . . . . . . . . . 366
13.3 Configuring the DS8000 storage for fixed block volumes . . . . . . . . . . 367
13.3.1 Creating arrays . . . . . . . . . . 367
13.3.2 Creating ranks . . . . . . . . . . 368
13.3.3 Creating extent pools . . . . . . . . . . 368
13.3.4 Creating FB volumes . . . . . . . . . . 370
13.3.5 Creating volume groups . . . . . . . . . . 375
13.3.6 Creating host connections . . . . . . . . . . 377
13.3.7 Mapping open systems host disks to storage unit volumes . . . . . . . . . . 378
13.4 Configuring DS8000 storage for CKD volumes . . . . . . . . . . 381
13.4.1 Create arrays . . . . . . . . . . 381
13.4.2 Ranks and extent pool creation . . . . . . . . . . 381
13.4.3 Logical control unit creation . . . . . . . . . . 382


13.4.4 Creating CKD volumes . . . . . . . . . . 382
13.4.5 Resource Groups . . . . . . . . . . 388
13.4.6 Performance I/O Priority Manager . . . . . . . . . . 388
13.4.7 Easy Tier . . . . . . . . . . 389
13.5 Metrics with DS CLI . . . . . . . . . . 389
13.6 Private network security commands . . . . . . . . . . 393
13.7 Copy Services commands . . . . . . . . . . 395

Part 4. Maintenance and upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397

Chapter 14. Licensed machine code . . . . . . . . . . 399
14.1 How new microcode is released . . . . . . . . . . 400
14.2 Bundle installation . . . . . . . . . . 402
14.3 Concurrent and non-concurrent updates . . . . . . . . . . 404
14.4 Code updates . . . . . . . . . . 404
14.5 Host adapter firmware updates . . . . . . . . . . 404
14.6 Loading the code bundle . . . . . . . . . . 405
14.7 Post-installation activities . . . . . . . . . . 405
14.8 Summary . . . . . . . . . . 406

Chapter 15. Monitoring with Simple Network Management Protocol . . . . . . . . . . 407
15.1 SNMP implementation on the DS8000 . . . . . . . . . . 408
15.1.1 Message Information Base file . . . . . . . . . . 408
15.1.2 Predefined SNMP trap requests . . . . . . . . . . 408
15.2 SNMP notifications . . . . . . . . . . 409
15.2.1 Serviceable event that uses specific trap 3 . . . . . . . . . . 409
15.2.2 Copy Services event traps . . . . . . . . . . 409
15.2.3 I/O Priority Manager SNMP . . . . . . . . . . 415
15.2.4 Thin provisioning SNMP . . . . . . . . . . 416
15.3 SNMP configuration . . . . . . . . . . 417
15.3.1 SNMP preparation . . . . . . . . . . 417
15.3.2 SNMP configuration from the HMC . . . . . . . . . . 417
15.3.3 SNMP configuration with the DS CLI . . . . . . . . . . 419

Chapter 16. Remote support . . . . . . . . . . 421
16.1 Introduction to remote support . . . . . . . . . . 422
16.1.1 Suggested reading . . . . . . . . . . 422
16.1.2 Organization of this chapter . . . . . . . . . . 422
16.1.3 Terminology and definitions . . . . . . . . . . 423
16.2 IBM policies for remote support . . . . . . . . . . 424
16.3 VPN rationale and advantages . . . . . . . . . . 424
16.4 Remote connection types . . . . . . . . . . 425
16.4.1 Asynchronous modem . . . . . . . . . . 425
16.4.2 IP network . . . . . . . . . . 427
16.4.3 IP network with traditional VPN . . . . . . . . . . 428
16.4.4 Assist On-site . . . . . . . . . . 428
16.5 DS8870 support tasks . . . . . . . . . . 428
16.5.1 Call home and heartbeat: outbound . . . . . . . . . . 429
16.5.2 Data offload: outbound . . . . . . . . . . 429
16.5.3 Code download: inbound . . . . . . . . . . 433
16.5.4 Remote support: inbound and two-way . . . . . . . . . . 433
16.6 Remote connection scenarios . . . . . . . . . . 434
16.6.1 No connections . . . . . . . . . . 434
16.6.2 Modem only . . . . . . . . . . 434


16.6.3 VPN only . . . . . . . . . . 436
16.6.4 Modem and network with no VPN . . . . . . . . . . 437
16.6.5 Modem and traditional VPN . . . . . . . . . . 438
16.7 Assist On-site . . . . . . . . . . 439
16.8 Further remote support enhancements . . . . . . . . . . 440
16.9 Audit logging . . . . . . . . . . 442

Chapter 17. DS8870 Capacity upgrades and Capacity on Demand . . . . . . . . . . 445
17.1 Installing capacity upgrades . . . . . . . . . . 446
17.1.1 Installation order of upgrades . . . . . . . . . . 448
17.1.2 Checking how much total capacity is installed . . . . . . . . . . 448
17.2 Using Capacity on Demand . . . . . . . . . . 451
17.2.1 What is Capacity on Demand? . . . . . . . . . . 451
17.2.2 Determining whether a DS8870 includes CoD disks . . . . . . . . . . 451
17.2.3 Using the CoD storage . . . . . . . . . . 456

Chapter 18. DS8800 to DS8870 model conversion . . . . . . . . . . 459
18.1 Introducing model conversion . . . . . . . . . . 460
18.2 Model conversion overview . . . . . . . . . . 460
18.2.1 Configuration considerations . . . . . . . . . . 460
18.2.2 Hardware considerations . . . . . . . . . . 460
18.3 Model conversion phases . . . . . . . . . . 461
18.3.1 Planning . . . . . . . . . . 461
18.3.2 Prerequisites . . . . . . . . . . 461
18.3.3 Mechanical conversion . . . . . . . . . . 463
18.3.4 Post conversion operations . . . . . . . . . . 463

Appendix A. Tools and service offerings . . . . . . . . . . 465
Planning and administration tools . . . . . . . . . . 466
Capacity Magic . . . . . . . . . . 466
Disk Magic . . . . . . . . . . 468
Storage Tier Advisor Tool . . . . . . . . . . 470
IBM Tivoli Storage Productivity Center 5.2 . . . . . . . . . . 474
IBM Tivoli Storage FlashCopy Manager . . . . . . . . . . 479
IBM Service offerings . . . . . . . . . . 480
IBM Global Technology Services: Service offerings . . . . . . . . . . 480
IBM STG Lab Services: Service offerings . . . . . . . . . . 480

Appendix B. Resiliency improvements . . . . . . . . . . 481
B.1 SCSI reserves detection and removal . . . . . . . . . . 482
B.1.1 SCSI reservation detection and removal . . . . . . . . . . 482
B.1.2 Excursion: SCSI reservations . . . . . . . . . . 484
B.2 Querying CKD path groups . . . . . . . . . . 486
B.3 z/OS Soft Fence . . . . . . . . . . 489
B.3.1 Basic information about Soft Fence . . . . . . . . . . 489
B.3.2 How to reset a Soft Fence status . . . . . . . . . . 491

Related publications . . . . . . . . . . 493
IBM Redbooks publications . . . . . . . . . . 493
Other publications . . . . . . . . . . 494
Online resources . . . . . . . . . . 494
How to get IBM Redbooks publications . . . . . . . . . . 494
Help from IBM . . . . . . . . . . 495


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, CICS®, Cognos®, DB2®, DS4000®, DS6000™, DS8000®, Easy Tier®, Enterprise Storage Server®, ESCON®, FICON®, FlashCopy®, GDPS®, Geographically Dispersed Parallel Sysplex™, Global Technology Services®, HyperSwap®, i5/OS™, IBM Flex System™, IBM SmartCloud®, IBM®, IMS™, iSeries®, Parallel Sysplex®, Power Architecture®, POWER Hypervisor™, Power Systems™, POWER6+™, POWER7 Systems™, POWER7+™, POWER7®, PowerPC®, POWER®, ProtecTIER®, Redbooks®, Redpaper™, Redbooks (logo) ®, Resource Measurement Facility™, RMF™, Storwize®, System i®, System p®, System Storage DS®, System Storage®, System x®, System z10®, System z®, TDMF®, Tivoli®, XIV®, z/OS®, z/VM®, z10™, z9®, zEnterprise®

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.


Preface

This IBM® Redbooks® publication describes the concepts, architecture, and implementation of the IBM DS8870. The book provides reference information to assist readers who need to plan for, install, and configure the DS8870.

The IBM DS8870 is the most advanced model in the IBM DS8000® series and is equipped with IBM POWER7+™ based controllers. Various configuration options are available that scale from dual 2-core systems up to dual 16-core systems with up to 1 TB of cache. The DS8870 also features enhanced 8 Gbps device adapters and host adapters. Connectivity options, with up to 128 Fibre Channel/IBM FICON® ports for host connections, make the DS8870 suitable for multiple server environments in open systems and IBM System z® environments.

The DS8870 supports advanced disaster recovery solutions, business continuity solutions, and thin provisioning. All disk drives in the DS8870 storage system have the Full Disk Encryption (FDE) feature. The DS8870 also can be integrated in a Lightweight Directory Access Protocol (LDAP) infrastructure. The DS8870 features high-density storage enclosures and can be equipped with flash drives. An all-flash drive configuration is also available.

The DS8870 can automatically optimize the use of each storage tier, particularly flash drives, through the IBM Easy Tier® feature, which is available at no extra charge. Easy Tier is covered in separate publications: IBM DS8000 Easy Tier Concepts and Usage, REDP-4667; IBM System Storage DS8000 Easy Tier Server, REDP-5013; IBM System Storage DS8000 Easy Tier Application, REDP-5014; and IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015.

For information about other specific features, see the following publications:

� IBM System Storage DS8000: Priority Manager, REDP-4760

� DS8000 Thin Provisioning, REDP-4554

� IBM System Storage DS8000: Copy Services Resource Groups, REDP-4758

� IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500

� IBM System Storage DS8000: LDAP Authentication, REDP-4505

For more information about DS8000 Copy Services functions, see IBM System Storage DS8000: Copy Services for Open Environments, SG24-6788, and IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787.

Authors

This book was produced by a team of specialists from around the world working for the International Technical Support Organization (ITSO), at the IBM European Storage Competence Center (ESCC) in Mainz (Germany).

Bertrand Dufrasne is an IBM Certified IT Specialist and Project Leader for IBM System Storage® disk products at the ITSO, San Jose Center. He has worked at IBM in various IT areas. He has written many IBM Redbooks publications and has developed and taught technical workshops. Before joining the ITSO, he worked for IBM Global Services as an Application Architect. He holds a Master’s degree in Electrical Engineering.

Andre Candra is an IBM IT Specialist for Storage solutions, working for Global Technology Services (GTS) in IBM Indonesia. He provides support to customers with IBM disk solutions, such as IBM DS8000, XIV®, V7000, TS3500, and SAN switches. His areas of expertise include planning, implementing, and supporting storage solutions for Open Systems servers and IBM mainframes. Andre holds a degree in Electrical Engineering from the Institut Teknologi Bandung in Indonesia.

Scott Helmick is a Product Field Engineer (PFE) working at IBM in Tucson. He provides support to IBM clients and service representatives to resolve complex and critical problems with hardware, microcode, operating systems, and applications. He has been with IBM for about 30 years in various technical support roles. Scott holds a degree from the DeVry Institute of Technology.

Jean Iyabi is a Certified IT Specialist working as an active member of the Storage Experts Pool of the ESCC in Mainz, Germany, since 2001. As a Product Field Engineer, he acts as the last level support for High End Storage Disk. Jean has extensive experience in DS8000 support focusing on Host Attachment (IBM System z), Extended Copy Services Functions, and IBM GDPS®. He also acts as the EMEA Field Support Interface with the DS8000 Development and Test Teams in Tucson, Arizona. Jean holds a degree in Electrical Engineering from the University of Applied Sciences of Wiesbaden (Germany).

Peter Kimmel is an IT Specialist and ATS team lead of the Enterprise Disk Solutions team at the ESCC in Mainz, Germany. He joined IBM Storage in 1999 and since then has worked with all the various IBM Enterprise Storage Server® (ESS) and DS8000 generations, with a focus on architecture and performance. He has been involved in the Early Shipment Programs (ESP) of these products, from the earliest models to the current installations, and co-authored several DS8000 IBM Redbooks publications. Peter holds a Diploma (MSc) degree in Physics from the University of Kaiserslautern.

Abilio de Oliveira is an IBM Certified IT Specialist Expert and works as a Client Technical Specialist in Storage Technical Sales Brazil. He has 18 years of experience in IT. He holds a Degree in Computer Sciences and specializes in Information Security. His current focus is to work as a Regional Designated Specialist designing Storage Solutions for clients in Latin America and supporting Business Partner sales activities.

Axel Westphal is working as an IT Specialist at the ESCC in Mainz, Germany. He joined IBM in 1996, working for Global Services as a Systems Engineer. His areas of expertise include setup and demonstration of IBM System Storage products and solutions in various environments. He has been an author and contributor to several white papers and IBM Redbooks publications for the DS8000 series.

Bruce Wilson is a Senior Education Specialist with ITS in IBM Canada. He has worked with IBM for 32 years, with the first 10 years in the field servicing Mainframe servers and various storage products. For the last 22 years, he has been instructing IBM service representatives on the System z server, IBM Parallel Sysplex®, as well as disk and tape hardware. Bruce has co-authored two previous Redbooks publications dealing with HCD and SAN products.

Special thanks to the Enterprise Disk team manager, Bernd Müller; ESCC Pre-Sales and Service Delivery manager, Friedrich Gerken; and the ESCC director, Klaus-Jürgen Rünger, for their continuous interest and support regarding the ITSO Redbooks projects.

Many thanks to the following people who helped with equipment provisioning and preparation:

Uwe Heinrich Müller, Günter Schmitt, Dietmar Schniering, Uwe Schweikhard, Jörg Zahn
IBM Systems Lab Europe; Mainz, Germany

Dale H. Anderson, Stephen Blinick, Brian Cagno, Susan Candelaria, Nick Clayton, John Elliott, Thomas Fiege, Dan Husky, Mike Koester, Stephen Manthorpe, Allen Marin, Brian Rinaldi, David Sacks, Falk Schneider, Alexander Warmuth, Allen Wright
IBM

Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:

http://www.ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

� Use the online Contact us review Redbooks form found at:

http://www.ibm.com/redbooks

� Send your comments in an email to:

[email protected]

� Mail your comments to:

IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

� Find us on Facebook:

http://www.facebook.com/IBMRedbooks

� Follow us on Twitter:

http://twitter.com/ibmredbooks

� Look for us on LinkedIn:

http://www.linkedin.com/groups?home=&gid=2130806

� Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:

https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm

� Stay current on recent Redbooks publications with RSS Feeds:

http://www.redbooks.ibm.com/rss.html

Part 1 Concepts and architecture

This part of the book gives an overview of the IBM DS8870 concepts and architecture. The following topics are included:

� Introduction to the IBM DS8870, the latest member of the DS8000 series
� Overview of the IBM DS8870 models
� Detailed information about the DS8870 hardware components and architecture
� Reliability, availability, and serviceability (RAS) features of the DS8870
� A review of the DS8000 virtualization concepts
� An overview of the DS8000 Copy Services
� A discussion of the DS8870 performance features

Chapter 1. Introduction to the IBM DS8870

This chapter introduces the features and functions of the IBM DS8000 series and its newest member, the latest model of the DS8870 storage system.

More information about functions and features is provided in subsequent chapters. This chapter covers the following topics:

� The advantages of a POWER7+ based storage system
� DS8870 architecture and functions overview
� Performance features
� Enhanced security

Previous models, such as the DS8700 and DS8800, are described in the IBM Redbooks publication, IBM System Storage DS8000: Architecture and Implementation, SG24-8886.

1.1 Introduction to the DS8870

IBM has a wide range of product offerings that are based on open standards and that share a common set of tools, interfaces, and innovative features. The IBM DS8000 family is designed as a high-performance, high-capacity, and resilient series of disk storage systems. The DS8870 offers high availability, multiplatform support, including System z, and simplified management tools to help provide a cost-effective path to an on-demand world.

Customers expect a high-end storage subsystem to provide the following characteristics:

� High performance
� High availability
� Cost efficiency
� Energy efficiency
� Scalability
� Business continuity and data protection functions

The latest release of the DS8870 (as shown in Figure 1-1) is the sixth-generation IBM high-end disk system in the DS8000 series. It is designed to support the most demanding business applications with its exceptional all-around performance and data throughput. The DS8870 architecture is server-based: powerful POWER7+ processor-based servers manage the cache to minimize disk I/O and maximize performance and throughput.

Figure 1-1 DS8870 base frame

Combined with world-class business resiliency and encryption features, the storage server provides a unique combination of high availability, performance, and security. The DS8870 is equipped with encryption-capable disk drives; flash drives (solid-state drives, or SSDs) are also available. The DS8870 must be paired with a key server (that uses IBM Security Key Lifecycle Manager) to effectively use data at rest encryption.

The DS8870 is tremendously scalable, has broad server support, and virtualization capabilities. These features can help simplify the storage environment by consolidating multiple storage systems. High-density storage enclosures offer a considerable reduction in footprint and energy consumption.

The power supply system is based on direct current uninterruptible power supply (DC-UPS). This feature makes the DS8870 the most energy-efficient model in the DS8000 series. The DS8870 is designed to comply with the emerging ENERGY STAR specifications. ENERGY STAR is a joint program of the US Environmental Protection Agency and the US Department of Energy and helps us all to save money and protect the environment through energy efficient products and practices. For more information, see this website:

http://www.energystar.gov

1.1.1 Features of the DS8870

The DS8870 offers the following features:

� Storage virtualization that is offered by the DS8000 series allows organizations to allocate system resources more effectively and better control application quality of service. The DS8000 series improves the cost structure of operations and lowers energy consumption through a tiered storage environment.

� The DS8870 is available with different processor options that range from dual 2-core systems up to dual 16-cores, which cover a wide-range of performance needs.

� The DS8870 provides an entry system, designated as Business Class configuration option, which is scalable to 1056 drives and up to 16-core processors. The Business Class configuration is capable of achieving good system performance while reducing the initial cost of acquisition.

� The DS8870 supports a broad range of disk drives, ranging from fast 400 GB SSDs and 146 GB 15 K rpm SAS disk drives to high-capacity 4 TB Nearline-SAS drives.

� Cache configurations are available that range from 16 GB up to 1 TB cache. The server architecture of the DS8870, with its powerful POWER7+ processors, makes it possible to manage large caches with small cache segments of 4 KB (and hence large segment tables) without the need to partition the cache. The POWER7+ processors have enough power to implement sophisticated caching algorithms. These algorithms and the small cache segment size optimize cache hits. Therefore, the DS8870 provides excellent I/O response times.

Write data is always protected by maintaining a copy of write-data in non-volatile storage until the data is destaged to disks.

� The Adaptive Multi-stream Prefetching (AMP) caching algorithm can dramatically improve sequential performance, reducing times for backup, business intelligence processing, and streaming media. Sequential Adaptive Replacement Cache is a caching algorithm that allows different workloads, such as sequential and random workloads, to run without negatively affecting each other. For example, a sequential workload does not fill up the cache and does not affect cache hits for a random workload. Intelligent Write Caching (IWC) improves the caching algorithm for random writes.

� IBM proves the performance of its storage systems by publishing standardized benchmark results. For more information about benchmark results of the DS8870, see this website:

http://www.storageperformance.org

� 8-Gbps host adapters (HAs): The DS8870 model offers enhanced connectivity with four- and eight-port Fibre Channel/FICON host adapters that are in the I/O enclosures that are directly connected to the internal processor complexes. The 8-Gbps Fibre Channel/FICON host adapter also supports FICON attachment to IBM System zEC12, IBM System zBC12, IBM zEnterprise® 196 (z196), IBM System z114, and IBM System z10®.

Each port can be configured by the user to operate as a Fibre Channel port, a FICON port, or a Fibre Channel port that is used for mirroring.

� High Performance FICON for System z (zHPF): zHPF is an IBM z/OS® I/O architecture. zHPF is an optional feature of the DS8870. The DS8870 is at the most up-to-date support level for zHPF. Recent enhancements to zHPF include Extended Distance capability, zHPF List Pre-fetch support for IBM DB2® and utility operations, and zHPF support for sequential access methods. All DB2 I/O is now zHPF capable.

� Peripheral Component Interconnect Express (PCI Express Generation 2) I/O enclosures: To improve input/output operations per second (IOPS) and sequential read/write throughput, the I/O enclosures are directly connected to the internal servers with point-to-point PCI Express cables.

� Storage Pool Striping (rotate extents) provides a mechanism to distribute a volume’s or logical unit number’s (LUN’s) data across many Redundant Array of Independent Disks (RAID) arrays and across many disk drives. Storage Pool Striping helps maximize performance without special tuning and greatly reduces hot spots in arrays.

� Easy Tier: This included feature enables automatic dynamic data relocation capabilities. Data areas that are accessed frequently are moved to higher tier disks; for example, to flash drives. Infrequently accessed data areas are moved to lower tiers; for example, Nearline-SAS drives. Easy Tier optimizes the usage of each tier, especially the utilization of flash drives. No manual tuning is required. Configuration flexibility and overall storage cost-performance can greatly benefit from this feature. The auto-balancing algorithms also provide benefits when homogeneous disk pools are used to eliminate hot spots on disk arrays.

Easy Tier also allows several manual data relocation capabilities (extent pools merge, rank depopulation, volume migration). Easy Tier can also be used when encryption support is turned on. For more information, see 7.7, “IBM Easy Tier” on page 191.

The fifth generation of IBM Easy Tier, available since DS8000 Licensed Machine Code (LMC) 7.7.10.xx.xx, now includes, with LMC 7.7.20.xx.xx, additional reporting improvements, such as the workload skew curve, workload categorization, and a data movement daily report.

� Storage Tier Advisor Tool: This tool is used with the Easy Tier facility to help clients understand their current disk system workloads. The tool also provides guidance on how much of their existing data is better-suited for the various drive types (spinning disk or solid-state flash). For details, refer to IBM DS8000 Easy Tier Concepts and Usage, REDP-4667.

� Windows Server 2012 support: The command-line interface (DS CLI) is now supported on Windows Server 2012 and on Windows 8 in desktop mode. A new -hosttype value, Win2012, is added to the DS CLI for the mkhostconnect, chhostconnect, and mkvolgrp commands to support Windows Server 2012.

� I/O Priority Manager: This optional feature provides application-level quality of service (QoS) for workloads that share a storage pool. This feature provides a way to manage QoS for I/O operations that are associated with critical workloads and gives them priority over other I/O operations that are associated with non-critical workloads. For z/OS, the I/O Priority Manager allows increased interaction with the host side. For more information, see 1.3.13, “Performance for System z” on page 22, and 7.6, “I/O Priority Manager” on page 189.

� Large volume support: The DS8870 supports LUN sizes up to 16 TB. This configuration simplifies storage management tasks. In a z/OS environment, extended address volumes (EAVs) with sizes up to 1 TB are supported.

� Active Volume Protection: This feature prevents the deletion of volumes still in use.

� T10 DIF support: The Data Integrity Field standard of Small Computer System Interface (SCSI) T10 enables end-to-end data protection from the application or host bus adapter (HBA) down to the disk drives. The DS8870 supports the T10 DIF standard.

� The Dynamic Volume Expansion simplifies management by enabling easier, online volume expansion (for open systems and System z) to support application data growth, and to support data center migration and consolidation to larger volumes to ease addressing constraints.

� Thin provisioning: This feature allows the creation of over-provisioned devices for more efficient usage of the storage capacity for open systems. Copy Services is now available for thin provisioning. For more information, see Chapter 6, “IBM DS8000 Copy Services overview” on page 137.

� Command-line interface (CLI) improvements: The DS8870 release 7.2 introduces some modifications in the CLI commands for thin provisioned volumes to provide more management capabilities.

� Quick Initialization: This feature provides fast volume initialization (for open system LUNs and count key data (CKD) volumes) and therefore allows the creation of devices, making them available when the command completes.

� The Full Disk Encryption (FDE) can protect business-sensitive data by providing disk-based hardware encryption that is combined with a sophisticated key management software (IBM Security Key Lifecycle Manager). The Full Disk Encryption is available for all disks and drives, including SSDs. Because encryption is done by the disk drive, it is transparent to host systems and can be used in any environment, including z/OS. For more information, see 8.3.7, “IBM Security Key Lifecycle Manager server for encryption” on page 235, or refer to the IBM Redpaper™ IBM DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

� Security Improvements: Installing the IBM DS8870 Release 7.2 (Licensed Machine Code 7.7.20.xx.xx) enables customers to become compliant with the Special Publication (SP) number 800-131a, which is a National Institute of Standards and Technology (NIST) directive that provides guidance for protecting sensitive data by using cryptographic algorithms that have key strengths of 112 bits.

� The following specific features of disk encryption key management help address Payment Card Industry Data Security Standard (PCI DSS) requirements:

– Encryption deadlock recovery key option: When enabled, this option allows the user to restore access to a DS8870 when the encryption key for the storage is unavailable because of an encryption deadlock scenario.

– Dual platform key server support: This support is important if key servers on z/OS share keys with key servers on open systems. The DS8870 requires an isolated key server in encryption configurations. The isolated key server that is currently defined is an IBM System x® x3350 server. Dual platform key server support allows two server platforms to host the key manager with either platform operating in either clear key or secure key mode.

– Recovery key Enabling/Disabling and Rekey data key option for the FDE feature: Both of these enhancements can help clients satisfy Payment Card Industry (PCI) security standards.

� Resource Groups: This feature is a policy-based resource scope-limiting function that enables the secure use of Copy Services functions by multiple users on a DS8000 series storage subsystem. Resource Groups are used to define an aggregation of resources and policies for configuration and management of those resources. The scope of the aggregated resources can be tailored to meet each hosted customer’s Copy Services requirements for any operating system platform that is supported by the DS8000 series. For more information, see IBM System Storage DS8000 Resource Groups, REDP-4758.

� IBM FlashCopy®: FlashCopy is an optional feature that allows the creation of volume copies (and data set copies for z/OS) nearly instantaneously. Different options are available to create full copies, incremental copies, or copy-on-write copies. The concept of consistency groups provides a means to copy several volumes consistently, even across several DS8000 systems.

FlashCopy can be used to perform backup operations parallel to production or to create test systems. FlashCopy can be managed with the help of the IBM FlashCopy Manager product from within certain applications like DB2, Oracle, SAP, or Microsoft Exchange. FlashCopy is also supported by z/OS backup functions, such as Data Facility Storage Management Subsystem (DFSMS) and DB2 BACKUP SYSTEM.

� The IBM FlashCopy SE capability enables more space-efficient utilization of capacity for copies, thus enabling improved cost effectiveness.

� Remote Mirroring options: The DS8870 provides the same remote mirroring options as previous models of the DS8000 family. Synchronous remote mirroring (Metro Mirror) is supported up to 300 km. Asynchronous copy (Global Mirror) is supported for unlimited distances. Three-site options are available by combining Metro Mirror and Global Mirror. In co-operation with the z/OS Data Mover, another option is available for z/OS: Global Mirror for z/OS. Another important feature for z/OS Global Mirror (two-site) and z/OS Metro/Global Mirror (three-site) is Extended Distance FICON, which can help reduce the need for channel extender configurations by increasing the number of read commands in flight.

Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, z/OS Global Mirror, and z/OS Metro/Global Mirror business continuity solutions are designed to provide the advanced functionality and flexibility that is needed to tailor a business continuity environment for almost any recovery point or recovery time objective.

The Copy Services can be managed and automated with IBM Tivoli® Storage Productivity Center for Replication. For z/OS environments, IBM GDPS provides an automated disaster recovery solution.

With IBM AIX® operating systems, the DS8870 supports Open IBM HyperSwap® replication. The Open HyperSwap is a special Metro Mirror replication method designed to automatically fail over I/O from the primary logical devices to the secondary logical devices in the event of a primary disk storage system failure. The swap can be accomplished with minimal disruption to the applications that are using the logical devices.

� Remote Pair FlashCopy: This feature allows you to establish a FlashCopy relationship in which the target is a remote mirror Metro Mirror primary volume that keeps the pair in the full duplex state.

� The DS8870 provides an efficient graphical user interface (GUI) management interface to configure the DS8870 or query status information. The DS8870 GUI has the same look and feel as the GUIs of other IBM storage products, thus making it easier for a storage administrator to work with different IBM storage products. For more information, see Chapter 12, “Configuration by using the DS Storage Manager GUI” on page 293.

� Lightweight Directory Access Protocol (LDAP) authentication support, which allows single sign-on functionality, can simplify user management by allowing the DS8000 to rely on a centralized LDAP directory rather than a local user repository. For more information, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.

For information on Configuring Jazz for Service Management and DS8000 for LDAP authentication, refer to the IBM Tivoli Storage Productivity Center V5.2 Information Center at:

http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/topic/com.ibm.tpc_V52.doc/fqz0_r_config_ds8000_ldap.html

� The DS8000 series is certified as meeting the requirements of the IPv6 Ready Logo program, which indicates its implementation of IPv6 mandatory core protocols and the ability to interoperate with other IPv6 implementations. The IBM DS8000 can be configured in native IPv6 environments. The logo program provides conformance and interoperability test specifications that are based on open standards to support IPv6 deployment globally. Furthermore, the US National Institute of Standards and Technology tested IPv6 with the DS8000, thus granting it support for the USGv6 profile and testing program.

1.2 DS8870 controller options and frames

The IBM DS8870 includes Models 961 (base frame) and 96E (expansion unit) as part of the 242x machine type family.

The DS8870 includes the following features:

� IBM POWER7+ processor technology

The DS8870 features the IBM POWER7+ processor-based server technology for high performance. Compared to its predecessor, the POWER7+ processor can deliver at least a 15% improvement in I/O operations per second (IOPS) in transaction-processing workload environments. The DS8870 uses the simultaneous multithreading (SMT) capabilities of the POWER7+ architecture. Additionally, sequential workloads can receive as much as a 60% bandwidth improvement.

� Non-disruptive upgrade path

A nondisruptive upgrade path for the DS8870 configurations and more Model 96E expansion frames allows processor, cache, and storage enhancement to be performed concurrently, without disrupting applications.

� It is possible to model convert a DS8800 to a DS8870. This conversion uses existing storage enclosures, disk drives, host adapters, and device adapters. All other hardware is physically replaced. This conversion process can be performed only by an IBM Service Representative.

� Air-flow system

The air-flow system allows optimal horizontal cool down of the storage system. The DS8870 is designed for hot and cold aisle data center design, drawing air for cooling from the front of the system and exhausting hot air at the rear. For more information, see 3.5, “Power and cooling” on page 55.

� Improved configuration options

The DS8870 supports two systems class options: a Business Class configuration and an Enterprise Class configuration option.

Different from all the previous models, the Business Class option supports up to 1056 drives, 16-core configuration, and 1024 GB of cache. That is a significant improvement over the previous models.

The Enterprise Class option supports up to 1536 drives, 16-core configuration, and up to 1024 GB of cache.

Note: The Business Class configuration cannot be upgraded into an Enterprise Class configuration.

� High-density storage enclosures

The DS8870 provides storage enclosure support for 24 Small Form Factor (SFF) 2.5-inch drives in 2U of rack space. This option helps improve the storage density for disk drive modules (DDMs) as compared to previous enclosures.

� Improved high-density frame design

The DS8870 can support 1536 drives in a small footprint (base frame and three expansion frames), which supports high density and helps to preserve valuable raised floor space in data center environments.

Coupled with an improved cooling implementation and Small Form Factor SAS-2.0 Enterprise drives, a fully configured DS8870 uses up to 20% less power than its predecessor model DS8800. By using the SFF 2.5-inch drives, the DS8870 base frame supports up to 240 drives. Adding a first expansion frame allows up to 576 drives. A second expansion frame brings the total to up to 1056 drives, and up to 1536 SFF drives with the third expansion frame (total of four frames). As an alternative, the DS8870 also supports 3.5-inch disk drives with up to 120 drives with the base frame, up to 288 drives with one expansion frame, 528 drives with two expansion frames, or 768 drives with three expansion frames (total of four frames). The 2.5-inch and 3.5-inch drives can be intermixed in the same frame (although not within the same disk enclosure).

1.3 DS8870 architecture and functions overview

The DS8870 offers continuity concerning the fundamental architecture of its predecessors, the DS8700 and DS8800 models. This architecture ensures that the DS8870 can use a stable and well-proven operating environment that offers optimal availability. The hardware also is optimized to provide higher performance, connectivity, and reliability.

1.3.1 Overall architecture and components

For more information about the available configurations for the DS8870, see Chapter 2, “IBM DS8870 models” on page 25.

IBM POWER7+ processor technology

The DS8870 uses IBM POWER7+ processor technology. The POWER7+ processor chip is made with the IBM 32 nm Silicon-On-Insulator (SOI) technology. This technology features copper interconnect and implements an on-chip L3 cache that uses embedded dynamic random access memory (eDRAM). The POWER7+ processors in the DS8870 run at 4.228 GHz.

The POWER7+ processor supports simultaneous multithreading SMT4 mode. SMT4 enables four instruction threads to run simultaneously in each POWER7+ processor core. It maximizes the throughput of the processor core by offering an increase in core efficiency. These multithreading capabilities improve the I/O throughput of the DS8870 storage server.

The DS8870 offers a dual 2-core processor complex, a dual 4-core processor complex, a dual 8-core processor complex, or a dual 16-core processor complex. A processor complex also is referred to as a storage server or Central Electronics Complex. For more information, see Chapter 4, “RAS on the IBM DS8870” on page 61.

Internal PCIe-based fabric

The DS8870 uses direct point-to-point, high-speed PCI Express (PCIe) connections to the I/O enclosures to communicate with the device adapters and host adapters. Each PCIe connection operates at a speed of 2 GBps in each direction. There are up to 16 PCIe connections from the processor complexes to the I/O enclosures. For more information, see Chapter 3, “DS8870 hardware components and architecture” on page 35.

Device adapters

The DS8870 offers four-port Fibre Channel Arbitrated Loop (FC-AL) device adapters (DAs). All adapters provide improved IOPS, throughput, and scalability over previous DS8000s. They are optimized for SSD technology and designed for long-term support for scalability growth. These capabilities complement the IBM POWER® server family to provide significant performance enhancements, which allow up to a 400% improvement in performance over previous generations (DS8700). For more information, see Chapter 3, “DS8870 hardware components and architecture” on page 35.

Switched Fibre Channel Arbitrated Loop

The DS8870 uses a switched Fibre Channel Arbitrated Loop (FC-AL) architecture as the back-end for its disk interconnection. The DAs connect to the controller cards in the storage enclosures by using FC-AL with optical short wave multi-mode interconnection. The Fibre Channel interface cards (FCICs) offer a point-to-point connection to each drive and device adapter so that there are four paths available from the DS8000 processor complexes to each disk drive. For more information, see Chapter 3, “DS8870 hardware components and architecture” on page 35.

Drive options

The DS8870 offers the following disk drives to meet the requirements of various workloads and configurations (for more information, see Chapter 8, “DS8870 physical planning and installation” on page 217):

� 146 GB and 300 GB (15000 rpm) Enterprise disk drives for high performance requirements

� 600 GB and 1200 GB (10000 rpm) disk drives for standard performance requirements

� 4 TB (7200 rpm) Nearline-SAS disk drives for large-capacity requirements

� 400 GB solid-state flash drives (SSDs) for the highest performance demands

All drives in the DS8870 are FDE-capable drives. You do not need to use encryption. However, if you want to encrypt your data, you need at least two key servers with the IBM Security Key Lifecycle Manager v2.5 software.

Flash drives are the best choice for I/O-intensive workloads. They provide up to 100 times the throughput and 10 times lower response time than 15 K rpm spinning disks. They also use less power than traditional spinning disks. For more information, see Chapter 8, “DS8870 physical planning and installation” on page 217.

With the introduction of the 7.2 micro-code, the system configuration with POWER7+ processor and all flash drives helps the DS8870 to deliver up to 20% more IOPS in random processing workload environments.

Easy Tier

Easy Tier enables the DS8870 to automatically balance I/O access to disk drives to avoid hot spots on disk arrays. Easy Tier can place data in the storage tier that best suits the access frequency of the data. Highly accessed data can be moved nondisruptively to a higher tier, for example, to flash drives while cold data or data that is primarily accessed sequentially is moved to a lower tier (for example, to Nearline disks).

However, Easy Tier also can benefit homogeneous disk pools because it can move data away from over-utilized disk arrays to under-utilized arrays to eliminate hot spots and peaks in disk response times.

Easy Tier includes the following features:

� Easy Tier Application is an application-aware storage utility to help deploy storage more efficiently by enabling applications and middleware to direct more optimal placement of the data by communicating important information about current workload activity and application performance requirements.

� Easy Tier Server manages data placement across direct-attached flash within scale-out servers and DS8870 storage tiers by caching the “hottest” data to the application host direct-attached storage (DAS) flash, while maintaining advanced feature functions. This provides a flexible option for clients that want to ensure certain applications remain on a particular tier to meet performance and cost requirements.

� Easy Tier Automatic Rebalance: The improved automatic rebalancing algorithm prevents unnecessary extent movement across the ranks within an extent pool, even at low rank utilization. The new algorithm provides tighter control to reduce unnecessary automatic rebalancing activity at lower average utilization levels, while keeping the agility of the algorithm at normal average utilization levels.

� Easy Tier Heat Map Transfer can capture the data placement established by Easy Tier at the Metro Mirror/Global Copy/Global Mirror (MM/GC/GM) primary site and reapply it at the MM/GC/GM secondary site when a failover occurs, through the Easy Tier Heat Map Transfer utility. With this capability, DS8000 systems can maintain application-level performance.

� Easy Tier Reporting Improvements: There are some improvements on Easy Tier reporting in R7.2, such as workload skew curve, workload categorization, and data movement daily report:

– Workload skew curve: Provides a validation on the Easy Tier behavior, showing that based on the current algorithm, each storage tier contains the right data profile. Workload skew curve output can also be read by the Disk Magic application for sizing purposes.

– Workload categorization: Easily interpretable by clients, the workload categorization makes the heat data comparable across tiers and across pools, providing a collaborative view on the workload distribution.

– Data movement daily report: Validates the reactions of DS8870 Easy Tier to configuration and workload changes.

For detailed information about the Easy Tier features, refer to the IBM Redpaper publications: IBM System Storage DS8000: Easy Tier Concepts and Usage, REDP-4667; IBM System Storage DS8000 Easy Tier Server, REDP-5013; IBM System Storage DS8000 Easy Tier Application, REDP-5014; and IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015.
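
Easy Tier monitoring and automatic data relocation are controlled at the storage image level. The following lines are a minimal, illustrative DS CLI sketch only; the parameter names, values, and the storage image ID shown here are assumptions and should be verified against the DS CLI reference for your code level:

dscli> chsi -etmonitor all IBM.2107-75ABCD1
dscli> chsi -etautomode tiered IBM.2107-75ABCD1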

Host adapters

Each DS8870 Fibre Channel adapter offers four or eight Fibre Channel ports that support up to 8 Gbps speed. Each port independently auto-negotiates to 2, 4, or 8 Gbps link speed. Each of the ports on a DS8870 host adapter can also independently be configured to Fibre Channel Protocol (FCP) or Fibre Channel connection (FICON). If configured for FCP protocol, the port can be used for mirroring. For more information, see Chapter 3, “DS8870 hardware components and architecture” on page 35.

IBM Tivoli Storage Productivity Center

IBM Tivoli Storage Productivity Center is a storage resource management application that is available for DS8000 management and other storage systems. It is designed to provide centralized, automated, and simplified management of complex and heterogeneous storage environments.

IBM Tivoli Storage Productivity Center 5.2 provides a wealth of storage resource management tools. It extends existing management of a single storage system, providing capabilities such as storage reporting, monitoring, and policy-based management. Additionally, it provides storage device configuration, performance monitoring, and management of storage area network (SAN) attached devices. It also provides over 400 enterprise-wide reports, monitoring alerts, policy-based action, and file system capacity utilization information in a heterogeneous environment. IBM Tivoli Storage Productivity Center is designed to help improve capacity utilization of storage systems by adding intelligence to data protection and retention practices. IBM Tivoli Storage Productivity Center 5.2 now includes replication management capabilities that are designed to support hundreds of replication sessions across thousands of data volumes. It also supports open and z/OS-attached volumes.

The Easy Tier Heat Map Transfer utility is also integrated with IBM Tivoli Storage Productivity Center for Replication and all the functions are available through the Tivoli Storage Productivity Center for Replication 5.2 release.

Storage Hardware Management Console for the DS8870

The Hardware Management Console (HMC) is the focal point for maintenance activities. The HMC is a dedicated workstation that is physically located inside the DS8870. The HMC can proactively monitor the state of your system and notify you and IBM when service is required. It can also be connected to your network to enable centralized management of your system by using the IBM System Storage data storage command-line interface (DS CLI). The HMC supports the IPv4 and IPv6 standards. For more information, see Chapter 9, “DS8870 HMC planning and setup” on page 241.

An external management console is available as an optional feature. The console can be used as a redundant management console for environments with high availability requirements.

Isolated key server

The IBM Security Key Lifecycle Manager software performs key management tasks for IBM encryption-enabled hardware, such as the IBM DS8870. IBM Security Key Lifecycle Manager provides, protects, stores, and maintains encryption keys that are used to encrypt information that is written to, and decrypt information that is read from, encryption-enabled disks.

The DS8870 ships with FDE drives. To configure a DS8870 to use encryption, two IBM key servers are required. An Isolated Key Server (IKS) with dedicated hardware and non-encrypted storage resources is required and can be ordered from IBM. For more information, see 8.3.7, “IBM Security Key Lifecycle Manager server for encryption” on page 235.

The IBM Security Key Lifecycle Manager for z/OS (ISKLM) is available for z/OS environments. However, to avoid deadlock situations where you cannot start your key server because it runs on an encrypted DS8870, you also need a dedicated IBM Security Key Lifecycle Manager on a stand-alone server.

1.3.2 Storage capacity

The physical storage capacity for the DS8870 is contained in the disk drive sets. A disk drive set contains 16 DDMs, which have the same capacity and the same revolutions per minute (rpm). In addition, SSDs and Nearline drive sets are available in half sets (8) or full sets (16) of disk drives or DDMs. The available drive options provide industry class capacity and performance to address enterprise application and business requirements. DS8000 storage capacity can be configured as RAID 5, RAID 6, RAID 10, or as a combination of these RAIDs, depending on the drive type. Up to 1536 drives can be installed in a base frame with up to three expansion frames.

For more information, see 2.3, “DS8870 disk drive options” on page 31.

IBM Standby Capacity on Demand offering for the DS8870

Standby Capacity on Demand (CoD) provides standby on-demand storage for the DS8000 that allows you to access the extra storage capacity whenever the need arises. With CoD, IBM installs more CoD disk drive sets in your DS8000. At any time, you can logically configure your CoD drives concurrently with production. You are automatically charged for the additional capacity. DS8870 can have up to six Standby CoD drive sets (96 drives).

1.3.3 Supported environments

The DS8000 offers connectivity support across a broad range of server environments, including IBM Power Systems™, System z, and System x servers, servers from Oracle and Hewlett-Packard, and non-IBM Intel and AMD-based servers.

The DS8000 supports over 90 platforms. For the most current list of supported platforms, see the DS8000 System Storage Interoperation Center (SSIC) at this website:

http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

The Host Attachment and Interoperability IBM Redbooks publication, which provides details about the supported environments, is available at this website:

http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg248887.html?Open

This rich support of heterogeneous environments and attachments, along with the flexibility to easily partition the DS8000 storage capacity among the attached environments, can help support storage consolidation requirements and dynamic environments.

1.3.4 Configuration flexibility

The DS8000 series uses virtualization techniques to separate the logical view of hosts onto logical unit numbers (LUNs) from the underlying physical layer, thus providing high configuration flexibility. For more information about virtualization, see Chapter 5, “Virtualization concepts” on page 99.

Dynamic LUN and volume creation, deletion, and expansion

LUNs can be created and deleted non-disruptively, which gives a high degree of flexibility in managing storage. When a LUN is deleted, the freed capacity can be used with other free space to form a new LUN of a different size. A LUN can also be dynamically increased in size.
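
As a minimal sketch of how this might look with the DS CLI (the extent pool, capacities, volume IDs, and name are illustrative assumptions only), a set of fixed block volumes could be created and one of them later expanded online as follows:

dscli> mkfbvol -extpool P0 -cap 100 -name dbvol_#h 1000-1003
dscli> chfbvol -cap 200 1000

Always confirm the exact syntax against the DS CLI reference for your code level before use.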

Large LUN and large count key data volume support

You can configure LUNs and volumes to span arrays, which allows for larger LUN sizes of up to 16 TB in open systems. Copy Services are not supported for LUN sizes greater than 2 TB.

The maximum count key data (CKD) volume size is 1,182,006 cylinders (1 TB), which greatly reduces the number of volumes that are managed. The CKD creates a z/OS volume type that is called 3390 Model A. This capability is referred to as extended address volumes (EAVs) and requires z/OS 1.12 or later.
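
For a z/OS environment, an extended address volume at the maximum size could be defined with the DS CLI roughly as shown below. The extent pool, data type value, volume name, and volume ID are assumptions for illustration only and should be checked against the DS CLI reference for your code level:

dscli> mkckdvol -extpool P1 -cap 1182006 -datatype 3390-A -name eav_#h 0B00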

T10 data integrity field support

A modern storage system, such as the DS8870, includes many components that perform error checking, often by checksum techniques, in its RAID components, system buses, memory, Fibre Channel adapters, or by media scrubbing. This configuration also is used for some file systems. Errors can be detected and in some cases corrected. This checking is done between different components within the I/O path. But more often there is demand for an end-to-end data integrity checking solution (from the application to the disk drive).

The ANSI T10 standard provides a way to check the integrity of data that is read and written from the host bus adapter to the disk and back through the SAN fabric. This check is implemented through the data integrity field (DIF) defined in the T10 standard. This support adds protection information that consists of a cyclic redundancy check (CRC), logical block address (LBA), and host application tags to each sector of fixed block (FB) data on a logical volume.

The DS8870 supports the T10 DIF standard for FB volumes that are accessed by the FCP channel of Linux on System z. You can define LUNs with an option to instruct the DS8870 to use the CRC-16 T10 DIF algorithm to store the data. You can also create T10 DIF capable LUNs. The support for IBM i variable LUN now adds flexibility for volume sizes and can increase capacity utilization for IBM i environments.

VMware VAAI support

The VMware vStorage APIs for Array Integration (VAAI) feature offloads specific storage operations to disk arrays for highly improved performance and efficiency. With VAAI, VMware vSphere can perform key operations faster and use less CPU, memory, and storage bandwidth. The DS8870 supports the VAAI primitives Atomic Test-and-Set (ATS), also known as Compare and Write, for hardware-assisted locking, and Clone Blocks (Extended Copy, or XCOPY) for hardware-assisted move or cloning. VAAI also supports Write Same, Site Recovery Manager (SRM), a vCenter plug-in, and variable LUN sizes.

Flexible LUN-to-LSS association

With no predefined association of arrays to LSSs on the DS8000 series, users can put LUNs or CKD volumes into logical subsystems (LSSs) and make best use of the 256 address range, particularly for System z.

Simplified LUN masking

The implementation of volume group-based LUN masking simplifies storage management by grouping some or all worldwide port names (WWPNs) of a host into a Host Attachment. Associating the Host Attachment to a Volume Group allows all adapters within the Host Attachment access to all of the storage in the Volume Group.
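
A minimal DS CLI sketch of this concept follows; the volume IDs, WWPN, host type, volume group ID, and names are illustrative assumptions only and must be adapted to your environment:

dscli> mkvolgrp -type scsimask -volume 1000-1003 win_volgrp
dscli> mkhostconnect -wwname 10000000C9A1B2C3 -hosttype Win2012 -volgrp V0 winhost1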

Thin provisioning features

The DS8000 provides two types of space efficient volumes: track space efficient volumes and extent space efficient volumes. These volumes provide over-provisioning capabilities that enable more efficient usage of the storage capacity and reduce storage management requirements. Track space efficient volumes are intended as target volumes for FlashCopy. FlashCopy, Metro Mirror, and Global Mirror of thin provisioned volumes are supported on a DS8870 storage system.
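
As an illustrative sketch (the pool, capacity, name, and volume ID are assumptions), an extent space efficient volume could be created with the DS CLI by specifying the storage allocation method:

dscli> mkfbvol -extpool P0 -cap 500 -sam ese -name thinvol_#h 1100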

Maximum values of logical definitions

The DS8000 features the following maximum values for the major logical definitions:

� Up to 255 logical subsystems (LSSs)

� Up to 65,280 logical devices

� Up to 16 TB logical unit numbers (LUNs)

� Up to 1,182,006 cylinders (1 TB) count key data (CKD) volumes

� Up to 130,560 Fibre Connection (FICON) logical paths (512 logical paths per control unit image) on the DS8000

� Up to 1280 logical paths per Fibre Channel (FC) port

� Up to 8192 process logins (509 per SCSI-FCP port)

1.3.5 Copy Services functions

For IT environments that cannot afford to stop their systems for backups, the DS8870 provides a fast replication technique that can provide a point-in-time copy of the data in a few seconds or even less. This function is called FlashCopy.

For data protection and availability needs, the DS8870 provides Metro Mirror, Global Mirror, Global Copy, Metro/Global Mirror, and z/OS Global Mirror, which are Remote Mirror and Copy functions. These functions are also available and are fully interoperable with previous models of the DS8000 family. These functions provide storage mirroring and copying over large distances for disaster recovery or availability purposes.

For more information about Copy Services, see the following resources:

� Chapter 6, “IBM DS8000 Copy Services overview” on page 137
� IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
� IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

FlashCopy

The primary objective of FlashCopy is to quickly create a point-in-time copy of a source volume on a target volume. The benefit of FlashCopy is that the point-in-time target copy is immediately available for use for backups or testing, and the source volume is immediately released so that applications can continue processing with minimal application downtime. The target volume can be a logical or physical copy of the data, with the physical copy copying the data as a background process. In a z/OS environment, FlashCopy can also operate at a data set level.
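
As a minimal sketch only (the device ID and volume IDs are illustrative assumptions), establishing and listing a FlashCopy relationship with the DS CLI might look similar to the following:

dscli> mkflash -dev IBM.2107-75ABCD1 1000:1100
dscli> lsflash -dev IBM.2107-75ABCD1 1000:1100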

The following sections summarize the options available with FlashCopy.

Multiple Relationship FlashCopy

Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with up to 12 targets simultaneously.

Incremental FlashCopy

Incremental FlashCopy provides the capability to refresh a LUN or volume that is involved in a FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data that is required to make the target current with the source’s newly established point-in-time is copied.
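
A hedged DS CLI sketch of this option (the volume IDs are assumptions) is to establish the initial relationship with change recording and persistence, and then refresh it later:

dscli> mkflash -record -persist 1000:1100
dscli> resyncflash -record -persist 1000:1100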

Remote Pair FlashCopy

Remote Pair FlashCopy improves resiliency solutions by ensuring data synchronization when a FlashCopy target is also a Metro Mirror source. This configuration keeps the local and remote site consistent, which facilitates recovery, supports IBM HyperSwap, and reduces link bandwidth utilization.

Remote Mirror Primary FlashCopy

Remote Mirror primary FlashCopy allows a FlashCopy relationship to be established in which the target also is a remote mirror primary volume. This configuration enables a full or incremental point-in-time copy to be created at a local site. It then uses remote mirroring commands to copy the data to the remote site. While the background copy task is copying data from the source to the target, the remote mirror pair moves into a copy pending state.

FlashCopy Consistency Groups

FlashCopy Consistency Groups can be used to maintain a consistent point-in-time copy across multiple LUNs or volumes, or even multiple DS8000 systems.

Inband commands over Remote Mirror link

In a remote mirror environment, inband FlashCopy allows commands to be issued from the local or intermediate site and transmitted over the remote mirror Fibre Channel links for execution on the remote DS8000. This configuration eliminates the need for a network connection to the remote site solely for the management of FlashCopy.

IBM FlashCopy SE

The IBM FlashCopy SE feature provides a space efficient copy capability that can greatly reduce the storage capacity that is needed for point-in-time copies. Only the capacity that is needed to save pre-change images of the source data is allocated in a copy repository. This configuration enables more space-efficient utilization than is possible with the standard FlashCopy function. Furthermore, less capacity can mean fewer disk drives and lower power and cooling requirements, which can help reduce costs and complexity. FlashCopy SE can be especially useful in the creation of temporary copies for tape backup, online application checkpoints, or copies for disaster recovery testing. For more information about FlashCopy SE, see IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368.

Remote Mirror and Copy functions

The Remote Mirror and Copy functions include Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror. z/OS Global Mirror for the System z environments also is included. As with FlashCopy, Remote Mirror and Copy functions can be established between DS8000 systems.

Easy Tier Heat Map Transfer can capture the data placement established by Easy Tier at the Metro Mirror/Global Copy/Global Mirror (MM/GC/GM) primary site and reapply it at the MM/GC/GM secondary site.

The following sections summarize the Remote Mirror and Copy options that are available with the DS8000 series.

Metro Mirror

Metro Mirror, previously called Peer-to-Peer Remote Copy (PPRC), provides a synchronous mirror copy of LUNs or volumes at a remote site within 300 km. When used with a supporting application, Metro Mirror Consistency Groups can be used to maintain data and transaction consistency across multiple LUNs or volumes or multiple DS8000 systems.
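
A minimal DS CLI sketch of establishing a Metro Mirror pair follows. The remote device ID, WWNN, LSS numbers, I/O port pair, and volume IDs are illustrative assumptions only; see the Copy Services publications listed earlier for the authoritative procedures:

dscli> mkpprcpath -remotedev IBM.2107-75XYZ91 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 10 I0030:I0100
dscli> mkpprc -remotedev IBM.2107-75XYZ91 -type mmir 1000:1000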

Global Copy

Global Copy, previously called Peer-to-Peer Remote Copy/Extended Distance (PPRC-XD), is a non-synchronous, long-distance copy option for data migration and backup.

Global Mirror

Global Mirror provides an asynchronous mirror copy of LUNs or volumes over unlimited distances. The distance is often limited only by the capabilities of the network and channel extension technology that is used. A Global Mirror Consistency Group is used to maintain data consistency across multiple LUNs or volumes or multiple DS8000 systems.

Metro/Global Mirror

Metro/Global Mirror is a three-site data replication solution for open systems and the System z environments. Local site (Site A) to intermediate site (Site B) provides high availability replication by using synchronous Metro Mirror. Intermediate site (Site B) to remote site (Site C) provides long-distance disaster recovery replication by using asynchronous Global Mirror.

z/OS Global Mirror

z/OS Global Mirror, previously called Extended Remote Copy (XRC), provides an asynchronous mirror copy of volumes over unlimited distances for the System z. It provides increased parallelism through multiple SDM readers (Multiple Reader capability).

z/OS Metro/Global Mirror

z/OS Metro/Global Mirror is a combination of Copy Services for System z environments that uses z/OS Global Mirror to mirror primary site data to a remote location that is at a long distance, and Metro Mirror to mirror the primary site data to a location within the metropolitan area. This configuration enables a z/OS three-site high availability and disaster recovery solution.

z/OS Global Mirror also offers Incremental Resync, which can significantly reduce the time that is needed to restore a Disaster Recovery (DR) environment after a HyperSwap in a three-site z/OS Metro/Global Mirror configuration. After the Incremental Resync is performed, you can change the copy target destination of a copy relation without the need for a full copy of the data.

1.3.6 Resource Groups for copy services scope limiting

Copy services scope limiting is the ability to specify policy-based limitations on copy services requests. With the combination of policy-based limitations and other inherent volume-addressing limitations, you can control the volumes that can be in a copy services relationship, which network users or host LPARs issue copy services requests on which resources, and other copy services operations. Use these capabilities to separate and protect volumes in a copy services relationship from each other. This ability can assist you with multi-tenancy support by assigning specific resources to specific tenants, limiting copy services relationships so that they exist only between resources within each tenant’s scope of resources, and limiting a tenant’s copy services operators to an operator-only role. When a single-tenant installation is managed, the partitioning capability of resource groups can be used to isolate various subsets of the environment as though they were separate tenants. For example, to separate mainframes from open servers, Windows from UNIX, or accounting departments from telemarketing.

For more information, see IBM System Storage DS8000: Resource Groups, REDP-4758.

1.3.7 Service and setup

The installation of the DS8000 is performed by IBM in accordance with the installation procedure for this machine. The client is responsible for installation planning, for the retrieval and installation of feature activation codes, and for the logical configuration and its execution.

For maintenance and service operations, the Storage Hardware Management Console (HMC) is the focal point. The management console is a dedicated workstation that is physically located inside the DS8870 where it can automatically monitor the state of your system. It notifies you and IBM when service is required. Generally, use a dual-HMC configuration, particularly when Full Disk Encryption is used.

The HMC also is the interface for remote services (call home and remote support), which can be configured to meet client requirements. It is possible to allow one or more of the following configurations:

� Call home on error (machine-detected)
� Connection for a few days (client-initiated)
� Remote error investigation (service-initiated)

The remote connection between the management console and the IBM Service organization is done by using a virtual private network (VPN) point-to-point connection over the Internet, modem, or with the new Assist On-site (AOS) feature. AOS offers more options, such as Secure Sockets Layer (SSL) security and enhanced audit logging. For more information, refer to the IBM Redpaper publication, Introduction to Assist On-site Software for Storage, REDP-4889-01.

The DS8000 storage system can be ordered with an outstanding four-year warranty (an industry first) on hardware and software.

1.3.8 IBM certified secure data overwrite

Secure data overwrite (SDO) is a process that provides a secure overwrite of all data drives in a DS8870 storage system. Removal of all logical configuration is a required client activity before SDO can be performed. The process is initiated by the IBM service representative, then continues unattended until completed. This process takes a full day to complete. Two DDM overwrite options exist.

DDM overwrite options
There are two options for SDO.

Cryptoerase
This option performs a cryptoerase on the DDMs, which re-creates the internal encryption key on the DDMs, rendering the previous information unreadable. It then performs a single-pass overwrite on all DDMs.


Three-pass overwrite
This option performs a cryptoerase on the DDMs, then performs a three-pass overwrite on all DDMs. This overwrite pattern allows compliance with the US Department of Defense (DoD) 5220.22-M standard.

CEC and HMC
A three-pass overwrite is performed on both the central electronics complex (CEC) and HMC disk drives. If there is a secondary HMC associated with the storage system, SDO is run against the secondary HMC after completion on the primary HMC.

SDO process overview
The SDO process can be summarized as follows:

1. Client removal of all logical configuration.
2. IBM Service Representative initiates SDO on the HMC.
3. SDO performs a dual cluster reboot of the CECs.
4. SDO cryptoerases all DDMs in the storage system.
5. SDO initiates an overwrite method.
6. SDO initiates a three-pass overwrite on the CEC and HMC hard disks.
7. When complete, SDO generates a certificate.

Certificate
The certificate verifies, by drive serial number, the full result of the overwrite operations. The certificate can be obtained from the IBM service representative or offloaded via DS CLI.

1.3.9 Performance features

The IBM DS8870 offers optimally balanced performance. This feature is possible because the DS8870 incorporates many performance enhancements, such as the dual multi-core POWER7+ processor complex implementation, fast 8-Gbps Fibre Channel/FICON host adapters, flash drives, and the high bandwidth, fault-tolerant point-to-point PCI Express internal interconnections.

With all these components, the DS8870 is positioned at the top of the high performance category.

1.3.10 Sophisticated caching algorithms

IBM Research conducts extensive investigations into improved algorithms for cache management and overall system performance improvements. To implement sophisticated caching algorithms, it is essential to include powerful processors for the cache management. With a 4 KB cache segment size and up to 1 TB cache sizes, the tables to maintain the cache segments become large.
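
The scale of that cache directory is easy to see with a quick back-of-the-envelope calculation in Python; the 64 bytes of metadata per segment used below is a hypothetical value chosen only to illustrate the order of magnitude, not a DS8870 internal figure:

CACHE_BYTES = 1 * 2**40            # 1 TiB of cache (approximating "1 TB")
SEGMENT_BYTES = 4 * 2**10          # 4 KiB cache segment size
METADATA_PER_SEGMENT = 64          # bytes per segment (hypothetical)

segments = CACHE_BYTES // SEGMENT_BYTES
directory_bytes = segments * METADATA_PER_SEGMENT

print(f"Segments to track: {segments:,}")                       # ~268 million
print(f"Directory size   : {directory_bytes / 2**30:.0f} GiB at 64 B/segment")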

Sequential Prefetching in Adaptive Replacement Cache
One of the performance features of the DS8000 is its self-learning cache algorithm, which optimizes cache efficiency and enhances cache hit ratios. This algorithm, which is used in the DS8000 series, is called Sequential Prefetching in Adaptive Replacement Cache (SARC).


SARC provides the following abilities:

� Sophisticated algorithms to determine what data should be stored in cache that is based on recent access and the frequency needs of the hosts

� Prefetching, which anticipates data before a host request and loads it into cache

� Self-learning algorithms to adaptively and dynamically learn what data should be stored in cache that is based upon the frequency needs of the hosts

Adaptive Multi-stream Prefetching
Adaptive Multi-stream Prefetching (AMP) is a breakthrough caching technology that improves performance for common sequential and batch processing workloads on the DS8000. AMP optimizes cache efficiency by incorporating an autonomic, workload-responsive, and self-optimizing prefetching technology.

Intelligent Write Caching
Intelligent Write Caching (IWC) improves performance through better write-cache management and destaging order of writes. It minimizes disk actuator movements on writes so the disks can do more I/O in total. IWC also can double the throughput for random write workloads. Specifically, database workloads benefit from this new IWC cache algorithm.

SARC, AMP, and IWC play complementary roles. While SARC is carefully dividing the cache between the RANDOM and the SEQ lists to maximize the overall hit ratio, AMP is managing the contents of the SEQ list to maximize the throughput obtained for the sequential workloads. IWC manages the write cache and decides what order and rate to destage to disk.
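
To make the idea of separate RANDOM and SEQ lists concrete, here is a deliberately simplified Python sketch. The class name, the adaptation rule, and the eviction policy are all invented for illustration; this is not the SARC, AMP, or IWC implementation, only a toy that shows how a cache can shift space toward whichever list is earning more hits:

from collections import OrderedDict

class TwoListCache:
    """Toy cache with separate RANDOM and SEQ LRU lists and an adaptive
    split point. Illustrative only; not the DS8000 caching code."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.seq_target = capacity // 2            # adaptive share for SEQ
        self.lists = {"RANDOM": OrderedDict(), "SEQ": OrderedDict()}

    def _evict_if_needed(self) -> None:
        while sum(len(l) for l in self.lists.values()) > self.capacity:
            # Evict from whichever list is over its target share.
            victim = "SEQ" if len(self.lists["SEQ"]) > self.seq_target else "RANDOM"
            self.lists[victim].popitem(last=False)  # drop least recently used

    def access(self, block: int, sequential: bool) -> bool:
        name = "SEQ" if sequential else "RANDOM"
        lru = self.lists[name]
        hit = block in lru
        if hit:
            lru.move_to_end(block)
            # Nudge the split toward the list that is producing hits.
            delta = 1 if sequential else -1
            self.seq_target = min(self.capacity - 1, max(1, self.seq_target + delta))
        else:
            lru[block] = True
            self._evict_if_needed()
        return hit

In the real system, SARC adapts the split between the lists, AMP decides how aggressively to prefetch into the SEQ list, and IWC orders and paces the destage of writes; the sketch only hints at the first of those three roles.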

1.3.11 Flash drives

To improve data transfer rate (IOPS) and response time, the DS8870 provides support for flash drives based on NAND technology. Flash drives feature improved I/O transaction-based performance over traditional spinning drives. The DS8870 is available with 400 GB encryption-capable flash drives.

Flash drives are high-IOPS class enterprise storage devices that are targeted at Tier 0, I/O-intensive workload applications that can use a high level of fast-access storage. Flash drives offer a number of potential benefits over hard disk drives, including better IOPS, lower power consumption, less heat generation, and lower acoustical noise. For more information, see Chapter 8, “DS8870 physical planning and installation” on page 217.

1.3.12 Multipath Subsystem Device Driver

The IBM Multipath Subsystem Device Driver (SDD) is a pseudo-device driver on the host system that is designed to support the multipath configuration environments in IBM products. It provides load balancing and enhanced data availability capability. By distributing the I/O workload over multiple active paths, SDD provides dynamic load balancing and eliminates data flow bottlenecks. SDD helps eliminate a potential single point of failure by automatically rerouting I/O operations when a path failure occurs.

SDD is provided with the DS8000 series at no additional charge. Fibre Channel (SCSI-FCP) attachment configurations are supported in the AIX, HP-UX, Linux, Windows, and Oracle Solaris environments.


If you use the multipathing capabilities of your operating system, such as the AIX MPIO, the SDD package provides a plug-in to optimize the operating system’s multipath driver for use with the DS8000.

For more information about SDD, see IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

1.3.13 Performance for System z

The DS8000 series supports the following IBM performance enhancements for System z environments:

� Parallel access volumes (PAVs) enable a single System z server to simultaneously process multiple I/O operations to the same logical volume, which can significantly reduce device queue delays. This reduction is achieved by defining multiple addresses per volume. With Dynamic PAV, the assignment of addresses to volumes can be automatically managed to help the workload meet its performance objectives and reduce overall queuing. PAV is an optional feature on the DS8000 series.

� HyperPAV is designed to enable applications to achieve equal or better performance than with PAV alone, while also using fewer unit control blocks (UCBs) and eliminating the latency in targeting an alias to a base. With HyperPAV, the system can react immediately to changing I/O workloads.

� Multiple Allegiance expands the simultaneous logical volume access capability across multiple System z servers. This function, along with PAV, enables the DS8000 series to process more I/Os in parallel, which improves performance and enables greater use of large volumes.

� I/O priority queuing allows the DS8000 series to use I/O priority information that is provided by the z/OS Workload Manager to manage the processing sequence of I/O operations at the adapter level.

� I/O Priority Manager includes the major enhancements that were described earlier in this section. It extends priority management to the disk arrays in a shared pool. For more information, see 7.6, “I/O Priority Manager” on page 189.

� High Performance FICON for z (zHPF) reduces the impact that is associated with supported commands on current adapter hardware. This configuration improves FICON throughput on the DS8000 I/O ports. The DS8000 also supports the new zHPF I/O commands for multi-track I/O operations, DB2 list-prefetch, and sequential access methods.

For more information about the performance aspects of the DS8000 family, see Chapter 7, “Architectured for performance” on page 163.

1.3.14 Performance enhancements for IBM Power Systems

Many IBM Power Systems users can benefit from the following DS8000 features:

� End-to-end I/O priorities
� Cooperative caching
� Long busy wait host tolerance
� Automatic Port Queues

Support for multipath: Support for multipath is included in an IBM i server as part of Licensed Internal Code and the IBM i operating system (including IBM i5/OS™).


Easy Tier Server is a unified storage caching and tiering solution across AIX servers and supported direct-attached storage (DAS) flash drives. Performance of the Power Systems can be improved by enabling Easy Tier Server on DS8870 to cache the hottest data to the AIX host DAS flash drives. For a detailed description and technical information, refer to the IBM Redpaper, IBM System Storage DS8000 Easy Tier Server, REDP-5013.

For more information about performance enhancements, see Chapter 7, “Architectured for performance” on page 163.


Chapter 2. IBM DS8870 models

This chapter describes the different IBM DS8870 models. It explains the various options for each model in terms of the number of CPUs, cache, and number of attached frames, and shows how well they scale regarding capacity and performance.

This chapter covers the following topics:

� The DS8870 Business Class model
� The DS8870 Enterprise model


2.1 IBM DS8870

Similar to earlier generations of the IBM DS8000 series, the DS8870 consists of one base frame, which incorporates the processor complexes, and optional expansion frames that mainly serve to host more disk drives. The processor complexes in the base frame can be upgraded later with more CPUs or cache to accommodate the growing performance needs that arise when more disk drives are used. An All-Flash configuration is also available.

The DS8870 offers unprecedented scalability in terms of CPU power and cache sizes, in comparison to previous DS8000 generations. Furthermore, upgrades from the smallest to the largest configuration in terms of CPU, cache, and storage capacity can be done with no system downtime as long as the system is in the same model configuration. These scalability and upgrade characteristics make the DS8870 the most suitable system for large consolidation projects.

2.2 Model overview

The DS8870 storage systems include the DS8870 Model 961 base frame and the associated DS8870 expansion frame 96E.

The DS8870 is available in the following options:

� DS8870 Model 961 Enterprise (standard) Class

This model is available as a dual 2-core, dual 4-core, dual 8-core, or dual 16-core processor complex with storage enclosures for up to 240 disk drive modules (DDMs) and 8 Fibre Channel (FC) host adapters. This standard model is optimized for performance and highly scalable configurations, which allows for large, long-term growth. The cache for this model scales between 64 GB and 1 TB, which offers a performance increase of 166% over the previous DS8800 high-end model.

� DS8870 Model 961 Business Class (enhanced)

This model uses a different cabling scheme that reduces initial configuration costs by increasing device adapter utilization, which reduces the associated performance.

This configuration of the Model 961 is available as a dual 2-core, dual 4-core, dual 8-core, or dual 16-core processor complex with storage enclosures for up to 1056 DDMs and 16 FC host adapters. A Business Class system can be configured with 16 GB up to 1 TB of cache. The Business Class model is meant to offer a cost-efficient way to enter the DS8000 sphere for clients who require only lower capacity or performance and who use only a small subset of the DS8870 features.

The use of the Copy Services or I/O Priority Manager features requires a cache size of at least 32 GB.

Important: DS8870 Business Class now uses a different cabling scheme than in the Enterprise model for its disk enclosures. Therefore, upgrade to the Enterprise model is not supported.

Expansion frames: Expansion frames can be added to a DS8870 Business Class configuration with an 8-core or 16-core processor configuration. However, a DS8870 Business Class system can add only two expansion frames instead of the three possible in the Enterprise Class.


� DS8870 Model 96E

This expansion frame for the 961 model includes enclosures for more DDMs and FC adapters to allow a maximum configuration of 16 FC adapters. A 96E expansion frame can be attached only to a 961 8-core or 16-core base frame. The added FC adapters can be installed only in the first (96E) expansion frame. At the time of this writing, up to a total of three expansion frames can be attached to a Model 961.

Table 2-1   DS8870 configuration options for base and expansion frames

Model number | Processor | Max. DA pairs | Max. disk drives | System memory | Host adapters | 9xE attach
961 Business Class | 2-core | 1 | 144 | 16 GB | 4 | 0
961 Business Class | 2-core | 1 | 144 | 32 GB | 4 | 0
961 Business Class | 4-core | 2 | 240 | 64 GB | 8 | 0
961 Business Class | 8-core | 6 | 1056 | 128 GB, 256 GB | 16 | 0 - 2
961 Business Class | 16-core | 6 | 1056 | 512 GB, 1 TB | 16 | 0 - 2
961 Enterprise Class | 2-core | 2 | 144 | 16 GB | 4 | 0
961 Enterprise Class | 2-core | 2 | 144 | 32 GB | 4 | 0
961 Enterprise Class | 4-core | 4 | 240 | 64 GB | 8 | 0
961 Enterprise Class | 8-core | 8 | 1056 | 128 GB | 16 | 0 - 2
961 Enterprise Class | 8-core | 8 | 1536 | 256 GB | 16 | 0 - 3
961 Enterprise Class | 16-core | 8 | 1536 | 512 GB, 1 TB | 16 | 0 - 3
96E First Expansion Frame | N/A | 4 | 336 | N/A | 8 | 961
96E Second and Third Expansion Frame | N/A | 0 | 480 | N/A | 0 | 961

Important: Cache size and the possible CPU options (number of cores) are not fully independent. Only certain combinations of both are allowed, as shown in Table 2-1.

Each Fibre Channel/Fibre Channel connection (FICON) host adapter has four or eight Fibre Channel ports, which provide up to 128 Fibre Channel ports for a maximum configuration.

All-flash configuration
The DS8870 now supports an all-flash configuration. The all-flash indicator, feature number 0600, allows the client to specify systems comprised solely of flash drives. For small capacity needs, using an all-flash configuration can help improve I/O response times while also reducing floor space and power requirements. For details, refer to 3.1.3, “DS8870 all-flash” on page 37.

Machine type 242x
DS8870 models are associated with machine type 242x. This machine type corresponds to the length of the warranty offer, which allows a 1-year, 2-year, 3-year, or 4-year warranty period (where x equals the number of years). The 96E expansion frames have the same 242x machine type as the base frame.

DS8870 Model 961 overview
Figure 2-1 on page 29 provides a high-level view of the front and back of the Model 961 base, which includes space for up to 10 disk enclosures, called Gigapacks (24 drives per Gigapack), or up to 15 Small Form Factor (SFF) disk drive sets (16 drives per disk drive set). You can also install Large Form Factor (LFF) enclosures. In a maximum configuration, the base model can hold 240 SFF disk drives (see number 1 in Figure 2-1 on page 29).

The Model 961 is offered with 2-core, 4-core, 8-core, or 16-core processor complexes. Expansion models (96E) can be added to the base model in a dual 8-core or 16-core system.

The Hardware Management Console (HMC) is stored below the drives (see Number 2 in Figure 2-1 on page 29).

The DS8870 features an integrated rack power control (RPC) system that manages power distribution. The POWER7+ processor-based servers (see Number 3 in Figure 2-1 on page 29) contain the processor and memory that drive all functions within the DS8870. The power subsystem in the DS8870 complies with the latest directives for the Restriction of Hazardous Substances (RoHS). Previous models in the IBM DS8000 series contain primary power supply (PPS) units. The DS8870 uses direct current uninterruptible power supply (DC-UPS) units. The DC-UPS units improve energy efficiency. The DS8870 power subsystem also contains a faster processor with parity-protected memory.

The I/O enclosures provide connectivity between the adapters and the storage processors (see Number 4 in Figure 2-1 on page 29). The adapters that are contained in the I/O enclosures can be either device adapters (DAs) or host adapters (HAs).

The base model contains DC-UPS power supplies (see Number 5 in Figure 2-1 on page 29). The DC-UPS provides rectified alternating current (AC) power distribution and power switching for redundancy. The rack has two AC power cords, each one feeding a DC-UPS. The DC-UPS distributes rectified line AC. If AC is not present at the input line, the output is switched to rectified AC from the partner DC-UPS. If neither AC input is active, the DC-UPS switches to battery power.

A redundant pair of RPC adapters coordinates the power management within the storage facility (see Number 6 in Figure 2-1 on page 29). The RPC adapters are attached to the service processors in each complex, which allows them to communicate with the HMC and storage facility image logical partitions (LPARs). The RPC is also attached to the primary power system in each rack and on some models is indirectly connected to the fan-sense adapters and to the storage enclosures in each rack. The power system in each rack is a PPS with associated batteries or a DC-UPS with internal batteries.

Figure 2-1 on page 29 shows the base model of a Model 961.

Important: You cannot add an expansion frame to a DS8870 two-core or four-core system. However, you can first upgrade to an eight-core system and then add expansion frames.


Figure 2-1 Base model (front and back views) of a Model 961 (four-core)

Cache memory upgrades are concurrent.

For the HAs, with four or eight ports, each port can be independently set as one of the following configurations:

� FCP port, for open systems host attachment

� FCP port for Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror connectivity

� FICON port to connect to System z hosts

� FICON port for z/OS Global Mirror connectivity

The first expansion frame can hold additional HAs and DAs. No I/O enclosures are installed in the second and third expansion frames. In a full four-frame installation (Enterprise model), the 1536 drives are distributed over all of the DA pairs. For more information about DA pairs, see 3.3.3, “Device adapters” on page 49.


Figure 2-2 shows a front view of the DS8870 frames in their current maximum configuration option of four frames.

Figure 2-2 DS8870 models 961/96E maximum configuration with 1536 drives: Front

Figure 2-3 shows the rear view of the DS8870 models 961/96E.

Figure 2-3 DS8870 models 961/96E maximum configuration with 1536 disk drives: Rear


DS8870 Models 96E
As shown in Table 2-1 on page 27, the first expansion frame (sometimes called the B frame, with the base frame referred to as the A frame) can hold up to 336 drives and more I/O enclosures.

The second and third expansion frames (sometimes called C frames) can hold up to 480 drives each, but no host adapters or device adapters.

2.3 DS8870 disk drive options

The DS8870 offers enterprise-class drives that feature a 6-Gbps SAS 2.0 interface. These enterprise-class drives, which use a 6.35 cm (2.5 inch) form factor, provide increased density and thus increased performance per frame. These drives are built on a traditional design with spinning platters so they are also known as mechanical drives. As of this writing, the following 2.5-inch enterprise drives are available for DS8870, which all support Full Disk Encryption (FDE):

� 146 GB (15 000 rpm)
� 300 GB (15 000 rpm)
� 600 GB (10 000 rpm)
� 1.2 TB (10 000 rpm)

An 8.89 cm (3.5-inch) SAS Nearline disk is also available and supports Full Disk Encryption (FDE). These 4 TB (7 200 rpm) Nearline drives are offered in steps of a half drive set (eight disk drives).

Another option, often chosen by clients, is to install a certain capacity percentage of solid-state drives (SSDs), now designated as flash drives, in the DS8870. At the time of this writing, a 400 GB SSD (2.5-inch FDE) is available.

The flash drives that are listed in this section can be ordered in 16-drive installation groups (full drive set), as with the 15 K/10 K HDD drives, or in eight-drive installation groups (half drive set). The suggested configuration of SSDs for an optimum price-to-performance ratio, and to achieve ideally balanced configurations, is 16 drives per DA pair. For more information about SSD configurations, see 8.5.3, “DS8000 solid-state drive considerations” on page 239.

Encrypting the data at rest is optional. Whether the FDE-capable drives are used with or without active encryption makes no performance difference.

Intermixing drives: Intermixing drives of different capacities and speeds is supported on a DA pair, but not within a storage enclosure pair.


Capacity
A summary of the capacity characteristics is listed in Table 2-2. The minimum capacity is achieved by installing one drive group (16 drives) of 146 GB 15 K enterprise drives.

Table 2-2   Capacity comparison of device adapters, DDMs, and storage capacity

Component | 2-core base frame | 4/8/16-core base frame | 8-core or 16-core, with one expansion frame | 8-core or 16-core, with two expansion frames | 8-core or 16-core, with three expansion frames
DA pairs | 1 - 2 | 1 - 4 | 5 - 8 | 8 | 8
HDDs | Up to 144 | Up to 240 | Up to 576 | Up to 1056 | Up to 1536
Flash drives (SSD) | Up to 96 | Up to 192 | Up to 384 | Up to 384 | Up to 384
Physical capacity*, gross, 2.5-inch SFF disks | Up to 173 TB | Up to 288 TB | Up to 690 TB | Up to 1.27 PB | Up to 1.84 PB
Physical capacity*, net RAID 5, 2.5-inch SFF disks | Up to 133 TB | Up to 218 TB | Up to 532 TB | Up to 1.00 PB | Up to 1.48 PB
Physical capacity*, gross, 3.5-inch LFF disks | Up to 288 TB | Up to 480 TB | Up to 1152 TB | Up to 1.58 PB | Up to 3.06 PB
Physical capacity*, net RAID 6, 3.5-inch LFF disks | Up to 173 TB | Up to 284 TB | Up to 694 TB | Up to 1.34 PB | Up to 2.01 PB

* TB and PB definition according to ISO/IEC 80000-13

Additional drive types are continuously qualified during the lifetime of the machine.

The 96E scales with 2.5-inch drives in the following configurations:

� With one DS8870 Model 96E expansion unit, the DS8870 supports up to 576 2.5-inch disk drives, for a maximum gross capacity of up to 518 TB, and up to 16 Fibre Channel/FICON adapters.

� With two 96E expansion units, the DS8870 Model 961 supports up to 1056 2.5-inch disk drives, for a maximum gross capacity of up to 950 TB, and up to 16 Fibre Channel/FICON adapters.

� With three DS8870 Model 96E expansion units, the DS8870 Model 961 supports up to 1536 2.5-inch disk drives, for a maximum gross capacity of currently up to 1.8 PB, and up to 16 Fibre Channel/FICON adapters.

For more information about comparison values for Model 961 and 96E, see the Overview of physical configurations section at this web page:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp?topic=%2Fcom.ibm.storage.ssic.help.doc%2Ff2c_physconfigovr_1xfg41.html
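
As a quick sanity check of the gross-capacity figures in Table 2-2, the short Python snippet below multiplies drive counts by per-drive capacity in decimal units (1 PB = 1000 TB, matching the ISO/IEC 80000-13 note). The drive counts and sizes are taken from this chapter; the snippet itself is only an illustrative calculation, not a configuration tool:

def gross_capacity_tb(drive_count: int, drive_tb: float) -> float:
    """Raw capacity before RAID and sparing, in decimal TB."""
    return drive_count * drive_tb

examples = [
    ("Base frame, 240 x 1.2 TB SFF drives", 240, 1.2),
    ("One expansion frame, 576 x 1.2 TB SFF drives", 576, 1.2),
    ("Three expansion frames, 1536 x 1.2 TB SFF drives", 1536, 1.2),
    ("Base frame, 120 x 4 TB LFF drives", 120, 4.0),
]

for name, count, size_tb in examples:
    total = gross_capacity_tb(count, size_tb)
    print(f"{name:48s}: {total:7.1f} TB (~{total / 1000:.2f} PB)")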

Best practice: Although the flash drives (SSDs) can be ordered in full sets (16 drives) or half sets (eight drives), the optimum price-to-performance ratio is achieved by using full drive sets (16).


Adding DDMs and Capacity on Demand
The DS8870 offers linear capacity growth up to 3 PB of gross capacity by using 3.5-inch Nearline disks. A significant benefit of this capacity growth is the ability to add DDMs without disruption. IBM offers Capacity on Demand solutions that are designed to meet the changing storage needs of rapidly growing e-business. The IBM Standby Capacity on Demand (CoD) offering is designed to tap into more storage and is attractive if you have rapid or unpredictable storage growth.

Up to six standby CoD disk drive sets (for a total of 96 disk drives) can be concurrently field-installed into the system. To activate the sets, logically configure the disk drives for use, which is a nondisruptive activity and does not require intervention from IBM.

Upon activation of any portion of a standby CoD disk drive set, you must place an order with IBM to initiate billing for the activated set. Then, you can also order replacement Standby CoD disk drive sets.

Flash drives (SSDs) are not available for CoD configurations.

For more information about the Standby CoD offering, see the announcement letter at the following website:

http://www.ibm.com/common/ssi/index.wss

Device adapters and performance
By default, the DS8870 includes a pair of DAs per 48 DDMs. If you order a system with, for example, 96 drives, you receive two DA pairs (a total of four DAs).

When you order 432 disk drives, you receive eight DA pairs, which is the maximum number of DA pairs. Adding more drives beyond 432 does not add DA pairs; each DA loop simply becomes larger.

Having enough DA pairs is important to achieve the high throughput level that is required by certain sequential workloads, such as data-warehouse installations.
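
The rule of thumb described above (one DA pair per 48 DDMs, up to the maximum of eight pairs) can be written as a small helper. This is simply a restatement of the preceding text in Python, not an official ordering rule:

import math

DDMS_PER_DA_PAIR = 48
MAX_DA_PAIRS = 8

def da_pairs_for(ddm_count: int) -> int:
    """DA pairs delivered for a given drive count, per the rule above."""
    return min(math.ceil(ddm_count / DDMS_PER_DA_PAIR), MAX_DA_PAIRS)

for drives in (48, 96, 240, 384, 432, 1536):
    print(f"{drives:>4} drives -> {da_pairs_for(drives)} DA pair(s)")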

Scalable upgrades
With the DS8870, it is possible to start with a two-core configuration with disk enclosures for 48 DDMs and grow concurrently to a full-scale 1056-drive, three-frame Business Class configuration. For Enterprise Class systems, it is also possible to grow concurrently from a two-core configuration to a 1536-drive, four-frame configuration.

The following configurations are available:

� Two-core Business Class base with one I/O enclosure pair:

– Enables lower entry price by not requiring a second I/O enclosure pair
– Total: 4 HA, 2 DA

� Four-core = two-core base + processor card feature + second I/O enclosure pair feature

– Enables improved performance on base rack
– Total: 8 HA, 8 DA

� Eight+-core base + first expansion frame

– Enables 4 I/O enclosure pairs, 16 host adapters, and 8 device adapter pairs
– Total: 16 HA, 16 DA


� Eight+-core base with first expansion frame + second expansion frame, which enables up to 1056 drives

� A 16-core or 8-core large-cache base with first expansion frame + second expansion frame + third expansion frame, which enables up to 1536 drives

2.4 Additional licenses that are needed

Some of the IBM DS8870 features require a license key. For example, Copy Services are licensed by the gross amount of capacity that is installed. For more information about the features and licensing options for these features, see Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259.

One feature that was described in 1.3.1, “Overall architecture and components” on page 10 is IBM Easy Tier. You need a license key to enable Easy Tier, but Easy Tier is available at no charge. For more information about Easy Tier, see 7.7, “IBM Easy Tier” on page 191, or see IBM System Storage DS8000 Easy Tier, REDP-4667.

For more information, refer to Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259.


Chapter 3. DS8870 hardware components and architecture

This chapter describes the hardware components of the IBM DS8870. It provides insights into individual components and the architecture that holds them together. Although the DS8870 remains essentially similar to the DS8800, there are significant hardware differences, which are highlighted.

This chapter covers the following topics:

� DS8870 frames� DS8870 architecture overview� Disk subsystem� Host adapters� Power and cooling� Management console network


3.1 Frames: DS8870

The DS8870 is designed for modular expansion. From a high-level view, there appear to be three types of frames available for the DS8870. However, on closer inspection, the frames themselves are almost identical. The only variations are the combinations of processors, I/O enclosures, storage enclosures, batteries, and disks that the frames contain.

3.1.1 DS8870 Enterprise Class

Figure 3-1 shows a fully configured four-frame DS8870 Enterprise Class system. The left frame is a base frame (Model 961) that contains the processors. In this example, this frame features dual 8-core IBM POWER7+ processor-based servers (a dual 8-core system with 256 GB of total system memory is the minimum that is required to support up to three expansion frames). The second frame is an expansion frame (96E) that contains more I/O enclosures but no additional processors. The third and fourth frames are also expansion frames that contain only disk and power enclosures. If the extended power line disturbance (ePLD) option is not installed, one battery service module (BSM) set per power supply (for the base and expansion racks) is needed. If the ePLD option is installed, two BSM sets per power supply (for the main and expansion racks) are needed. Each frame contains redundant power supplies and other power hardware.

Figure 3-1 Front view of fully configured Enterprise Class with four frames

3.1.2 DS8870 Business Class

Figure 3-2 on page 37 shows a fully configured three-frame DS8870 Business Class system. The left frame is a base frame (Model 961) that contains the processors. In this example, this frame features dual 8-core POWER7+ processor-based servers (a dual 8-core system with 128 GB of total system memory is the minimum that is required to support up to two expansion frames). The second frame is an expansion frame (96E) that contains more I/O enclosures but no additional processors. The third frame is also an expansion frame that contains only disk and power enclosures.


Figure 3-2 Front view of fully configured DS8870 Business Class with 3 frames

3.1.3 DS8870 all-flash

Figure 3-3 shows a fully configured two-frame DS8870 all-flash drive configuration. The left frame is a base frame (Model 961) that contains the processors. In this example, this frame features dual 8-core POWER7+ processor-based servers (a dual 8-core system with 128 GB of total system memory is the minimum that is required to support up to two expansion frames). The second frame is an expansion frame (96E) that contains more I/O enclosures but no additional processors. The all-flash drive configuration is not expandable to include hard disk drives of any form factor.

Figure 3-3 Front view of a fully configured all-flash Enterprise Class with two frames


3.1.4 Base frame: DS8870

As shown in Figure 3-1 on page 36, the left side of the base frame (viewed from the front of the machine) is the frame power area. Only the base frame contains rack power control (RPC) cards to control power sequencing for the storage unit. Previous models in the IBM DS8000 series contain primary power supply (PPS) units and separate battery backup units (BBU). All DS8870 frames use direct current uninterruptible power supplies (DC-UPSs) with alternating current (ac) input.

Each DC-UPS consists of one DC supply unit (DSU), and up to two BSM sets. This configuration is used whether the configuration is dual 2-core, dual 4-core, dual 8-core, or dual 16-core, and whether the system includes the ePLD feature.

The base frame can contain up to 10 storage enclosures, which are installed in pairs. Each enclosure can contain up to 24 disk drives with the Small Form Factor storage enclosure or 12 disk drives with the Large Form Factor (LFF) storage enclosure. In its maximum configuration, the base frame can hold 240 SFF disk drives or 120 LFF disk drives. Disk drives are either hard disk drives (HDDs) with real spinning disks or flash drives. Although both hard disk drives and flash drives can be used in the same storage system, they cannot be intermixed within the same storage enclosure pair. For more information about the disk subsystem, see 3.4, “Disk subsystem” on page 50.

A storage enclosure pair can contain SFF hard disk drives or flash drives. Intermixing hard disk drives and solid-state flash drives in the same storage enclosure pair is not possible. Hard disk drives are installed in groups of 16. Flash drives can be installed in groups of 16 (full disk set) or 8 (half disk set). A storage enclosure pair for LFF Nearline-SAS HDDs can only contain LFF disks. The 4 TB Nearline-SAS HDD can be installed in groups of 8 (full drive set).

Inside the storage enclosures are cooling fans in the storage enclosure power supply units. These fans pull cool air from the front of the frame and exhaust to the rear of the frame. For more information about disks, see Chapter 8, “DS8870 physical planning and installation” on page 217.

Physically located between the storage enclosures and the processor complexes are two Ethernet switches and the storage Hardware Management Console (HMC).

The base frame contains two central electronics complexes (CECs). These POWER7+ based servers contain the processors and memory that drive all functions within the DS8870.

The base frame also contains I/O enclosures, which are installed in pairs horizontally. These I/O enclosures provide connectivity between the adapters and the processors. Each I/O enclosure can contain up to two device adapters and two host adapters.

The DS8870 uses PCI Express Generation 2 (PCIe Gen2) connections to provide adapter-to-central electronics complex (CEC) communication and to transfer I/O data.

RoHS compliant: The DS8870 is Restriction of Hazardous Substances (RoHS) compliant.

3.1.5 Expansion frames

Only dual 8-core and dual 16-core DS8870 configurations can have expansion frames (model 96E). There are two types of expansion frame configurations. The first expansion frame always contains four I/O enclosures (two pairs). The second and third expansion frames do not include I/O enclosures. The I/O enclosures provide connectivity between the adapters and the processors. The adapters that are contained in the I/O enclosures can be device adapters, host adapters, or both.

The left side of each expansion frame (when viewed from the front of the system) is the frame power area. The expansion frames do not contain RPC cards. RPC cards are present only in the base frame. Each expansion frame contains two DC-UPSs.

The DS8870 power system consists of one DSU and one or two BSM sets, depending on the configuration. Each BSM set includes one Primary Module and three Secondary Modules. If the ePLD feature is installed, two BSM sets per DC-UPS are needed.

The first expansion frame can contain up to 14 storage enclosures (installed in pairs). A storage enclosure can have up to 24 SFF 2.5-inch disks that are installed vertically or 12 LFF 3.5-inch disks installed horizontally. In a maximum configuration, the first expansion frame can contain up to 336 SFF disks or 168 LFF disks.

The second and third expansion frames can contain up to 20 storage enclosures (installed in pairs), which contain the disk drives. The second and third expansion frames can contain up to 480 SFF disks or 240 LFF disks.
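
The enclosure counts above translate directly into the per-frame drive maximums. The following Python sketch simply multiplies enclosures by drive slots (24 SFF or 12 LFF drives per enclosure) to reproduce the figures quoted in this section:

ENCLOSURES_PER_FRAME = {
    "Base frame (961)": 10,
    "First expansion frame (96E)": 14,
    "Second or third expansion frame (96E)": 20,
}
SLOTS_PER_ENCLOSURE = {"SFF": 24, "LFF": 12}

for frame, enclosures in ENCLOSURES_PER_FRAME.items():
    sff = enclosures * SLOTS_PER_ENCLOSURE["SFF"]
    lff = enclosures * SLOTS_PER_ENCLOSURE["LFF"]
    print(f"{frame:38s}: up to {sff} SFF or {lff} LFF drives")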

3.1.6 Rack operator panel

The DS8870 frame features status indicators. The status indicators can be seen when the doors are closed. When the doors are open, the emergency power off (EPO) switch is also accessible. Figure 3-4 shows the operator panel for DS8870.

Figure 3-4 Rack operator window: DS8870

Each panel has two line cord indicators, one for each line cord. For normal operation, both of these indicators are illuminated green if each line cord is supplying correct power to the frame. There also is an attention LED. If this LED is lit solid or is flashing, use the DS Storage Manager graphical user interface (GUI) or the HMC Service Web User Interface (WUI), Manage Serviceable Events menu, to determine why the attention LED is illuminated.

Important: You cannot use expansion frames from previous DS8000 models as expansion frames for a DS8870 storage system.


The unit emergency power off (UEPO) switch is above the DC-UPS units in the upper left corner of the DS8870 base frame. This switch is used only for emergencies. Switching off the UEPO bypasses all power sequencing control and results in immediate removal of system power. Modified data is not destaged and is lost.

Figure 3-5 shows the location of the UEPO switch in the DS8870.

Figure 3-5 Unit emergency power off (UEPO) switch: DS8870

There is no power on/off switch in the operator window because power sequencing is managed through the HMC. This configuration ensures that all data in nonvolatile storage, which is known as modified data, is destaged properly to disk before power down.

Important: Do not use this switch to turn off the DS8870 unless it is creating a safety hazard or is placing human life at risk.

When the UEPO switch is activated, IBM must be contacted to restart the storage system.


3.2 DS8870 architecture overview

The DS8870 processor complex is based on the POWER7+ technology and uses PCIe Gen2 links to communicate with the I/O enclosures.

The 8-Gbps Fibre Channel Protocol (FCP) or Fibre Channel connection (FICON) host adapter and 8-Gbps device adapter, and the storage enclosures remain the same as in the DS8800.

The DS8870 uses a DC-UPS power subsystem that provides full wave rectification, power monitoring, and battery backup functions. The DC-UPS is designed to provide rectified alternating current (ac) with battery backup to power the frame.

3.2.1 The IBM POWER7+ processor-based server

The DS8870 uses IBM Power 740 server technology. The POWER7+ processors operate at 4.228 GHz (the POWER7 processors that were used in the previous version operated at 3.55 GHz).

The Model 961 is available with dual 2-core, dual 4-core, dual 8-core, or dual 16-core processors, as shown in Figure 3-6.

Figure 3-6 DS8870 configuration matrix

The Power 740 server supports up to four memory riser cards, each with a maximum capacity of eight dual inline memory modules (DIMMs). The DS8870 server has one or two physical processor modules. Each can access all memory DIMMs installed within the server (CEC).

The DS8870 CEC is a 4U-high drawer, and features the following configuration:

� One system planar
� One to four memory riser cards; each card holds a maximum of eight DIMMs
� One storage cage with two utilized drives
� Five PCIe x8 Gen2 full-height, half-length slots
� One PCIe x4 Gen2 full-height, half-length slot that is shared with the GX++ slot 2
� Optional 4 PCIe x8 Gen2 low-profile slots
� Two GX++ slots
� Four 120 mm fans
� Two power supplies

Figure 3-7 shows the processor complex as configured in the DS8870.

Figure 3-7 DS8870 Processor Complex front and rear view

For more information about the server hardware that is used in the DS8870, see IBM Power 720 and 740 Technical Overview and Introduction, REDP-4984, for POWER7+ processors, which is found at this website:

http://www.redbooks.ibm.com/redpapers/pdfs/redp4984.pdf

The DS8870 base frame contains two processor complexes. The POWER7+ processor-based server features up to two processor single chip modules (SCMs). Each processor socket can be configured with two cores in the minimum configuration and up to eight cores (per processor SCM) in the maximum configuration, as shown in Figure 3-8.

Figure 3-8 Processor sockets and cores

The number of enabled processor cores depends on the amount of installed memory. This varies based on configuration.


In the DS8870, the number of cores and memory can be upgraded non-disruptively. The upgrade preserves the system serial number. Table 3-1 shows the seven supported configurations.

Table 3-1   Configuration attributes

Processor configuration | Cache/NVS per CEC (GB) | DIMM size (GB) | Expansion frames
2-core | 8 / 0.5 | 4 | 0
2-core | 16 / 0.5 | 4 | 0
4-core | 32 / 1 | 4 | 0
8-core | 64 / 2 | 4 | 0, 1, 2
8-core | 128 / 4 | 4 | 0, 1, 2, 3
16-core | 256 / 8 | 8 | 0, 1, 2, 3
16-core | 512 / 16 | 16 | 0, 1, 2, 3
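
Because processor cores, cache, NVS, DIMM size, and the allowed number of expansion frames move together, a supported combination can be expressed as a simple lookup. The Python dictionary below only restates Table 3-1; the function name and structure are illustrative and not part of any IBM tool:

# Supported combinations per CEC, restated from Table 3-1.
SUPPORTED = {
    # (cores, cache GB): (NVS GB, DIMM GB, max expansion frames)
    (2, 8):    (0.5, 4, 0),
    (2, 16):   (0.5, 4, 0),
    (4, 32):   (1, 4, 0),
    (8, 64):   (2, 4, 2),
    (8, 128):  (4, 4, 3),
    (16, 256): (8, 8, 3),
    (16, 512): (16, 16, 3),
}

def describe(cores: int, cache_gb: int) -> str:
    entry = SUPPORTED.get((cores, cache_gb))
    if entry is None:
        return f"{cores}-core with {cache_gb} GB cache per CEC is not a supported combination"
    nvs, dimm, frames = entry
    return (f"{cores}-core / {cache_gb} GB cache per CEC: {nvs} GB NVS, "
            f"{dimm} GB DIMMs, up to {frames} expansion frame(s)")

print(describe(8, 128))   # supported combination
print(describe(16, 64))   # not listed in Table 3-1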

3.2.2 Peripheral Component Interconnect Express adapters

The DS8870 processor complex uses Peripheral Component Interconnect Express (PCIe) adapters. These adapters allow for point-to-point interconnections between CECs and I/O enclosures by using a directly wired interface between these connection points.

Depending on the configuration that is used, there are up to four PCIe adapters that are placed in specific slots of the DS8870 processor complex.

A DS8870 processor complex is equipped with the following PCIe cards. These cards are designed to provide the connection between the CECs and I/O bays:

� Two single-port PCIe Gen2 adapters in slots 3 and 4.
� Two multi-port PCIe Gen2 adapters in slots 1 and 8. These adapters plug into the CEC GX++ bus and have an onboard chip (P7IOC) that connects the three PCIe ports.

Figure 3-9 shows the PCIe adapter locations in the CEC.

Figure 3-9 PCIe adapter locations in the processor complex



For more information about PCI Express, see this website:

http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0456.html?Open

3.2.3 Storage facility processor complex

The DS8870 storage facility consists of two Power 740 servers. The DS8870 uses the PCIe paths through the I/O enclosures to provide a communication path between the CECs, as shown in Figure 3-10.

All cross-cluster communication uses the PCIe paths through the I/O enclosures.

Figure 3-10 shows how the DS8870 hardware is shared between the servers. On the left side and on the right side, there is one CEC. Each CEC uses the N-core symmetric multiprocessor (SMP) of its complex to perform operations. It records write data and caches read data in the volatile memory of the complex. For fast-write data, it features a persistent memory area for the processor complex. To access the disk arrays under its management, each CEC has its own affiliated device adapters. The host adapters are shared between both servers.

Figure 3-10 DS8870 series architecture

3.2.4 Processor memory

The DS8870 includes up to 1 TB of total memory per system. Each processor complex has half of the total system memory. All memory that is installed on each CEC is accessible to all processors in that CEC. The absolute addresses that are assigned to the memory are common across all processors in the CEC. The set of processors is referred to as a symmetric multiprocessor (SMP).

The POWER7+ processor that is used in the DS8870 can operate in simultaneous multithreading (SMT4) mode, which runs up to four instructions in parallel. SMT4 mode enables the POWER7+ processor to maximize the throughput of the processor cores by offering an increase in core efficiency.


The DS8870 configuration options are based on the total installed memory, which in turn establishes the number of installed processor cores. The following DS8870 configuration upgrades can be performed non-disruptively:

� Scalable processor configuration with 2, 4, 8, and 16 cores per server
� Scalable memory 8 - 512 GB per server

The possible configurations are shown in Figure 3-11. The 961 supports up to 4 DA pairs and the 96E supports up to 4 DA pairs, for a total of 8 in multiframe configurations.

Figure 3-11 Configuration options for Business Class and Enterprise Class

Caching is a fundamental technique for reducing I/O latency. Like other modern caches, the DS8870 contains volatile memory that is used as a read and write cache and non-volatile memory that is used to back up the write cache. If power is lost, the batteries keep the system running until data in nonvolatile storage (NVS) is written to the CEC’s internal disks.

The NVS scales to the processor memory that is installed, which also helps to optimize performance.

The effectiveness of a read cache depends on the read hit ratio, which is the fraction of requests that are served from the cache without necessitating a read from the disk (read miss).
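
The impact of the read hit ratio can be quantified with the usual weighted-average formula. The cache and disk service times below are generic placeholder values chosen only for illustration, not DS8870 measurements:

def effective_read_latency_ms(hit_ratio: float,
                              cache_hit_ms: float = 0.2,
                              disk_read_ms: float = 6.0) -> float:
    """Average read latency = h * t_cache + (1 - h) * t_disk."""
    return hit_ratio * cache_hit_ms + (1.0 - hit_ratio) * disk_read_ms

for h in (0.50, 0.80, 0.95):
    print(f"hit ratio {h:.0%}: ~{effective_read_latency_ms(h):.2f} ms average read")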

3.2.5 Flexible service processor and system power control network

The Power 740 system planar contains one flexible service processor (FSP). It is an embedded controller that is based on an IBM PowerPC® processor which controls power for the CEC. The FSP is connected to the system power control network (SPCN), which is used to control the power of the attached I/O enclosures.

The FSP performs predictive failure analysis that is based on any recoverable processor or memory errors. The FSP monitors the operation of the firmware during the boot process, and it can monitor the operating system for loss of control. This ability enables the FSP to take appropriate action.

The SPCN monitors environmental characteristics such as power, fans, and temperature. Critical and non-critical environmental conditions can generate emergency power-off warning (EPOW) events. Critical events trigger appropriate signals from the hardware to the affected components to prevent any data loss without operating system or firmware involvement. Non-critical environmental events also are logged and reported.


3.3 I/O enclosures

The DS8870 base frame and first expansion frame (if installed) contain I/O enclosures, which are installed in pairs. The I/O enclosure physical architecture is the same as in the DS8800. One or two I/O enclosure pairs can be installed in the base frame, depending on memory and processor configuration.

Two enclosure pairs are installed in the first expansion frame. Each I/O enclosure has six slots. DAs and HAs are installed in the I/O enclosures. The I/O enclosures provide connectivity between the processor complexes and the HAs or DAs. The DS8870 can have up to two DAs and two HAs installed in each I/O enclosure.

3.3.1 DS8870 I/O enclosure

Each CEC (Power 740) has a P7IOC chip that drives two single-port PCIe adapters that connect to two I/O enclosures. Other I/O enclosures connect to the first 3-port PCIe adapter with an integrated P7IOC in slot 1, and a second 3-port PCIe adapter with an integrated P7IOC in slot 8, which connects to I/O enclosures in the expansion frame, as shown in Figure 3-12.

Figure 3-12 shows the DS8870 CEC to I/O enclosure connectivity (dual 8-core minimum for the first expansion frame). There is no difference between the previous DS8800 and the DS8870 I/O bays and HAs.

Figure 3-12 DS8870 I/O enclosure connections to CEC

A dual 8-core configuration includes two I/O enclosure pairs that are installed in the base frame, and two I/O enclosure pairs in the first expansion frame (if installed). It is possible to have a dual 8-core configuration with only two I/O enclosure pairs.

Each I/O enclosure includes the following attributes:

� 5U rack-mountable enclosure
� Six PCIe slots
� Redundant power and cooling


3.3.2 Host adapters

Attached host servers interact with software that is running on the complexes to access data on logical volumes. The servers manage all read and write requests to the logical volumes on the disk arrays. During write requests, the data is written to volatile memory on one CEC and preserved memory on the other CEC. The server then reports the write as complete before it is written to disk. This configuration provides much faster write performance than writing to the actual disk.

DS8870 host adapters
The DS8870 supports up to two FC or FICON HAs per I/O enclosure. Each port can be configured to operate as a Fibre Channel port or a FICON port. The DS8870 HA cards can have four or eight ports and support 2-Gbps, 4-Gbps, or 8-Gbps full-duplex data transfer over longwave or shortwave fibre links.

HA cards can be installed only in slots 1 and 4. Slots 2 and 5 are reserved and cannot be used. Figure 3-13 shows the locations for HA cards in the DS8870 I/O enclosure.

Figure 3-13 DS8870 I/O enclosure adapter layout

The DS8870 Model 961 with four I/O enclosures contains a maximum of eight 8-port host adapters (64 host adapter ports). With the first expansion rack, model 96E, it supports up to eight 8-port HAs adding up to 64 more host adapter ports, for a maximum of 128 host ports.

Optimum availability: To obtain optimum availability and performance, one HA card should be installed in each available I/O enclosure before a second HA card is installed in the same enclosure.


Figure 3-14 shows the preferred HA plug order for DS8870. HA positions and plugging order for the four I/O enclosures are the same for the base frame and the expansion frames with I/O enclosures. The chart shows the host adapter positions and plugging order for four I/O enclosures.

Figure 3-14 DS8870 HA plug order

Fibre Channel is a technology standard that allows data to be transferred from one node to another at high speeds and great distances (up to 10 km). The DS8870 uses the Fibre Channel protocol to transmit Small Computer System Interface (SCSI) traffic inside Fibre Channel frames. It also uses Fibre Channel to transmit FICON traffic, which uses Fibre Channel frames to carry System z I/O.

Each DS8870 Fibre Channel adapter offers four or eight 8 Gbps Fibre Channel ports. The cable connector that is required to attach to this adapter is an LC type. Each 8 Gbps port independently auto-negotiates to 2, 4, or 8 Gbps link speed. Each of the ports on a DS8870 host adapter can be independently configured for FCP or FICON. The type of port can be changed through the DS Storage Manager GUI or by using data storage command-line interface (DS CLI) commands. A port cannot be FICON and FCP simultaneously, but it can be changed as required.

The card itself is PCIe Gen 2. The card is driven by a new high-performance application-specific integrated circuit (ASIC). To ensure maximum data integrity, it supports metadata creation and checking. Each Fibre Channel port supports a maximum of 509 host login IDs and 1280 paths. This configuration allows large storage area networks (SANs) to be created.


Fibre Channel-supported servers
The current list of servers that are supported by Fibre Channel attachment is available at this website:

http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

Consult these documents regularly because they contain the most current information about server attachment support.

Fibre Channel distances
The following types of HA cards are available:

� Longwave
� Shortwave

With longwave, you can connect nodes at distances of up to 10 km (non-repeated). With shortwave, you are limited to a distance of 500 meters (non-repeated). All ports on each card must be longwave or shortwave. There is no intermixing of the two types within a card.

3.3.3 Device adapters

Device adapters are Redundant Array of Independent Disks (RAID) controllers that access the installed disk drives. Each processor complex accesses the disk subsystem by way of 4-port Fibre Channel Arbitrated Loop (FC-AL) DAs. The DS8870 can have up to 16 of these adapters (installed in pairs).

Each DS8870 device adapter (DA) card has four Fibre Channel ports, which are connected to the storage enclosures via two dual FC-AL loops. The DA cards connect the processor complexes through the PCIe interface, through the I/O enclosures, to the storage enclosures. The DA cards are responsible for managing, monitoring, and rebuilding the RAID arrays. The DA cards provide remarkable performance thanks to a high-function, high-performance ASIC. To ensure maximum data integrity, the adapter supports metadata creation and checking.

Each device adapter connects the complex to two separately switched Fibre Channel networks. Each network attaches to storage enclosures that each contain up to 24 disks. Each disk is attached to both networks. Whenever the device adapter connects to a disk, it uses a bridged connection to transfer data.


3.4 Disk subsystem

The disk subsystem consists of the following components:

� The installed disks, commonly referred to as disk drive modules (DDMs).

� Device adapter pairs (installed in the I/O enclosures).

� The device adapter pairs connect to Fibre Channel interface cards (FCICs) in the storage enclosures. This connection creates a switched Fibre Channel network to the installed disks.

We describe the disk subsystem components in the remainder of this section. For more information, see 4.6, “RAS on the disk system” on page 83.

3.4.1 Disk drives

The DS8870 supports the following disk types:

- SAS Enterprise disks (all disks that are spinning at 15 K or 10 K rpm)
- SAS solid-state (SSD) flash disks (all flash memory with no moving parts)
- SAS Nearline disks (all disks that are spinning at 7200 rpm)

For the DS8870, each disk includes two indicators. The green indicator shows ready status and disk activity when flashing. The amber indicator is used with light path diagnostic tests to allow for easy identification and replacement of a failed disk. All disks in the DS8870, except for those converted from non-full disk encryption DS8800, are full disk encryption (FDE). Encryption is optional on the DS8870, and requires an extra feature code.

Table 3-2 shows the DS8870 disk configurations.

Table 3-2   DS8870 drive types

   SAS FDE drives and SSD FDE drives     SAS drives with Encryption Standby Capacity
   146 GB 15 K rpm                       146 GB 15 K rpm
   300 GB 15 K rpm                       300 GB 15 K rpm
   600 GB 10 K rpm                       600 GB 10 K rpm
   1.2 TB 10 K rpm                       1.2 TB 10 K rpm
   4 TB 7.2 K rpm Nearline-SAS           4 TB 7.2 K rpm Nearline-SAS
   400 GB SSD                            -

For more information about solid-state flash drives, see 8.5.3, “DS8000 solid-state drive considerations” on page 239.

For more information about encrypted drives and inherent restrictions, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

3.4.2 Storage enclosures

The DS8870 data disks are installed in enclosures. These storage enclosures are installed in pairs.

DS8870 storage enclosures
Each DS8870 frame contains a maximum of 10, 14, or 20 storage enclosures, depending on whether it is a 961, 96E, or 96F frame model. Each DS8870 storage enclosure contains a total of 24 2.5-inch SFF disks or 12 3.5-inch LFF disks. Both enclosure types can contain dummy carriers. A dummy carrier is similar to a disk drive module (DDM) in appearance, but contains no electronics. The SFF and LFF enclosures are shown in Figure 3-15.

The DS8870 also supports solid-state flash drives. Flash drives can be installed in storage enclosures that are partially populated with 4, 8, or 16 disks, or fully populated with 24 disks. Solid-state flash drives are only available as SFF disks.

Each disk is an industry-standard serial-attached SCSI (SAS) disk. The disks can be the following 2.5-inch Small Form Factor or 3.5-inch Large Form Factor disks:

- SFF disks: This size allows 24 disk drives to be installed in each storage enclosure
- LFF disks: This size allows 12 disk drives to be installed in each storage enclosure

Each disk plugs into the storage enclosure midplane. The midplane provides physical connectivity for the DDMs, FCICs, and power supplies.

Each storage enclosure has a redundant pair of Fibre Channel interface cards (FCICs) that provides the interconnect logic for the disk access and a Storage Enclosure Services (SES) processor to provide all enclosure services. The FCIC has an 8 Gbps Fibre Channel (FC) switch with an FC to SAS conversion logic on each disk port. The FC and SAS conversion function provides speed aggregation on the FC interconnection ports.

Figure 3-15 shows DS8870 storage enclosures with SFF and LFF.

Figure 3-15 DS8870 storage enclosures for SFF and LFF

Important: If a DDM is not present, its slot must be occupied by a dummy carrier to maintain cooling integrity.

Switched Fibre Channel Arbitrated Loop (FC-AL) advantages
The DS8870 uses switched FC-AL technology to link the DA pairs and the disks. Switched FC-AL uses the standard FC-AL protocol, but the physical implementation is different. Switched FC-AL technology includes the following key features:

- Standard FC-AL communication protocol from DA to DDMs

- Direct point-to-point connections are established between DA and DDM

- Isolation capabilities in case of DDM failures, providing easy problem determination

- Predictive failure statistics

- Simplified expansion, where no cable rerouting is required when another storage enclosure is added

The DS8870 architecture uses dual redundant switched FC-AL access to each of the storage enclosures. This configuration features the following key benefits:

- Two independent switched FC-AL networks provide high performance connections to disk
- Four access paths are available to each DDM
- Each DA port operates independently
- Double the bandwidth over traditional FC-AL loop implementations

In Figure 3-16, each DDM is shown as attached to two separate FCICs with connections to the disk drive. By using two DAs, there are redundant data paths to each disk. Each DA can support two switched FC networks.

Figure 3-16 DS8870 storage enclosure (only 16 disks are shown for simplicity)

The DA cards and FCIC cards are directly connected via Fibre Channel Arbitrated Loops. When a connection is made between the device adapter and a disk, the storage enclosure translates from FC to SAS to the disk drives. This means that a mini-loop is created between the DA port and the disk.

Expansion
Storage enclosures and device adapters are added in pairs and disks are added in groups of 16. If storage enclosure pairs are fully populated, additional storage enclosures would need to be installed to add more disks. If the DA pair is fully populated, more pairs would also be required.

When only disks are added to an existing storage enclosure pair, they are added in groups of eight to each storage enclosure in the pair. The DDMs that are added must be of the same capacity and speed as those already in the pair.

When new storage enclosures are added, each storage enclosure pair is first added to the DA pair. Then, the disks are added to the new storage enclosure pairs in the same manner as adding disks to an existing pair.

If a new DA pair is being added, it is installed in the appropriate I/O enclosures first, followed by the storage enclosures and DDMs.

Arrays and spares
Array sites that contain eight DDMs are created as the DDMs are installed. During the configuration process, arrays are created from an array site. You can create a RAID 5, RAID 6, or RAID 10 array based on the protection and performance requirements for the array.

Remember the following RAID support guidelines:

- SFF hard disk drives support RAID 5, RAID 6, and RAID 10.

- SSD flash drives support only RAID 5. RAID 6 and RAID 10 are supported only as an RPQ.

- LFF 4-TB Nearline-SAS disks support only RAID 6.

Depending on the RAID type, the first arrays that are created on each DA pair include drives that are used as spares until the minimum number of required spares per DA pair is reached.

There is a minimum of four spares per DA pair: four spares for the first capacity and speed installed, and at least two spares for disks of any other capacity and speed installed on the same DA pair. This number can increase depending on the drive intermix. If all disks are the same capacity and speed, four spares are created.
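As a sketch of how array creation might look in the DS CLI (the storage image ID, array site, and RAID type are examples only; the actual site IDs and supported RAID levels depend on the installed drives):

   dscli> lsarraysite -dev IBM.2107-75XY123
   dscli> mkarray -dev IBM.2107-75XY123 -raidtype 5 -arsite S1
   dscli> lsarray -dev IBM.2107-75XY123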

Array across loops
Figure 3-17 on page 54 shows the DA pair layout. One DA pair creates two dual switched loops.

Figure 3-17 DS8870 switched loop layout (only eight disks per enclosure are shown for simplicity)

For the DS8870, the upper enclosure connects to one dual loop and the lower enclosure connects to the other dual loop in a storage enclosure pair.

Each enclosure places two FC switches onto each dual loop. Disks are installed in groups of 16. Half of the new DDMs go into one storage enclosure and the other half is placed into the other storage enclosure of the pair.

An array site consists of eight disks. Four disks are taken from one enclosure in the storage enclosure pair, and four are taken from the other enclosure in the pair. When a RAID array is created on the array site, half of the array is in each storage enclosure, doubling the bandwidth to the disk in a single array.

One storage enclosure of the pair is on one FC switched loop, and the other storage enclosure of the pair is on a second switched loop. This configuration splits the array across two loops, which is known as array across loops (AAL), as shown in Figure 3-18 on page 55. Only 16 DDMs are shown in Figure 3-18 on page 55, eight in each storage enclosure. When fully populated, there are 16 or 24 DDMs in each enclosure.

Figure 3-18 shows the layout of the array sites. Array site 1 in green (the darker disks) uses the four left DDMs in each enclosure. Array site 2 in yellow (the lighter disks), uses the four right DDMs in each enclosure. When an array is created on each array site, half of the array is placed on each loop.

Figure 3-18 Array across loop

Array across loops benefits
AAL is used to increase performance. When the device adapter writes a stripe of data to a RAID 5 array, it sends half of the write to each switched loop. By splitting the workload in this manner, each loop is worked evenly. This configuration aggregates the bandwidth of the two loops and improves performance. If RAID 10 is used, two RAID 0 arrays are created. Each loop hosts one RAID 0 array. When servicing read I/O, half of the reads can be sent to each loop, which improves performance by balancing the workload across loops.

For more information about SSDs, see 8.5.3, “DS8000 solid-state drive considerations” on page 239.

For more information about encrypted drives and inherent restrictions, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

3.5 Power and cooling

The DS8870 power and cooling system is highly redundant, the components of which are described in this section. For more information, see 4.7, “RAS on the power subsystem” on page 91.

Rack power control cards
The DS8870 features a pair of redundant new rack power control (RPC) cards that are used to control certain aspects of power sequencing throughout the DS8870. These cards are attached to the FSP card in each processor complex, which allows them to communicate with the HMC and the storage facility. The RPCs also communicate with each DC-UPS.

Power supply
To increase power efficiency, the power system of the DS8870 was redesigned. The primary power supply (PPS) of previous models was replaced with DC-UPS technology.

The DC-UPS provides rectified ac power distribution and power switching for redundancy. The rack features two ac power cords. Each cord feeds a single DC-UPS. The DC-UPS distributes rectified line ac. If ac is not present at the input line, the output is switched to rectified ac from the partner DC-UPS. If no ac input is active to either DC-UPS in the frame, the DC-UPSs switch to battery power.

Figure 3-19 shows the front and rear view of the DC-UPS.

Figure 3-19 DC-UPS front and rear view

The line cord must be ordered specifically for the input voltage to meet specific requirements. The line cord connector requirements vary widely throughout the world. The line cord might not include the suitable connector for the country in which the system will be installed. In this case, the connector must be replaced by an electrician after the machine is delivered. Previous model DS8000 line cords are not compatible. For more information, see the IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

There are two redundant DC-UPSs in each frame of the DS8870. Each DC-UPS features internal fans to supply cooling for that power supply.

The optional extended power line disturbance (ePLD) feature allows the system to run for up to 50 seconds without line power and then gracefully shuts down the system. If ePLD is not installed, the system shuts down 4 seconds after a power loss. For more information about why this feature might be necessary for your installation, see Chapter 4, “RAS on the IBM DS8870” on page 61.

The DC-UPS supplies output power to six power distribution units (PDUs). Each PDU is supplied from both DC-UPSs in the frame in which they are installed, for redundancy.

In the base frame, the PDUs supply power to the processor complexes, the I/O enclosures, and the storage enclosures. In the first expansion frame, the PDUs supply power to the I/O enclosures and the storage enclosures. In the second expansion frame, the PDUs supply power to the storage enclosures because there are no I/O enclosures or processor complexes in these frames.

Each storage enclosure includes two power supply units (PSUs). The storage enclosure PSUs are connected to two separate PDUs, which in turn are connected to separate DC-UPSs for redundancy.

Figure 3-20 shows the DS8870 base frame PDUs.

Figure 3-20 DS8870 base frame power distribution units

Processor and I/O enclosure power supplies
Each processor complex and I/O enclosure features dual redundant power supplies to convert input voltage into the required voltages for that enclosure or complex. Each enclosure also has its own cooling fans.

Storage enclosure power and cooling
For DS8870, the storage enclosures feature two PSUs for each storage enclosure. These PSUs draw power from the DC-UPS via the PDUs. There are cooling fans in each PSU.

These fans draw cooling air through the front of each storage enclosure and exhaust air out the rear of the frame. Figure 3-15 on page 51 shows the DS8870 storage enclosure PSUs.

Power Junction Assembly
The Power Junction Assembly (PJA) is a component of the DS8870 power subsystem. Dual PJAs provide redundant power to the HMC, Ethernet switches, and HMC tray fans.

Battery service module set
A single battery module is called a battery service module (BSM). A group of four BSMs makes up a BSM set.

The BSM helps protect data in the event of a loss of external power. If there is a complete loss of ac input power, the batteries are used to maintain power to the processor complexes and I/O enclosures for sufficient time to allow the contents of NVS memory (modified data that is not yet destaged to disk from cache) to be written to the disk drives internal to the processor complexes (not the storage DDMs).

The following types of BSMs are used:

- There is one primary BSM. The primary BSM is the only BSM with an electrical connection to the DSU.

- There are three secondary BSMs.

3.6 Management console network

All base frames ship with one HMC and two Ethernet switches. A notebook HMC (as shown in Figure 3-21) is shipped with a DS8870.

DS8870 logical configuration creation and changes are performed by the storage administrator by using the GUI or DS CLI. The changes are passed to the storage system via the HMC.
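For example, a DS CLI session is typically opened against the HMC IP address (the address, user ID, and password shown are placeholders); every configuration command issued in the session is then forwarded by the HMC to the storage system:

   dscli -hmc1 10.10.10.1 -user admin -passwd <password>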

For more information about the HMC, see Chapter 9, “DS8870 HMC planning and setup” on page 241.

Figure 3-21 Mobile computer HMC

Efficient air flow: The DS8870 is designed for more efficient air flow and can be installed in hot and cold aisle configurations.

3.6.1 Ethernet switches

The DS8870 base frame has two 8-port Ethernet switches. Two switches are supplied to allow the creation of a fully redundant private management network. Each processor complex includes connections to each switch to allow each server to access both private networks. These networks cannot be accessed externally, and no external connections are allowed. External client network connection to the DS8870 system is through a dedicated connection to the HMC. The switches receive power from the PJAs and do not require separate power outlets. The ports on these switches are shown in Figure 3-22.

Figure 3-22 Ethernet switch ports

For more information, see 4.5, “RAS on the HMC” on page 80, and 9.1, “Hardware Management Console overview” on page 242.

Important: The DS8870 HMC supports IPv6, the next generation of the Internet Protocol. The HMC continues to support the IPv4 standard and mixed IPv4 and IPv6 environments.

Important: The internal Ethernet switches that are shown in Figure 3-22 are for the DS8870 private network only. No client network connection should ever be made directly to these internal switches.

Chapter 4. RAS on the IBM DS8870

This chapter describes the reliability, availability, and serviceability (RAS) characteristics of the IBM DS8000 family of products. Several changes and enhancements were introduced with the DS8870. These changes or enhancements were in the central electronics complex (CEC) and power subsystem and are described in this chapter.

The following topics are covered:

- Names and terms for the DS8870
- DS8870 Processor Complex RAS features
- CEC failover and failback
- Data flow in DS8870
- RAS on the HMC
- RAS on the disk system
- RAS on the power subsystem
- Other features

4.1 Names and terms for the DS8870

It is important to understand the naming conventions that are used to describe DS8000 components and constructs to fully appreciate the discussion of RAS concepts. Although most terms were introduced in previous chapters of this book, they are repeated and summarized here because the rest of this chapter uses these terms frequently.

Storage complex
The term storage complex describes a group of DS8000s (all models) that are managed by a single Hardware Management Console (HMC). All DS8000 systems in a storage complex must run the same level of microcode. A storage complex can (and often does) consist of a single DS8000 storage unit (base frame plus other installed expansion frames) and a primary HMC; optionally, a secondary HMC can also manage the storage complex.

Storage unit
The term storage unit describes a single DS8000 (base frame plus other installed expansion frames). If your organization has one DS8000, then you have a single storage complex that contains a single storage unit.

Base frame
The DS8870 base frame is available as a single model type (961). It is a complete storage unit that is contained within a single base frame. To increase the storage capacity, expansion frames can be added.

A base frame contains the following components:

- Power and cooling components: direct current uninterruptible power supply (DC-UPS)

- Power control cards: Rack Power Control (RPC) and system power control network (SPCN)

- Two POWER7+ 740 CECs

- Two or four I/O enclosures that contain host adapters (HA) and device adapters (DA)

- Two Gigabit Ethernet switches for the internal networks

- Hardware Management Console (HMC)

- Up to five disk enclosure pairs (10 enclosures total) for storage disks

Expansion frame
The 96E model type is used for expansion frames in the DS8870.

Expansion frames can only be added to 8-way and 16-way systems. Up to three expansion frames can be added to the DS8870 base frame. Business class 2-core and 4-core systems and enterprise class 2-core and 4-core systems cannot have expansion frames. However, they can be upgraded non-disruptively to 8-core or 16-core systems to accommodate expansion frames.

Expansion frames of previous DS8000 generations are not supported by the DS8870 due to the new power subsystem.

All expansion frames contain the power and cooling components that are needed to run the frame. The first expansion frame contains storage disks and I/O enclosures. Subsequent expansion frames (second or third expansion frame in the overall system) contain only DC-UPS and storage enclosures for disks. Adding an expansion frame is a concurrent operation for the DS8870.

CEC (processor complex), storage server
In the DS8870, a CEC consists of an IBM POWER server that is built on the POWER7+ processor. The DS8870 contains two CECs as a redundant pair so that if either fails, the DS8870 fails over to the remaining CEC and continues to run the storage unit. Each CEC can have up to 512 GB of memory and up to 16 active processor cores. A CEC is also referred to as a processor complex. The CECs are identified as CEC 0 and CEC 1.

Each CEC hosts one logical partition that runs the AIX V7.1 operating system and storage-specific microcode; this partition is called a storage server. The storage servers are identified as Server 0 and Server 1, matching CEC 0 and CEC 1 respectively.

Hardware Management Console
The Hardware Management Console (HMC) is the management console for the DS8870 storage unit. With connectivity to the CECs, the client network, and other management systems, the HMC becomes the focal point for most operations on the DS8870. All storage configuration, user controlled tasks, and service actions are managed through the HMC. Although many other IBM products use an HMC, microcode makes the DS8000 HMC unique to the specific model of DS8000.

4.2 DS8870 Processor Complex RAS features

Reliability, availability, and serviceability (RAS) are important concepts in the design of the IBM DS8870. Hardware features, software features, design considerations, and operational guidelines all contribute to make the DS8870 reliable. At the heart of the DS8870 is a pair of POWER7+ processor-based servers. These servers (CECs) share the load of receiving and moving data between the attached hosts and the disk arrays. However, they are also redundant so that if either CEC fails, the system fails over to the remaining CEC and continues to run without any host interruption. This section looks at the RAS features of the CECs, including the hardware, the operating system, and the interconnections.

4.2.1 POWER7+ Hypervisor

The POWER7+ Hypervisor (PHYP) is a component of system firmware that is always active, regardless of the system configuration, even when disconnected from the Hardware Management Console. PHYP runs on the FSP and requires the FSP processor and memory to support the resource assignments to the logical partition on the server. It operates as a hidden partition, with no CEC processor resources assigned to it, but it does allocate a small amount of memory from the partition.

The Hypervisor provides the following capabilities:

- Reserved memory partitions set aside a portion of memory to use as cache and a portion to use as non-volatile storage (NVS).

- Preserved memory support allows the contents of the NVS and cache memory areas to be protected in the event of a server reboot.

- I/O enclosure initialization, power control, and slot power control, which prevents a CEC that is rebooting from initializing an I/O adapter that is in use by another server.

- Provides automatic reboot of a frozen partition. The Hypervisor also monitors the service processor and performs a reset or reload if it detects the loss of the service processor. It notifies the operating system if the problem is not corrected.

The AIX operating system uses PHYP services to manage the translation control entry (TCE) tables. The operating system communicates the wanted I/O bus address to logical mapping, and the Hypervisor returns the I/O bus address to physical mapping within the specific TCE table. The Hypervisor needs a dedicated memory region for the TCE tables to translate the I/O address to the partition memory address. The Hypervisor then can monitor direct memory access (DMA) transfers to the PCIe adapters.

4.2.2 POWER7+ processor

The IBM POWER7+ processor implements 64-bit IBM Power Architecture® technology and represents a leap forward in technology achievement and associated computing capability. The multi-core architecture of the POWER7+ processor modules is matched with innovation across a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS.

Areas of innovation, enhancement, and consolidation
The POWER7+ processor represents an important performance increase in comparison with previous generations. The POWER7+ processor features the following areas of innovation, enhancement, and consolidation:

- On-chip L3 cache that is implemented in embedded dynamic random access memory (eDRAM), which improves latency and bandwidth. There is lower energy consumption and a smaller physical footprint.

- Cache hierarchy and component innovation.

- Advances in memory subsystem.

- Advances in off-chip signaling.

- The simultaneous multithreading mode, SMT4, permits four instruction threads to execute simultaneously in each POWER7+ processor core. SMT4 mode also enables the POWER7+ processor to maximize the throughput of the processor core by offering an increase in core efficiency.

- The POWER7+ processor features intelligent threads that can vary based on the workload demand. The system automatically selects whether a workload benefits from dedicating as much capability as possible to a single thread of work, or if the workload benefits more from having capability spread across two or four threads of work. With more threads, the POWER7+ processor can deliver more total capacity as more tasks are accomplished in parallel. With fewer threads, those workloads that need fast individual tasks can get the performance that they need for maximum benefit.

The remainder of this section describes the RAS features of POWER7+ processor. These features and abilities apply to the DS8870.

POWER7+ RAS features
The following sections describe the RAS leadership features of IBM POWER7 Systems™.

POWER7+ processor instruction retry
As with previous generations, the POWER7+ processor can perform processor instruction retry and alternate processor recovery for a number of core-related faults. This ability significantly reduces exposure to permanent and intermittent errors in the processor core.

With the instruction retry function, when an error is encountered in the core in caches and certain logic functions, the POWER7+ processor first automatically retries the instruction. If the source of the error was truly transient, the instruction succeeds and the system can continue normal operation.

POWER7+ alternate processor retry
Hard failures are more difficult because permanent errors are replicated each time that the instruction is repeated. Retrying the instruction does not help in this situation because the instruction continues to fail. As in IBM POWER6+™ and POWER7, POWER7+ processors can extract the failing instruction from the faulty core and retry it elsewhere in the system for a number of faults. The failing core is then dynamically unconfigured and scheduled for replacement. The entire process is transparent to the partition that owns the failing instruction. Systems with POWER7+ processors are designed to avoid a full system outage.

POWER7+ cache protection
Processor instruction retry and alternate processor retry, as described previously in this chapter, protect processor and data caches. L1 cache is divided into sets. The POWER7+ processor can deallocate all but one set before a Processor Instruction Retry is performed. In addition, faults in the Segment Lookaside Buffer (SLB) array are recoverable by the IBM POWER Hypervisor™. The SLB is used in the core to perform address translation calculations.

The L2 and L3 caches in the POWER7+ processor are protected with double-bit detect single-bit correct error correction code (ECC). Single-bit errors are corrected before they are forwarded to the processor, and are then written back to L2 or L3.

In addition, the caches maintain a cache line delete capability. A threshold of correctable errors that is detected on a cache line can result in the data in the cache line being purged and the cache line being removed from further operation without requiring a reboot. An ECC uncorrectable error that is detected in the cache can also trigger a purge and delete of the cache line. This action results in no loss of operation because an unmodified copy of the data can be held in system memory to reload the cache line from main memory. Modified data is handled through special uncorrectable error handling. L2 and L3 deleted cache lines are marked for persistent deconfiguration on subsequent system reboots until they can be replaced.

POWER7+ single processor checkstopping
The POWER7+ processor provides single core checkstopping. In earlier designs, a processor checkstop would result in a system checkstop. This feature, which is included in the POWER7+ processor-based CEC, has the ability to contain most processor checkstops to the partition that was using the processor at the time. This feature significantly reduces the probability of any one processor affecting total CEC availability.

POWER7+ First Failure Data Capture
First-failure data capture (FFDC) is an error isolation technique. FFDC ensures that when a fault is detected in a system through error checkers or other types of detection methods, the root cause of the fault is captured without the need to re-create the problem or run an extended tracing or diagnostics program.

For most faults, a good FFDC design means that the root cause is detected automatically without intervention by a service representative. Pertinent error data that is related to the fault is captured and saved for analysis. In hardware, FFDC data is collected from the fault isolation registers and the associated logic. In firmware, this data consists of return codes, function calls, and so on.

FFDC check stations are carefully positioned within the server logic and data paths to ensure that potential errors can be quickly identified and accurately tracked to a field-replaceable unit (FRU).

This proactive diagnostic strategy is a significant improvement over the classic, less accurate reboot and diagnose service approaches.

Redundant components
High opportunity components (those components that most affect system availability) are protected with redundancy and the ability to be repaired concurrently.

The use of the following redundant components allows the system to remain operational:

- POWER7+ cores, which include redundant bits in L1 instruction and data caches, L2 caches, and L2 and L3 directories

- POWER7+ 740 CEC main memory, dual inline memory modules (DIMMs), which use an innovative ECC algorithm from IBM research that improves single-bit error correction and memory failure identification

- Redundant cooling

- Redundant power supplies

- Redundant 12X loops to I/O subsystem

Self-healing
For a system to be self-healing, it must be able to recover from a failing component by detecting and isolating the failed component. The system then should be able to take the component offline, fix or isolate it, and then reintroduce the fixed or replaced component into service without any application disruption. Self-healing technology includes the following examples:

- Bit steering to redundant memory in the event of a failed memory module to keep the server operational.

- Chipkill is an enhancement that enables a system to sustain the failure of an entire DRAM chip. An ECC word uses 18 DRAM chips from two DIMM pairs, and a failure on any of the DRAM chips can be fully recovered by the ECC algorithm. The system can continue indefinitely in this state with no performance degradation until the failed DIMM can be replaced.

- Single-bit error correction by using ECC without reaching error thresholds for main, L2, and L3 cache memory.

- L2 and L3 cache line delete capability, which provides more self-healing.

- ECC extended to inter-chip connections on fabric and processor bus.

- Hardware scrubbing is a method that is used to address intermittent errors. IBM POWER7+ processor-based systems periodically address all memory locations. Any memory locations with a correctable error are rewritten with the correct data.

- Dynamic processor deallocation.

Memory reliability, fault tolerance, and integrity
POWER7+ uses ECC circuitry for system memory to correct single-bit memory failures. In POWER7+, an ECC word consists of 72 bytes of data. Of these bytes, 64 are used to hold application data. The remaining 8 bytes are used to hold check bits and more information about the ECC word. This innovative ECC algorithm from IBM research works on DIMM pairs on a rank basis. With this ECC code, the system can dynamically recover from an entire DRAM failure (Chipkill), and it can also correct an error even if another symbol (a byte, which is accessed by a 2-bit line pair) experiences a fault. This feature is an improvement from the Double Error Detection/Single Error Correction ECC implementation that is found on the POWER6+ processor-based systems.

The memory DIMMs also use hardware scrubbing and thresholding to determine when memory modules within each bank of memory should be used to replace modules that exceeded their threshold of error count (dynamic bit-steering). Hardware scrubbing is the process of reading the contents of the memory during idle time and checking and correcting any single-bit errors that accumulated by passing the data through the ECC logic. This function is a hardware function on the memory controller chip and does not influence normal system memory performance.

Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains operational with full resources and there is no external administrative intervention required.

Mutual surveillance
The service processor monitors the operation of the POWER Hypervisor firmware during the boot process and monitors for loss of control during system operation. It also allows the POWER Hypervisor to monitor service processor activity. The service processor can take appropriate action (including calling for service) when it detects that the POWER Hypervisor firmware lost control. The POWER Hypervisor also can request a service processor repair action, if necessary.

4.2.3 AIX operating system

Each CEC is a server that runs the IBM AIX Version 7.1 operating system (OS). This OS is the IBM well-proven, scalable, and open standards-based UNIX OS. This version of AIX includes support for Failure Recovery Routines (FRRs).

For more information about the features of the IBM AIX operating system, see this website:

http://www.ibm.com/systems/power/software/aix/index.html

4.2.4 Cross cluster communication

In the DS8870, the I/O enclosures are wired point-to-point to each CEC by using a PCI Express (PCIe) architecture. The DS8870 uses the PCIe paths across the I/O enclosures to provide the cross cluster (XC) communication between the CECs. This configuration means that there is no separate path between XC communications and I/O traffic, which simplifies the topology. During normal operations, XC communication traffic uses a low portion of the overall available PCIe bandwidth (less than 1.7 percent), so it has a negligible effect on I/O performance. Compared to the dedicated RIO interface that was used for cross cluster communication in previous generations, using the PCIe fabric provides much greater redundancy because any other I/O bay can be used for cross cluster communication if the current path fails.

Figure 4-1 on page 68 shows the redundant PCIe fabric design for cross cluster communication in the DS8870. If the I/O bay that is used as the cross cluster communication path fails, the system automatically uses the next I/O bay for cross cluster communication.

Figure 4-1 DS8870 Cross Cluster communication through the PCIe fabric and I/O enclosures

4.2.5 Environmental monitoring

Environmental monitoring that is related to power, fans, and temperature is performed by the FSP over the system power control network (SPCN). Environmental critical and non-critical conditions generate emergency power-off warning (EPOW) events. Critical events (for example, a complete alternating current (ac) power loss) trigger appropriate signals from hardware to initiate emergency shutdown to prevent data loss without operating system or firmware involvement. Non-critical environmental events are logged and reported by using Event Scan.

Temperature monitoring also is performed. If the ambient temperature rises above a preset operating range, the rotation speed of the cooling fans is increased. Temperature monitoring also warns the internal microcode of potential environment-related problems. An orderly system shutdown, which is accompanied by a service call to IBM, occurs when the operating temperature exceeds a critical level.

Voltage monitoring provides a warning and an orderly system shutdown when the voltage is out of operational specification.

4.2.6 Resource deconfiguration

If recoverable errors exceed threshold limits, resources can be unconfigured and the system remains operational. This ability allows deferred maintenance at a convenient time. Dynamic deconfiguration of potentially failing components is nondisruptive, which allows the system to continue to run. Persistent deconfiguration occurs when a failed component is detected. It is then deactivated at a subsequent reboot.

Dynamic deconfiguration functions include the following components:

- Processor
- L3 cache lines
- Partial L2 cache deconfiguration
- PCIe bus and slots

Persistent deconfiguration functions include the following components:

- Processor
- Memory
- Unconfigure or bypass failing I/O adapters
- L2 cache

Following a hardware error that is flagged by the service processor, the subsequent reboot of the server invokes extended diagnostic testing. If a processor or memory is marked for persistent deconfiguration, the boot process attempts to proceed to completion with the faulty device automatically unconfigured. Failing I/O adapters are unconfigured or bypassed during the boot process.

4.3 CEC failover and failback

To understand the process of CEC failover and failback, the logical construction of the DS8870 must be reviewed. For more information, see Chapter 5, “Virtualization concepts” on page 99.

Creating logical volumes on the DS8000 works through the following constructs:

- Storage DDMs are installed into predefined array sites.

- Array sites are used to form arrays, which are structured as Redundant Array of Independent Disks (RAID) 5, RAID 6, or RAID 10. (Restrictions apply for solid-state flash disks. For more information, see “RAID configurations” on page 83.)

- RAID arrays become members of a rank.

- Each rank becomes a member of an extent pool. Each extent pool has an affinity to either server 0 or server 1, also referred to as logical partition (LPAR) 0 or LPAR 1. Each extent pool is defined as open system fixed block (FB) or System z count key data (CKD).

- Within each extent pool, we create logical volumes. For open systems, these logical volumes are called logical unit numbers (LUNs). For System z, these logical volumes are called volumes. LUNs are used for Small Computer System Interface (SCSI) addressing. Each logical volume belongs to a logical subsystem (LSS).

For open systems, the LSS membership is only significant for Copy Services. But for System z, the LSS is the logical control unit (LCU), which equates to a 3990 (a System z disk controller, which the DS8000 emulates). It is important to remember that LSSs that have an even identifying number have an affinity with LPAR0. LSSs that have an odd identifying number have an affinity with LPAR1.
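As a hedged illustration of this affinity, the volume ID that is specified when a volume is created encodes the LSS in its first two hexadecimal digits. In the following sketch (the storage image ID, extent pool IDs, capacities, nicknames, and volume IDs are placeholders), the first volume is created in even LSS 10 from a server 0 extent pool, and the second in odd LSS 11 from a server 1 extent pool:

   dscli> mkfbvol -dev IBM.2107-75XY123 -extpool P0 -cap 100 -name even_vol 1000
   dscli> mkfbvol -dev IBM.2107-75XY123 -extpool P1 -cap 100 -name odd_vol 1100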

4.3.1 Dual cluster operation and data protection

Regarding processing host data, one of the basic premises of RAS is that the DS8000 always tries to maintain two copies of the data while it is moving through the storage system. The LPARs have two areas of their primary memory that are used for holding host data: cache memory and non-volatile storage (NVS). NVS is 1/32 of system memory, with a minimum of 0.5 GB per server; for example, a CEC with 512 GB of memory has 16 GB of NVS. NVS contains write data until the data is destaged from cache to disk. NVS data is written to the CEC hard disk drives in the case of an emergency shutdown due to a complete loss of input ac power.

Important: For the previous generations of DS8000, the maximum available NVS was 6 GB per server. For the DS8870, the maximum was increased to 16 GB per server.

When a write is issued to a volume and both the LPARs are operational, the write data is placed into the cache memory of the owning LPAR as well as into the NVS of the other storage server. The NVS copy of the write data is accessed only if a write failure should occur and the cache memory is empty or possibly invalid. Otherwise, it is discarded after the destage from cache to disk is complete.

The location of write data with both CECs operational is shown in Figure 4-2.

Figure 4-2 Write data when CECs are dual operational

Figure 4-2 shows how the cache memory of server 0 in CEC0 is used for all logical volumes that are members of the even LSSs. Likewise, the cache memory of LPAR 1 in CEC1 supports all logical volumes that are members of odd LSSs. For every write that is placed into cache, a copy is placed into the NVS memory that is in the alternate storage server. Thus, the following normal flow of data for a write when both CECs are operational is used:

1. Data is written to cache memory in the owning LPAR. At the same time, data is written to NVS memory of the alternate LPAR.

2. The write operation is reported to the attached host as completed.

3. The write data is destaged from the cache memory to a disk array.

4. The write data is discarded from the NVS memory of the alternate storage server.

Under normal operation, both DS8000 storage servers are actively processing I/O requests. The following sections describe the failover and failback procedures that occur between the CECs when an abnormal condition affected one of them.

4.3.2 Failover

In the example that is shown in Figure 4-3, CEC0 failed. CEC1 needs to take over all of the CEC0 functions. Because the RAID arrays are on Fibre Channel Loops that reach both CECs, they can still be accessed through the device adapters that are owned by CEC1. For more information about the Fibre Channel Loops, see 4.6.1 “RAID configurations” on page 83.

Figure 4-3 CEC0 failover to CEC1

At the moment of failure, server 1 in CEC1 includes a backup copy of the server 0 write data in its own NVS. From a data integrity perspective, the concern is for the backup copy of the server 1 write data, which was in the NVS of server 0 in CEC0 when it failed. Because the DS8870 now has only one copy of that data (active in the cache memory of server 1 in CEC1), it performs the following steps:

1. Server 1 destages the contents of its NVS (the server 0 write data) to the disk subsystem. However, before the actual destage and at the beginning of the failover, the following tasks occur:

a. The working storage server starts by preserving the data in cache that was backed by the failed CEC NVS. If a reboot of the single working CEC occurs before the cache data is destaged, the write data remains available for subsequent destaging.

b. The existing data in cache (for which there is still only a single volatile copy) is added to the NVS so that it remains available if the attempt to destage fails or a server reboot occurs. This functionality is limited so that it cannot consume more than 85% of NVS space.

2. The NVS and cache of server 1 are divided in two, half for the odd LSSs and half for the even LSSs.

3. Server 1 begins processing the I/O for all the LSSs, taking over for server 0.

This entire process is known as a failover. After failover, the DS8000 operates as shown in Figure 4-3 on page 71. Server 1 now owns all the LSSs, which means all reads and writes are serviced by server 1. The NVS inside server 1 is now used for odd and even LSSs. The entire failover process should be transparent to the attached hosts.

The DS8000 can continue to operate in this state indefinitely. There is no loss of functionality, but there is a loss of redundancy, and performance is decreased because of the reduced system cache. Any critical failure in the working CEC renders the DS8000 unable to serve I/O for the arrays. Because of this failure, the IBM support team should begin work immediately to determine the scope of the failure and to build an action plan to restore the failed CEC to an operational state.

4.3.3 Failback

The failback process always begins automatically when the DS8000 microcode determines that the failed CEC resumed to an operational state. If the failure was relatively minor and recoverable by the operating system or DS8000 microcode, the resume action is initiated by the software. If there was a service action with hardware components replaced, the IBM service representative or remote support engineer resumes the failed CEC.

For this example in which CEC0 failed, we should now assume that CEC0 was repaired and resumed. The failback begins with server 1 in CEC1 starting to use the NVS in server 0 in CEC0 again, and the ownership of the even LSSs being transferred back to server 0. Normal I/O processing, with both CECs operational, then resumes. Just like the failover process, the failback process is transparent to the attached hosts.

In general, recovery actions (failover or failback) on the DS8000 do not affect I/O operation latency by more than 15 seconds. With certain limitations on configurations and advanced functions, this effect to latency is typically limited to 8 seconds or less.

If you have real-time response requirements in this area, contact IBM to determine the latest information about how to manage your storage to meet your requirements.

4.3.4 NVS and power outages

During normal operation, the DS8000 preserves write data by storing a duplicate in the NVS of the alternate CEC. To ensure that this write data is not lost because of a power event, the DS8870 DC-UPSs contain Battery Service Module (BSM) sets. The single purpose of the BSM sets is to provide continuity of power during ac power loss, long enough for the CECs to write modified data to their internal hard disks. The design is to not move the data from NVS to the disk arrays. Instead, each CEC features dual internal disks that are available to store the contents of NVS.

Should any frame lose ac input (known as wall power or line power) to both DC-UPSs, the CECs are informed that they are running on batteries and, in case of continuous power unavailability for 4 seconds, they begin a shutdown procedure. This is known as an on-battery condition. It is during this emergency shutdown that the entire contents of NVS memory are written to the CEC hard disk drives so that the write data will be available for destaging after the CECs are operational again.

Important: Unless the extended power line disturbance (ePLD) feature was purchased, the BSM sets guarantee storage disk operation for up to 4 seconds in case of a power outage. After this period, the BSM sets keep the CECs and I/O enclosures operable long enough to write NVS contents to internal CEC hard disks. The ePLD feature can be ordered so that disk operation can be maintained for 50 seconds after a power disruption.

If power is lost to a single DC-UPS, the partner DC-UPS provides power to this UPS, and the output power to other DS8870 components remains redundant.

If all the batteries were to fail (which is unlikely because the batteries are in an N+1 redundant configuration), the DS8870 would lose this NVS protection. The DS8870 takes all CECs offline because reliability and availability of host data are compromised.

The following sections describe the steps that are used in the event of dual ac loss to the entire frame.

Power loss
When an on-battery condition shutdown begins, the following events occur:

1. All host adapter I/O is blocked.

2. Each CEC begins copying its NVS data to internal disk (not the storage DDMs). For each CEC, two copies are made of the NVS data. This process is referred to as firehose dump (FHD).

3. When the copy process is complete, each CEC shuts down.

4. When the shutdown of each CEC is complete, the DS8000 is powered down.

Power restored
When power is restored, the DS8870 needs to be manually powered on, unless the remote power control mode is set to auto.

After the DS8870 is powered on, the following events occur:

1. When the CECs power on, PHYP loads and the power-on self-test (POST) runs.

2. Each CEC begins the IML.

3. At an early stage in the IML process, the CEC detects NVS data on its internal disks and begins to restore the data to destage it to the storage DDMs.

4. The IML pauses until the battery units reach a certain level of charge; then the CECs come online and begin to process host I/O. This prevents a subsequent loss of input power from resulting in a loss of data.

Battery charging
In many cases, sufficient charging occurs during the power-on self test, operating system boot, and microcode boot. However, if a complete discharge of the batteries occurred (which can happen if multiple power outages occur in a short period), recharging could take up to two hours.

Important: The CECs do not come online (process host I/O) until the batteries are sufficiently charged to ensure that at least one complete FHD is possible.

Note: Be careful before deciding to set the remote power control mode to auto. If the remote power control mode is set to auto, after input power is lost, the DS8870 is powered on automatically as soon as external power becomes available again. For more information about how to perform power control on DS8870, see the IBM System Storage DS8000 Information Center at the following website:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp

4.4 Data flow in DS8870

One of the significant hardware changes for the DS8700 and DS8800 generation was in how host I/O was brought into the storage unit. The DS8870 continues this design for the I/O enclosures, which house the device adapters and host adapters. Connectivity between the CEC and the I/O enclosures was also improved by using the many strengths of the PCI Express architecture.

For more information, see 3.2.2 “Peripheral Component Interconnect Express adapters” on page 43.

4.4.1 I/O enclosures

The DS8870 I/O enclosure is a design that was introduced in the DS8700. The older DS8000 I/O enclosure connected by the RIO I/O fabric consisted of multiple parts that required removal of the bay and disassembly for service. In later generations, the switch card can be replaced without removing the I/O adapters, which reduces the time and effort that is needed to service the I/O enclosure. As shown in Figure 4-1 on page 68, each CEC is connected to all four I/O enclosures (base frame) or all eight I/O enclosures (expansion frame installed) through PCI Express cables. This configuration makes each I/O enclosure an extension of each server.

The DS8870 I/O enclosures use adapters with PCI Express connections. The I/O adapters in the I/O enclosures are replaceable concurrently. Each slot can be independently powered off for concurrent replacement of a failed adapter, installation of a new adapter, or removal of an old one.

In addition, each I/O enclosure has N+1 power and cooling in the form of two power supplies with integrated fans. The power supplies can be concurrently replaced and a single power supply can provide power to the whole I/O enclosure.

4.4.2 Host connections

Each DS8870 Fibre Channel host adapter provides four or eight ports for connection directly to a host or to a Fibre Channel storage area network (SAN) switch.

Single or multiple path
In DS8870, the host adapters are shared between the CECs. To illustrate this concept, Figure 4-4 on page 75 shows a potential machine configuration. In this example, two I/O enclosures are shown. Each I/O enclosure has two Fibre Channel host adapters. If a host has only a single path to a DS8870, as shown in Figure 4-4 on page 75, it is able to access volumes that belong to all LSSs because the host adapter (HA) directs the I/O to the correct CEC. However, if an error occurs on the host adapter (HA), HA port, I/O enclosure, or in the SAN, all connectivity would be lost. The same is true for the host bus adapter (HBA) in the attached host, making it a single point of failure as well.

Figure 4-4 shows a single-path host connection.

Figure 4-4 A single-path host connection

Important: Best practice for host connectivity is that hosts that access the DS8870 have at least two connections to host ports on separate host adapters in separate I/O enclosures.

A more robust design is shown in Figure 4-5 in which the host is attached to separate Fibre Channel host adapters in separate I/O enclosures. This configuration also is important because during a microcode update, a host adapter port might need to be taken offline. This configuration allows host I/O to survive a hardware failure on any component on either path.

Figure 4-5 A dual-path host connection
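As a hedged example, each HBA port of such a host is defined to the DS8870 as a host connection. Defining both WWPNs (the storage image ID, WWPNs, host type, volume group, and nicknames below are placeholders) lets the host reach its volumes through host ports in separate host adapters and I/O enclosures:

   dscli> mkhostconnect -dev IBM.2107-75XY123 -wwname 10000000C9A1B2C3 -hosttype pSeries -volgrp V10 aixhost_fcs0
   dscli> mkhostconnect -dev IBM.2107-75XY123 -wwname 10000000C9A1B2C4 -hosttype pSeries -volgrp V10 aixhost_fcs1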

SAN/FICON switches
Because many hosts can be connected to the DS8870, each using multiple paths, the number of host adapter ports that are available in the DS8870 might not be sufficient to accommodate all of the connections. The solution to this problem is the use of SAN switches or directors to switch logical connections from multiple hosts. In a System z environment, you need to select a SAN switch or director that also supports Fibre Channel connection (FICON).

A logic or power failure in a switch or director can interrupt communication between hosts and the DS8870. Provide more than one switch or director to ensure continued availability. Ports from two separate host adapters in two separate I/O enclosures should be configured to go through each of two directors. The complete failure of either director leaves half the paths still operating.

Using channel extension technology
For Copy Services scenarios in which single mode fibre distance limits are exceeded, use of channel extension technology is required. The following site contains information about network devices that are marketed by IBM and other companies to extend Fibre Channel communication distances. They can be used with DS8000 Series Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror (MGM) Support, and z/OS Global Mirror. For more information, see DS8000 Series Copy Services Fibre Channel Extension Support Matrix:

http://www.ibm.com/support/docview.wss?uid=ssg1S7003277

Support for T10 Data Integrity Field (DIF) standard
One of the firmware enhancements that the DS8870 incorporates, regarding end-to-end data integrity through the SAN, is the ANSI T10 Data Integrity Field (DIF) standard for FB volumes that are accessed by the FCP channel of Linux on System z.

When data is read, the DIF is checked before leaving the DS8870 and again when received by the host system. Until now, it was only possible to ensure the data integrity within the disk system with error correction code (ECC). However, T10 DIF can now check end-to-end data integrity through the SAN. Checking is done by hardware, so there is no performance impact.

For more information about T10 DIF implementation in the DS8870, see “T10 data integrity field support” on page 112.
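As a conceptual illustration of the standard's layout, the following Python sketch appends and verifies an 8-byte protection information field (a 16-bit guard CRC, a 16-bit application tag, and a 32-bit reference tag) on a 512-byte block. It is a simplified model of T10 DIF only, not DS8870 or host adapter code, and the Type 1 style reference-tag usage is an assumption for the example.

def t10_crc16(data: bytes) -> int:
    """Bitwise CRC-16 with the T10 DIF polynomial 0x8BB7 (illustrative only)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def append_dif(block: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Append the 8-byte protection information to a 512-byte block (Type 1 style)."""
    assert len(block) == 512
    guard = t10_crc16(block)                 # 16-bit guard tag (CRC of the data block)
    ref = lba & 0xFFFFFFFF                   # 32-bit reference tag (low bits of the LBA)
    return block + guard.to_bytes(2, "big") + app_tag.to_bytes(2, "big") + ref.to_bytes(4, "big")

def verify_dif(sector: bytes, lba: int) -> bool:
    """Re-check the guard and reference tags, as done at each end of the I/O path."""
    block, pi = sector[:512], sector[512:]
    guard = int.from_bytes(pi[0:2], "big")
    ref = int.from_bytes(pi[4:8], "big")
    return guard == t10_crc16(block) and ref == (lba & 0xFFFFFFFF)

sector = append_dif(bytes(512), lba=1234)
print(verify_dif(sector, lba=1234))          # True: data and tags are consistent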

Multipathing software
Each attached host operating system requires multipathing software to manage multiple paths to the same device and to provide redundant routes for host I/O requests. When a failure occurs on one path to a logical device, the multipathing software on the attached host can identify the failed path and reroute the I/O requests for the logical device to alternative paths. Furthermore, it should be able to detect when the path is restored. The multipathing software that is used varies by attached host operating system and environment, as described in the following sections.

Open systems
In most open systems environments, the Subsystem Device Driver (SDD) is used to manage path failover and preferred path determination. SDD is a software product that IBM supplies as an option with the DS8870 at no additional fee.

For the AIX operating system, the DS8870 is supported through the AIX multipath I/O (MPIO) framework, which is included in the base AIX operating system. You can either choose to use the base AIX MPIO support or to install the Subsystem Device Driver Path Control Module (SDDPCM). For multipathing under Microsoft Windows, Subsystem Device Driver Device Specific Module (SDDDSM) is available.

SDD provides availability through automatic I/O path failover. If a failure occurs in the data path between the host and the DS8870, SDD automatically switches the I/O to the other paths. SDD also automatically sets the failed path back online after a repair is made. SDD also improves performance by sharing I/O operations to a common disk over multiple active paths to distribute and balance the I/O workload.
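The failover and load-balancing behavior described above can be modeled in a few lines. The following Python sketch is a toy model of the concept only; it is not SDD code and does not reflect SDD's actual path-selection algorithms, and the path names are invented for the example.

class MultipathDevice:
    """Toy model of a multipathed LUN: distribute I/O across online paths,
    fail over when a path fails, and resume using a path after repair."""
    def __init__(self, paths):
        self.state = {p: "online" for p in paths}
        self._next = 0

    def online_paths(self):
        return [p for p, s in self.state.items() if s == "online"]

    def mark_failed(self, path):
        self.state[path] = "failed"

    def mark_repaired(self, path):
        self.state[path] = "online"            # failed path set back online after repair

    def submit(self, io):
        paths = self.online_paths()
        if not paths:
            raise IOError("no paths available to the logical device")
        path = paths[self._next % len(paths)]  # round-robin style load balancing
        self._next += 1
        return f"I/O {io} sent via {path}"

dev = MultipathDevice(["fscsi0->I0010", "fscsi1->I0230"])
print(dev.submit(1))
dev.mark_failed("fscsi0->I0010")               # path failure: I/O continues on the other path
print(dev.submit(2))
dev.mark_repaired("fscsi0->I0010")
print(dev.submit(3))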

SDD is not available for every operating system that is supported by the DS8870. For more information about the multipathing software that might be required for various operating systems, see the IBM System Storage Interoperability Center (SSIC) at this website:

http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

SDD is covered in more detail in the following IBM publications:

• IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887
• IBM System Storage DS8000 Host Systems Attachment Guide, SC27-4210

System z
In the System z environment, normal practice is to provide multiple paths from each host to a disk system. Typically, four or eight paths are installed. The channels in each host that can access each logical control unit (LCU) in the DS8870 are defined in the hardware configuration definition (HCD) or input/output configuration data set (IOCDS) for that host. Dynamic Path Selection (DPS) allows the channel subsystem to select any available (non-busy) path to initiate an operation to the disk subsystem. Dynamic Path Reconnect (DPR) allows the DS8870 to select any available path to a host to reconnect and resume a disconnected operation; for example, to transfer data after disconnection because of a cache miss.

These functions are part of the System z architecture and are managed by the channel subsystem on the host and the DS8870.

A physical FICON path is established when the DS8870 port sees light on the fiber; for example, when a cable is plugged in to a DS8870 host adapter, a processor or the DS8870 is powered on, or a path is configured online by z/OS. Logical paths are then established through the port between the host and some or all of the LCUs in the DS8870, as controlled by the HCD definition for that host. This configuration happens for each physical path between a System z host and the DS8870. There can be multiple system images in a CPU. Logical paths are established for each system image. The DS8870 then knows which paths can be used to communicate between each LCU and each host.

Control-unit initiated reconfiguration (CUIR) will vary a path or paths offline to all System z hosts to allow service to an I/O enclosure or host adapter, then will vary on the paths to all host systems when the host adapter ports are available. This function automates channel path management in System z environments in support of selected DS8870 service actions.

CUIR is available for the DS8870 when operated in the z/OS and IBM z/VM® environments. CUIR provides automatic channel path vary on and vary off actions to minimize manual operator intervention during selected DS8870 service actions.

CUIR also allows the DS8870 to request that all attached system images set all paths that are required for a particular service action to the offline state. System images with the appropriate level of software support respond to such requests by varying off the affected paths, and either notifying the DS8870 subsystem that the paths are offline, or that it cannot take the paths offline. CUIR reduces manual operator intervention and the possibility of human error during maintenance actions, and reduces the time required for the maintenance. This function is useful in environments in which many z/OS or z/VM systems are attached to a DS8870.

4.4.3 Metadata checks

When application data enters the DS8870, special codes or metadata, also known as redundancy checks, are appended to that data. This metadata remains associated with the application data as it is transferred throughout the DS8870. The metadata is checked by various internal components to validate the integrity of the data as it moves throughout the disk system. It is also checked by the DS8870 before the data is sent to the host in response to a read I/O request. The metadata also contains information that is used as an additional level of verification to confirm that the data that is returned to the host is coming from the wanted location on the disk. Figure 4-6 shows metadata along the different stages of the virtualization process.


Figure 4-6 Metadata and virtualization process

Since the introduction of the DS8800, the metadata size was increased to support future functionality. For more information about raw and net storage capacities, see “Capacity Magic” on page 466.

For more information about logical configuration and virtualization, see Chapter 5, “Virtualization concepts” on page 99.


4.5 RAS on the HMC

The HMC is used to configure, manage, and maintain the DS8870. One HMC (the primary) is included in every DS8870 base frame. A second HMC (the secondary) can be ordered, and is located external to the DS8870. The DS8870 HMCs work with IPv4, IPv6, or a combination of both IP standards. For more information about the HMC and network connections, see 9.1.1 “Storage HMC hardware” on page 242 and 8.3 “Network connectivity planning” on page 231.

If the HMC is not operational, it is impossible to perform maintenance, perform modifications to the logical configuration, or perform Copy Services tasks, such as the establishment of FlashCopies by using the data storage command-line interface (DS CLI) or data storage graphical user interface (DS GUI). Generally, you should order two management consoles to act as a redundant pair.

The DS8870 does not use DVD-RAM media; SDHC media is used instead. The SDHC media is not bootable and is used to offload data collections or to save the physical configuration when a system is discontinued. DVD read-only media and a DVD drive are still included.

In the DS8870, there is an orientation change in the way that CECs are serviced compared to previous generations. The HMC can move from the standard service position to a new alternate service position, as shown in Figure 4-7.

Figure 4-7 HMC standard service position (left) and HMC alternate service position (right)

4.5.1 Microcode updates

The DS8870 contains many discrete redundant components. Most of the following components have firmware that can be updated:

• Flexible service processor (FSP)
• DC-UPS
• Rack Power Control cards (RPC)
• Host adapters
• Fibre Channel interface cards (FCICs)
• Device adapters
• DDMs


DS8870 CECs have an operating system (AIX) and Licensed Machine Code (LMC) that can be updated. As IBM continues to develop and improve the DS8870, new releases of firmware and licensed machine code become available that offer improvements in function and reliability.

For more information about microcode updates, see Chapter 14, “Licensed machine code” on page 399.

Concurrent code updates
The architecture of the DS8870 allows for concurrent code updates. This ability is achieved by using the redundant design of the DS8870. In general, redundancy is lost for a short period as each component in a redundant pair is updated. The power subsystem was improved with regard to power firmware updates, and some of its elements can now be updated without losing power redundancy. For more information, see 4.7 "RAS on the power subsystem" on page 91.

4.5.2 Call home and remote support

Call home is the capability of the HMC to contact IBM support services to report a problem, which is referred to as call home for service. The HMC also communicates machine reported product data (MRPD) to IBM by the call home facility. MRPD data has been enhanced to include more information about logical volume and logical subsystem (LSS) configuration. Call home can use the HMC modem or SSL for Internet connections.

IBM service personnel outside of the client facility log in to the HMC to provide remote service and support. There are three options for remote support connections: modem, IBM Virtual Private Network (VPN), and Assist On-site (AOS). For more information about remote support and the call home option, see Chapter 16, “Remote support” on page 421.

Users can enable Service Event Notification via email or Simple Network Management Protocol (SNMP) for RAS service events. Any customer-only event or call home event generates a notification that is sent to the email addresses or SNMP server that was configured through the HMC web user interface (WUI) by the IBM service representative. SNMP server information is configured from the menu on the second tab of the same panel. For SNMP trap notification of Copy Services events, the DS CLI must be used to enable the notification.


Figure 4-8 shows the HMC WUI option in which the email address can be configured.

Figure 4-8 Manage Serviceable Event Notification HMC GUI panel

Example 4-1 shows one notification that was received via email.

Example 4-1 Service Event Notification via email

REPORTING SF MTMS: 2107-961*75ZA180
REPORTING SF LPAR: SF75ZA180ESS11
PROBLEM NUMBER: 155
PROBLEM TIMESTAMP: Oct 10, 2012 10:11:12 AM CEST
REFERENCE CODE: BE3400AA

************************* START OF NOTE LOG **************************
BASE RACK ORDERED MTMS 2421-961*75ZA180
LOCAL HMC MTMS 4242BC5*R9K1VXK  HMC ROLE Primary
LOCAL HMC INBOUND MODE Attended  MODEM PHONE Unavailable
LOCAL HMC INBOUND CONFIG Continuous
LOCAL HMC OUTBOUND CONFIG VPN only  FTP: enabled

REMOTE HMC Single HMC
HMC WEBSM VERSION 5.3.20120817.0
HMC CE default  HMC REMOTE default
HMC PE default  HMC DEVELOPER default
2107 BUNDLE 87.0.155.0
HMC DRIVER 20120723.1  LMC LEVEL Unavailable
FIRMWARE LEVEL SRV0 01AL74094 SRV1 01AL74094


PARTITION NAME SF75ZA180ESS11
PARTITION HOST NAME SF75ZA180ESS11
PARTITION STATUS SFI 2107-961*75ZA181 SVR 8205-E6C*109835R LPAR SF75ZA180ESS11 STATE = AVAILABLE

FIRST REPORTED TIME Oct 10, 2012 10:11:12 AM CEST
LAST REPORTED TIME Oct 10, 2012 10:11:12 AM CEST
CALL HOME RETRY #0 of 12 on Oct 10, 2012 10:11:14 AM CEST.
REFCODE BE3400AA <------------------------------ System Reference Code (SRC)
SERVICEABLE EVENT TEXT
DDM format operation failed. <------------------ SRC description
FRU group HIGH  FRU class FRU
FRU Part Number 45W7457  FRU CCIN I60B
FRU Serial Number 504BC7C3CC0D
FRU Location Code U2107.D02.BG079WL-P1-D3
FRU Previous PMH N/A  Prev ProbNum N/A  PrevRep Data N/A
************************** END OF NOTE LOG ***************************

4.6 RAS on the disk system

The DS8870 was designed to safely store and retrieve large amounts of data. RAID is an industry-wide method implemented to store data on multiple physical disks to enhance data redundancy. There are many variants of RAID in use today. The DS8870 supports RAID 5, RAID 6, and RAID 10. It does not support the non-RAID configuration of disks that are better known as just a bunch of disks (JBOD).

4.6.1 RAID configurations

The following RAID configurations are possible for the DS8870:

• 6+P+S RAID 5 configuration: The array consists of six data drives and one parity drive. The remaining drive on the array site is used as a spare.
• 7+P RAID 5 configuration: The array consists of seven data drives and one parity drive.
• 5+P+Q+S RAID 6 configuration: The array consists of five data drives and two parity drives. The remaining drive on the array site is used as a spare.
• 6+P+Q RAID 6 configuration: The array consists of six data drives and two parity drives.
• 3+3+2S RAID 10 configuration: The array consists of three data drives that are mirrored to three copy drives. Two drives on the array site are used as spares.
• 4+4 RAID 10 configuration: The array consists of four data drives that are mirrored to four copy drives.

No JBOD support: The DS8870 models do not include support for JBOD.

+P/+Q indicator: The +P/+Q indicators do not mean that a single drive is dedicated to holding the parity bits for the RAID. The DS8870 uses floating parity technology such that no one drive is always involved in every write operation. The data and parity stripes float between the member drives of the array to provide optimum write performance.


For more information about the effective capacity of these configurations, see Table 8-8 on page 238. An updated version of Capacity Magic (see “Capacity Magic” on page 466) helps you to determine the raw and net storage capacities, and the numbers for the required extents for each available type of RAID.

There must be enough free space to reconfigure arrays. For example, if an online DS8870 storage unit is 99% loaded with RAID 6 arrays, it is not possible to complete an online reconfiguration to turn a RAID 6 array into a RAID 5 array. This reconfiguration can be done only during downtime.

4.6.2 Disk path redundancy

Each DDM in the DS8870 is attached to two Fibre Channel switches. These switches are built into the disk enclosure controller cards. Figure 4-9 shows the redundancy features of the DS8870 switched Fibre Channel disk architecture.

Figure 4-9 Switched disk path connections

Important restrictions: The following restrictions apply:

• Nearline-SAS drives are supported in RAID 6 configurations.
• Solid-state flash drives are supported in RAID 5 configurations.

This information is subject to change. Consult with your IBM service representative for the latest information about supported RAID configurations.

The RPQ/SCORE process can be used to submit requests for other RAID configurations for solid-state flash drives and Nearline drives. For more information, see the Storage Customer Opportunity REquest (SCORE) system page at this website:

http://iprod.tucson.ibm.com/systems/support/storage/ssic/interoperability.wss



Each disk has two separate connections to the backplane. This configuration allows the disk to be simultaneously attached to both FC switches. If either disk enclosure controller card is removed from the enclosure, the switch that is included in that card is also removed. However, the FC switch in the remaining controller card retains the ability to communicate with all the disks and both DAs in a pair. Equally, each DA has a path to each switch, so it also can tolerate the loss of a single path. If both paths from one DA fail, it cannot access the switches. However, the partner DA retains connection.

Figure 4-9 on page 84 also shows the connection paths to the neighboring Storage Enclosures. Because expansion is done in this linear fashion, adding enclosures is nondisruptive.

For more information about the disk subsystem of the DS8870, see 3.4 “Disk subsystem” on page 50.

4.6.3 Predictive Failure Analysis

The storage drives that are used in the DS8870 incorporate Predictive Failure Analysis (PFA) and can anticipate certain forms of failures by keeping internal statistics of read and write errors. If the error rates exceed predetermined threshold values, the drive is nominated for replacement. Because the drive has not yet failed, data can be copied directly to a spare drive by using the technique that is described in 4.6.5 “Smart Rebuild” on page 85. This copy ability avoids the use of RAID recovery to reconstruct all of the data onto the spare drive.

4.6.4 Disk scrubbing

The DS8870 periodically reads all sectors on a disk. This reading is designed to occur without any interference with application performance. If error correcting code (ECC) detects correctable bad bits, the bits are corrected immediately. This ability reduces the possibility of multiple bad bits accumulating in a sector beyond the ability of ECC to correct them. If a sector contains data that is beyond ECC's ability to correct, RAID is used to regenerate the data and write a new copy onto a spare sector of the disk. This scrubbing process applies to array members and spare DDMs.

4.6.5 Smart Rebuild

Smart Rebuild is a function that is designed to help reduce the possibility of secondary failures and data loss in RAID arrays. It can be used to rebuild a RAID 5 array when certain disk errors occur and a normal determination is made that it is time to use a spare to proactively replace a failing disk drive. If the suspect disk is still available for I/O, it is kept in the array rather than being rejected, as it would be under a standard RAID rebuild. A spare is brought into the array at the same time. The suspect disk drive and the new spare are set up in a temporary RAID 1 association, which allows the troubled drive to be duplicated onto the spare rather than performing a full RAID reconstruction from data and parity. The new spare is then made a regular member of the array and the suspect disk is rejected from the RAID array. The array never goes through an n-1 stage in which it would be exposed to complete failure if another drive in the array encountered errors. The result is a substantial time savings and a level of availability that is not found in other RAID 5 products.


Smart Rebuild is not applicable in all situations, so it is not guaranteed to be used. If there are two drives with errors in a RAID 6 configuration, or if the drive mechanism failed to the point that it cannot accept any I/O, the standard RAID rebuild procedure is used for the RAID array. If communications across a drive fabric are compromised, such as a FC-AL loop error that causes the drive to be bypassed, then standard RAID rebuild procedures are used because the suspect drive is not available for a one-to-one copy with a spare. If Smart Rebuild is not possible or would not provide the designed benefits, a standard RAID rebuild occurs.
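The choice between Smart Rebuild and a standard rebuild can be summarized as a small decision rule. The following Python sketch restates the conditions described above in a simplified form; the class and attribute names are assumptions for illustration and are not DS8870 microcode.

from dataclasses import dataclass

@dataclass
class Drive:
    accepts_io: bool = True      # drive mechanism still services I/O
    reachable: bool = True       # not bypassed by a fabric or loop error

@dataclass
class Array:
    raid_type: str = "RAID 5"
    drives_with_errors: int = 1

def choose_rebuild_method(suspect: Drive, array: Array) -> str:
    """Simplified decision model: prefer a RAID 1 style copy of the suspect drive
    onto a spare (Smart Rebuild); fall back to a standard RAID rebuild otherwise."""
    if not suspect.accepts_io or not suspect.reachable:
        return "standard RAID rebuild"       # no usable source for a one-to-one copy
    if array.raid_type == "RAID 6" and array.drives_with_errors > 1:
        return "standard RAID rebuild"       # two drives with errors in the same array
    return "Smart Rebuild"                   # temporary RAID 1 copy to the spare

print(choose_rebuild_method(Drive(), Array()))                                # Smart Rebuild
print(choose_rebuild_method(Drive(accepts_io=False), Array()))                # standard RAID rebuild
print(choose_rebuild_method(Drive(), Array("RAID 6", drives_with_errors=2)))  # standard RAID rebuild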

Smart Rebuild enhancements
The proven benefit of Smart Rebuild is greatly improved in the DS8870. Smart Rebuild DDM error patterns are continuously analyzed as part of the normal tasks that are driven by the DS8870 microcode. Drive firmware has been optimized to report predictive errors to the device adapter. At any time, when certain disk errors (following specific criteria) reach a determined threshold, the RAS microcode component starts Smart Rebuild within the hour. This enhanced technique, combined with a more frequent schedule, leads to considerably faster identification of disks that show signs of imminent failure.

A fast response in fixing drive errors is vital to avoid a second drive failure in the same RAID 5 disk array, and to avoid potential data loss. The possibility of having an array with no redundancy, as when a RAID rebuild occurs, is reduced by shortening the time from when a specific error threshold is reached until Smart Rebuild is triggered, as described in the following scenarios:

• Smart Rebuild can avoid the circumstance in which a suspect DDM is rejected, because the Smart Rebuild process is started before rejection. Therefore, Smart Rebuild keeps the array from going through a standard RAID rebuild, during which the array has no redundancy and is susceptible to a second drive failure.

• A specific DDM error threshold is detected by the DS8870 microcode immediately because the microcode continuously analyzes drive errors.

• The RAS microcode component starts Smart Rebuild after the Smart Rebuild threshold criteria are met. The Smart Rebuild process runs every hour and does not wait for 24 hours, as was done previously.

IBM remote support representatives can also launch Smart Rebuild manually if it is deemed necessary; for instance, when two drives in the same array have logged temporary media errors.

4.6.6 RAID 5 overview

The DS8870 supports RAID 5 arrays. RAID 5 is a method of spreading volume data plus parity data across multiple disk drives. RAID 5 provides faster performance by striping data across a defined set of DDMs. Data protection is provided by the generation of parity information for every stripe of data. If an array member fails, its contents can be regenerated by using the parity data.

The DS8870 uses the idea of striped parity, meaning that no single storage drive in an array is dedicated to holding parity data, which would make such a drive active in every I/O operation. Instead, the drives in an array rotate between holding data stripes and holding parity stripes, thus balancing out the activity level of all drives in the array.


RAID 5 implementation in the DS8870
In a DS8870, an array that is built on one array site contains eight disks. The first four array sites on a DA pair have a spare assigned; the remaining array sites have no spare assigned, provided that all disks are the same size and speed. An array site with a spare creates a RAID 5 array that is 6+P+S (where the P stands for parity and S stands for spare). The other array sites on the DA pair are 7+P arrays.

Drive failure with RAID 5
When a drive fails in a RAID 5 array, the device adapter rejects the failing drive and takes one of the hot spare drives. The DA then starts a rebuild: an operation to reconstruct the data that was on the failed drive onto one of the spare drives. The spare that is used is chosen based on a smart algorithm that looks at the location of the spares and the size and location of the failed drive. The RAID rebuild is performed by reading the corresponding data and parity in each stripe from the remaining drives in the array, performing an exclusive-OR operation to re-create the data, and then writing this data to the spare drive.
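The exclusive-OR step is easy to illustrate. In the following Python sketch, one lost member of a RAID 5 stripe is rebuilt from the surviving data strips and the parity strip (a conceptual sketch only; in the DS8870 this work is done by the device adapter hardware and microcode).

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One RAID 5 stripe: data strips D0..D2 plus parity P = D0 ^ D1 ^ D2
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# The drive holding D1 fails: rebuild it from the survivors and the parity strip
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1
print(rebuilt_d1)        # b'BBBB'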

While this data reconstruction is occurring, the device adapter can still service read and write requests to the array from the hosts. There might be some performance degradation while the rebuild operation is in progress because some DA and switched network resources are used to complete the reconstruction. Because of the switch-based architecture, this effect is minimal. Also, any read requests for data on the failed drive require data to be read from the other drives in the array, and then the DA reconstructs the data.

Performance of the RAID 5 array returns to normal when the data reconstruction onto the spare device completes. The time that is required for rebuild will vary, depending on the size of the failed drive and the workload on the array, the switched network, and the DA. The use of array across loops (AAL) speeds up rebuild time and decreases the impact of a rebuild.

4.6.7 RAID 6 overview

The DS8870 supports RAID 6 protection. RAID 6 presents an efficient method of data protection in case of double disk errors, such as two drive failures, two coincident medium errors, or a drive failure and a medium error. RAID 6 protection provides more fault tolerance than RAID 5 in the case of disk failures and uses less raw disk capacity than RAID 10.

RAID 6 allows for more fault tolerance by using a second independent distributed parity scheme (dual parity). Data is striped on a block level across a set of drives, similar to RAID 5 configurations. The second set of parity is calculated and written across all the drives and reduces the usable space compared to RAID 5. The striping is shown in Figure 4-10 on page 88.
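A common way to implement two independent parities is an XOR parity (P) plus a Reed-Solomon style parity (Q) that is computed over a Galois field. The following Python sketch shows that general RAID 6 technique in miniature; it is an assumption used for illustration, not the DS8870's actual parity implementation.

def gf_mul(a, b):
    """Multiply in GF(2^8) with the polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return product

def raid6_pq(data_strips):
    """Compute the P (XOR) and Q (weighted GF(2^8) sum) parity strips."""
    p = bytearray(len(data_strips[0]))
    q = bytearray(len(data_strips[0]))
    g = 1                                    # generator power g^i for strip i, with g = 2
    for strip in data_strips:
        for j, byte in enumerate(strip):
            p[j] ^= byte
            q[j] ^= gf_mul(g, byte)
        g = gf_mul(g, 2)
    return bytes(p), bytes(q)

strips = [b"\x11\x22", b"\x33\x44", b"\x55\x66", b"\x77\x88", b"\x99\xaa"]  # five data strips
p, q = raid6_pq(strips)
print(p.hex(), q.hex())   # two independent parities allow recovery from any two lost strips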

RAID 6 is best used with large-capacity disk drives because they have longer rebuild times; longer rebuild times increase the possibility that a second DDM error occurs within the rebuild window. Comparing RAID 6 to RAID 5 performance gives about the same results for reads. For random writes, the throughput of a RAID 6 array is only two thirds of that of a RAID 5 array because of the additional parity handling. Workload planning is especially important before implementing RAID 6 for write-intensive applications, including Copy Services targets and Space Efficient FlashCopy repositories. Yet, when properly sized for the I/O demand, RAID 6 is a considerable reliability enhancement (see Figure 4-10 on page 88).
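The two-thirds figure follows from the small-write penalty: a random write to RAID 5 typically costs four back-end disk operations (read old data, read old parity, write new data, write new parity), whereas RAID 6 needs six because both the P and Q parities must be read and rewritten. The following lines show the arithmetic as a simplified model that ignores caching and full-stripe writes.

# Small-write penalty: back-end disk operations per random host write (simplified model)
raid5_ops = 4   # read old data, read old parity, write new data, write new parity
raid6_ops = 6   # read old data, read P, read Q, write new data, write P, write Q

# With the same back-end drive IOPS budget, the relative random-write throughput is:
print(raid5_ops / raid6_ops)   # ~0.67: RAID 6 sustains about two thirds of RAID 5 write throughput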


Figure 4-10 Illustration of one RAID 6 stripe on a 5+P+Q+S array

RAID 6 implementation in the DS8870
A RAID 6 array in one array site of a DS8870 can be built on one of the following configurations:

• In a seven-disk array, two disks are always used for parity, and the eighth disk of the array site is needed as a spare. This type of RAID 6 array is referred to as a 5+P+Q+S array, where P and Q stand for parity and S stands for spare.
• A RAID 6 array that consists of eight disks is built when all necessary spare drives are already available. An eight-disk RAID 6 array also always uses two disks for parity, so it is referred to as a 6+P+Q array.

Drive failure with RAID 6
When a drive fails in a RAID 6 array, the DA starts to reconstruct the data of the failing drive onto one of the available spare drives. A smart algorithm determines the location of the spare drive to be used, depending on the size and the location of the failed drive. After the spare drive replaces a failed one in a redundant array, the recalculation of the entire contents of the new drive is performed by reading the corresponding data and parity in each stripe from the remaining drives in the array and then writing this data to the spare drive.

During the rebuild of the data on the new drive, the DA can still handle I/O requests of the connected hosts to the affected array. Performance degradation could occur during the reconstruction because DAs and switched network resources are used to do the rebuild. Because of the switch-based architecture of the DS8870, this effect is minimal. Additionally, any read requests for data on the failed drive require data to be read from the other drives in the array, and then the DA reconstructs the data. Any subsequent failure during the reconstruction within the same array (second drive failure, second coincident medium errors, or a drive failure and a medium error) can be recovered without loss of data.



Performance of the RAID 6 array returns to normal when the data reconstruction on the spare device has completed. The rebuild time varies, depending on the size of the failed drive and the workload on the array and the DA. The completion time is comparable to a RAID 5 rebuild, but slower than rebuilding a RAID 10 array in the case of a single drive failure.

4.6.8 RAID 10 overview

RAID 10 provides high availability by combining features of RAID 0 and RAID 1. RAID 0 optimizes performance by striping volume data across multiple disk drives. RAID 1 provides disk mirroring, which duplicates data between two disk drives. By combining the features of RAID 0 and RAID 1, RAID 10 provides a second optimization for fault tolerance. Data is striped across half of the disk drives in the RAID 1 array. The same data is also striped across the other half of the array, which creates a mirror. Access to data is preserved if one disk in each mirrored pair remains available. RAID 10 offers faster data reads and writes than RAID 5 because it does not need to manage parity. However, with half of the drives in the group that is used for data and the other half to mirror that data, RAID 10 arrays have less capacity than RAID 5 or RAID 6 arrays.

RAID 10 is not as commonly used as RAID 5, but it is an option for workloads that require the highest performance from the disk subsystem. A typical area of operation for RAID 10 is workloads with a high random write ratio. Either member of a mirrored pair can respond to read requests.

RAID 10 implementation in the DS8870
In the DS8870, the RAID 10 implementation is achieved by using six or eight drives. If spares need to be allocated on the array site, six drives are used to make a three-disk RAID 0 array, which is then mirrored to a three-disk array (3x3). If spares do not need to be allocated, eight DDMs are used to make a four-disk RAID 0 array, which is then mirrored to a four-disk array (4x4).

Drive failure with RAID 10
When a drive fails in a RAID 10 array, the DA rejects the failing drive and takes a hot spare into the array. Data is then copied from the good drive to the hot spare drive. The spare that is used is chosen based on a smart algorithm that looks at the location of the spares and the size and location of the failed drive. Remember that a RAID 10 array is effectively a RAID 0 array that is mirrored. Thus, when a drive fails in one of the RAID 0 arrays, the failed drive can be rebuilt by reading the data from the equivalent drive in the other RAID 0 array.

While this data copy is going on, the DA can still service read and write requests to the array from the hosts. There might be degradation in performance while the copy operation is in progress because DA and switched network resources are used to do the RAID rebuild. Because there is a good drive available, this effect is minimal. Read requests for data on the failed drive should not be affected because they are all directed to the good copy on the mirrored drive. Write operations are not affected.

Performance of the RAID 10 array returns to normal when the data copy onto the spare device completes. The time that is taken for rebuild can vary, depending on the size of the failed DDM and the workload on the array and the DA. In relation to a RAID 5, RAID 10 rebuild completion time is faster because rebuilding a RAID 5 6+P configuration requires six reads plus one parity operation for each write. However, a RAID 10 configuration requires one read and one write (essentially, a direct copy).
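As a rough sketch of why the RAID 10 copy completes faster, compare the back-end operations per rebuilt stripe unit that follow from the description above (a simplified count that ignores drive size and concurrent workload).

# Simplified count of back-end operations to rebuild one stripe unit onto the spare
raid5_6p_rebuild_ops = 6 + 1 + 1   # read six surviving strips, compute parity (XOR), write the result
raid10_rebuild_ops = 1 + 1         # read the mirrored copy, write it to the spare

print(raid5_6p_rebuild_ops, "operations per stripe unit for RAID 5 (6+P) versus",
      raid10_rebuild_ops, "for RAID 10")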


Array across loops and RAID 10
The DS8870, as with previous generations, implements the concept of array across loops (AAL). With AAL, an array site is split into two halves. Half of the site is on the first disk loop of a DA pair and the other half is on the second disk loop of that DA pair. AAL is implemented primarily to maximize performance, and it is used for all RAID types in the DS8870. In RAID 10, however, AAL can also be used to provide a higher level of redundancy. The DS8870 RAS code deliberately ensures that one RAID 0 array is maintained on each of the two loops that are created by a DA pair. This configuration means that in the unlikely event of a complete loop outage, the DS8870 does not lose access to the RAID 10 array: when one RAID 0 array is offline, the other remains available to service disk I/O. Figure 3-18 on page 55 shows a diagram of this strategy.

4.6.9 Spare creation

When the arrays are created on a DS8870, the microcode determines which array sites contain spares. The first array sites on each DA pair that are assigned to arrays contribute one or two spares (depending on the RAID option) until the DA pair has access to at least four spares, with two spares placed on each loop.

A minimum of one spare is created for each array site that is assigned to an array until the following conditions are met:

• There are a minimum of four spares per DA pair.
• There are a minimum of four spares for the largest capacity array site on the DA pair.
• There are a minimum of two spares of capacity and RPM greater than or equal to the fastest array site of any capacity on the DA pair.
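As an aid to reading these conditions, the following Python sketch evaluates them for one DA pair. The data structure, parameter names, and sample values are assumptions made for the example; this is not DS8870 microcode.

from dataclasses import dataclass

@dataclass
class Spare:
    capacity_gb: int
    rpm: int

def spare_requirements_met(spares, largest_site_gb, fastest_site_gb, fastest_site_rpm):
    """Evaluate the three spare-creation conditions listed above for one DA pair."""
    enough_total = len(spares) >= 4
    enough_for_largest = sum(s.capacity_gb >= largest_site_gb for s in spares) >= 4
    enough_for_fastest = sum(s.capacity_gb >= fastest_site_gb and s.rpm >= fastest_site_rpm
                             for s in spares) >= 2
    return enough_total and enough_for_largest and enough_for_fastest

spares = [Spare(600, 10000), Spare(600, 10000), Spare(900, 10000), Spare(900, 10000)]
print(spare_requirements_met(spares, largest_site_gb=900,
                             fastest_site_gb=146, fastest_site_rpm=15000))  # False: no 15K RPM spares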

Spare rebalancing
The DS8870 implements a spare rebalancing technique for spare drives. When a drive fails and a hot spare is taken, it becomes a member of that array. When the failed drive is repaired, the DS8870 microcode might choose to allow the hot spare to remain where it was moved, but it can instead choose to migrate the spare to a more optimum position. This migration is done to better balance the spares across the FC-AL loops and to provide optimum spare location based on disk sizes and spare availability. It might be preferable that the drive that is in use as an array member is converted to a spare. In this case, the data on that DDM is migrated in the background onto an existing spare by using the Smart Rebuild technique (see 4.6.5, "Smart Rebuild" on page 85). This process does not fail the disk that is being migrated, though it does reduce the number of available spares in the DS8870 until the migration process is complete.

In the case of drive intermix on a DA pair, it is possible to rebuild the contents of a 450 GB drive onto a 600 GB spare drive. In that case, approximately one-fourth of the 600 GB drive is wasted because that space cannot be used. When the failed 450 GB DDM is replaced with a new 450 GB drive, the DS8870 microcode migrates the data back onto the recently replaced 450 GB drive. When this process completes, the 450 GB DDM rejoins the array and the 600 GB drive becomes a spare again. The same algorithm applies when the hot spare that is taken at the time of the initial drive failure has a speed mismatch.

Hot pluggable drives
Replacement of a failed drive does not affect the operation of the DS8870 because the drives are fully hot pluggable. Each disk plugs into a switch, so there is no loop break associated with the removal or replacement of a disk. In addition, there is no potentially disruptive loop initialization process.


Enhanced sparing
The drive sparing policies support having spares for all drive sizes and speeds on the DA pair. When any DA pair has only a single spare for any drive type, a call home to IBM is generated. Because of spare over-allocation, there can be several drives in a Failed/Deferred Service state. All failed drives are included in the call home when any drive type is down to one spare. For example, in a configuration with 16 solid-state flash drives on a DA pair, two spares are created. If one solid-state flash drive fails, all failed drives of any type are reported to IBM.

You can use the following DS CLI command to determine whether repair actions can be deferred:

lsddm -state not_normal IBM.2107-75XXXXX

An example of where repair can be deferred is shown in Example 4-2.

Example 4-2 DS CLI lsddm command shows DDM state

dscli> lsddm -state not_normal IBM.2107-75ZA571
Date/Time: September 26, 2012 13:03:29 CEST IBM DSCLI Version: 7.7.0.566 DS: IBM.2107-75ZA571
ID                            DA Pair dkcap (10^9B) dkuse        arsite State
===========================================================================================
IBM.2107-D02-0774H/R1-P1-D21  0       900.0         unconfigured S3     Failed/Deferred Service

If immediate repair is needed for drives in the Failed/Deferred Service state, an RPQ/SCORE request can be submitted to disable the enhanced sparing service. Contact your IBM marketing representative for details of this RPQ.

4.7 RAS on the power subsystem

Compared to the previous generation of the DS8000 series family, the power subsystem in the DS8870 was redesigned. It offers a higher energy efficiency, lower power loss, and reliability improvement. The DS8870 base frame requires 20% less power when compared to the DS8800. The former primary power supply (PPS) is replaced by a DC-UPS. RPC cards also are improved.

All power and cooling components that constitute the DS8870 power subsystem are fully redundant. Key elements that allow this high level of redundancy are two DC-UPSs per rack for a 2N redundancy. By using this configuration, DC-UPSs are duplicated in each rack so that only one DC-UPS by itself provides enough power to all components inside a rack, if the other DC-UPS becomes unavailable.

As described in "Battery Service Module sets" on page 93, each DC-UPS has its own battery backup function. Therefore, the battery system in the DS8870 also has 2N redundancy. The battery of a single DC-UPS allows for the completion of FHD if there is a dual ac loss (as described in 4.3.4, "NVS and power outages" on page 72).

The CECs, I/O enclosures, disk enclosures, and primary HMC components inside the rack all feature duplicated power supplies.

A smart internal power distribution connectivity makes it possible to maintain redundant power distribution on a single power cord. If one DC-UPS power cord is pulled (equivalent to having a failure in one of the client circuit breakers), the partner DC-UPS can provide power to this UPS and feed each internal redundant power supply inside the rack. For example, if a DC-UPS power cord is pulled, the two redundant power supplies of any CEC continue to be powered on. This ability gives an extra level of reliability in the unusual case of failure in multiple power elements.

In addition, internal Ethernet switches and tray fans (which are used to provide extra cooling to internal HMC) receive redundant power.

4.7.1 Components

This section describes the power subsystem components of the DS8870 from a RAS standpoint.

Direct current uninterruptible power supply
There are two DC-UPS units per rack for 2N redundancy. The DC-UPS is a built-in power converter that is capable of power monitoring and integrated battery functions. It distributes full wave rectified ac to Power Distribution Units (PDUs), which then provide that power to all the separate areas of the machine.

If ac is not present at the input line, the output is switched to rectified ac from the partner DC-UPS. If neither ac input is active, the DC-UPS switches to battery power for up to 4 seconds, or 50 seconds if ePLD is installed. Each DC-UPS has internal fans to supply cooling for that power supply. If ac input power is not restored before the ride-through time expires, an emergency shutdown results and the firehose dump copies the data in NVS to the CEC hard disk drives to prevent data loss.
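The ride-through behavior can be restated as a small decision rule. The following Python sketch only summarizes the timing described here and in 4.7.3, "Line power fluctuation"; the function and its values are illustrative, not DS8870 firmware.

def on_dual_ac_loss(epld_installed: bool, seconds_until_ac_returns: float) -> str:
    """Simplified model of DC-UPS behavior when both ac inputs to a rack are lost."""
    ride_through = 50.0 if epld_installed else 4.0   # battery ride-through window in seconds
    if seconds_until_ac_returns <= ride_through:
        return "ride through on battery; resume normal operation when ac returns"
    return "emergency shutdown: firehose dump copies NVS to the CEC hard disk drives"

print(on_dual_ac_loss(epld_installed=False, seconds_until_ac_returns=2))    # rides through
print(on_dual_ac_loss(epld_installed=False, seconds_until_ac_returns=30))   # FHD without ePLD
print(on_dual_ac_loss(epld_installed=True, seconds_until_ac_returns=30))    # ePLD rides through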

The DC-UPS supports high or low voltage three-phase power and single-phase power as input. The input power that feeds the DC-UPS must be configured with the phase selection jumpers that are at the rear of the DC-UPS. Special care must be taken regarding the power cord because power cables are unique for high or low voltage three-phase, or single-phase, input. The appropriate power cables and power select jumper must be used. For information about power cord feature codes, see the IBM publication IBM DS8870 Introduction and Planning Guide, GC27-4209.

All elements of the DC-UPS can be replaced concurrently with client operations. Furthermore, BSM set replacement and DC-UPS fan assembly replacement are done while the corresponding Direct current Supply Unit (DSU) remains operational.

The following important enhancements also are available:

• Improvement in DC-UPS data collection.
• During a DC-UPS firmware update, the current power state is maintained so that the DC-UPS remains operational during this service operation. Because of its dual firmware image design, dual power redundancy is maintained in all internal power supplies of all frames during a DC-UPS firmware update.

Each DC-UPS unit consists of one DSU and one or two BSMs. Figure 3-19 on page 56 shows the DSU (rear view) and BSMs (front view).

Direct current Supply Unit
Each DC-UPS has a Direct current Supply Unit (DSU), which contains the control logic of the DC-UPS and is where the images of the power firmware reside. It is designed to protect the DSU from failures during a power firmware update, avoiding physical intervention or hardware replacement, except in cases of a permanent hardware failure.

Important: If you install a DS8870 so that both DC-UPSs are attached to the same circuit breaker or the same power distribution unit, the DS8870 is not well-protected from external power failures. This configuration can cause an unplanned outage.

A DSU contains the necessary battery chargers that are dedicated to monitor and charge all BSM sets that are installed in the DC-UPS.

Battery Service Module sets
The BSM set provides backup power to the system when both ac inputs to a rack are lost. Each DC-UPS supports one or two BSM sets. As standard, there is one BSM set in each DC-UPS. If the ePLD feature is ordered, an additional BSM set is added. As a result, each DC-UPS has two BSM sets (for more information, see "Power line disturbance" on page 95). All racks in the system must have the same number of BSM sets, including expansion racks without I/O enclosures.

A BSM set consists of four battery enclosures. Each of these single-battery enclosures is known as a Battery Service Module (BSM). A group of four BSMs (battery enclosures) makes up a BSM set. There are two types of BSMs: primary and secondary. The primary BSM is the only BSM with an electrical connector to the DSU, and it can be installed only in the top position. The primary BSM is also the only BSM that has status LEDs. The other three BSMs in the set are secondary BSMs.

The DS8870 BSMs feature a fixed working life of five years.

Power Distribution Unit
The Power Distribution Units (PDUs) distribute power from the DC-UPSs to the power supplies in the disk enclosures, CECs, I/O enclosures, Ethernet switches, and HMC fans.

In all racks, there are six PDUs. A PDU module can be replaced concurrently. Figure 3-20 on page 57 shows where the PDUs are located at the rear of the frame.

Disk enclosure power supplies
The disk enclosure power supply units provide power for the disks, and house the cooling fans for the disk enclosure. The fans draw air from the front of the frame, through the disks, and then move it out through the back of the frame. The entire rack cools from front to back, complying with data center hot aisle/cold aisle cooling strategies. There are redundant fans in each power supply unit and redundant power supply units in each disk enclosure. The disk enclosure power supply can be replaced concurrently. Figure 3-15 on page 51 shows a front and rear view of a disk enclosure.

The power distribution units (PDU) for disk enclosures can supply power for five to seven disk enclosures. Each disk enclosure power supply plugs into two separate PDUs, which are supplied from separate DC-UPSs.

CEC power supplies and I/O enclosure power supplies
Each CEC and I/O enclosure has dual redundant power supplies to convert the power that is provided by the PDUs into the required voltages for that enclosure or complex. Each I/O enclosure and each CEC has its own cooling fans.

Important: Although the DS8870 no longer vents through the top of the frame, IBM still advises clients not to store any objects on top of a DS8870 frame for safety reasons.


Power Junction Assembly
The Power Junction Assembly (PJA) provides redundant power to the HMC, Ethernet switches, and HMC tray fans.

Rack Power Control card
RPCs manage the DS8870 power subsystem and provide control, monitoring, and reporting functions. RPC cards are responsible for receiving DC-UPS status and controlling DC-UPS functions. There are two RPC cards for redundancy. When one is unavailable, the remaining RPC is able to perform all RPC functions.

The following RPC enhancements are available in DS8870:

• The DS8870 RPC card contains a faster processor and more parity-protected memory.

• There are two different buses for communication between each RPC card and each CEC. These buses provide redundant paths for error recovery if one of the communication paths fails.

• Each RPC card has two firmware images. If an RPC firmware update fails, the RPC card can still boot from the other firmware image. The new design also reduces the period during which one of the RPC cards is unavailable because of an RPC firmware update. Because of the dual firmware image, an RPC card is unavailable only for the time that is required (a few seconds) to boot from the new firmware image after it is downloaded. As a result, full RPC redundancy is available during most of the time that is required for an RPC firmware update.

• RPC cards can detect failures in the HMC fan tray, which facilitates isolation and repair of such failures.

System Power Control Network
The system power control network (SPCN) is used to control the power of the attached I/O subsystem. The SPCN monitors environmental components such as power, fans, and temperature for the I/O enclosures. Environmental-critical and noncritical conditions can generate emergency power-off warning (EPOW) events. Critical events trigger appropriate signals from the hardware to the affected components to prevent any data loss without operating system or firmware involvement. Noncritical environmental events also are logged and reported.

4.7.2 Line power loss

The DS8870 uses an area of server memory as nonvolatile storage (NVS). This area of memory is used to hold modified data that has not yet been written to the disk subsystem. If line power fails, meaning that both DC-UPSs in a frame were to report a loss of ac input power, the DS8870 must protect that data. See 4.3 “CEC failover and failback” on page 69 for a full explanation of the NVS and cache operation.

4.7.3 Line power fluctuation

The DS8870 frames contain BSM sets that protect modified data in the event of a dual ac power loss to the entire frame. If a power fluctuation occurs that causes a momentary interruption to power (often called a brownout), the DS8870 tolerates this condition for approximately four seconds, rather than the 30 milliseconds of previous DS8000 generations. If the interruption lasts longer than this tolerance and the ePLD feature is not installed on the DS8870 system, the disks are powered off and the servers begin copying the contents of NVS to the internal disks in the CECs. For many clients who use uninterruptible power supply (UPS) technology, brownouts are not an issue.


UPS-regulated power is generally reliable, so more redundancy in the attached devices is often unnecessary.

Power line disturbance
If power at your installation is not always reliable, consider adding the ePLD feature. This feature adds an additional BSM set to each DC-UPS in all frames of the system. As a result, each DC-UPS in the system contains two BSM sets.

Without the ePLD feature, a standard DS8870 offers about four seconds of protection from power line disturbances. Installing this feature increases the protection to 50 seconds (running on battery power for 50 seconds) before an FHD begins. For a full explanation of this process, see 4.3.4, "NVS and power outages" on page 72.

4.7.4 Power control

Power control is usually done through the HMC, which communicates sequencing information to the service processor in each CEC and RPC. You can perform the power control of a DS8870 by using the WUI that is provided by the HMC or by using the DS Storage Manager/DS CLI. Figure 4-11 shows power control through DS Storage Manager.

For more information about how to perform power control on DS8870, see the IBM System Storage DS8000 Information Center at this website:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp

Figure 4-11 DS8870 power control from DS Storage Manager

In addition, the following switches in the base frame of a DS8870 are accessible when the rear cover is open:

• Local/remote switch. This switch has two positions: local and remote.
• Local power on/local force power off switch. When the local/remote switch is in local mode, the local power on/local force power off switch can manually power on or force power off a complete system. When the local/remote switch is in remote mode, the HMC is in control of power on/power off.


4.7.5 Unit emergency power off

Each DS8870 frame has an operator panel with three LEDs that show the line power status and the system fault indicator. The LEDs can be seen when the front door of the frame is closed. On the left side of the frame is a unit emergency power off (UEPO) switch (as shown in Figure 4-12). This switch is red and is located inside the front door that protects the frame. It can only be seen when the front door is open. This switch is intended to remove power from the DS8870 only in the following extreme cases:

• The DS8870 has developed a fault that is placing the environment at risk, such as a fire.
• The DS8870 is placing human life at risk, such as the electrocution of a person.

Figure 4-12 DS8870 UEPO switch

Apart from these two contingencies (which are uncommon events), the UEPO switch should never be used. When the UEPO switch is used, the battery protection that allows FHD is bypassed. Normally, if line power is lost, the DS8870 can use its internal batteries to destage the write data from NVS memory to internal disks in CECs so that the data is preserved until power is restored. However, the UEPO switch does not allow this destage process to happen and all NVS data is immediately lost. This event most likely results in data loss.

If the DS8870 needs to be powered off for building maintenance or to relocate it, always use the HMC to shut it down properly.

4.8 Other features

There are many more features of the DS8870 that enhance reliability, availability, and serviceability. Some of these features are described next.

Important: These switches must not be used by DS8870 users. They can be used only under certain circumstances and as part of an action plan that is carried out by an IBM service representative.

Important: In any case in which the use of the UEPO switch is forced, contact IBM support as soon as possible.


4.8.1 Internal network

Each DS8870 base frame contains two Gigabit Ethernet switches to allow the creation of a fully redundant management (private) network. Each CEC in the DS8870 has a connection to each switch. The primary HMC (and the secondary HMC, if installed) has a connection to each switch, which means that if a single Ethernet switch fails, all communication can complete from the HMCs to other components in the storage unit that are using the alternate network.

There also are Ethernet connections for the FSP within each CEC. If two DS8870 storage complexes are connected together, they use ports on the Ethernet switches. For more information about the DS8870 internal network, see 9.1.2 “Private Ethernet networks” on page 244.

4.8.2 Remote support

The DS8870 HMC can be accessed remotely by IBM Support personnel for many service tasks. IBM Support can offload service data, change configuration settings, and initiate recovery actions over a remote connection. You decide which type of connection you want to allow for remote support. The following options are included:

• Modem-only for inbound and outbound connection
• Virtual private network (VPN) for access to the support interface and outbound data
• Modem and VPN
• Assist On-site (AOS)

Remote support is an important item for DS8870 support. As more clients eliminate modems and analog phone lines from their data centers, the usage of a VPN or AOS connection becomes more important. For more information about remote support operations, see Chapter 16, “Remote support” on page 421.

For more information about planning the connections that are needed for HMC installations, see Chapter 9, “DS8870 HMC planning and setup” on page 241.

For more information about AOS, see the IBM Redpaper publication Introduction to Assist On-site for DS8000, REDP-4889.

4.8.3 Earthquake resistance

The Earthquake Resistance Kit is an optional seismic kit for stabilizing the storage unit rack so that the rack complies with IBM earthquake resistance standards. It helps to prevent personal injury and increases the probability that the system will be available following an earthquake by limiting potential damage to critical system components, such as hard disk drives.

A storage unit frame with this optional seismic kit includes cross-braces on the front and rear of the frame that prevent the frame from twisting. Hardware at the bottom of the frame secures it to the floor. Depending on the flooring in your environment (specifically, non-raised floors), installation of required floor mounting hardware might be disruptive. This kit must be special-ordered for the DS8870. For more information, contact your IBM sales representative.

Important: Connections to your network are made at the Ethernet connector at the rear of the machine. No network connection should ever be made to the DS8870 internal Ethernet switches.


4.8.4 Secure data overwrite

Secure data overwrite (SDO) is a process that provides a secure overwrite of all data drives in a DS8870 storage system. Removal of all logical configuration is a required client activity before SDO can be performed. The SDO process is initiated by the IBM service representative and then continues unattended until it is completed, which takes a full day. There are two DDM overwrite options.

DDM overwrite options
Starting with Licensed Machine Code (LMC) 7.7.10.xx, there are two options for SDO. This section describes both options.

Three-pass overwrite
This option performs a cryptoerase on the DDMs, then performs a three-pass overwrite on all DDMs. This overwrite pattern is compliant with the US Department of Defense (DoD) 5220.22-M standard.

Cryptoerase
This option performs a cryptoerase on the DDMs, which re-creates the internal encryption key on the DDMs, rendering the previous information unreadable. It then performs a single-pass overwrite on all DDMs. This option is only available in DS8870 with Licensed Machine Code (LMC) 7.7.10.xx or later. Compared to the three-pass overwrite SDO, this new option shortens the SDO duration.

CEC and HMC
With either option, a three-pass overwrite is performed on the areas of both the CEC and HMC disk drives that contain any client-related information. If there is a secondary HMC associated with the storage system, SDO runs against the secondary HMC after it completes on the primary HMC. The process detects the previous SDO and performs the overwrite only on the secondary HMC hard disks.

SDO process overview
The following list shows the basic steps in the SDO process:

• Client removal of all logical configuration.
• IBM service representative initiates SDO on the HMC.
• SDO performs a dual cluster reboot of the CECs.
• SDO cryptoerases all DDMs in the storage system.
• SDO initiates an overwrite method.
• SDO initiates a three-pass overwrite on the CEC and HMC hard disks.
• When complete, SDO generates a certificate.

Certificate
The certificate can be offloaded via the DS CLI. If this is not possible, the IBM service representative can offload the certificate to removable media.


Chapter 5. Virtualization concepts

This chapter describes virtualization concepts as they apply to the DS8000.

The following topics are covered:

- Virtualization definition
- The abstraction layers for disk virtualization:
  – Array sites
  – Arrays
  – Ranks
  – Extent pools
  – Dynamic extent pool merge
  – Track space-efficient volumes
  – Logical subsystems (LSSs)
  – Volume access
  – Virtualization hierarchy summary
- Benefits of virtualization
- zDAC: z/OS FICON Discovery and Auto-Configuration
- EAV V2: Extended address volumes


5.1 Virtualization definition

In a fast-changing world, to react quickly to changing business conditions, the IT infrastructure must allow for on-demand changes. Virtualization is key to an on-demand infrastructure. However, when vendors talk about virtualization, they often mean different things.

For this chapter, the definition of virtualization is the abstraction process from the physical disk drives to a logical volume that is presented to hosts and servers in a way that they see it as though it were a physical disk.

5.2 The abstraction layers for disk virtualization

Virtualization in the DS8000 refers to the process of preparing physical disk drives (disk drive modules) for storing data that belongs to a volume that is used by a host. The host sees the volume as a disk storage device that belongs to it, although the volume is actually implemented inside the DS8000. In the open system world, this is known as creating logical unit numbers (LUNs). In the System z world, it refers to the creation of 3390 volumes.

The disk drive modules (DDMs) are mounted in disk enclosures and connected in a switched Fibre Channel (FC) topology that uses a Fibre Channel Arbitrated Loop (FC-AL) protocol. The DS8870 disks are either Small Form Factor (SFF) drives, which are mounted in 24-DDM enclosures, or Large Form Factor (LFF) Nearline drives, which are installed in 12-DDM enclosures.

The disk drives can be accessed by a pair of device adapters. Each device adapter has four paths to the disk drives. One device interface from each device adapter is connected to a set of FC-AL devices so that either device adapter has access to any disk drive through two independent switched fabrics (the device adapters and switches are redundant).

Each device adapter has four ports; two of the ports provide access to one storage enclosure. Because device adapters operate in pairs, there are four paths to each disk drive. All four paths can operate concurrently and can access all disk drives on the attached storage enclosures. In normal operation, however, disk drives are typically accessed by one device adapter. Which device adapter owns the disk is defined during the logical configuration process. This definition avoids any contention between the two device adapters for access to the disks.


Two storage enclosures make a storage enclosure pair. All DDMs of one pair are accessed through the eight ports of a device adapter pair. Other storage enclosure pairs can be attached to existing pairs in a daisy chain fashion. Figure 5-1 shows the physical layout on which virtualization is based.

Figure 5-1 Physical layer as the base for virtualization

Because of the switching design, each drive has a direct connection to a device adapter. DDMs in enclosures that are attached to existing enclosures feature an additional hop through the Fibre Channel switch card in the enclosure to which they are attached.

This design is not really a loop but a switched FC-AL loop with the FC-AL addressing schema; that is, arbitrated loop physical address (AL_PA).


5.2.1 Array sites

An array site is a group of eight identical DDMs (same capacity, speed, and disk class). Which DDMs form an array site is predetermined automatically by the DS8000. The DDMs that are selected can be from any location within the disk enclosures. There is no predetermined server affinity for array sites. The DDMs that are selected for an array site are chosen from the two disk enclosures (four from each enclosure) that make one storage enclosure pair. This configuration ensures that half of the DDMs are on each of the two loops. This design is called array across loops, as shown in Figure 5-2. Array sites are the building blocks that are used to define arrays.

Figure 5-2 Array site

5.2.2 Arrays

An array is created from one array site. Forming an array means defining its Redundant Array of Independent Disks (RAID) type. The following RAID types are supported:

- RAID 5
- RAID 6
- RAID 10

For more information, see “RAID 5 implementation in DS8870” on page 87, “RAID 6 implementation in the DS8870” on page 88, and “RAID 10 implementation in DS8870” on page 89.

For each array site, you can select a RAID type. The process of selecting the RAID type for an array is also called defining an array.


Important: RAID configuration information does change occasionally. Consult with your IBM Service Representative for the latest information about supported RAID configurations. For more information about important restrictions about DS8870 RAID configurations, see 4.6.1, “RAID configurations” on page 83.


According to the sparing algorithm of the DS8000 series, zero to two spares can be taken from the array site. For more information, see 4.6.9, “Spare creation” on page 90.

Figure 5-3 shows the creation of a RAID 5 array with one spare, also called a 6+P+S array (it has a capacity of six DDMs for data, capacity of one DDM for parity, and a spare drive). According to the RAID 5 rules, parity is distributed across all seven drives in this example.

On the right side of Figure 5-3, the terms D1, D2, D3, and so on, stand for the set of data that is contained on one disk within a stripe on the array. For example, if 1 GB of data is written, it is distributed across all of the disks of the array.

Figure 5-3 Creation of an array

Depending on the selected RAID level and sparing requirements, there are six different types of arrays possible, as shown in Figure 5-4.

Important: In a DS8000 series implementation, one array is defined as using one array site.


Figure 5-4 DS8000 array types (RAID 5: 6+P+S or 7+P; RAID 6: 5+P+Q+S or 6+P+Q; RAID 10: 3x2+2S or 4x2)

5.2.3 Ranks

In the DS8000 virtualization hierarchy, there is another logical construct called a rank. When a new rank is defined, its name is chosen by the DS Storage Manager; for example, R1, R2, or R3. You must add an array to a rank.

The available space on each rank is divided into extents. The extents are the building blocks of the logical volumes. An extent is striped across all disks of an array, as shown in Figure 5-5, and indicated by the small squares in Figure 5-6 on page 107.

The process of forming a rank accomplishes the following objectives:

- The array is formatted for fixed block (FB) data for open systems or count key data (CKD) for System z data. This formatting determines the size of the set of data that is contained on one disk within a stripe on the array.

- The capacity of the array is subdivided into equal-sized partitions, called extents. The extent size depends on the extent type, FB or CKD.

An FB rank features an extent size of 1 GB (more precisely, GiB, gibibyte, or binary gigabyte, being equal to 2^30 bytes).

IBM System z users or administrators typically do not deal with gigabytes or gibibytes. Instead, storage is defined in terms of the original 3390 volume sizes. A 3390 Model 3 is three times the size of a Model 1. A Model 1 features 1113 cylinders, which is about 0.94 GB. The extent size of a CKD rank is one 3390 Model 1, or 1113 cylinders.


Important: In the DS8000 implementation, a rank is built by using one array.


Figure 5-5 shows an example of an array that is formatted for FB data with 1-GB extents (the squares in the rank indicate that the extent is composed of several blocks from separate DDMs).

Figure 5-5 Forming an FB rank with 1-GB extents

It is still possible to define a CKD volume with a capacity that is an integral multiple of one cylinder or a fixed-block LUN with a capacity that is an integral multiple of 128 logical blocks (64 KB). However, if the defined capacity is not an integral multiple of the capacity of one extent, the unused capacity in the last extent is wasted. For example, you can define a one cylinder CKD volume, but 1113 cylinders (1 extent) are allocated and 1112 cylinders would be wasted.

Encryption group
All drives that are offered in the DS8870 are full disk encryption (FDE) capable to secure critical data. The only exception can be for a DS8800 that was equipped with non-FDE drives, and field-converted to a DS8870 (refer to Chapter 18, “DS8800 to DS8870 model conversion” on page 459).

If you plan to use encryption, you must order the encryption capability authorization feature code and apply the license on DS8870. Also, you must define an encryption group before a rank is created. For more information, see the latest version of the IBM Redpaper publication, IBM DS8870 Disk Encryption, REDP-4500. The DS8000 series supports only one encryption group. All ranks must be in this encryption group. The encryption group is an attribute of a rank. Therefore, your choice is to encrypt everything or nothing. You can turn on encryption later (create an encryption group), but then all ranks must be deleted and re-created, which means your data is also deleted.


5.2.4 Extent pools

An extent pool is a logical construct to aggregate the extents from a set of ranks, which forms a domain for extent allocation to a logical volume. Originally, extent pools were used to separate disks with different revolutions per minute (rpm) and capacity in different pools that have homogeneous characteristics. You still might want to use extent pools for this purpose. However, with the capabilities of Easy Tier moving data across different disk tiering levels to optimize I/O throughput, you can create extent pools with a mix of SSD disks, serial-attached SCSI (SAS) disks, and Nearline disks. Thus, you can allow Easy Tier to optimize the placement of the data within the extent pool.

There is no predefined affinity of ranks or arrays to a storage server. The affinity of the rank (and its associated array) to a server is determined at the point it is assigned to an extent pool.

One or more ranks with the same extent type (FB or CKD) can be assigned to an extent pool. If you want Easy Tier to automatically optimize rank utilization, have more than one rank in an extent pool. One rank can be assigned to only one extent pool. There can be as many extent pools as there are ranks.

There are considerations regarding how many ranks should be added in an extent pool. Storage Pool Striping allows you to create logical volumes striped across multiple ranks. This configuration typically enhances performance. To benefit from Storage Pool Striping (see “Storage Pool Striping: Extent rotation” on page 119), more than one rank in an extent pool is required.

Storage Pool Striping can enhance performance significantly. However, when you lose one rank (in the unlikely event that a whole RAID array fails), not only is the data of this rank lost, but all data in this extent pool is lost because data is striped across all ranks. To avoid data loss, mirror your data to a remote DS8000.

When an extent pool is defined, it must be assigned with the following attributes:

- Server affinity
- Extent type (FB or CKD)
- Encryption group

As with ranks, extent pools belong to an encryption group. When an extent pool is defined, you must specify an encryption group. Encryption group 0 means no encryption, Encryption group 1 means encryption. Currently, the DS8000 series supports only one encryption group and encryption is on for all extent pools or off for all extent pools.

You should have a minimum of two extent pools, with one assigned to server 0 and the other to server 1, so that both servers are active. In an environment where both FB and CKD are to go into the DS8000 series storage system, four extent pools provide one FB pool for each server and one CKD pool for each server to balance the capacity between the two servers. Figure 5-6 shows an example of a mixed environment that features CKD and FB extent pools. Additional extent pools might also be desirable to segregate ranks with different DDM types. Extent pools are expanded by adding more ranks to the pool. All ranks that belong to extent pools with the same server affinity are called a rank group. Ranks are organized in two rank groups: Rank group 0 is controlled by server 0, and rank group 1 is controlled by server 1.

Important: Do not mix ranks with separate RAID types or disk rpm in an extent pool. Do not mix ranks of different classes (or tiers) of storage in the same extent pool, unless you want to enable the Easy Tier Automatic Mode facility.


Figure 5-6 Extent pools

Dynamic extent pool merge
Dynamic extent pool merge is a capability that is provided by the Easy Tier manual mode facility.

Dynamic extent pool merge allows one extent pool to be merged into another extent pool while the logical volumes in both extent pools remain accessible to the host servers. Dynamic extent pool merge can be used for the following reasons:

- For the consolidation of two smaller extent pools with equivalent storage type (FB or CKD) into a larger extent pool. Creating a larger extent pool allows logical volumes to be distributed over a greater number of ranks, which improves overall performance in the presence of skewed workloads. Newly created volumes in the merged extent pool allocate capacity as specified by the selected extent allocation algorithm. Logical volumes that existed in either the source or the target extent pool can be redistributed over the set of ranks in the merged extent pool by using the Migrate Volume function.

- For consolidating extent pools with different storage tiers to create a merged extent pool with a mix of storage technologies (since Easy Tier IV, any combination of SSD, Enterprise, and Nearline disk is supported). Such an extent pool is called a hybrid pool and is a prerequisite for using the Easy Tier automatic mode feature.

Important: For best performance, balance capacity between the two servers and create at least two extent pools, with one per server.


The Easy Tier manual mode volume migration is shown in Figure 5-7.

Figure 5-7 Easy Tier: Migration types

Dynamic extent pool merge is allowed only among extent pools with the same server affinity or rank group. Additionally, the dynamic extent pool merge is not allowed in the following circumstances:

- If source and target pools feature different storage types (FB and CKD)
- If both extent pools contain track space-efficient (TSE) volumes
- If there are TSE volumes on the SSD ranks
- If you selected an extent pool that contains volumes that are being migrated
- If the combined extent pools include 2 PB or more of ESE logical capacity (virtual capacity)

Important: Volume migration (or Dynamic Volume Relocation) within the same extent pool is not supported in hybrid (or multi-tiered) pools. Easy Tier Automatic Mode automatically rebalances the volume extents across the ranks within the hybrid extent pool, based on the activity of the ranks. However, since Easy Tier V, available with DS8870 LMC 7.7.10.xx.xx, you can use Easy Tier Application to manually place volumes in designated tiers. For more information, see IBM DS8870 Easy Tier Application, REDP-5014.


For more information about Easy Tier, see the latest version of the IBM Redpaper publication, IBM DS8000 Easy Tier, REDP-4667.

5.2.5 Logical volumes

A logical volume is composed of a set of extents from one extent pool.

On a DS8000, up to 65,280 volumes can be created (we use the abbreviation 64 K in this discussion, even though the exact number is 65,536 minus 256, which is slightly less than 64 K): either 64-K CKD volumes, or 64-K FB volumes, or a mixture of both types with a maximum of 64-K volumes in total.

Fixed block LUNs
A logical volume that is composed of fixed block extents is called a logical unit number (LUN). A fixed-block LUN is composed of one or more 1 GiB (2^30 bytes) extents from one FB extent pool. A LUN cannot span multiple extent pools, but a LUN can have extents from separate ranks within the same extent pool. You can construct LUNs up to a size of 16 TiB (16 x 2^40 bytes, or 2^44 bytes).

LUNs can be allocated in binary GiB (2^30 bytes), decimal GB (10^9 bytes), or 512 or 520-byte blocks. However, the physical capacity that is allocated for a LUN is always a multiple of 1 GiB. Therefore, it is a good idea to have LUN sizes that are a multiple of a gibibyte. If you define a LUN with a LUN size that is not a multiple of 1 GiB (for example, 25.5 GiB), the LUN size is 25.5 GiB. However, 26 GiB are physically allocated, of which 0.5 GiB of the physical storage remain unusable.
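As an illustrative sketch, a fixed-block LUN can be created with the mkfbvol DS CLI command (the same command that is shown later in Example 5-1). The extent pool P1 and the volume ID 2000 are hypothetical values, and the capacity unit is assumed here to be binary GiB; verify the defaults for your DS CLI level:

dscli> mkfbvol -extpool P1 -name itso_lun1 -cap 25 2000

Requesting a whole-GiB capacity such as 25 GiB consumes exactly 25 extents and avoids the wasted space that is described above for a 25.5 GiB definition.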

Important: There is no Copy Services support for logical volumes larger than 4 TiB (2^42 bytes). Do not create LUNs larger than 4 TiB if you want to use Copy Services for those LUNs, unless you integrate them as managed disks in an IBM SAN Volume Controller with at least Release 6.2 installed. In that case, use SAN Volume Controller Copy Services instead.


The allocation process for FB volumes is illustrated in Figure 5-8.

Figure 5-8 Creation of an FB LUN

An FB LUN must be managed by a logical subsystem (LSS). One LSS can manage up to 256 LUNs. The LSSs are created and managed by the DS8000 as required. A total of 255 LSSs can be created in the DS8000.

IBM i logical unit numbers
IBM i logical unit numbers (LUNs) are also composed of fixed block 1 GiB extents. However, there are special aspects with IBM System i® LUNs. LUNs that are created on a DS8000 are always RAID-protected. LUNs are based on RAID 5, RAID 6, or RAID 10 arrays. However, you might want to deceive IBM i and tell it that the LUN is not RAID-protected. This deception causes the IBM i to conduct its own mirroring. IBM i LUNs can have the attribute unprotected, in which case the DS8000 reports that the LUN is not RAID-protected. This selection of protected or unprotected does not affect the RAID protection that is used by DS8000 on the open volume, though.

IBM i LUNs expose a 520-byte block to the host. The operating system uses eight of these bytes, so the usable space is still 512 bytes like other SCSI LUNs. The capacities that are quoted for the IBM i LUNs are in terms of the 512-byte block capacity and are expressed in GB (10^9). These capacities should be converted to GiB (2^30) when effective utilization of extents that are 1 GiB (2^30) is considered.
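As a worked example of this conversion, consider the 17.5 GB model that is listed in Table 5-1:

   34,275,328 LBAs x 512 bytes = 17,548,967,936 bytes, which is approximately 16.34 GiB

Allocating this volume therefore requires 17 extents (17 GiB), which leaves about 0.66 GiB of the last extent unusable and yields roughly 96.14% usable space, matching the values in Table 5-1.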

Important: Starting with DS8000 LMC 7.7.10.xx.xx, IBM i variable volume (LUN) sizes are supported, in addition to the currently existing fixed volume sizes.

1 GB 1 GB 1 GB 1 GBfree

Extent Pool FBprod

Rank-a

Rank-b

3 GB LUN

1 GBfree

1 GBfree

used used Allocate a 3 GB LUN

1 GB 1 GB 1 GB 1 GBused

Extent Pool FBprod

Rank-a

Rank-b

3 GB LUN

1 GBused

1 GBused

used used

Logical 3 GB LUN

2.9 GB LUNcreated

100 MB unused


The IBM i volumes enhancement adds flexibility for volume sizes and can optimize DS8000 capacity utilization for IBM i environments. For instance, Table 5-1 shows the fixed volume sizes, which have been supported on the DS8000 for a while, and the corresponding space that is wasted because the fixed volume sizes do not match an exact number of GiB extents.

Table 5-1   IBM i fixed volume sizes

  Model type                 IBM i device   Number of logical         Extents   Unusable        Usable
  Unprotected   Protected    size (GB)      block addresses (LBAs)              space (GiB) 1   space %
  2107-A81      2107-A01     8.5            16,777,216                8         0.00            100.00
  2107-A82      2107-A02     17.5           34,275,328                17        0.66            96.14
  2107-A85      2107-A05     35.1           68,681,728                33        0.25            99.24
  2107-A84      2107-A04     70.5           137,822,208               66        0.28            99.57
  2107-A86      2107-A06     141.1          275,644,416               132       0.56            99.57
  2107-A87      2107-A07     282.2          551,288,832               263       0.13            99.95

  1. GiB represents “binary gigabytes” (2^30 bytes), and GB represents “decimal gigabytes” (10^9 bytes).

DS8000 LMC 7.7.10.xx.xx introduces two new IBM i volume data types to support the variable volume sizes: A50, an unprotected variable size volume; and A99, a protected variable size volume. See Table 5-2.

Table 5-2   System i variable volume sizes

  Model type                 IBM i device   Unusable        Usable
  Unprotected   Protected    size (GB)      space (GiB)     space %
  2107-050      2107-099     Variable       0.00            Variable

Example 5-1 demonstrates the creation of both a protected and an unprotected IBM i variable size volume using DS CLI.

Example 5-1   Creating System i variable size unprotected and protected volumes

dscli> mkfbvol -os400 050 -extpool P4 -name itso_iVarUnProt1 -cap 10 5413
CMUC00025I mkfbvol: FB volume 5413 successfully created.

dscli> mkfbvol -os400 099 -extpool P4 -name itso_iVarProt1 -cap 10 5417
CMUC00025I mkfbvol: FB volume 5417 successfully created.

When planning for new capacity for an existing IBM i server, keep in mind that the larger the LUN, the more data it might have, causing more input/output operations per second (IOPS) to be driven to it. Therefore, mixing different disk sizes within the same system might lead to hot spots.

Attention: The creation of IBM i variable size volumes is only supported using DS CLI commands. Currently, there is no support for this task on GUI.


For more information and recommendations about IBM i LUN sizing, refer to Chapter 11, “IBM i considerations” in IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

T10 data integrity field support
The ANSI T10 standard provides a way to check the integrity of data that is read and written from the application or the host bus adapter (HBA) to the disk and back through the SAN fabric. This check is implemented through the data integrity field (DIF) that is defined in the T10 standard. This support adds protection information that consists of cyclic redundancy check (CRC), logical block address (LBA), and host application tags to each sector of FB data on a logical volume.

A T10 DIF-capable LUN uses 520-byte sectors instead of the common 512-byte sector size. To the standard 512-byte data field, 8 bytes are added. The 8-byte DIF consists of 2-bytes CRC data, a 4-byte Reference Tag (to protect against misdirected writes), and a 2-byte Application Tag for applications that might use it.

On a write, the DIF is generated by the HBA, which is based on the block data and logical block address. The DIF field is added to the end of the data block, and the data is sent through the fabric to the storage target. The storage system validates the CRC and Reference Tag and, if correct, stores the data block and DIF on the physical media. If the CRC does not match the data, the data was corrupted during the write. The write operation is returned to the host with a write error code. The host records the error and retransmits the data to the target. In this way, data corruption is detected immediately on a write and is never committed to the physical media.

On a read, the DIF is returned with the data block to the host, which validates the CRC and Reference Tags. This validation adds a small amount of latency per I/O, but might impact overall response time on smaller block transactions (less than 4 KB I/Os).

The DS8870 supports the T10 DIF standard for FB volumes that are accessed by the Fibre Channel Protocol (FCP) channel of Linux on System z. You can define LUNs with an option to instruct the DS8870 to use the CRC-16 T10 DIF algorithm to store the data.

You can also create T10 DIF-capable LUNs for operating systems that do not yet support this feature (except for IBM System i), but active protection is available only for Linux on System z.

A T10 DIF-capable volume must be defined by using the data storage command-line interface (DS CLI) because the graphical user interface (GUI) in the current release does not yet support this function. When an FB LUN is created with the mkfbvol DS CLI command, add the -t10dif option. If you query a LUN with the showfbvol command, the data type is shown as FB 512T instead of the standard FB 512 type.
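The following hedged sketch creates a T10 DIF-capable LUN and then queries it. Only the mkfbvol -t10dif option and the showfbvol command are named in this section; the extent pool P1, the volume ID 2100, and the capacity are hypothetical values:

dscli> mkfbvol -extpool P1 -name itso_dif1 -cap 20 -t10dif 2100
dscli> showfbvol 2100

In the showfbvol output, the data type of such a volume is reported as FB 512T rather than FB 512.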

Note: IBM i fixed volume sizes will continue to be supported in current and future DS8870 code levels. Consider the best option for your environment between fixed and variable size volumes.

Important: Because the DS8000 internally always uses 520-byte sectors (to be able to support IBM i volumes), there are no capacity considerations when standard or T10 DIF capable volumes are used.


Count key data volumes
A System z count key data (CKD) volume is composed of one or more extents from one CKD extent pool. CKD extents are of the size of 3390 Model 1, which features 1113 cylinders. However, when you define a System z CKD volume, you do not specify the number of 3390 Model 1 extents but the number of cylinders you want for the volume.

Before a CKD volume can be created, a logical control unit (LCU) must be defined that provides up to 256 possible addresses that can be used for CKD volumes. Up to 255 LCUs can be defined. For more information about LCUs, which also are called logical subsystems (LSSs), see 5.2.8, “Logical subsystem” on page 123.

On a DS8870, and on previous models starting with DS8000 microcode Release 6.1, you can define CKD volumes with up to 1,182,006 cylinders, which is about 1 TB. For Copy Services operations, the size is still limited to 262,668 cylinders (approximately 223 GB). This volume capacity is called extended address volume (EAV) and is supported by the 3390 Model A.
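A hedged sketch of defining a CKD volume by its cylinder count follows. The mkckdvol command name and its parameters are assumed here by analogy with the mkfbvol command that is shown in Example 5-1; the extent pool P2, the volume ID 1000, and the presence of a suitable LCU are hypothetical, so verify the exact syntax for your DS CLI level:

dscli> mkckdvol -extpool P2 -name itso_ckd1 -cap 3339 1000

A request for 3339 cylinders corresponds to exactly three 3390 Model 1 extents (3 x 1113 cylinders), so no capacity in the last extent is wasted.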

A CKD volume cannot span multiple extent pools, but a volume can have extents from different ranks in the same extent pool. You also can stripe a volume across the ranks (see “Storage Pool Striping: Extent rotation” on page 119). Figure 5-9 shows an example of how a logical volume is allocated with a CKD volume.

Figure 5-9 Allocation of a CKD logical volume

CKD alias volumes
There is another type of CKD volume: the parallel access volume (PAV) alias volume. Alias volumes are used by z/OS to send parallel I/Os to the same base CKD volume. Within an LCU, you can define alias volumes and base volumes. Alias volumes do not occupy storage capacity.

Target LUN: When FlashCopy for a T10 DIF LUN is used, the target LUN must also be a T10 DIF type LUN. This restriction does not apply to mirroring.


Although alias volumes have no size, each alias volume needs an address, which is tied to a base volume and is included in the maximum total of 256 addresses for an LCU.

5.2.6 Space-efficient volumes

When a standard FB LUN or CKD volume is created on the physical drive, it occupies as many extents as necessary for the defined capacity.

For the DS8870, the following types of space-efficient volumes can be defined:

- Track space-efficient (TSE) volumes
- Extent space-efficient (ESE) volumes

TSE volumes
Both FB and CKD volumes can be set up as TSE volumes. TSE volumes are possible only when the FlashCopy feature is installed. To implement FlashCopy, there must be a ‘target’ volume that is used to contain a ‘copy’ of the ‘source’ volume. There are two ways to implement FlashCopy.

The first is the ‘standard’ (FlashCopy) implementation. It requires that a standard volume be used as the ‘target’ volume, and this volume must be at least as large as the ‘source’ volume. A complete copy of the ‘source’ can be written to the ‘target’ because there is enough physical space in the ‘target’ for all the data in the ‘source’. This implementation does not use a TSE volume.

The second is the space efficient (FlashCopy SE) implementation. It requires that a TSE volume be used as the ‘target’ volume. This volume must belong to a ‘repository’ and uses the space in the ‘repository’ to store data written to it. The assumption is that you do not need to copy all the data from the ‘source’ volume to the ‘target’ volume. In fact, the only data that is written to the ‘target’ volume is a copy of the old data on the ‘source’ volume that has been changed.

A TSE repository is required in order to create TSE volumes. It is defined in an extent pool to provide the physical capacity to store data for all of the TSE volumes defined in the extent pool. This space will be used for storing data written to all the TSE volumes in the extent pool. There can only be one TSE repository defined in an extent pool.

The details about TSE volumes and implementing FlashCopy SE are available in IBM System Storage DS8000 Series: IBM FlashCopy SE - REDP-4368.

ESE volumes
Only FB volumes can be set up as ESE volumes. ESE volumes are possible only when the thin provisioning feature is installed.

ESE volumes allow a volume to be defined that is larger than the physical space available in the extent pool to which it belongs. This allows the host to work with the volume at its defined capacity, even though there might not be enough physical space to fill the volume with data. The assumption is that either the volume will never be filled, or that if the DS8870 is running out of physical capacity, more will be added. Of course, this assumes that the DS8870 is not at its maximum capacity.

An ESE volume can exist with or without an ESE repository. It is preferred that one is created to protect space in the extent pool for storing data in the ESE volumes that are created in the extent pool. There can only be one ESE repository defined in an extent pool.


The details about ESE volumes and thin provisioning are provided in DS8000 Thin Provisioning, REDP-4554.

Both thin provisioning and FlashCopy features require a payable license.

Repository for TSE volumes
For a TSE volume to exist, a TSE repository must be created in the extent pool in which the TSE volume will be created. This repository is used to store all data that is written to the TSE volumes in the extent pool. After TSE volumes are created, the repository cannot be deleted; all TSE volumes must be deleted first, and then the repository can be deleted. The size of the repository cannot be changed; it must be deleted and then re-created with the new size.

The TSE repository can be created using either the DS GUI or DS CLI commands. An extent pool can only have one TSE repository created in it. The extent pool can be formatted for either FB or CKD volumes. The amount of available virtual space in the extent pool to create TSE volumes is dependent on a number of factors. These include the physical space in the extent pool, size of the repository, and other standard or ESE volumes already in the extent pool.

The requested size of the repository will be the actual size of the repository in the extent pool. If the capacity of the extent pool increases, the size of the repository will not change.
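The following is a minimal, hedged sketch of creating a TSE repository and a TSE volume with the DS CLI. The mksestg command is named later in this chapter for ESE repositories; its use here without -reptype ese, the -repcap and -vircap parameters, the -sam tse option of mkfbvol, and the pool and volume IDs are all assumptions to be verified against your DS CLI level:

dscli> mksestg -extpool P4 -repcap 100 -vircap 500
dscli> mkfbvol -extpool P4 -name itso_tse1 -cap 50 -sam tse 5500

In this sketch, the repository reserves 100 GiB of physical capacity and 500 GiB of virtual capacity in pool P4, and the 50 GiB TSE volume consumes physical space only as tracks are written to it.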

See the details about implementing TSE repositories in IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368.

Repository for ESE volumes
ESE volumes do not require a repository in the extent pool to which they belong. With Release 7.2, there is an option to create an ESE repository when creating ESE volumes. By creating an ESE repository, you specify both a minimum capacity reserved for ESE volumes and a maximum capacity allowed for ESE volumes. By default, with no ESE repository, the entire pool can be used.

The ESE repository can be created either before or after the ESE volumes are created. It is really a ‘pseudo’ repository, which means that it operates differently than a TSE repository. It prevents the ‘over provisioned ratio’ (opratio) for the ESE volumes from changing as standard volumes are added to an extent pool, a change that can cause the creation of volumes to fail.

The ESE repository can only be created using the DS CLI command mksestg, and including the parameter -reptype ese. The amount of available virtual space in the extent pool to create ESE volumes is dependent on a number of factors. These include the physical space in the extent pool, size of the repository, the size of the TSE repository (if it exists), and other standard or TSE volumes already in the extent pool.

The size of the repository can be modified at any time, whether or not there are ESE volumes that are created. It can also be removed at any time. The rmsestg command is used to remove an ESE repository.
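A hedged sketch of managing an ESE repository with these commands follows. Only the mksestg -reptype ese option and the chsestg and rmsestg command names appear in the text; the -extpool and -repcap parameters and the pool ID P4 are assumptions:

dscli> mksestg -reptype ese -extpool P4 -repcap 500
dscli> chsestg -extpool P4 -repcap 600
dscli> rmsestg -extpool P4

The first command creates the ESE repository in pool P4, the second changes its size, and the third removes it, matching the behavior that is described in the surrounding paragraphs.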

Important: The TSE repository cannot be created on Serial Advanced Technology Attachment (SATA) drives.

Although the size of the repository is specified as a GiB value, it is adjusted upward to a value that is equal to a whole number percentage of the current extent pool physical capacity. For example, if the current extent pool capacity is 1542 GiB and a repository of 500 GiB is created by using the mksestg command, the actual repository capacity will be 508.9 GiB, which is 33 percent of 1542. The capacity of the repository and the total capacity of the defined ESE volumes define the ‘opratio’. In our scenario, a total ESE volume capacity of 5000 GiB would mean an ‘opratio’ of 9.8. See the top row of Table 5-3 for a summary of this scenario.

If the extent pool capacity is increased, the size of the repository will automatically be increased to maintain the same percentage capacity for the repository, which will reduce the ‘opratio’ to reflect the change. See the middle row of Table 5-3 to see a summary of this scenario. Note the change in the repository capacity and the opratio.

The chsestg command can be used to modify the size of the repository at any time. It can only be increased to use ‘free’ physical capacity in the extent pool. Free capacity means space not used for standard volumes and any repositories that exist. In this example, we entered the chsestg command to return the repository to 500 GiB. See the bottom row of Table 5-3 for a summary of this change.

Table 5-3   ESE Repository sizing example

  Extent pool   mksestg        chsestg           Repository           Virtual    Virtual capacity   opratio
  capacity      command size   repository size   capacity / percent   capacity   allocated
  1542          500            X                 508.9 / 33           5000       5000               9.8
  3664 (a)      X              X                 1209.1 / 33          5000       5000               4.1
  3664          X              500               513.0 / 33           5000       5000               9.7

  a. This was a dynamic change.

See the details about implementing ESE repositories in DS8000 Thin Provisioning, REDP-4554.

Space allocation
Space for a space efficient volume is allocated when a write occurs. More precisely, it is allocated when a destage from the cache occurs and there is not enough free space left on the currently allocated extent or track. The TSE allocation unit is a track (64 KB for open systems’ LUNs or 57 KB for CKD volumes).

Because space is allocated in extents or tracks, the system must maintain tables that indicate their mapping to the logical volumes, so the performance of the space efficient volumes is impacted. The smaller the allocation unit, the larger the tables and the greater the impact.

Virtual space is created as part of the extent pool definition. This virtual space is mapped onto ESE volumes in the extent pool (physical space) and TSE volumes in the repository (physical space) as needed. Virtual space equals the total space of the required ESE volumes and the TSE volumes for FlashCopy SE. No actual storage is allocated until write activity occurs to the ESE or TSE volumes.



The concept of TSE volumes is shown in Figure 5-10.

Figure 5-10 Concept of track space-efficient (TSE) volumes for FlashCopy SE

The lifetime of data on TSE volumes is expected to be short because they are used only as FlashCopy targets. Physical storage is allocated when data is written to TSE volumes. We need a mechanism to free up physical space in the repository when the data is no longer needed.

The FlashCopy commands include options to release the space of TSE volumes when the FlashCopy relationship is established or removed.

The initfbvol and initckdvol CLI commands also can release the space for space efficient volumes (ESE and TSE).
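A hedged example of releasing the space of a space efficient volume follows. Only the initfbvol and initckdvol command names appear in this section; the -action releasespace parameter and the volume ID 5500 are assumptions:

dscli> initfbvol -action releasespace 5500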


The concept of ESE logical volumes is shown in Figure 5-11.

Figure 5-11 Concept of ESE logical volumes without a repository

Use of extent space-efficient volumes
Like standard volumes (which are fully provisioned), extent space-efficient (ESE) volumes can be mapped to hosts. They are also supported in combination with Copy Services functions. Copy Services between space efficient and regular volumes are also supported.

Use of track space-efficient volumes
Track space-efficient (TSE) volumes are supported only as FlashCopy target volumes.

Space reclamation
When using ESE volumes, there can come a point where extents are still being used by the volume, even though the host has already deleted the files for which these extents were being used. This wasted space can be cleaned up using a process called space reclamation.

Because ESE volumes support thin provisioning, the space reclamation is also known as thin reclamation. While this reclamation is supported within the DS8870, it requires host operations in order for it to be performed. This means new SCSI commands are required to support this process.

There is a product suite called Veritas Storage Foundation by Symantec, which now includes support for thin reclamation in the DS8870.

Important: ESE volumes are also supported by the IBM System Storage Easy Tier function.


5.2.7 Allocation, deletion, and modification of LUNs and CKD volumes

All extents of the ranks that are assigned to an extent pool are independently available for allocation to logical volumes. The extents for a LUN or volume are logically ordered, but they do not have to come from one rank. The extents do not have to be contiguous on a rank.

This construction method of using fixed extents to form a logical volume in the DS8000 series allows flexibility in the management of the logical volumes. You can delete LUNs or CKD volumes, resize LUNs or volumes, and reuse the extents of those LUNs to create other LUNs or volumes, maybe of different sizes. One logical volume can be removed without affecting the other logical volumes that are defined on the same extent pool.

Because the extents are cleaned after you delete a LUN or CKD volume, it can take some time until these extents are available for reallocation. The reformatting of the extents is a background process.

There are two extent allocation methods (EAMs) for the DS8000: Rotate volumes and Storage Pool Striping (Rotate extents).

Storage Pool Striping: Extent rotation
The preferred storage allocation method is Storage Pool Striping. Storage Pool Striping is an option when a LUN or volume is created. The extents of a volume can be striped across several ranks. An extent pool with more than one rank is needed to use this storage allocation method.

The DS8000 maintains a sequence of ranks. The first rank in the list is randomly picked at each power-on of the storage system. The DS8000 tracks the rank in which the last allocation started. The allocation of the first extent for the next volume starts from the next rank in that sequence. The next extent for that volume is taken from the next rank in sequence, and so on. Thus, the system rotates the extents across the ranks, as shown in Figure 5-12.

Figure 5-12 Rotate extents

Rotate volumes allocation method
Extents can be allocated sequentially. In this case, all extents are taken from the same rank until there are enough extents for the requested volume size or the rank is full. If the rank is full, the allocation continues with the next rank in the extent pool.


If more than one volume is created in one operation, the allocation for each volume starts in another rank. When several volumes are allocated, rotate through the ranks, as shown in Figure 5-13.

Figure 5-13 Rotate volumes

You might want to consider this allocation method when you prefer to manage performance manually. The workload of one volume is going to one rank. This configuration makes the identification of performance bottlenecks easier. However, by putting all the volumes’ data onto one rank, you might introduce a bottleneck, depending on your actual workload.
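As a hedged illustration of choosing between the two methods at volume creation time, the -eam parameter and its rotateexts and rotatevols values are assumptions based on common DS CLI usage and are not named in this chapter; the pool and volume IDs are hypothetical:

dscli> mkfbvol -extpool P1 -name itso_striped -cap 100 -eam rotateexts 2300
dscli> mkfbvol -extpool P1 -name itso_sequential -cap 100 -eam rotatevols 2301

The first volume has its extents rotated across all ranks in pool P1; the second allocates its extents from one rank at a time.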

In a mixed disk characteristics (or hybrid) extent pool that contains different classes (or tiers) of ranks, the Storage Pool Striping EAM is used independently of the requested EAM, and EAM is set to managed.

For extent pools that contain SSD disks, extent allocation is done initially on hard disk drive (HDD) ranks (Enterprise or Near Line) while space remains available. Easy Tier algorithms migrate the extents as needed to SSD ranks. For extent pools that contain a mix of Enterprise and Nearline ranks, initial extent allocation is done on Enterprise ranks first.

When you create striped volumes and non-striped volumes in an extent pool, a rank could be filled before the others. A full rank is skipped when you create new striped volumes.

By using striped volumes, you distribute the I/O load of a LUN or CKD volume to more than one set of eight disk drives.

Important: Rotate extents and rotate volume EAMs provide distribution of volumes over ranks. Rotate extents perform this distribution at a granular (1-GiB extent) level, which is the preferred method to minimize hot spots and improve overall performance.

Important: If you must add capacity to an extent pool because it is nearly full, it is better to add several ranks at the same time, not just one. This method allows new volumes to be striped across the newly added ranks.

With the Easy Tier manual mode facility, if the extent pool is a non-hybrid pool, the user can request an extent pool merge followed by a volume relocation with striping to perform the same function. In the case of a hybrid managed extent pool, extents are automatically relocated over time, according to performance needs. For more information, see the IBM Redpaper publication, IBM DS8000 Easy Tier, REDP-4667.

Rotate volume EAM: The rotate volume EAM is not allowed if one extent pool is composed of SSD disks and has a space efficient repository or virtual capacity configured.


The ability to distribute a workload to many physical drives can greatly enhance performance for a logical volume. In particular, operating systems that do not include a volume manager that can do striping benefit most from this allocation method.

Double striping issue
It is possible to use striping methods on the host; for example, AIX Logical Volume Manager (LVM) or VDisk striping on SAN Volume Controller.

In such configurations, the striping methods can work against each other, eliminating any performance advantage or even leading to performance bottlenecks.

Figure 5-14 shows an example of double striping. The DS8000 provides three volumes to a SAN Volume Controller. The volumes are striped across three ranks. The SAN Volume Controller uses the volumes as managed disks (MDisks). When a striped VDisk is created, extents are taken from each MDisk. The extents are now taken from each of the DS8000 volumes, but in a worst case scenario, all of these extents are on the same rank, which could make this rank a hotspot.

Figure 5-14 Example for double striping issue

However, throughput also can benefit from double striping. If you plan to double stripe, the stripe size at the host level should either clearly differ from the DS8000 extent size or be identical to it. For example, you could use wide physical partition striping in AIX with a stripe size in the MB range. Another example is a SAN Volume Controller with a stripe size of 1 GB, which equals the DS8000 extent size. The latter might be useful if you want to use Easy Tiering within the DS8000 and the SAN Volume Controller.

Important: If you have extent pools with many ranks and all volumes are striped across the ranks and one rank becomes inaccessible, you lose access to most of the data in that extent pool.


For more information about how to configure extent pools and volumes for optimal performance, see 7.5, “Performance considerations for logical configuration” on page 182.

Dynamic volume expansion
The size of a LUN or CKD volume can be expanded without destroying the data. On the DS8000, you add extents to the volume. The operating system must support this resizing.

A logical volume includes the attribute of being striped across the ranks or not. If the volume was created as striped across the ranks of the extent pool, the extents that are used to increase the size of the volume are striped. If a volume was created without striping, the system tries to allocate the additional extents within the same rank that the volume was created from originally.

Because most operating systems have no means of moving data from the end of the physical disk off to unused space at the beginning of the disk, and because of the risk of data corruption, IBM does not support shrinking a volume. The DS8000 configuration interfaces, DS CLI and DS GUI, do not allow you to change a volume to a smaller size.
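A hedged sketch of expanding a LUN follows. The chfbvol command and its -cap parameter are assumptions based on typical DS CLI usage and are not named in this section; the volume ID and new capacity are hypothetical:

dscli> chfbvol -cap 200 2000

Remember that any Copy Services relationship that involves the volume must be deleted before the expansion (as noted later in this section), and the file system or volume manager on the host must still be grown to use the new capacity.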

Dynamic volume migration
Dynamic volume migration or dynamic volume relocation (DVR) is a capability that is provided as part of the Easy Tier manual mode facility.

DVR allows data that is stored on a logical volume to be migrated from its currently allocated storage to newly allocated storage while the logical volume remains accessible to attached hosts. The user can request DVR by using the Migrate Volume function that is available through the DS8000 Storage Manager GUI or DS CLI. DVR allows the user to specify a target extent pool and an EAM. The target extent pool can be a different extent pool from the one where the volume is located, or the same extent pool, but only if it is a non-hybrid (or single-tier) pool. However, the target extent pool must be managed by the same DS8000 internal server.

Dynamic volume migration provides the following capabilities:

- The ability to change the extent pool in which a logical volume is provisioned. This ability provides a mechanism to change the underlying storage characteristics of the logical volume to include the disk class (SSD, Enterprise disk, or near-line disk), disk rpm, and RAID array type. Volume migration also can be used to migrate a logical volume into or out of an extent pool.

- The ability to specify the extent allocation method for a volume migration, which allows the extent allocation method to be changed between the available extent allocation methods any time after volume creation. Volume migration that specifies the rotate extents EAM can also be used (in non-hybrid extent pools) to redistribute a logical volume's extent allocations across the currently existing ranks in the extent pool if more ranks are added to an extent pool.

Important: Before you can expand a volume, you must delete any Copy Services relationship that involves that volume.

Important: DVR in the same extent pool is not allowed in the case of a managed pool. In managed extent pools, Easy Tier automatic mode automatically relocates extents within the ranks to allow performance rebalancing. DS8870 LMC 7.7.10.xx.xx implemented the fifth generation of Easy Tier, in which you can use Easy Tier Application to manually place volumes in designated tiers within a managed pool. For more information, see IBM System Storage DS8000 Easy Tier Application, REDP-5014.


Each logical volume has a configuration state. To begin a volume migration, the logical volume initially must be in the normal configuration state.

There are more functions that are associated with volume migration that allow the user to pause, resume, or cancel a volume migration. Any or all logical volumes can be requested to be migrated at any time if there is sufficient capacity available to support the reallocation of the migrating logical volumes in their specified target extent pool.
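A heavily hedged sketch of these operations follows. The managefbvol command and its -action values (migstart, migpause, migresume, and migcancel), the -extpool parameter, and the IDs are assumptions that are not confirmed by this chapter; the Migrate Volume function in the GUI or DS CLI is the documented route:

dscli> managefbvol -action migstart -extpool P2 2000
dscli> managefbvol -action migpause 2000
dscli> managefbvol -action migresume 2000
dscli> managefbvol -action migcancel 2000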

For more information, see the IBM Redpaper publication IBM DS8000 Easy Tier, REDP-4667.

5.2.8 Logical subsystem

A logical subsystem (LSS) is another logical construct. It can also be referred to as a logical control unit (LCU). In reality, it is microcode that is used to manage up to 256 logical volumes. The term LSS is usually used in association with FB volumes, while the term LCU is used in association with CKD volumes. There are a maximum of 255 LSSs that can exist in the DS8870. They each have an identifier from 00 - FE. An individual LSS must manage either FB or CKD volumes.

All even-numbered LSSs (X’00’, X’02’, X’04’, up to X’FE’) are handled by server 0 and all odd-numbered LSSs (X’01’, X’03’, X’05’, up to X’FD’) are handled by server 1. LSS X’FF’ is reserved. This allows both servers to handle host commands to the volumes in the DS8870 as long as the configuration takes advantage of this. If either server is not available, the remaining operational server will handle all LSSs.

LSSs are also placed in address groups of 16 LSSs, except for the last group that has 15. The first address group is 00 - 0F, and so on, until the last group, which is F0 - FE.

Since LSSs manage volumes, an individual LSS has to manage the same type of volumes. As well, an address group must also manage the same type of volumes. The first volume (either FB or CKD) assigned to an LSS in any address group will set that group to manage those types of volumes. There are more details about address groups in “Address groups” on page 125.

Volumes are created in extent pools that are associated with either server 0 or 1. Extent pools are also formatted to support either FB or CKD volumes. So volumes in any server 0 extent pools can be managed by any even-numbered LSS, as long as the LSS and extent pool match the volume type. Volumes in any server 1 extent pools can be managed by any odd-numbered LSS, as long as the LSS and extent pool match the volume type.

Volumes also have an identifier that ranges from 00 - FF. The first volume assigned to an LSS will have an identifier of 00. The second volume will be 01 and so on up to FF, if there are 256 volumes assigned to the LSS.

For FB volumes, the LSSs used to manage them are not significant as long as you have the volumes spread between odd and even LSSs. When the volume is assigned to a host (in the DS8870 configuration), there will be a LUN (logical unit number) assigned to it, which will include the LSS and Volume ID. This LUN is sent to the host when it first communicates with the DS8870, so it can include the LUN in the ‘frame’ sent to the DS8870 when it wants to perform an I/O operation on the volume. This is how the DS8870 knows which volume to perform the operation on.

Alternatively, for CKD volumes, the LCU is very significant. The host has to have the LCU defined in its configuration called the input/output configuration data set (IOCDS). The LCU definition includes a control unit address (CUADD). This CUADD must match the LCU ID in the DS8870. Also included in the IOCDS is a device definition for each volume, which would


have a unit address (UA) included. This UA needs to match the volume ID of the device. The host must include the CUADD and UA in the ‘frame’ sent to the DS8870 when it wants to perform an I/O operation on the volume. This is how the DS8870 knows which volume to perform the operation on.

For both FB and CKD volumes, when the ‘frame’ sent from the host arrives at a host adapter port in the DS8870, the adapter checks the LSS or LCU identifier to know which server to pass the request to inside the DS8870. See 5.2.9, “Volume access” on page 126 for more details about host access to volumes.

Fixed block LSSs are created automatically when the first fixed block logical volume on the LSS is created. Fixed block LSSs are deleted automatically when the last fixed block logical volume on the LSS is deleted. CKD LCUs require user parameters to be specified and must be created before the first CKD logical volume can be created on the LCU. They must be deleted manually after the last CKD logical volume on the LCU is deleted.

Certain management actions in Metro Mirror, Global Mirror, or Global Copy operate at the LSS level. For example, the freezing of pairs to preserve data consistency across all pairs, in case you have a problem with one of the pairs, is done at the LSS level. The option to put all or most of the volumes of a certain application in one LSS makes the management of remote copy operations easier, as shown in Figure 5-15.

Figure 5-15 Grouping of volumes in LSSs

Address groups
Address groups are created automatically when the first LSS that is associated with the address group is created. The groups are deleted automatically when the last LSS in the address group is deleted.

All devices in an LSS must be CKD or FB. This restriction goes even further. LSSs are grouped into address groups of 16 LSSs. LSSs are numbered X'ab', where a is the address group and b denotes an LSS within the address group. For example, X'10' to X'1F' are LSSs in address group 1.

All LSSs within one address group must be of the same type, CKD or FB. The first LSS that is defined in an address group sets the type of that address group.

Figure 5-16 shows the concept of LSSs and address groups.

Figure 5-16 Logical storage subsystems

The LUN identifications X'gabb' are composed of the address group X'g', and the LSS number within the address group X'a', and the ID of the LUN within the LSS X'bb'. For example, FB LUN X'2101' denotes the second (X'01') LUN in LSS X'21' of address group 2.

Important: System z users who still want to use IBM ESCON® to attach hosts to the DS8000 series should be aware that ESCON supports only the 16 LSSs of address group 0 (LSS X'00' to X'0F'). Therefore, this address group should be reserved for ESCON attached CKD devices in this case and not used as FB LSSs. The DS8870 does not support ESCON channels. ESCON devices can be attached only by using FICON/ESCON converters.

An extent pool can have volumes that are managed by multiple address groups. The example in Figure 5-16 on page 125 just shows one address group being used with each extent pool.

5.2.9 Volume access

A DS8000 provides mechanisms to control host access to LUNs. In most cases, a server features two or more host bus adapters (HBAs) and the server needs access to a group of LUNs. For easy management of server access to logical volumes, the DS8000 introduced the concept of host attachments and volume groups.

Host attachment
HBAs are identified to the DS8000 in a host attachment construct that specifies the worldwide port names (WWPNs) of a host’s HBAs. A set of host ports can be associated through a port group attribute that allows a set of HBAs to be managed collectively. This port group is referred to as a host attachment within the configuration.

Each host attachment can be associated with a volume group to define which LUNs that host is allowed to access. Multiple host attachments can share the volume group. The host attachment can also specify a port mask that controls which DS8000 I/O ports the host HBA is allowed to log in to. Whichever ports the HBA logs in on, it sees the same volume group that is defined on the host attachment that is associated with this HBA.

The maximum number of host attachments on a DS8000 is 8192. This host definition is only required for open systems hosts. Any System z server can access any volume in a DS8870 as long as its IOCDS is correct.
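A hedged DS CLI sketch of a host attachment definition for a two-HBA open systems host follows. The WWPNs, host type, and volume group ID are placeholders, and the volume group (V11 here) is assumed to exist already (see the volume group sketch later in this section).

# One host connection per HBA WWPN; both HBAs share volume group V11
mkhostconnect -wwname 10000000C9A1B201 -hosttype pSeries -volgrp V11 AIXprod1_fcs0
mkhostconnect -wwname 10000000C9A1B202 -hosttype pSeries -volgrp V11 AIXprod1_fcs1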

Volume group
A volume group is a named construct that defines a set of logical volumes. This is only required for FB volumes. When used with CKD hosts, there is a default volume group that contains all CKD volumes. Any CKD host that logs in to a FICON I/O port has access to the volumes in this volume group. CKD logical volumes are automatically added to this volume group when they are created and are automatically removed from this volume group when they are deleted.

When used with open systems hosts, a host attachment object that identifies the HBA is linked to a specific volume group. You must define the volume group by indicating which FB volumes are to be placed in the volume group. Logical volumes can be added to or removed from any volume group dynamically.

There are two types of volume groups that are used with open systems hosts and the type determines how the logical volume number is converted to a host addressable LUN_ID on the Fibre Channel SCSI interface. A map volume group type is used with FC SCSI host types that poll for LUNs by walking the address range on the SCSI interface. This type of volume group can map any FB logical volume numbers to 256 LUN IDs that have zeros in the last 6 bytes and the first 2 bytes in the range of X'0000' to X'00FF'.

A mask volume group type is used with FC SCSI host types that use the Report LUNs command to determine the LUN IDs that are accessible. This type of volume group can allow any FB logical volume numbers to be accessed by the host where the mask is a bitmap that specifies which LUNs are accessible. For this volume group type, the logical volume number X'abcd' is mapped to LUN_ID X'40ab40cd00000000'. The volume group type also controls whether 512-byte block LUNs or 520-byte block LUNs can be configured in the volume group.
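The volume group type is selected when the group is created. The following DS CLI sketch is hedged: the group names, volume IDs, and the returned group ID (V11) are placeholders.

# Mask-type volume group for hosts that use the Report LUNs command
mkvolgrp -type scsimask -volume 1000-1003 AIXprod1_vg
# Map-type volume group for hosts that poll the SCSI address range
mkvolgrp -type scsimap256 -volume 1100-1103 legacy_vg
# Volumes can be added to or removed from a volume group dynamically
chvolgrp -action add -volume 1004 V11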

When a host attachment is associated with a volume group, the host attachment contains attributes that define the logical block size and the Address Discovery Method (LUN Polling or Report LUNs) that is used by the host HBA. These attributes must be consistent with the volume group type of the volume group that is assigned to the host attachment. This consistency ensures that HBAs that share a volume group have a consistent interpretation of the volume group definition and have access to a consistent set of logical volume types. The GUI typically sets these values appropriately for the HBA based on your specification of a host type. You must consider what volume group type to create when a volume group is set up for a particular HBA.

FB logical volumes can be defined in one or more volume groups. This definition allows a LUN to be shared by host HBAs that are configured to separate volume groups. An FB logical volume is automatically removed from all volume groups when it is deleted.

The maximum number of volume groups is 8320 for the DS8000.

Figure 5-17 shows the relationships between host attachments and volume groups. Host AIXprod1 has two HBAs, which are grouped in one host attachment and both are granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also in volume group DB2-2, which is accessed by server AIXprod2. In Figure 5-17, there is, however, one volume in each group that is not shared. The server in the lower left part of the figure features four HBAs and they are divided into two distinct host attachments. One HBA can access volumes that are shared with AIXprod1 and AIXprod2. The other HBAs have access to a volume group called docs.

Figure 5-17 Host attachments and volume groups

5.2.10 Virtualization hierarchy summary

Going through the virtualization hierarchy (shown in Figure 5-18), we start with a number of disks that are grouped in array sites. The array sites are created automatically when the disks are installed. The following steps are completed by a user:

1. An array site is transformed into an array, with spare disks.

2. The array is further transformed into a rank with extents formatted for FB data or CKD.

3. The extents from selected ranks are added to an extent pool. The combined extents from the ranks in the extent pool are used for subsequent allocation for one or more logical volumes. Within the extent pool, you can reserve space for TSE and ESE volumes by creating a repository. ESE and TSE volumes require virtual capacity to be available in the extent pool.

4. Create logical volumes within the extent pools (by default, striping the volumes), and assign them a logical volume number that determines which logical subsystem they would be associated with and which server would manage them. The LUNs are assigned to one or more volume groups.

5. The host HBAs are configured into a host attachment that is associated with a volume group (see the DS CLI sketch that follows this list).
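The following DS CLI script-style sketch walks through steps 1 - 5 for a small FB configuration. It is illustrative only: the array site, pool, volume, and host names, and the IDs that the DS8870 returns (A0, R0, P0, V0), are placeholders.

# 1. Create a RAID 5 array from an array site
mkarray -raidtype 5 -arsite S1
# 2. Format the array as a fixed block rank
mkrank -array A0 -stgtype fb
# 3. Create a server 0 extent pool and assign the rank to it
mkextpool -rankgrp 0 -stgtype fb fb_pool_0
chrank -extpool P0 R0
# 4. Create logical volumes and place them in a volume group
mkfbvol -extpool P0 -cap 100 -type ds -name app_#h 1000-1003
mkvolgrp -type scsimask -volume 1000-1003 app_vg
# 5. Define the host attachment and associate it with the volume group
mkhostconnect -wwname 10000000C9A1B2C3 -hosttype pSeries -volgrp V0 app_host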

Figure 5-18 Virtualization hierarchy

This virtualization concept provides much more flexibility than in previous products. Logical volumes can be dynamically created, deleted, and resized. They can be grouped logically to simplify storage management. Large LUNs and CKD volumes reduce the total number of volumes, which contributes to the reduction of management effort.

5.3 Benefits of virtualization

The DS8000 physical and logical architecture defines new standards for enterprise storage virtualization. Virtualization layers include the following benefits:

� Flexible LSS definition allows maximization and optimization of the number of devices per LSS.

� No strict relationship between RAID ranks and LSSs.

� No connection of LSS performance to underlying storage.

� Number of LSSs can be defined based on the following device number requirements:

– With larger devices, fewer LSSs might be used
– Volumes for a particular application can be kept in a single LSS
– Smaller LSSs can be defined, if required (for applications that require less storage)
– Test systems can have their own LSSs with fewer volumes than production systems

� Increased number of logical volumes:

– Up to 65280 (CKD)
– Up to 65280 (FB)
– 65280 total for CKD + FB

� Any mixture of CKD and FB addresses, organized in address groups of 4096 addresses each.

� Increased logical volume size:

– CKD: about 1 TB (1,182,006 cylinders), designed for 219 TB
– FB: 16 TB, designed for 1 PB

� Flexible logical volume configuration:

– Multiple RAID types (RAID 5, RAID 6, and RAID 10)
– Storage types (CKD and FB) aggregated into extent pools
– Volumes that are allocated from extents of extent pool
– Storage pool striping
– Dynamically add and remove volumes
– Logical volume configuration states
– Dynamic Volume Expansion (DVE)
– ESE volumes for thin provisioning (FB)
– TSE volumes for FlashCopy SE (FB and CKD)
– Extended address volumes (CKD)
– Dynamic extent pool merging for Easy Tier
– Dynamic volume relocation for Easy Tier
– Easy Tier Application, available with DS8000 LMC 7.7.10.xx.xx

� Virtualization reduces storage management requirements.

5.4 zDAC: z/OS FICON Discovery and Auto-Configuration

DS8870 supports the z/OS FICON Discovery and Auto-Configuration (zDAC) feature, which is deployed on the IBM zEnterprise z196 servers and later.

This function was developed to reduce the complexity and skills that are needed in a complex FICON production environment for changing the I/O configuration.

By using zDAC, you can add LCUs to an existing I/O configuration in less time, depending on the policy that you defined. zDAC proposes new configurations that incorporate the current contents of your input/output definition file (IODF) with additions for new and changed LCUs and their devices that are based on the policy you defined in hardware configuration definition (HCD).

The following requirements must be met for using zDAC:

� Your System z must be a zEnterprise z196 or later running z/OS V1 R12 or above.

� LPAR must be authorized to make dynamic I/O configuration (zDCM) changes in the server HSA.

� HCD users must have authority for making dynamic I/O configuration changes.

As its name implies, zDAC provides the following capabilities:

� Discovery:

– Provides capability to discover attached disk that is connected to FICON fabrics

– Detects new and older storage systems

– Detects new control units on existing storage systems

– Proposes control units and device numbering

– Proposes paths for all discovery systems to newly discovered control units, including the sysplex scope

� Auto-Configuration:

– For high availability reasons, when zDAC proposes channel paths, it looks at single point of failure only. It does not consider any channel or port speed, or any current performance information.

– After a storage system is explored, the discovered information is compared against the target IODF, paths are proposed to new control units, and devices are displayed to the user. With that scope of discovery and autoconfiguration, the target work IODF is updated.

– Once the work IODF is complete, it must be used to create a production IODF.

– The production IODF must be activated on the server to take effect.

When zDAC is used, keep in mind the following considerations:

� Physical planning is still required.
� Logical configuration of the storage system is still required.
� You must decide which System z images should be allowed to use the new devices.
� You must decide the numbering of the new devices.
� You must decide the number of paths to new control units (LCUs) that should be configured.

A schematic overview of the zDAC concept is shown in Figure 5-19.

Figure 5-19 zDAC concept

Important: For more information about zDAC, see the z/OS V1R12 HCD User’s Guide, SC33-7988.

5.5 EAV V2: Extended address volumes

Today's large storage facilities tend to expand to larger CKD volume capacities. Some installations are running out of addressable disk storage because of the z/OS 64-K unit control block (UCB) limit. Because of the four-digit device addressing limitation, it is necessary to define larger CKD volumes by increasing the number of cylinders per volume.

Extended address volume (EAV) V1 supported volumes with up to 262,668 cylinders. EAV V2 now supports up to 1,182,006 cylinders (about 1 TB).

With the introduction of EAVs, the addressing changed from track to cylinder addressing. The partial change from track to cylinder addressing creates the following address areas on EAV volumes:

� Track Managed Space: The area on an EAV that is located within the first 65,520 cylinders. The use of the 16-bit cylinder addressing allows a theoretical maximum address of 65,535 cylinders. To allocate more cylinders, you must have a new format to address the area above 65,520 cylinders.

16-bit cylinder numbers: Existing track address format: CCCCHHHH:

– HHHH: 16-bit track number
– CCCC: 16-bit cylinder number

� Cylinder Managed Space: The area on an EAV that is located above the first 65,520 cylinders. This space is allocated in so-called Multicylinder Units (MCUs), which currently have a size of 21 cylinders. A new cylinder-track address format addresses the extended capacity on an EAV:

28-bit cylinder numbers: CCCCcccH, in which the following definitions apply:

– CCCC: The low order 16 bits of a 28-bit cylinder number
– ccc: The high order 12 bits of a 28-bit cylinder number
– H: A 4-bit track number (0 - 14)

z/OS components and products now support 1,182,006 cylinders:

� DS8870 and z/OS V1.R12 or above support CKD EAV volume size:

– 3390 Model A: 1 - 1,182,006 cylinders (about 1 TB of addressable storage)
– 3390 Model A: Up to 1062 x 3390 Model 1 (four times the size of EAV R1)

� Configuration granularity:

– 1 cylinder boundary sizes: 1 - 65,520 cylinders
– 1113 cylinder boundary sizes: 65,667 (59 x 1113) to 1,182,006 (1062 x 1113) cylinders

The size of an existing Mod 3/9/A volume can be increased to its maximum supported size by using DVE. The expansion can be done with the DS CLI command, as shown in Example 5-2.

Example 5-2 Dynamically expand CKD volume

dscli> chckdvol -cap 262268 -captype cyl 9ab0

Date/Time: May 10, 2013 07:52:55 AM EST IBM DSCLI Version: 7.7.25.21 DS: IBM.2107-75KAB25
CMUC00022I chckdvol: CKD Volume 9AB0 successfully modified.

DVE can be done while the volume remains online (to the host system). A volume table of contents (VTOC) refresh through IBM Device Support Facilities (ICKDSF) is a best practice because it shows the newly added free space. When the relevant volume is in a Copy Services relationship, that Copy Services relationship must be terminated, and it can be re-established only after the source and target volumes are at their new capacity.

The VTOC allocation method for an EAV volume was changed compared to the VTOC used for LVS volumes. The size of an EAV VTOC index was increased four-fold, and now has 8192 blocks instead of 2048 blocks. Because there is no space left inside the Format 1 data set control block (DSCB), new DSCB formats (Format 8 and Format 9) were created to protect existing programs from seeing unexpected track addresses. These DSCBs are called extended attribute DSCBs. Format 8 and 9 DSCBs are new for EAV. The existing Format 4 DSCB also was changed to point to the new Format 8 DSCB.

Data set type dependencies on an EAV R2
EAV R2 includes the following data set type dependencies:

� All Virtual Storage Access Method (VSAM), sequential data set types: Extended and large format, basic direct access method (BDAM), partitioned data set (PDS), partitioned data set extended (PDSE), VSAM volume data set (VVDS), and basic catalog structure (BCS) can be placed on the extended address space (EAS) (cylinder managed space) of an EAV R2 volume that is running on z/OS V1.12 and above:

– Includes all VSAM data types, such as key-sequenced data set (KSDS), relative record data set (RRDS), entry-sequenced data set (ESDS), linear data set, and IBM DB2, IBM IMS™, IBM CICS®, and zSeries file system (zFS) data sets.

– The VSAM data sets placed on an EAV volume can be storage management subsystem (SMS) or non-SMS managed.

� For EAV Release 2 volume, the following data sets might exist, but are not eligible to have extents in the extended address space (cylinder managed space) in z/OS V1.12:

– VSAM data sets with incompatible control area (CA) sizes
– VTOC (it is still restricted to the first 64K-1 tracks)
– VTOC index
– Page data sets
– A VSAM data set with embedded or keyrange attributes is currently not supported
– Hierarchical file system (HFS) file system
– SYS1.NUCLEUS

All other data sets can be placed on an EAV R2 EAS.

In current releases, you can expand any Mod 3/9/A volume to a large EAV 2 by using DVE. The VTOC reformat is performed automatically if REFVTOC=ENABLE is specified in the DEVSUPxx parmlib member.

The data set placement on EAV as supported on z/OS V1 R12 is shown in Figure 5-20.

Figure 5-20 Data set placement on EAV supported on z/OS V1 R12

z/OS prerequisites for EAV volumes
EAV volumes include the following prerequisites:

� EAV volumes are supported only on z/OS V1.10 and above. If you try to bring an EAV volume online for a system with a pre-z/OS V1.10 release, the EAV volume does not come online.

� If you want to use the improvements of EAV R2, it is supported only on z/OS V1.12 and above. A non-VSAM data set that is allocated with EADSCB on z/OS V1.12 cannot be opened on earlier z/OS versions.

� After a large volume is upgraded to a Mod1062 (EAV2 with 1,182,006 Cyls) and the system is granted permission, an automatic VTOC refresh and index rebuild is performed. The permission is granted by REFVTOC=ENABLE in parmlib member DEVSUPxx. The trigger to the system is a state change interrupt that occurs after the volume expansion, which is presented by the storage system to z/OS.

� There are no other HCD considerations for the 3390 Model A definitions.

� On parmlib member IGDSMSxx, the USEEAV(YES) parameter must be set to allow data set allocations on EAV volumes. The default value is NO and prevents allocating data sets to an EAV volume. Example 5-3 shows a sample message that you receive when you are trying to allocate a data set on an EAV volume and USEEAV(NO) is set.

Example 5-3 Message IEF021I with USEEVA set to NO

IEF021I TEAM142 STEP1 DD1 EXTENDED ADDRESS VOLUME USE PREVENTED DUE TO SMS USEEAV (NO) SPECIFICATION.

� There is a new parameter called Break Point Value (BPV). This parameter determines which size the data set must feature to be allocated on a cylinder-managed area. The default for the parameter is 10 cylinders, which can be set on parmlib member IGDSMSxx and in the Storage Group definition (Storage Group BPV overrides system-level BPV). The BPV value can be 0 - 65520: 0 means that the cylinder-managed area is always preferred and 65520 means that a track-managed area is always preferred.

How to identify an EAV 2
Any EAV has more than 65,520 cylinders. To address this volume, the Format 4 DSCB was updated to x’FFFE’ and DSCB 8+9 is used for cylinder-managed address space. Most of the EAV eligible data sets were modified by software with EADSCB=YES.

An easy way to identify any EAV that is used is to list the VTOC summary in TSO/ISPF option 3.4. Example 5-4 shows the VTOC summary of a 3390 Model A with a size of about 1 TB.

Example 5-4 TSO/ISPF 3.4 panel for a 1 TB EAV volume: VTOC summary

Menu Reflist Refmode Utilities Help

When the data set list is displayed, enter either: "/" on the data set list command field for the command prompt pop-up, an ISPF line command, the name of a TSO command, CLIST, or REXX exec, or "=" to execute the previous command.

Important: Before EAV volumes are implemented, the latest maintenance and z/OS V1.10 and V1.11 coexisting maintenance levels must be applied. For EAV 2, the latest maintenance for z/OS V1.12 must be installed. For more information, see the latest Preventive Service Planning (PSP) information at this website:

www14.software.ibm.com/webapp/set2/psp/srchBroker

EAV R2 migration considerations
When you are reviewing EAV R2 migration, consider the following items:

� Assistance:

Migration assistance is provided by using the Application Migration Assistance Tracker. For more information about Assistance Tracker, see APAR II13752, which is available at this website:

http://www.ibm.com/support/docview.wss?uid=isg1II13752

� Suggested actions:

– Review your programs and look for the calls for the macros OBTAIN, REALLOC, CVAFDIR, CVAFSEQ, CVAFDSM, and CVAFFILT. The macros were changed and you must update your program to reflect those changes.

– Look for programs that calculate volume or data set size by any means, including reading a VTOC or VTOC index directly with a basic sequential access method (BSAM) or EXCP DCB. This task is important because now you have new values that are returning for the volume size.

– Review your programs and look for EXCP and STARTIO macros for direct access storage device (DASD) channel programs and other programs that examine DASD channel programs or track addresses. Now that there is a new addressing mode, programs must be updated.

– Look for programs that examine any of the many operator messages that contain a DASD track, block address, data set, or volume size. The messages now show new values.

� Migrating data:

– Define new EAVs by creating them on the DS8870 or expanding existing volumes by using DVE.

– Add new EAV volumes to storage groups and storage pools, and update automatic class selection (ACS) routines.

– Copy data at the volume level: IBM Transparent Data Migration Facility (TDMF®), Data Facility Storage Management Subsystem data set services (DFSMSdss), Peer-to-Peer Remote Copy (PPRC), Data Facility Storage Management Subsystem (DFSMS), Copy Services Global Mirror, Metro Mirror, Global Copy, and FlashCopy.

– Copy data at the data set level: SMS attrition, LDMF, DFSMSdss, and DFSMShsm.

– With z/OS V1.12, all data set types are currently good volume candidates for EAV R2 except for the following types: Work volumes, TSO batch and load libraries, and system volumes.

Chapter 6. IBM DS8000 Copy Services overview

This chapter provides an overview of the Copy Services functions that are available with the DS8000 series models, including Remote Mirror and Copy, and Point-in-Time Copy functions.

These functions make the DS8000 series a key component for disaster recovery solutions, data migration activities, and for data duplication and backup solutions.

This chapter covers the following topics:

� Introduction to Copy Services

� FlashCopy and FlashCopy Space Efficient

� Remote Pair FlashCopy (Preserve Mirror)

� Remote Mirror and Copy:

– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– z/OS Global Mirror
– z/OS Metro/Global Mirror

The information that is provided in this chapter is only an overview. Copy Services are covered in more detail in the following IBM Redbooks and IBM Redpaper publications:

� IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788

� IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

� IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368

6.1 Copy Services

Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication functions. With the Copy Services functions, for example, you can create backup data with little or no disruption to your application. You also can back up your application data to the remote site for disaster recovery.

The Copy Services functions run on the DS8870 storage unit and support open systems and System z environments. They are also supported on other DS8000 family models.

DS8000 Copy Services functions
Copy Services in the DS8000 include the following optional licensed functions:

� IBM System Storage FlashCopy and IBM FlashCopy SE, which are point-in-time copy functions

� Remote mirror and copy functions, which include:

– IBM System Storage Metro Mirror, previously known as synchronous Peer-to-Peer Remote Copy (PPRC)

– IBM System Storage Global Copy, previously known as PPRC eXtended Distance

– IBM System Storage Global Mirror, previously known as asynchronous PPRC

– IBM System Storage Metro/Global Mirror, a three-site solution to meet the most rigorous business resiliency needs

– For migration purposes on a request for price quotation (RPQ) basis, consider IBM System Storage Metro/Global Copy. Understand that this combination of Metro Mirror and Global Copy is not suited for disaster recovery solutions; it is intended only for migration purposes.

� For IBM System z users, the following options are available:

– z/OS Global Mirror, previously known as Extended Remote Copy (XRC)

– z/OS Metro/Global Mirror, a three-site solution that combines z/OS Global Mirror and Metro Mirror

Many design characteristics of the DS8000, its data copy and mirror capabilities, and features contribute to the full-time protection of your data.

Copy Services management interfaces
You control and manage the DS8000 Copy Services functions by using the following interfaces:

� DS Storage Manager, the graphical user interface (GUI) of the DS8000 (DS GUI).

� Data storage command-line interface (DS CLI), which provides a set of commands that covers all Copy Services functions and options.

� IBM Tivoli Storage Productivity Center for Replication, which helps you manage large Copy Services implementations easily and provides data consistency across multiple systems. Tivoli Storage Productivity Center for Replication is now part of Tivoli Storage Productivity Center 5.2 and IBM SmartCloud® Virtual Storage Center.

� DS Open Application Programming Interface (DS Open API).

System z users can also use the following interfaces:

� Time Sharing Option (TSO) commands
� Device Support Facilities (ICKDSF) utility commands
� ANTRQST application programming interface (API)
� Data Facility Storage Management Subsystem data set services (DFSMSdss) utility

6.2 FlashCopy and FlashCopy Space Efficient

FlashCopy and FlashCopy Space Efficient (SE) provide the capability to create copies of logical volumes with the ability to access the source and target copies immediately. These types of copies are called point-in-time copies.

FlashCopy is an optional, licensed feature of the DS8000. The following variations of FlashCopy are available:

� Standard FlashCopy, also referred to as the Point-in-Time Copy (PTC) licensed function
� FlashCopy SE licensed function

FlashCopy and FlashCopy SE are different licenses. If you want to be able to create space-efficient FlashCopies and full volume copies, you need both licenses.

To use FlashCopy, you must have the corresponding licensed function indicator feature in the DS8870. You also must acquire the corresponding DS8000 function authorization with the adequate feature number license in terms of physical capacity. For more information about feature and function requirements, see 10.1, “IBM System Storage DS8000 licensed functions” on page 260.

In this section, we describe the basic characteristics and options of FlashCopy and FlashCopy SE.

6.2.1 Basic concepts

FlashCopy creates a point-in-time copy of the data. For open systems, FlashCopy creates a copy of the logical unit number (LUN). The target LUN must exist before you can use FlashCopy to copy the data from the source LUN to the target LUN.

When a FlashCopy operation is started, it takes only a few seconds to establish the FlashCopy relationship, which consists of the source and target volume pairing and the necessary control bitmaps. Thereafter, a copy of the source volume is available as though all the data was copied. When the pair is established, you can read and write to the source and target volumes.

The following variations of FlashCopy are available:

� Standard FlashCopy uses a normal volume as target volume. This target volume must have at least the same size as the source volume and the space is fully allocated in the storage system.

� FlashCopy Space Efficient (SE) uses space efficient volumes (see 5.2.6, “Space-efficient volumes” on page 114) as FlashCopy targets. An SE target volume features a virtual size that is at least that of the source volume. However, space is not allocated for this volume when the volume is created and the FlashCopy is initiated. Space is allocated only for updated tracks, when the source or target volume is written to.

FlashCopy and FlashCopy SE can coexist on a DS8000.
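A hedged DS CLI sketch of establishing both variations follows; the volume IDs are placeholders, the target volumes are assumed to exist, and the SE target is assumed to be a space efficient volume.

# Standard FlashCopy with background copy from source 1000 to target 1100
mkflash 1000:1100
# FlashCopy SE: the target 1200 is a space efficient volume
mkflash -tgtse 1000:1200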

The basic concepts of a standard FlashCopy are explained in the following section and are shown in Figure 6-1.

Figure 6-1 FlashCopy concepts

If you access the source or the target volumes while the FlashCopy relation exists, I/O requests are handled in the following manner:

� Read from the source volume

When a read request goes to the source, data is directly read from there.

� Read from the target volume

When a read request goes to the target volume, FlashCopy checks the bitmap and takes one of the following actions:

– If the requested data was copied to the target, it is read from there.
– If the requested data was not yet copied, it is read from the source.

� Write to the source volume

When a write request goes to the source, the data is first written to the cache and persistent memory (write cache). Later, when the data is destaged to the physical extents of the source volume, FlashCopy checks the bitmap for the location that is to be overwritten and takes one of the following actions:

– If the point-in-time data was already copied to the target, the update is written to the source directly.

– If the point-in-time data was not yet copied to the target, it is now copied immediately and only then is the update written to the source.

Important: In this chapter, track refers to a piece of data in the DS8000. The DS8000 uses the concept of logical tracks to manage Copy Services functions.

� Write to the target volume

Whenever data is written to the target volume while the FlashCopy relationship exists, the storage system checks the bitmap and updates it, if necessary. FlashCopy does not overwrite data that was written to the target with point-in-time data.

The FlashCopy background copy
By default, standard FlashCopy (also called FULL copy) starts a background copy process that copies all point-in-time data to the target volume. After the completion of this process, the FlashCopy relation ends and the target volume is independent of the source.

The background copy can slightly affect application performance because the physical copy needs storage resources. The impact is minimal because host I/O always has higher priority than the background copy.

No background copy option
A standard FlashCopy relationship also can be established by using the NOCOPY option. With this option, FlashCopy does not initiate a background copy. Point-in-time data is copied only when required because of an update to the source or target. This configuration eliminates the impact of the background copy.

This option is useful in the following situations:

� When the target is not needed as an independent volume
� When repeated FlashCopy operations to the same target are expected

FlashCopy SE is automatically started with the NOCOPY option because the target space is not allocated and the available physical space is smaller than the size of the volume. A full background copy would contradict the concept of space efficiency.
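A minimal DS CLI sketch of the NOCOPY variant, with placeholder volume IDs:

# Establish the relationship without a background copy; only updated tracks are copied
mkflash -nocp 1000:1100
# Withdraw the relationship when the copy is no longer needed
rmflash 1000:1100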

6.2.2 Benefits and use

The point-in-time copy that is created by FlashCopy often is used when you need a copy of the production data that is produced with little or no application downtime. Use cases for the point-in-time copy that is created by FlashCopy include online backup, testing new applications, or creating a copy of transactional data for data mining purposes. To the host or application, the target looks exactly like the original source. It is an instantly available, binary copy.

IBM FlashCopy SE is designed for temporary copies. FlashCopy SE is optimized for use cases in which only about 5% of the source volume data is updated during the life of the relationship. If more than 20% of the source data is expected to change, standard FlashCopy likely is the better choice.

Standard FlashCopy often has superior performance to FlashCopy SE. If performance on the source or target volumes is important, the use of standard FlashCopy is a better choice.

The following scenarios are examples of when the use of IBM FlashCopy SE is a good choice:

� Creating a temporary copy and backing it up to tape.

� Creating temporary point-in-time copies for application development or DR testing.

� Performing regular online backup for different points in time.

� FlashCopy target volumes in a Global Mirror (GM) environment. For more information about Global Mirror, see 6.3.3, “Global Mirror” on page 148.

In all of these scenarios, the write activity to source and target is the crucial factor that decides whether FlashCopy SE can be used.

6.2.3 FlashCopy options

FlashCopy provides many more options and functions. We explain the following options and capabilities in this section:

� Data Set FlashCopy
� Incremental FlashCopy (refresh target volume)
� Multiple Relationship FlashCopy
� Consistency Group FlashCopy
� FlashCopy on existing Metro Mirror or Global Copy primary
� Inband commands over remote mirror link

Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the (FULL) copy operation completes. You must explicitly delete the relationship to terminate it.

Incremental FlashCopy (refresh target volume)
Incremental FlashCopy requires the background copy and the Persistent FlashCopy options to be enabled, and the first full copy must have completed.

Refresh target volume refreshes a FlashCopy relation without copying all data from source to target again. When a subsequent FlashCopy operation is initiated, only the changed tracks on the source and target must be copied from the source to the target. The direction of the refresh also can be reversed, from (former) target to source.

In many cases, only a small percentage of the entire data is changed in a day. In this situation, you can use this function for daily backups and save the time for the physical copy of FlashCopy.
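A hedged DS CLI sketch of such a daily incremental cycle follows; the volume IDs are placeholders.

# Initial FlashCopy with change recording and a persistent relationship
mkflash -record -persist 1000:1100
# Later (for example, the next day), copy only the tracks that changed since the last copy
resyncflash -record -persist 1000:1100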

Data Set FlashCopy
By using Data Set FlashCopy, you can create a point-in-time copy of individual data sets instead of complete volumes in an IBM System z environment.

Multiple Relationship FlashCopy
FlashCopy allows a source to have relationships with up to 12 targets simultaneously. A usage case for this feature is creating regular point-in-time copies as online backups or time stamps. Only one of the multiple relations can be incremental.

Consistency Group FlashCopy
By using Consistency Group FlashCopy, you can freeze and temporarily queue I/O activity to a volume. Consistency Group FlashCopy helps you to create a consistent point-in-time copy without quiescing the application across multiple volumes, and even across multiple storage units.

Consistency Group FlashCopy ensures that the order of dependent writes is always maintained and thus creates host-consistent copies, not application-consistent copies. The copies have power-fail or crash-level consistency. To recover an application from Consistency Group FlashCopy target volumes, you must perform the same recovery as after a system crash or power outage.

FlashCopy on existing Metro Mirror or Global Copy primary
By using this option, you establish a FlashCopy relationship where the target is a Metro Mirror or Global Copy primary volume. Through this relationship, you create full or incremental point-in-time copies at a local site and then use remote mirroring to copy the data to the remote site.

For more information about Metro Mirror and Global Copy, see 6.3.1, “Metro Mirror” on page 147, and 6.3.2, “Global Copy” on page 148.

Inband commands over remote mirror link
In a remote mirror environment, commands to manage FlashCopy at the remote site can be issued from the local or intermediate site and transmitted over the remote mirror Fibre Channel links. This ability eliminates the need for a network connection to the remote site solely for the management of FlashCopy.

6.2.4 FlashCopy SE-specific options

Most options for standard FlashCopy (see 6.2.3, “FlashCopy options” on page 142) work identically for FlashCopy SE. The options that differ are described in this section.

Incremental FlashCopy
Because Incremental FlashCopy implies an initial full volume copy and a full volume copy is not possible in an IBM FlashCopy SE relationship, Incremental FlashCopy is not possible with IBM FlashCopy SE.

Data Set FlashCopy
FlashCopy SE relationships are limited to full volume relationships. As a result, data set level FlashCopy is not supported within FlashCopy SE.

Multiple Relationship FlashCopy SE
Standard FlashCopy supports up to 12 relationships per source volume and one of these relationships can be incremental. A FlashCopy onto a space efficient volume has a certain overhead because more tables and pointers must be maintained. Therefore, it might be advisable to avoid using all 12 possible relationships.

Important: You cannot FlashCopy from a source to a target if the target also is a Global Mirror primary volume.

Important: This function is available by using the DSCLI, TSO, and ICKDSF commands, but not by using the DS Storage Manager GUI.

6.2.5 Remote Pair FlashCopy

Remote Pair FlashCopy or Preserve Mirror transmits the FlashCopy command to the remote site if the target volume is mirrored with Metro Mirror. If Preserve Mirror is not used, the mirroring behavior is shown in Figure 6-2.

Figure 6-2 The FlashCopy target is also a Metro Mirror source volume

Without Preserve Mirror, the following sequence occurs:

1. FlashCopy is issued at Local A volume, which starts a FlashCopy relationship between the Local A and the Local B volumes.

2. When the FlashCopy operation starts and replicates the data from the Local A to Local B volume, the Metro Mirror volume pair status changes from FULL DUPLEX to DUPLEX PENDING. During the DUPLEX PENDING window, the Remote Volume B does not provide a defined state about its data status and is unusable from a recovery viewpoint.

3. After FlashCopy finishes replicating the data from the Local A volume to the Local B volume, the Metro Mirror volume pair changes its status from DUPLEX PENDING back to FULL DUPLEX. The remote Volume B provides a recoverable state and can be used if there is a planned or unplanned outage at the local site.

As the name implies, Preserve Mirror preserves the existing Metro Mirror status of FULL DUPLEX. Figure 6-3 shows this approach, which guarantees that there is no discontinuity of the disaster recovery readiness.

Figure 6-3 Remote Pair FlashCopy preserves the Metro Mirror FULL DUPLEX state

Complete the following steps as shown in Figure 6-3:

1. The FlashCopy command is issued by an application or by you to the Local A volume with Local B volume as the FlashCopy target. The DS8000 firmware propagates the FlashCopy command through the PPRC links from the local storage server to the remote storage server. This inband propagation of a Copy Services command is possible only for FlashCopy commands.

2. The local storage server and the remote storage server then run the FlashCopy operation independently of each other. The local storage server coordinates the activities at the end of the process and takes action if the FlashCopies do not succeed at both storage servers.

Figure 6-4 on page 146 shows an example in which Remote Pair FlashCopy might have the most relevance: a data set level FlashCopy in a Metro Mirror CKD volumes environment where all participating volumes are replicated. Usually, the user has no influence on where the newly allocated FlashCopy target data set is going to be placed. The key item of this configuration is that disaster recovery protection is not exposed at any time and FlashCopy operations can be freely taken within the disk storage configuration. If Remote Pair FlashCopy is used, the Metro Mirror volume pair status keeps FULL DUPLEX, so the DR viewpoint and the IBM Geographically Dispersed Parallel Sysplex™ (GDPS) recovery standpoint are fully assured.

Remote Pair FlashCopy is now allowed even if one or both pairs are suspended or duplex-pending. If the FlashCopy between PPRC primaries A and A’, and PPRC secondaries B and B’, are on the same storage system (storage facility image), the FlashCopy B to B’ is done. This feature is supported on z/OS V1.11, V1.12, and V1.13.
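With the DS CLI, Preserve Mirror behavior is requested on the FlashCopy command itself. The following sketch assumes the -pmir option and placeholder volume IDs; check your DS CLI level for the exact syntax.

# Require that the FlashCopy also be established at the remote site (Preserve Mirror)
mkflash -pmir required 1000:1100
# With -pmir preferred, the local FlashCopy proceeds even if it cannot be preserved remotely
mkflash -pmir preferred 1000:1100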

Important: On a DS8870, a Remote Pair FlashCopy is allowed while the PPRC pair is suspended.

Figure 6-4 shows how FlashCopy is allowed when PPRC is suspended.

Figure 6-4 FlashCopy allowed when PPRC is suspended

For more information about Remote Pair FlashCopy, see IBM System Storage DS8000: Remote Pair FlashCopy (Preserve Mirror), REDP-4504.

6.3 Remote Mirror and Copy

The Remote Mirror and Copy functions of the DS8000 are a set of flexible data mirroring solutions that allow replication between volumes on two or more disk storage systems. These functions are used to implement remote data backup and disaster recovery solutions.

The following Remote Mirror and Copy functions are optional licensed functions of the DS8000:

� Metro Mirror
� Global Copy
� Global Mirror
� Metro/Global Mirror

Remote Mirror functions can be used in open systems and System z environments.

In addition, System z users can use the DS8000 for the following functions:

� z/OS Global Mirror
� z/OS Metro/Global Mirror
� GDPS

In the following sections, we describe these Remote Mirror and Copy functions.

For more information, see the IBM Redbooks publications that are listed in “Related publications” on page 493.

Licensing requirements
To use any of these Remote Mirror and Copy optional licensed functions, you must have the corresponding licensed function indicator feature in the DS8000. You also must acquire the corresponding DS8870 function authorization with the adequate feature number license in terms of physical capacity. For more information about feature and function requirements, see 10.1, “IBM System Storage DS8000 licensed functions” on page 260.

Also, consider that some of the remote mirror solutions, such as Global Mirror, Metro/Global Mirror, or z/OS Metro/Global Mirror, integrate more than one licensed function. In this case, you must have all of the required licensed functions.

6.3.1 Metro Mirror

Metro Mirror, previously known as synchronous PPRC, provides real-time mirroring of logical volumes between two DS8870s, or any other combination of DS8870, DS8100, DS8300, DS8700, DS8800, DS6800, and ESS800, that can be located up to 300 km from each other. It is a synchronous copy solution in which a write operation must be carried out on both copies, at the local and remote sites, before it is considered complete.
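A hedged DS CLI sketch of establishing a Metro Mirror configuration follows; the remote storage image ID, WWNN, I/O port pairs, and volume IDs are placeholders.

# Create PPRC paths from local LSS 10 to remote LSS 10 over two Fibre Channel port pairs
mkpprcpath -remotedev IBM.2107-75ZZZZZ -remotewwnn 5005076308FFD123 -srclss 10 -tgtlss 10 I0030:I0100 I0131:I0200
# Establish synchronous (Metro Mirror) pairs for four volumes
mkpprc -remotedev IBM.2107-75ZZZZZ -type mmir 1000-1003:1000-1003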

The basic operational characteristics of Metro Mirror are shown in Figure 6-5.

Figure 6-5 Metro Mirror basic operation

6.3.2 Global Copy

Global Copy, previously known as Peer-to-Peer Remote Copy/Extended Distance (PPRC-XD), copies data asynchronously and over longer distances than is possible with Metro Mirror. Global Copy is included in the Metro Mirror or Global Mirror license. When you are operating in Global Copy mode, the source does not wait for copy completion on the target before a host write operation is acknowledged. Therefore, the host is not affected by the Global Copy operation. Write data is sent to the target as the connecting network allows, independent of the order of the host writes. Therefore, the target data lags behind and is inconsistent during normal operation.

You must take extra steps to make Global Copy target data usable at specific points in time. Which of the following steps are used depends on the purpose of the copy:

� Data migration

You can use Global Copy to migrate data over long distances. When you want to switch from old to new data, you must stop the applications on the old site, tell Global Copy to synchronize the data, and wait until it is finished.

� Asynchronous mirroring

Global Copy also is used to create a full copy of data from an existing machine to a new machine without affecting client performance. While the Global Copy is incomplete, the data at the remote machine is not consistent. When the Global Copy completes, you can stop it and then start the copy relationship (Metro Mirror or Global Mirror), beginning with a full resynchronization, so that the data becomes consistent.
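A hedged DS CLI sketch for either use follows; the device and volume IDs are placeholders, and the PPRC paths are assumed to exist (see the sketch in 6.3.1, “Metro Mirror”).

# Establish extended distance (Global Copy) pairs; host writes are not delayed
mkpprc -remotedev IBM.2107-75ZZZZZ -type gcp 1000-1003:1000-1003
# Monitor the out-of-sync tracks until the first pass of the copy completes
lspprc -l 1000-1003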

6.3.3 Global Mirror

Global Mirror, previously known as Asynchronous PPRC, is a two-site, long distance, asynchronous, remote copy technology. This solution integrates the Global Copy and FlashCopy technologies. With Global Mirror, the data that the host writes at the local site is asynchronously mirrored to the storage unit at the remote site. With special management steps (under control of the local master storage unit), a consistent copy of the data is automatically maintained and periodically updated by using FlashCopy on the storage unit at the remote site. You need extra storage at the remote site for these FlashCopies.
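A hedged DS CLI outline of the pieces that make up a Global Mirror configuration follows (Global Copy pairs, FlashCopy at the remote site, a session, and the Global Mirror master). All IDs are placeholders, the PPRC paths are assumed to exist, and the exact options vary by environment; treat this as a sketch, not a procedure.

# 1. Global Copy pairs from local A volumes to remote B volumes
mkpprc -remotedev IBM.2107-75ZZZZZ -type gcp 1000-1003:1000-1003
# 2. At the remote site: FlashCopy from the B volumes to the C volumes, no background copy
mkflash -record -persist -nocp -tgtinhibit 1000-1003:1100-1103
# 3. Define session 20 on the local LSS and add the A volumes to it
mksession -lss 10 -volume 1000-1003 20
# 4. Start the Global Mirror master for session 20
mkgmir -lss 10 -session 20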

Global Mirror benefits
Global Mirror features the following benefits:

� Support for almost unlimited distances between the local and remote sites, with the distance typically limited only by the capabilities of the network and the channel extension technology. This unlimited distance enables you to choose your remote site location that is based on business needs and enables site separation to add protection from globalized disasters.

� A consistent and restartable copy of the data at the remote site, created with minimal impact to applications at the local site.

� Data currency where, for many environments, the remote site lags behind the local site typically 3 - 5 seconds, which minimizes the amount of data exposure in the event of an unplanned outage. The actual lag in data currency that you experience depends upon a number of factors, including specific workload characteristics and bandwidth between the local and remote sites.

� Dynamic selection of the wanted recovery point objective (RPO), based on business requirements and optimization of available bandwidth.

� Session support: data consistency at the remote site is internally managed across up to eight storage units that are at the local site and the remote site.

� Efficient synchronization of the local and remote sites with support for failover and failback operations, which helps to reduce the time that is required to switch back to the local site after a planned or unplanned outage.

How Global Mirror works
The basic operational characteristics of Global Mirror are shown in Figure 6-6.

Figure 6-6 Global Mirror basic operation

The A volumes at the local site are the production volumes and are used as Global Copy primaries. The data from the A volumes is replicated to the B volumes by using Global Copy. At a certain point, a consistency group is created from all of the A volumes, even if they are in separate storage units. This creation has little impact on applications because the creation of the consistency group is quick (often a few milliseconds).

After the consistency group is created, the application writes can continue updating the A volumes. The missing increment of the consistent data is sent to the B volumes by using the existing Global Copy relations. After all data reaches the B volumes, Global Copy is halted for a brief period while Global Mirror creates a FlashCopy from the B to the C volumes. These volumes now contain a consistent set of data at the secondary site.

The data at the remote site is current within 3 - 5 seconds, but this recovery point depends on the workload and bandwidth that is available to the remote site.

With its efficient and autonomic implementation, Global Mirror is a solution for disaster recovery implementations where a consistent copy of the data always must be available at a remote location that can be separated by a long distance from the production site.

6.3.4 Metro/Global Mirror

Metro/Global Mirror is a three-site, multi-purpose, replication solution. Local site (site A) to intermediate site (site B) provides high availability replication by using Metro Mirror, and intermediate site (site B) to remote site (site C) supports long-distance disaster recovery replication with Global Mirror (see Figure 6-7). This cascaded approach for a three-site solution does not burden the primary storage system with sending out the data twice.

Figure 6-7 Metro/Global Mirror elements

Metro Mirror and Global Mirror are well-established replication solutions. Metro/Global Mirror combines Metro Mirror and Global Mirror to incorporate the following best features of the two solutions:

� Metro Mirror:

– Synchronous operation supports zero data loss.

– The opportunity to locate the intermediate site disk systems close to the local site allows use of intermediate site disk systems in a high-availability configuration.

� Global Mirror:

– Asynchronous operation supports long-distance replication for disaster recovery.

– The Global Mirror methodology has no effect on applications at the local site.

– This solution provides a recoverable, restartable, and consistent image at the remote site with an RPO, typically within 3 - 5 seconds.

Long distances: Metro Mirror can be used for distances of up to 300 km. However, when used in a Metro/Global Mirror implementation, a shorter distance for the Metro Mirror connection is more appropriate to effectively guarantee high availability of the configuration.

6.3.5 Multiple Global Mirror sessions

The DS8870 supports several Global Mirror sessions within a storage system (storage facility image (SFI)). Up to 32 Global Mirror hardware sessions can be supported within the same DS8870, as shown in Figure 6-8.

Figure 6-8 Single GM hardware session support

Figure 6-8 shows a GM master session that controls a GM session. A GM session is identified by a GM session ID (in this example, number 20). The session ID applies to any LSS at Site 1 that contains Global Copy primary volumes that belong to session 20. The two-storage-system configuration consists of a GM master in the DS8000 at the bottom and a subordinate DS8000 that also contains Global Copy primary volumes that belong to session 20. The GM master controls the subordinate through PPRC Fibre Channel Protocol (FCP)-based paths between both DS8000 storage systems. Consistency is provided across all primary subsystems.

With the DS8100 and DS8300, it is not possible to create more than one GM session per GM master. The potential impact of such a single GM session is shown in Figure 6-9.

Figure 6-9 Multiple applications: Single GM session

Assume a consolidated disk storage environment that is shared by various application servers. To provide good performance, all volumes are spread across the primary DS8300s. For disaster recovery purposes, a remote site exists with corresponding DS8300s, and the data volumes are replicated through a Global Mirror session with the Global Mirror master function in a DS8100 or a DS8300.

When server 2 with Application 2 fails and the volumes that belong to Application 2 are no longer accessible, the entire GM session 20 must be failed over to the remote site (Site 2).


Figure 6-10 shows the impact on the other two applications: Application 1 and Application 3. Because only one GM session is possible per SFI with a DS8100 or DS8300, the entire session must be failed over to the remote site to restart Application 2 on the backup server at the remote site. The other two servers with Application 1 and Application 3 are affected and must also be swapped over to the remote site.

Figure 6-10 Multiple applications: Single GM sessions fail over requirements

This configuration implies service interruption to the failed server with Application 2 and service impacts to Application 1 and Application 3. Site 1 must be shut down and restarted in Site 2 after the GM session failover process is completed.


Figure 6-11 shows the same server configuration. However, the DS8100 or DS8300 storage subsystems are replaced by DS8870 or DS8700/DS8800 systems with Release 6.1 (LMC 7.6.1.xx.xx). With this configuration, you can use up to 32 dedicated GM master sessions. In the example that is shown in Figure 6-10 on page 153, Application 1 is connected to volumes that are in LSS number 00 to LSS number 3F. Application 2 connects to volumes in LSS 40-7F, and the server with Application 3 connects to volumes in LSS 80-BF.

Figure 6-11 DS8000 provides multiple GM master sessions support within R6.1

Each set of volumes of an application server is in its own GM session, which is controlled by the corresponding GM master session within the same DS8000. An LSS can belong to only one GM session, which you must consider when you plan how to divide volumes into separate GM sessions.

Now, when the Application 2 server fails, only GM session 20 is failed over to the remote site, and the corresponding server in Site 2 restarts with Application 2 after the failover process completes.
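The following short Python sketch illustrates the planning rule that is described above: an LSS can belong to only one GM session, so only the session that contains the failing application's volumes needs to be failed over. The LSS ranges and session IDs mirror the example in the text; the code is an illustration only, not a DS CLI or DS8000 data structure.

# Illustrative mapping only (not a DS8000 interface): Application 1 -> LSS 00-3F (session 10),
# Application 2 -> LSS 40-7F (session 20), Application 3 -> LSS 80-BF (session 30).
lss_ranges_by_session = {
    10: range(0x00, 0x40),   # Application 1
    20: range(0x40, 0x80),   # Application 2
    30: range(0x80, 0xC0),   # Application 3
}

def session_for_lss(lss):
    # An LSS can belong to at most one GM session.
    owners = [sid for sid, lss_range in lss_ranges_by_session.items() if lss in lss_range]
    assert len(owners) <= 1, "An LSS can belong to at most one GM session"
    return owners[0] if owners else None

def sessions_to_fail_over(failed_application_lsses):
    # Only the sessions that contain the failed application's volumes need a failover.
    return {session_for_lss(lss) for lss in failed_application_lsses}

print(sessions_to_fail_over([0x41, 0x55]))   # {20}: Applications 1 and 3 keep running at Site 1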

This finer granularity and dedicated recovery action is often required because different applications might have different RPO requirements. The ability to fail over only the configuration of a failing server or application improves the availability of the other applications when compared to the situation on older DS8000 models.

With the DS8870, an installation can now run one or more test sessions in parallel with one or more production GM sessions within the same SFI to test and gain experience with possible management tools and improvements.

Notice that the basic management of a GM session does not change. The GM session builds on the existing Global Mirror technology and microcode of the DS8000.


6.3.6 Thin provisioning enhancements on open environments

The DS8870 storage system provides full support for thin-provisioned volumes, on fixed block (FB) volumes only.

All types of Copy Services, such as Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror, are supported, but with the following limitations:

- All volumes must be extent space efficient (ESE) or standard (full-sized) volumes.
- No intermixing of PPRC volume types is allowed; the source and target volumes must be of the same type.
- The FlashCopy portion of Global Mirror can be ESE or track space efficient (TSE).

During the initial establish, all space on the secondary volume is released unless the no-copy option is selected. The ESE FlashCopy target portion of Global Mirror is released only at the initial establish. The same number of extents that are allocated for the primary volume are allocated for the secondary volume.

With thin provisioning, only the effectively used client data is copied, not the full volume capacity, as shown in Figure 6-12. With this enhancement, clients can save disk capacity on PPRC devices.

Figure 6-12 Thin provisioning: Full provisioning comparison example
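The following small calculation illustrates the savings that Figure 6-12 depicts. The capacity numbers are assumptions for the purpose of the example only.

# Illustrative arithmetic only; the volume sizes and utilization are assumptions.
virtual_capacity_gib = 4000   # provisioned (virtual) capacity of the PPRC primaries
used_capacity_gib = 900       # extents actually allocated by host writes

full_copy = virtual_capacity_gib   # full provisioning: every extent is copied
thin_copy = used_capacity_gib      # thin provisioning: only allocated extents are copied

print(f"Initial copy with full provisioning: {full_copy} GiB")
print(f"Initial copy with thin provisioning: {thin_copy} GiB "
      f"({100 * thin_copy / virtual_capacity_gib:.0f}% of the virtual capacity)")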


6.3.7 GM and MGM improvement because of collision avoidance

Global Copy and Global Mirror (GM) are asynchronous functions that are suited for long distances between a primary and a secondary DS8000 storage system. At long distance, it is important to allow hosts to complete an I/O operation, even if the transaction on the remote site is incomplete. The previous implementation did not always meet this objective.

During high activities (for example, long running batch jobs), multiple writes might use the same track or block, which results in a collision that increases the response time. In this case, consistency cannot be obtained as requested, which increases RPO and causes increased run times for jobs.

The DS8870 and previous models at LMC release 6.3 (LMC 7.6.30.xx) provide a significant improvement in GM collision avoidance.

Global Mirror locks tracks in the consistency group (CG) on the primary DS8000 at the end of the CG formation window.

This configuration implies that the host writes to CG tracks are held in abeyance while the track is locked. The host might experience an increased response time if collision occurs.

If a host write collides with a locked CG track, the following steps are performed (see the sketch after this list):

- The host adapter copies the CG track data to a side file so that the host write can complete without waiting for the previous write to complete.
- The side file can grow to up to 5% of the cache. If the side file exceeds 5% of the cache, the DS8000 microcode cancels the current CG formation. If such cancellations occur five times consecutively, the microcode allows collisions for one CG.
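The following Python sketch captures the decision logic from the preceding list. The 5% side-file limit and the five consecutive cancellations come from the text; the class itself is an illustration, not DS8000 microcode.

# Sketch of the collision-avoidance decision logic described above; illustration only.
class CollisionHandler:
    SIDE_FILE_LIMIT = 0.05     # side file may grow to 5% of cache
    MAX_CANCELLATIONS = 5      # after 5 consecutive cancels, allow collisions for one CG

    def __init__(self, cache_bytes):
        self.cache_bytes = cache_bytes
        self.side_file_bytes = 0
        self.consecutive_cancels = 0

    def host_write_hits_locked_cg_track(self, track_bytes):
        if self.consecutive_cancels >= self.MAX_CANCELLATIONS:
            self.consecutive_cancels = 0
            return "allow collision for this consistency group"
        if self.side_file_bytes + track_bytes > self.SIDE_FILE_LIMIT * self.cache_bytes:
            self.consecutive_cancels += 1
            self.side_file_bytes = 0
            return "cancel current CG formation"
        # Normal case: copy the CG track to the side file so the host write completes.
        self.side_file_bytes += track_bytes
        return "copy track to side file; complete host write"

handler = CollisionHandler(cache_bytes=64 * 2**30)
print(handler.host_write_hits_locked_cg_track(track_bytes=64 * 2**10))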

6.3.8 z/OS Global Mirror

z/OS Global Mirror, previously known as Extended Remote Copy (XRC), is a copy function that is available for the z/OS operating systems. It involves a System Data Mover (SDM) that is found only in z/OS. z/OS Global Mirror maintains a consistent copy of the data asynchronously at a remote location, and can be implemented over unlimited distances. It is a combined hardware and software solution that offers data integrity and data availability and can be used as part of business continuance solutions, for workload movement, and for data migration. The z/OS Global Mirror function is an optional licensed function (called Remote Mirroring for System z (RMZ)) of the DS8000 that enables the SDM to communicate with the primary DS8000. No z/OS Global Mirror license is required for the auxiliary storage system (it can be any storage system that is supported by z/OS). However, you should consider that you might want to reverse the mirror, in which case your secondary DS8000 would need a z/OS Global Mirror license.


The basic operational characteristics of z/OS Global Mirror are shown in Figure 6-13.

Figure 6-13 z/OS Global Mirror basic operations

z/OS Global Mirror on zIIP
The IBM z9® Integrated Information Processor (zIIP) is a special engine that has been available for System z since the z9 generation. z/OS can use these processors to handle eligible workloads from the SDM in a z/OS Global Mirror (zGM) environment.

Given the appropriate hardware and software, a range of zGM workload can be offloaded to zIIP processors. The z/OS software must be at V1.8 or later with APAR OA23174, specifying the zGM parmlib parameter zIIPEnable(YES).


6.3.9 z/OS Metro/Global Mirror

This mirroring capability implements z/OS Global Mirror to mirror primary site data to a location that is a long distance away and also uses Metro Mirror to mirror primary site data to a location within the metropolitan area. This configuration enables a z/OS three-site high-availability and disaster recovery solution for even greater protection against unplanned outages.

The basic operational characteristics of a z/OS Metro/Global Mirror implementation are shown in Figure 6-14.

Figure 6-14 z/OS Metro/Global Mirror

6.3.10 Summary of Remote Mirror and Copy function characteristics

In this section, we summarize the use of and considerations for the set of Remote Mirror and Copy functions that are available with the DS8000 series.

Metro Mirror
Metro Mirror is a function for synchronous data copy at a limited distance and includes the following considerations:

- There is no data loss, and it allows for rapid recovery for distances up to 300 km.
- There is a slight performance impact for write operations.


Global Copy
Global Copy is a function for non-synchronous data copy at long distances, which is limited only by the network implementation, and includes the following considerations:

- It can copy your data at nearly an unlimited distance, which makes it suitable for data migration and daily backup to a remote distant site.
- The copy is normally fuzzy, but it can be made consistent through a synchronization procedure.
- Global Copy is typically used for data migration to new DS8000s by using the existing PPRC FC infrastructure.

Global Mirror
Global Mirror is an asynchronous copy technique. You can create a consistent copy at the secondary site with an adaptable RPO. The RPO specifies how much data you can afford to re-create if the system must be recovered. The following considerations apply:

- Global Mirror can copy to nearly an unlimited distance.
- It is scalable across multiple storage units.
- It can achieve a low RPO if there is enough link bandwidth. When the link bandwidth capability is exceeded with a heavy workload, the RPO grows.
- Global Mirror causes only a slight impact to your application system.

z/OS Global Mirror
z/OS Global Mirror is an asynchronous copy technique that is controlled by z/OS host software called the System Data Mover. The following considerations apply:

- It can copy to nearly unlimited distances.
- It is highly scalable.
- It has a low RPO. The RPO might grow if the bandwidth capability is exceeded, or host performance might be impacted.
- Additional host server hardware and software are required.

6.3.11 Consistency group considerations

In disaster recovery environments that are running Metro/Global Mirror (MGM), the use of consistency groups is suggested to ensure data consistency across multiple volumes.

Consistency groups suspend all copies simultaneously if a suspension occurs on one of the copies.

Consistency groups should be managed by GDPS or Tivoli Storage Productivity Center for Replication, which automate the control and actions in real time and can freeze all Copy Services I/O to the secondaries to keep all data aligned.


6.3.12 GDPS on z/OS environments

Geographically Dispersed Parallel Sysplex (GDPS) is the solution that is offered by IBM to manage large and complex environments and to always keep the client data safe and consistent. It provides an easy interface to manage multiple sites with MGM pairs.

With its HyperSwap capability, GDPS is the ideal solution if you are targeting 99.9999% availability.

GDPS easily monitors and manages your MGM pairs, and also allows clients to perform disaster recovery tests without affecting production. These features lead to faster recovery from real disaster events.

GDPS functionality includes the following examples:

- HyperSwap between the primary and secondary Metro Mirror volumes is managed concurrently with client operations, so operations can continue during a disaster or planned outage.
- Disaster recovery management: if a disaster occurs at the primary site, operations can be restarted at the remote site quickly and safely while data consistency is continuously monitored.
- GDPS freezes the Metro Mirror pairs if there is a problem with mirroring. It restarts the copy process to the secondaries after the problem is evaluated and solved, which maintains data consistency on all pairs.

For more information about GDPS, see GDPS Family: An Introduction to Concepts and Capabilities, SG24-6374.

6.3.13 Tivoli Storage Productivity Center for Replication functionality

By using IBM Tivoli Storage Productivity Center for Replication, which is now part of Tivoli Storage Productivity Center 5.2 or IBM SmartCloud Virtual Storage Center, you can manage synchronous and asynchronous mirroring in several environments. Instead of managing individual volume pairs (as you do with the DS CLI), Tivoli Storage Productivity Center for Replication manages groups of volumes (sessions). You can manage your mirroring environment with a few mouse clicks. Tivoli Storage Productivity Center for Replication makes it easy to start, activate, and monitor a full MGM environment.

For more information, see IBM Tivoli Storage Productivity Center V5.1 Release Guide, SG24-7894, and IBM System Storage DS8000 Copy Services for Open Systems, SG24-6788.

6.4 Resource Groups for Copy Services

Resource Groups are implemented in such a way that each Copy Services volume is separated and protected from other volumes in a Copy Services relationship. Therefore, in a multi-client environment, the data of each client is logically protected from the others. When Resource Groups are defined, an aggregation of resources is defined together with certain policies, depending on how the resources are configured or managed. This configuration enables multi-tenancy by assigning specific resources to specific tenants, which limits Copy Services relationships so that they exist only between resources within each tenant's scope of resources.

Resource Groups provide more policy-based limitations to DS8000 users to secure partitioning of Copy Services resources between user-defined partitions. This process of specifying the appropriate rules is performed by an administrator by using Resource Group functions. A Resource Scope specifies a selection criterion for a set of Resource Groups.

The use of a resource group on DS8000 introduces the following concepts:

- Resource Group Label (RGL): The RGL is a text string 1 - 32 characters long.
- Resource Scope (RS): The RS is a text string 1 - 32 characters long that selects one or more Resource Group Labels by matching the RS to the RGL string (see the matching sketch after this list).
- Resource Group (RG): An RG consists of new configuration objects and has a unique RGL within a storage facility image (SFI). An RG contains specific policies; volumes, LSSs, and LCUs are associated with a single RG.
- User Resource Scope (URS): Each user ID is assigned a URS that contains an RS. The URS cannot be zero.
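The following sketch illustrates how a Resource Scope can select Resource Group Labels, assuming a simple trailing-asterisk wildcard match; for the exact matching rules that the DS8000 supports, see the Resource Groups publication referenced below. The tenant names are hypothetical.

# Sketch of Resource Scope (RS) to Resource Group Label (RGL) selection, assuming a
# simple trailing-'*' wildcard match; illustration only.
def rs_matches_rgl(resource_scope, resource_group_label):
    if resource_scope.endswith("*"):
        return resource_group_label.startswith(resource_scope[:-1])
    return resource_scope == resource_group_label

resource_groups = ["tenantA_prod", "tenantA_test", "tenantB_prod"]

# A user whose User Resource Scope is 'tenantA*' can act only on tenant A's resources.
print([rgl for rgl in resource_groups if rs_matches_rgl("tenantA*", rgl)])
# ['tenantA_prod', 'tenantA_test']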

The DS8870 supports the Resource Groups concept, which was first implemented in the IBM System Storage DS8700 and DS8800 with microcode Release 6.1. RG environments can also be managed by Tivoli Storage Productivity Center for Replication at level 4.1.1.6 and later.

Figure 6-15 shows an example of how the multi-tenancy is used in a mixed DS8000 environment and how the OS environment is separated.

Figure 6-15 Example of a multi-tenancy configuration in a mixed environment


For more information about implementation of, planning for, and the use of Resource Groups, see IBM System Storage DS8000 Series: Resource Groups, REDP-4758.

Important: Resource Groups are implemented in the code by default and are available at no extra cost.


Chapter 7. Architectured for performance

In this chapter, we describe the performance characteristics of the IBM DS8870 with regard to physical and logical configuration. The considerations that are presented in this chapter can help you plan the physical and logical setup of the DS8870.

This chapter covers the following topics:

- Hardware performance characteristics, and POWER7+
- Software performance: Synergy items
- Performance considerations for disk drives
- DS8000 superior caching algorithms
- Performance considerations for logical configuration
- I/O Priority Manager
- IBM Easy Tier
- Performance and sizing considerations for open systems
- Performance and sizing considerations for System z


7.1 DS8870 hardware: Performance characteristics

The IBM DS8870 is designed to support the most demanding business applications with its exceptional all-around performance and data throughput. These features are combined with world-class business resiliency and encryption capabilities to deliver a unique combination of high availability, performance, and security.

The DS8870 features IBM POWER7+ processor-based server technology and uses a PCI Express I/O infrastructure to help support high performance. Besides the 2-core and 4-core processor options, which also existed in the DS8800 and DS8700 models, the DS8870 includes the options for 8-core and 16-core processors per controller. Up to 1 TB of system memory is available in the DS8870 for increased performance.

In this section, we review the architectural layers of the DS8870 and describe the performance characteristics that differentiate the DS8870 from other disk systems.

7.1.1 Vertical growth and scalability

Scalability details for the DS8870 are shown in Figure 7-1. The DS8870 provides a nondisruptive upgrade from the smallest to the largest configuration, including adding cache and processors for increased performance and adding host ports for increased connectivity. You can also add hard disk drives (HDDs) and solid-state drives (SSDs), as well as model 96E expansion frames, to the base 961 frame for increased capacity. Other advanced-function software features, such as Easy Tier, I/O Priority Manager, and Storage Pool Striping, contribute to performance potential.

Figure 7-2 on page 165 shows the configuration options for the DS8870 Business and Enterprise class systems.

Figure 7-1 DS8870 scalability detail

System Class | Active Processors per CEC | Total System Memory | Max. Host Adapters | Max. DA Pairs | Max. SE Pairs / SFF Disks | Max. Expansion Frames
Business     | 2-core  | 16 or 32 GB    |  4 | 1 |  3 / 144  | 0
Business     | 4-core  | 64 GB          |  8 | 2 |  5 / 240  | 0
Business     | 8-core  | 128 or 256 GB  | 16 | 6 | 22 / 1056 | 1 - 2
Business     | 16-core | 512 GB or 1 TB | 16 | 6 | 22 / 1056 | 1 - 2
Enterprise   | 2-core  | 16 or 32 GB    |  4 | 2 |  3 / 144  | 0
Enterprise   | 4-core  | 64 GB          |  8 | 4 |  5 / 240  | 0
Enterprise   | 8-core  | 128 GB         | 16 | 8 | 22 / 1056 | 1 - 2
Enterprise   | 8-core  | 256 GB         | 16 | 8 | 32 / 1536 | 1 - 3
Enterprise   | 16-core | 512 GB or 1 TB | 16 | 8 | 32 / 1536 | 1 - 3


Figure 7-2 shows, by using different colors, how the eight device adapter pairs in the I/O enclosure pairs (vertically in the lower left corner of first two frames) correlate with disk enclosure pairs (spread over four frames) in the DS8870 enterprise class.

Figure 7-2 Enterprise class scalability

For detailed information about hardware and architectural scalability, see Chapter 3, “DS8870 hardware components and architecture” on page 35.

Figure 7-3 shows an example of how DS8870’s performance relatively scales as the configuration changes from 2-core to 16-core in an open systems database environment.

Figure 7-3 Linear performance scalability


7.1.2 DS8870 Fibre Channel switched interconnection at the back-end

The Fibre Channel (FC) technology is commonly used to connect a group of disks in a daisy-chained fashion in a Fibre Channel Arbitrated Loop (FC-AL). To overcome the arbitration issue within FC-AL, the DS8870 architecture uses a switch-based approach when FC-AL switched loops are created, as shown in Figure 4-9 on page 84. This system is called a Fibre Channel switched disk system.

These switches use the FC-AL protocol and attach to the serial-attached SCSI (SAS) drives (bridging to SAS protocol) through a point-to-point connection. The arbitration message of a drive is captured in the switch, processed, and propagated back to the drive, without routing it through all of the other drives in the loop.

Performance is enhanced because both device adapters (DAs) connect to the switched Fibre Channel subsystem back-end, as shown in Figure 7-4. Each DA port can concurrently send and receive data.

Figure 7-4 High availability and increased bandwidth connect both DAs to two logical loops


The two switched point-to-point connections to each drive, which also connect both DAs to each switch, result in the following benefits:

- There is no arbitration competition and interference between one drive and all the other drives because there is no hardware in common for all the drives in the FC-AL loop. This configuration increases bandwidth, which operates at the full 8-Gbps FC speed up to the point in the back end where the FC-to-SAS conversion is made. The full SAS 2.0 speed of 6 Gbps is also used for each individual drive.

- This architecture doubles the bandwidth over conventional FC-AL implementations because two simultaneous operations from each DA allow for two concurrent read operations and two concurrent write operations.

- In addition to superior performance, reliability, availability, and serviceability (RAS) are improved in this setup when compared to conventional FC-AL. The failure of a drive is detected and reported by the switch. The switch ports distinguish between intermittent failures and permanent failures. The ports understand intermittent failures, which are recoverable, and collect data for predictive failure statistics. If one of the switches fails, a disk enclosure service processor detects the failing switch and reports the failure by using the other loop. All drives can still connect through the remaining switch.

Thus far we described the physical structure. A virtualization approach that is built on top of the high-performance architectural design contributes even further to enhanced performance, as described in Chapter 5, “Virtualization concepts” on page 99.

7.1.3 Fibre Channel device adapter

The DS8870 relies on eight disk drive modules (DDMs) to form a RAID 5, RAID 6, or RAID 10 array. These DDMs are split between two Fibre Channel fabrics. With the use of the virtualization approach and the concept of extents, the DAs are mapping the virtualization scheme over the disk system back-end, as shown in Figure 7-5. For more information about disk system virtualization, see Chapter 5, “Virtualization concepts” on page 99.

Figure 7-5 Fibre Channel device adapter

The Redundant Array of Independent Disks (RAID) device adapter technology is built on PowerPC technology, along with an application-specific integrated circuit (ASIC) that is a high-function, high-performance adapter. The adapter is also PCIe Gen2 based and runs at 8 Gbps.

Comparing the DS8870 to the DS8800
Whereas the IBM DS8800 used POWER6+ processor technology with two processor complex options, the two DS8870 generations use the newer IBM POWER7+ and POWER7 processor technologies, respectively, with four processor complex options.

The DS8870 provides from 16 GB to 1 TB of processor memory, which continues to be managed in 4-KB segments for optimal cache efficiency. The total memory that is supported was increased by 166% when compared to the DS8800.

The DS8870 also features some internal communication improvements when compared to the DS8800, which affect its performance. Figure 7-6 shows the changes in the communication and attachment of the central electronics complex (CEC) and some changes in the PCI Express adapters. For cross-cluster communication, the DS8870 uses the Peripheral Component Interconnect Express (PCIe) fabric instead of the separate remote input/output (RIO) path of the DS8800. Cross-cluster communication over PCIe is far more resilient than the dedicated RIO connection in the previous models: RIO had no backup path, whereas PCIe can use any I/O enclosure for cross-cluster communication if a failure occurs. For more information about the CEC hardware architecture, see 3.2, “DS8870 architecture overview” on page 41.

Figure 7-6 Change in CEC’s and XC communication

For communication between the CECs and the DAs, the ownership boundary has changed from the I/O enclosure level to the DA level. The DS8870 uses the DAs in Split Affinity mode, which means that each CEC uses one device adapter in every I/O enclosure. This configuration allows both DAs in an I/O enclosure to communicate concurrently because each uses a different interface connection between the I/O enclosure and the CEC, which significantly improves performance when compared to the approach that is used in previous DS8000 models.


These enhancements in CPU, memory, and internal communication allow the DS8870 to deliver up to three times the I/O operations per second (IOPS). This change greatly improves performance in transaction processing workload environments with better response time when compared to the DS8800. The DS8870 also provides significant performance improvements in sequential read and sequential write throughputs.

For Peer-to-Peer Remote Copy (PPRC) establish, the POWER7+ processor in the DS8870 provides better bandwidth scaling when compared to DS8800 paths.

7.1.4 POWER7+ and POWER7

Two generations of the DS8870 currently exist. Both are based on Power 740 processor complexes and carry the 961 model number, but they use different server submodels. To differentiate them, check in the HMC which server submodel is used: from October 2012 to November 2013, the processor complex was an 8205-E6C server pair based on POWER7 technology with a 3.55 GHz clock speed. Since December 2013, the DS8870 uses an 8205-E6D server pair with a 4.228 GHz clock speed, based on POWER7+.

Figure 7-7 shows the HMC server view.

Figure 7-7 Power 740 servers 8205-E6D: This DS8870 is based on POWER7+

Latest benchmark values: Vendor-neutral independent organizations develop generic benchmarks to allow an easier comparison of intrinsic storage product values in the marketplace.

See the following sections of the Storage Performance Council (SPC) website for the latest benchmark values of the DS8870 to compare with other IBM and non-IBM storage products:

- For an SPC-1 benchmark result, where a random I/O workload is used for testing, see this website:

http://www.storageperformance.org/results/benchmark_results_spc1

- For an SPC-2 benchmark result, where a large block-sized sequential workload is used for testing, see this website:

http://www.storageperformance.org/results/benchmark_results_spc2


For CPU upgrades, existing POWER7 DS8870 models will be fully converted to POWER7+ also.

POWER7+ is based on 32-nm lithography, which allows higher frequencies than the 45-nm process on which the earlier POWER7 models were based. The number of transistors per chip has almost doubled (2.1 billion versus 1.2 billion). Additionally, POWER7+ comes with a Level-3 cache that is 2.5 times larger. The larger L3 cache, together with the higher clock frequencies of POWER7+, has a positive impact on application and I/O handling performance.

Another processor feature of POWER7+ is specialized hardware accelerators. The memory compression accelerator is a significant advancement over previous software-enabled memory compression. With a hardware assist, compression processing can be performed on the chip itself, which boosts efficiency and allows more cycles to be available to process other workload demands.

With the other hardware components unchanged between both DS8870 models, the current high-performance DS8870 model that is based on POWER7+ server technology can deliver up to a 15% improvement in maximum IOPS in transaction-processing workload environments over the prior POWER7 processor-based DS8870 961 model. With additional microcode performance improvements that were introduced with R7.2 (bundle 87.20), especially for all-flash drive configurations, an additional 5% gain in maximum IOPS was achieved for random-I/O workloads.

7.1.5 Eight-port and four-port host adapters

Before we examine the heart of the DS8870, we briefly review the host adapters and their design characteristics to address performance. Figure 7-8 shows the host adapters. These adapters are designed to hold eight or four Fibre Channel (FC) ports, which can be configured to support Fibre Channel Protocol (FCP) or Fibre Channel Connection (FICON).

Figure 7-8 Host adapter with four Fibre Channel ports


With FC adapters that are configured for FICON, the DS8000 series provides the following configuration capabilities:

- Fabric or point-to-point topologies
- A maximum of 128 host adapter ports, depending on the DS8870 processor feature
- A maximum of 509 logins per FC port
- A maximum of 8192 logins per storage unit
- A maximum of 1280 logical paths on each FC port
- Access to all control-unit images over each FICON port
- A maximum of 512 logical paths per control unit image

FICON host channels limit the number of devices per channel to 16,384. To fully access 65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host channels to the storage unit (4 x 16,384 = 65,536 addressable devices). By using a switched configuration, you can expose 64 control-unit images (16,384 devices) to each host channel.

The front end with the 8-Gbps ports scales up to 128 ports for a DS8870 by using the eight-port host bus adapters (HBAs). This configuration results in a theoretical aggregated host I/O bandwidth of 128 times 8 Gbps. Each port provides industry-leading throughput and I/O rates for FICON and FCP.

The host adapter architecture of DS8870 includes the following characteristics (which are identical to the details of DS8800):

- The architecture is fully 8 Gbps capable
- It uses a Gen2 PCIe interface
- It features a dual-core 1.5-GHz PowerPC processor
- The adapter memory is increased fourfold over the previous DS8000 model

The 8-Gbps adapter ports can negotiate to 8, 4, or 2 Gbps (1 Gbps is not possible). To attach to 1-Gbps hosts, storage area network (SAN) switches are required.

7.2 Software performance: Synergy items

A number of performance features in the DS8870 work together with the software on the IBM hosts and are collectively referred to as synergy items. These items allow the DS8870 to cooperate with the host systems in ways that benefit the overall performance of the systems.

7.2.1 Synergy with Power Systems

The IBM DS8870 can work in cooperation with Power Systems to provide the following performance enhancement functions.

Easy Tier support
IBM Easy Tier is an intelligent data placement algorithm of the DS8870, which is designed to support both open systems and System z workloads.

This feature brings the following values:

- Server and storage resources remain optimized for performance and cost objectives
- Significant performance increase
- Reduction in SAN utilization and I/O traffic
- Reduction in administrative costs


Specifically for Power Systems, starting with Licensed Machine Code (LMC) 7.7.10.xx (bundle version 87.10.xxx) on DS8870, IBM Easy Tier is able to manage direct-attached high-performance flash storage on the host as a large, low-latency cache for the hottest DS8870 data. Advanced disk system functions, such as RAID protection and remote mirroring, are preserved while this function is used. This capability is also known as cooperative caching and is implemented by the Easy Tier Server feature. The flash cache in the Power server can be sized so that it serves only the more important applications, and you can specify the hdisks for which local flash caching is enabled. I/O read requests are handled locally in the AIX server with very short response times because they do not need to travel through the SAN.

For more information, see 7.7, “IBM Easy Tier” on page 191, or refer to the IBM Redpaper publication: IBM System Storage DS8000 Easy Tier Server, REDP-4513.

End-to-end I/O priority: Synergy with AIX and DB2 on Power Systems
End-to-end I/O priority is a new addition (requested by IBM) to the SCSI T10 standard. This feature allows trusted applications to override the priority that is given to each I/O by the operating system. This feature is only applicable to raw volumes (no file system) and with the 64-bit kernel. Currently, AIX supports this feature with DB2. The priority is delivered to the storage subsystem in the FCP Transport Header.

The priority of an AIX process can be 0 (no assigned priority) or any integer value from 1 (highest priority) to 15 (lowest priority). All I/O requests that are associated with a process inherit its priority value. However, with end-to-end I/O priority, DB2 can change this value for critical data transfers. At the DS8870, the host adapter gives preferential treatment to higher priority I/O, which improves performance for specific requests that are deemed important by the application, such as requests that might be prerequisites for others (for example, DB2 logs).

Cooperative caching: Synergy with AIX and DB2 on Power Systems
Another software-related performance item is cooperative caching, a feature that provides a way for the host to send cache management hints to the storage facility. Currently, the host can indicate that information it recently accessed is unlikely to be accessed again soon. This hint decreases the retention period of that data in the storage system cache, which allows the subsystem to conserve its cache for data that is more likely to be accessed again, thus improving the cache hit ratio.

With the implementation of cooperative caching, the AIX operating system allows trusted applications, such as DB2, to provide cache hints to the DS8000. This ability improves the performance of the subsystem by keeping more of the repeatedly accessed data cached in the high performance flash at the host. Cooperative caching is supported in IBM System p® AIX with the Multipath I/O (MPIO) Path Control Module (PCM) that is provided with the Subsystem Device Driver (SDD). It is only applicable to raw volumes (no file system) and with the 64-bit kernel.

Long busy wait host tolerance: Synergy with AIX on Power Systems
Another addition to the SCSI T10 standard is SCSI long busy wait, which provides a way for the target system to specify that it is busy and how long the initiator should wait before an I/O is tried again.

This information, provided in the FCP status response, prevents the initiator from trying again too soon. This delay, in turn, reduces unnecessary requests and potential I/O failures because of exceeding a set threshold for the number of times it is tried again. IBM System p AIX supports SCSI long busy wait with MPIO, and it is also supported by the DS8870.


7.2.2 Synergy with System z

The IBM DS8870 can work in cooperation with System z to provide the following performance enhancement functions.

Parallel access volume and HyperPAV
Parallel access volume (PAV) is an optional licensed function of the DS8000 series for the z/OS and z/VM operating systems. It provides the ability to perform multiple I/O requests to the same volume at the same time. With dynamic PAV, the z/OS Workload Manager (WLM) manages the assignment of so-called alias addresses to base addresses. The number of alias addresses defines the parallelism of I/Os to a volume. However, the reaction time of WLM is too slow to cope with rapidly changing workloads. HyperPAV is an extension to PAV in which WLM is no longer involved and any alias address from a pool of addresses can be used to drive the I/O. For more information about PAV, see 7.9.2, “Parallel access volume” on page 201.
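The following toy model illustrates the HyperPAV idea: any free alias in the pool can drive an I/O to any base address for the duration of that single I/O. It is a conceptual illustration only, not z/OS or DS8000 behavior.

# Toy model of HyperPAV alias usage within one LCU; illustration of the concept only.
class HyperPavLcu:
    def __init__(self, alias_count):
        self.free_aliases = list(range(alias_count))

    def start_io(self, base_address):
        if not self.free_aliases:
            return None                     # I/O queues until an alias frees up
        alias = self.free_aliases.pop()
        return (base_address, alias)        # this alias drives one I/O to this base

    def end_io(self, alias):
        self.free_aliases.append(alias)     # alias returns to the shared pool

lcu = HyperPavLcu(alias_count=2)
io1 = lcu.start_io(base_address=0x00)       # two concurrent I/Os to the same volume
io2 = lcu.start_io(base_address=0x00)
print(io1, io2, lcu.start_io(0x01))         # the third I/O must wait: the pool is empty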

DS8000 I/O Priority Manager with z/OS Workload Manager
I/O Priority Manager, together with z/OS Workload Manager (zWLM), enables more effective storage consolidation and performance management when different workloads share a common disk pool (extent pool). This function, which is now tightly integrated with zWLM, is intended to improve disk I/O performance for important workloads. It also drives I/O prioritization to the disk system by allowing WLM to give priority to the system’s resources (disk arrays) automatically when higher priority workloads are not meeting their performance goals. I/Os of lower-priority workloads to the same extent pool are slowed down to give the higher-priority workloads a larger share of the resources, mainly the disk drives. Integration with zWLM is exclusive to DS8000 and System z systems. For more information about I/O Priority Manager, see DS8000 I/O Priority Manager, REDP-4760.

Important: I/O Priority Manager is an optional feature. It is not supported for the DS8870 business class configuration with 16-GB cache.

Easy Tier support
IBM Easy Tier is an intelligent data placement algorithm of the DS8870, which is designed to support both open systems and System z workloads.

This feature brings the following values:

- Server and storage resources remain optimized for performance and cost objectives
- Significant performance increase
- Reduction in administrative costs

For more information, see 7.7, “IBM Easy Tier” on page 191.

Extended Address Volumes
This capability can help relieve address constraints to support large storage capacity needs by addressing the capability of System z environments to support volumes that can scale up to approximately 1 TB (1,182,006 cylinders). Refer also to 5.5, “EAV V2: Extended address volumes” on page 132.

High Performance FICON for z
High Performance FICON for z (zHPF) is an enhancement of the FICON channel architecture. You can reduce the FICON channel I/O traffic overhead by using zHPF with the FICON channel, the z/OS operating system, and the control unit. zHPF allows the control unit to stream the data for multiple commands back in a single data transfer section for I/Os that are initiated by various access methods, which improves the channel throughput on small block transfers.

zHPF is an optional feature of the DS8870. Recent enhancements to zHPF include Extended Distance capability, zHPF List Pre-fetch, Format Write, and zHPF support for sequential access methods. DS8870 with zHPF and z/OS V1.13 has significant I/O performance improvements for certain I/O transfers for workloads that use queued sequential access method (QSAM), basic partitioned access method (BPAM), and basic sequential access method (BSAM) access methods.

zHPF is enhanced to support DB2 list prefetch. These enhancements include a new cache optimization algorithm that can greatly improve performance and hardware efficiency. When combined with the latest releases of z/OS and DB2, it can demonstrate up to a 14% to 60% increase in sequential or batch-processing performance. All DB2 I/Os, including format writes and list prefetches, are eligible for zHPF. In addition, DB2 can benefit from the new caching algorithm at the DS8000 level that is called List Pre-fetch Optimizer (LPO).

For more information about list prefetch, see DB2 for z/OS and List Prefetch Optimizer, REDP-4862. For more information about zHPF, see 7.9.9, “High Performance FICON for z” on page 212.

Quick initialization (System z)
IBM System Storage DS8000 supports quick volume initialization for System z environments, which can help clients who frequently delete volumes, allowing capacity to be reconfigured without waiting for initialization. Quick initialization initializes the data logical tracks or blocks within a specified extent range on a logical volume with the appropriate initialization pattern for the host.

Normal read and write access to the logical volume is allowed during the initialization process. Therefore, the extent metadata must be allocated and initialized before the quick initialization function is started. Depending on the operation, the quick initialization can be started for the entire logical volume or for an extent range on the logical volume.

Quick initialization improves device initialization speeds and allows a Copy Services relationship to be established after a device is created.


7.3 Performance considerations for disk drives

When you are planning your system, you can determine the number and type of ranks that are required based on the needed capacity and on the workload characteristics in terms of access density, read-to-write ratio, and cache hit rates.

You can approach this task from the disk side and look at basic disk performance figures. Current SAS 15K rpm disks, for example, provide an average seek time of approximately 3 ms and an average latency of 2 ms. For transfers of only a small block, the transfer time can be neglected. This gives an average of 5 ms per random disk I/O operation, or 200 IOPS per drive. A set of eight disks (as is the case for a DS8000 array) thus potentially sustains 1600 IOPS when spinning at 15K rpm. Reduce that number by 12.5% (to 1400 IOPS) when you assume a spare drive in the array site.

Back on the host side, consider an example with 1000 IOPS from the host, a read-to-write ratio of 70/30, and 50% read cache hits. This configuration leads to the following IOPS numbers:

- 700 read IOPS.
- 350 read I/Os must be read from disk (based on the 50% read cache hit ratio).
- 300 writes with RAID 5 result in 1200 disk operations because of the RAID 5 write penalty (read old data and parity, write new data and parity).
- This totals 1550 disk I/Os.

With 15K rpm DDMs performing 1000 random IOPS from the server, we actually complete 1550 I/O operations on disk, compared to a maximum of 1600 operations for 7+P configurations or 1400 operations for 6+P+S configurations. Thus, in this scenario, 1000 random I/Os from a server with the given read-to-write ratio and cache hit ratio saturate the disk drives. We made the assumption that server I/O is purely random. When there are sequential I/Os, track-to-track seek times are much lower and higher I/O rates are possible. We also assumed that reads have a cache-hit ratio of only 50%. With higher hit ratios, higher workloads are possible. These considerations show the importance of intelligent caching algorithms as used in the DS8000, which are described in 7.4, “DS8000 superior caching algorithms” on page 178.

For a single disk drive, various disk vendors provide the disk specifications on their websites. Because the access times for the disks are the same for the same rpm speeds but the capacities differ, the I/O density differs as well. A 146 GB 15K rpm disk drive can be used for access densities up to, and slightly over, 1 I/O per GB. For 600-GB drives, it is approximately 0.25 I/O per GB. Although this discussion is theoretical in approach, it provides a first estimate.
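The back-end I/O arithmetic from the preceding paragraphs can be captured in a short calculation. The numbers (1000 host IOPS, a 70/30 read/write ratio, a 50% read-cache-hit ratio, a RAID 5 write penalty of 4, and roughly 200 random IOPS per 15K rpm drive) come from the example above; the helper function itself is only an illustrative rough-sizing aid.

# Back-end IOPS estimate using the numbers from the example in the text.
def backend_iops(host_iops, read_ratio, read_hit_ratio, write_penalty=4):
    reads = host_iops * read_ratio
    read_misses = reads * (1 - read_hit_ratio)   # only read misses reach the drives
    writes = host_iops * (1 - read_ratio)
    return read_misses + writes * write_penalty  # RAID 5: 4 disk ops per random write

disk_ops = backend_iops(1000, read_ratio=0.7, read_hit_ratio=0.5)
print(disk_ops)                                  # 1550.0 back-end disk I/Os

array_capability = 7 * 200                       # 6+P+S array: about 1400 IOPS (spare drive assumed)
print(disk_ops <= array_capability)              # False: this workload saturates the rank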

After the speed of the disk is decided, the capacity can be calculated based on your storage capacity needs and the effective capacity of the RAID configuration you use. For more information about calculating these needs, see Table 8-8 on page 238.

Important: When a storage system is sized, you should consider the capacity and the number of disk drives that are needed to satisfy the performance requirements.


Flash drives
From a performance point of view, the best choice for your DS8870 disks is the solid-state flash drives. Flash drives have no moving parts (no spinning platters and no actuator arm) and a lower energy consumption. Their performance advantages are fast seek times and average access times. They are targeted at applications with heavy IOPS, poor cache hit rates, and random-access workloads that require fast response times. Database applications, with their random and intensive I/O workloads, are prime candidates for deployment on SSDs.

Enterprise SAS drives
Enterprise SAS drives provide high performance, reliability, availability, and serviceability. Enterprise drives rotate at 15,000 or 10,000 rpm. If an application requires high-performance data throughput and continuous, intensive I/O operations, enterprise drives are the best price-performance option.

Nearline-SAS
When disk alternatives are analyzed, keep in mind that the 4-TB Nearline drives are the largest and slowest of the drives that are available for the DS8870. Because of their poorer seek time compared to Enterprise SAS drives, Nearline-SAS drives are not designed to support high-performance or I/O-intensive applications for which data is accessed mostly in a random manner. Nearline-SAS drives can be a cost-efficient storage option for sequential workloads.

RAID level
The DS8000 series offers RAID 5, RAID 6, and RAID 10.

RAID 5
Normally, RAID 5 is used because it provides good performance for random and sequential workloads and it does not need much more storage for redundancy (one parity drive). The DS8000 series can detect sequential workloads. When a complete stripe is in cache for destage, the DS8000 series switches to a RAID 3-like algorithm. Because a complete stripe must be destaged, the old data and parity do not need to be read. Instead, the new parity is calculated across the stripe, and the data and parity are destaged to disk. This configuration provides good sequential performance. A random write causes a cache hit, but the I/O is not complete until a copy of the write data is put in non-volatile storage (NVS). When data is destaged to disk, a write in RAID 5 causes the following four disk operations, the so-called write penalty:

- Old data and the old parity information must be read.
- New parity is calculated in the device adapter.
- Data and parity are written to disk.

Most of this activity is hidden to the server or host because the I/O is complete when data enters cache and NVS.

Cost-effective option: The Nearline-SAS drives offer a cost-effective option for lower priority data, such as various fixed content, data archival, reference data, and Nearline applications that require large amounts of storage capacity for lighter workloads. These drives are meant to complement, not compete with, existing Enterprise SAS drives.

Important: Solid-state flash drives should be configured as RAID 5 arrays and have the option for other RAID level types via “request for price quotation” (RPQ).


RAID 6
RAID 6 is an option that increases data fault tolerance. It allows two disk failures, compared to RAID 5, which is single-disk fault tolerant, by using a second independent distributed parity scheme (dual parity). RAID 6 provides read performance that is similar to RAID 5, but it has a higher write penalty than RAID 5 because it must write a second parity stripe.

RAID 6 should be considered in situations where you would consider RAID 5, but need increased reliability. RAID 6 was designed for protection during longer rebuild times on larger capacity drives to cope with the risk of having a second drive failure within a rank while the failed drive is being rebuilt. It has the following characteristics:

- Sequential read: About 99% of the RAID 5 rate
- Sequential write: About 65% of the RAID 5 rate
- Random 4 KB 70%R/30%W IOPS: About 55% of the RAID 5 rate

The performance is degraded with two failing disks.

RAID 10
A workload that is dominated by random writes benefits from RAID 10. Here, data is striped across several disks and mirrored to another set of disks. A write causes only two disk operations, compared to four operations for RAID 5. However, you need nearly twice as many disk drives for the same capacity when compared to RAID 5. Thus, for twice the number of drives (and cost), you can achieve four times more random writes, so it is worth considering the use of RAID 10 for high-performance random-write workloads.

The decision to configure capacity as RAID 5, RAID 6, or RAID 10, and the amount of capacity to configure for each type, can be made at any time. RAID 5, RAID 6, and RAID 10 arrays can be intermixed within a single system and the physical capacity can be logically reconfigured later (for example, RAID 6 arrays can be reconfigured into RAID 5 arrays). However, the arrays must first be emptied because changing the RAID level is not permitted when logical volumes exist.
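The following snippet summarizes the approximate back-end cost of a host random write for the three RAID levels that were just described (RAID 5: four disk operations, RAID 6: six, RAID 10: two). It ignores sequential full-stripe writes and cache effects and is meant only as a rough comparison.

# Rough comparison of back-end write cost per host random write; illustration only.
write_penalty = {"RAID 5": 4, "RAID 6": 6, "RAID 10": 2}

def backend_writes(host_write_iops, raid_level):
    return host_write_iops * write_penalty[raid_level]

for level in write_penalty:
    print(level, backend_writes(300, level), "disk operations for 300 host writes")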

Important: The 4-TB Nearline drives should be configured as RAID 6 arrays. RAID 6 is an option for the enterprise SAS drives.

Important: Consult with your IBM representative for the latest information about supported RAID configurations. For more information about important restrictions on DS8870 RAID configurations, see 4.6.1, “RAID configurations” on page 83.


7.4 DS8000 superior caching algorithms

Most, if not all, high-end disk systems have an internal cache that is integrated into the system design. The DS8870 can be equipped with up to 1024 GB of memory, of which the major part is used as cache. This is more than double the cache that the DS8800 model offers for the same maximum disk capacity.

With its powerful POWER7+ processors, the server architecture of the DS8870 makes it possible to manage such large caches with small cache segments of 4 KB (and hence large segment tables). The POWER7+ processors have enough power to implement sophisticated caching algorithms, which are a significant advantage of the IBM DS8870 from a performance perspective. These algorithms and the small cache segment size optimize cache hits and cache utilization. Cache hits are also optimized even when different workloads, such as sequential workloads and transaction-oriented random workloads, are active at the same time. Therefore, the DS8870 provides excellent I/O response times.

Write data is always protected by maintaining a copy of write-data in NVS of the other POWER server in DS8000 until the data is destaged to disks.

7.4.1 Sequential Adaptive Replacement Cache

The DS8000 series uses the Sequential Adaptive Replacement Cache (SARC) algorithm, which was developed by IBM Storage Development in partnership with IBM Research. It is a self-tuning, self-optimizing solution for a wide-range of workloads with a varying mix of sequential and random I/O streams. SARC is inspired by the Adaptive Replacement Cache (ARC) algorithm and inherits many features of it. For more information about ARC, see “Outperforming LRU with an adaptive replacement cache algorithm” by N. Megiddo, et al., in IEEE Computer, volume 37, number 4, pages 58–65, 2004. For more information about SARC, see “SARC: Sequential Prefetching in Adaptive Replacement Cache” by Binny Gill, et al., in “Proceedings of the USENIX 2005 Annual Technical Conference”, pages 293–308.

SARC attempts to determine the following cache characteristics:

- When data is copied into the cache
- Which data is copied into the cache
- Which data is evicted when the cache becomes full
- How the algorithm dynamically adapts to different workloads

The DS8000 series cache is organized in 4-KB pages that are called cache pages or slots. This unit of allocation (which is smaller than the values that are used in other storage systems) ensures that small I/Os do not waste cache memory.

The decision to copy data into the DS8870 cache can be triggered from the following policies:

- Demand paging

Eight disk blocks (a 4 KB cache page) are brought in only on a cache miss. Demand paging is always active for all volumes and ensures that I/O patterns with some locality find at least some recently used data in the cache.

- Prefetching

Data is copied into the cache speculatively, even before it is requested. To prefetch, a prediction of likely data accesses is needed. Because effective, sophisticated prediction schemes need an extensive history of page accesses (which is not feasible in real systems), SARC uses prefetching for sequential workloads. Sequential access patterns naturally arise in video-on-demand, database scans, copy, backup, and recovery. The goal of sequential prefetching is to detect sequential access and effectively prefetch the likely cache data to minimize cache misses. Today, prefetching is ubiquitously applied in web servers and clients, databases, file servers, on-disk caches, and multimedia servers.

For prefetching, the cache management uses tracks. A track is a set of 128 disk blocks (16 cache pages). To detect a sequential access pattern, counters are maintained with every track to record whether a track was accessed together with its predecessor. Sequential prefetching becomes active only when these counters suggest a sequential access pattern. In this manner, the DS8870 monitors application read-I/O patterns and dynamically determines whether it is optimal to stage into cache the following I/O elements:

� Only the page that is requested
� The page that is requested plus the remaining data on the disk track
� An entire disk track (or a set of disk tracks) that was not requested

The decision of when and what to prefetch is made in accordance with the Adaptive Multi-stream Prefetching (AMP) algorithm, which dynamically adapts the amount and timing of prefetches optimally on a per-application basis (rather than a system-wide basis). For more information about AMP, see 7.4.2, “Adaptive Multi-stream Prefetching” on page 180.
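
The sketch below, in Python, illustrates the general idea of per-track sequential detection and the resulting staging decision. The threshold, the counter handling, and all names are assumptions made for this example only; they do not represent the actual DS8870 microcode:

# Simplified illustration of per-track sequential detection (not DS8870 microcode).
# A track is 16 cache pages (128 disk blocks); a counter per track records whether
# the track was accessed together with its predecessor.

SEQ_THRESHOLD = 2      # assumed number of consecutive-track hits to call a pattern sequential

class TrackStats:
    def __init__(self):
        self.seq_counter = 0

def classify_access(track_id, stats, recently_accessed):
    """Return what to stage for a read: 'page', 'rest_of_track', or 'next_tracks'."""
    st = stats.setdefault(track_id, TrackStats())
    prev = stats.get(track_id - 1)
    if (track_id - 1) in recently_accessed and prev is not None:
        st.seq_counter = prev.seq_counter + 1     # accessed together with its predecessor
    else:
        st.seq_counter = 0
    recently_accessed.add(track_id)

    if st.seq_counter >= SEQ_THRESHOLD:
        return "next_tracks"       # sequential: prefetch the following track(s)
    if st.seq_counter > 0:
        return "rest_of_track"     # possibly sequential: stage the remainder of this track
    return "page"                  # random: stage only the requested 4 KB page

# A strictly ascending track pattern is quickly classified as sequential.
stats, recent = {}, set()
for t in range(100, 106):
    print(t, classify_access(t, stats, recent))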

To decide which pages are evicted when the cache is full, sequential and random (non-sequential) data is separated into separate lists. The SARC algorithm for random and sequential data is shown in Figure 7-9.

Figure 7-9 Sequential Adaptive Replacement Cache

A page that was brought into the cache by simple demand paging is added to the head (the Most Recently Used, or MRU, position) of the RANDOM list. Without further I/O access, it moves down to the Least Recently Used (LRU) bottom of the list. A page that was brought into the cache by a sequential access or by sequential prefetching is added to the MRU head of the SEQ list and then moves down that list. Other rules control the migration of pages between the lists so that the same pages are not kept in memory twice.

To follow workload changes, the algorithm trades cache space between the RANDOM and SEQ lists dynamically and adaptively. This function makes SARC scan-resistant so that one-time sequential requests do not pollute the whole cache. SARC maintains a wanted (desired) size parameter for the sequential list. The wanted size is continually adapted in response to the workload. Specifically, if the bottom portion of the SEQ list is found to be more valuable than the bottom portion of the RANDOM list, the wanted size is increased; otherwise, the wanted size is decreased. The constant adaptation strives to make optimal use of limited cache space and delivers greater throughput and faster response times for a specific cache size.

Additionally, the algorithm dynamically modifies the sizes of the two lists and the rate at which the sizes are adapted. In a steady state, pages are evicted from the cache at the rate of cache misses. A larger (or smaller) rate of misses results in a faster (or slower) rate of adaptation.
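
A minimal Python sketch of the two-list structure and the adaptive wanted size follows. It captures only the general mechanism described above (hits in the bottom, LRU, portions drive the wanted size of the SEQ list); the class name, the adaptation step, and the eviction details are assumptions, not the actual SARC implementation:

from collections import OrderedDict

class SarcSketch:
    """Toy model of SARC's RANDOM and SEQ lists with an adaptive wanted size for SEQ."""
    def __init__(self, capacity, bottom_fraction=0.1):
        self.capacity = capacity
        self.bottom = max(1, int(capacity * bottom_fraction))
        self.random = OrderedDict()              # LRU at the front, MRU at the end
        self.seq = OrderedDict()
        self.wanted_seq = capacity // 2          # adapted at run time

    def _hit_in_bottom(self, lst, page):
        return page in lst and list(lst).index(page) < self.bottom

    def access(self, page, sequential):
        # Adapt the wanted SEQ size: whichever bottom portion proves more valuable grows.
        if self._hit_in_bottom(self.seq, page):
            self.wanted_seq = min(self.capacity, self.wanted_seq + 1)
        elif self._hit_in_bottom(self.random, page):
            self.wanted_seq = max(0, self.wanted_seq - 1)

        for lst in (self.random, self.seq):      # a page lives in only one list
            lst.pop(page, None)
        target = self.seq if sequential else self.random
        target[page] = True                      # insert at the MRU end

        while len(self.random) + len(self.seq) > self.capacity:
            victim = self.seq if len(self.seq) > self.wanted_seq else self.random
            if not victim:
                victim = self.seq if victim is self.random else self.random
            victim.popitem(last=False)           # evict from the LRU end

cache = SarcSketch(capacity=8)
for p in (1, 2, 3, 1):
    cache.access(p, sequential=False)
for p in range(100, 110):                        # a one-time scan cannot flush the RANDOM list
    cache.access(p, sequential=True)
print(len(cache.random), len(cache.seq), cache.wanted_seq)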

Other implementation details take into account the relationship of read and write (NVS) cache, efficient destaging, and the cooperation with Copy Services. In this manner, the DS8870 cache management goes far beyond the usual variants of the Least Recently Used/Least Frequently Used (LRU/LFU) approaches, which are widely used in other storage systems on the market.

7.4.2 Adaptive Multi-stream Prefetching

As described previously, SARC dynamically divides the cache between the RANDOM and SEQ lists, where the SEQ list maintains pages that are brought into the cache by sequential access or sequential prefetching.

In DS8870, Adaptive Multi-stream Prefetching (AMP), an algorithm that was developed by IBM Research, manages the SEQ list. AMP is an autonomic, workload-responsive, self-optimizing prefetching technology that adapts the amount of prefetch and the timing of prefetch on a per-application basis to maximize the performance of the system. The AMP algorithm solves the following problems that plague most other prefetching algorithms:

� Prefetch wastage occurs when prefetched data is evicted from the cache before it can be used.

� Cache pollution occurs when less useful data is prefetched instead of more useful data.

By wisely choosing the prefetching parameters, AMP provides optimal sequential read performance and maximizes the aggregate sequential read throughput of the system. The amount that is prefetched for each stream is dynamically adapted according to the application's needs and the space that is available in the SEQ list. The timing of the prefetches is also continuously adapted for each stream to avoid misses and any cache pollution.
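
The following sketch conveys only the flavor of per-stream adaptation: the prefetch amount grows when a stream has to wait for data and shrinks when prefetched data ages out unused. The trigger conditions, step sizes, and names are loose assumptions for illustration; this is not the published AMP algorithm:

# Loose illustration of per-stream prefetch adaptation (not the published AMP algorithm).

class StreamPrefetchState:
    def __init__(self, degree=2, trigger=1, max_degree=64):
        self.degree = degree          # how many tracks to prefetch ahead for this stream
        self.trigger = trigger        # how early (unread tracks left) to issue the next prefetch
        self.max_degree = max_degree

    def on_demand_miss(self):
        # The stream had to wait for data: prefetch more, and start earlier.
        self.degree = min(self.max_degree, self.degree * 2)
        self.trigger = min(self.degree, self.trigger + 1)

    def on_prefetch_wasted(self):
        # Prefetched tracks were evicted unused: prefetching was too aggressive.
        self.degree = max(1, self.degree // 2)
        self.trigger = max(1, self.trigger - 1)

stream = StreamPrefetchState()
stream.on_demand_miss(); stream.on_demand_miss()
print(stream.degree, stream.trigger)     # grows toward the stream's needs
stream.on_prefetch_wasted()
print(stream.degree, stream.trigger)     # backs off when prefetched data is wasted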

SARC and AMP play complementary roles. While SARC is carefully dividing the cache between the RANDOM and the SEQ lists to maximize the overall hit ratio, AMP is managing the contents of the SEQ list to maximize the throughput obtained for the sequential workloads. Whereas SARC impacts cases that involve both random and sequential workloads, AMP helps any workload that has a sequential read component, including pure sequential read workloads.

AMP dramatically improves performance for common sequential and batch processing workloads. It also provides excellent performance synergy with DB2 by preventing table scans from being I/O bound and improves performance of index scans and DB2 utilities, such as Copy and Recover. Furthermore, AMP reduces the potential for array hot spots that result from extreme sequential workload demands.

For more information about AMP and the theoretical analysis for its optimal usage, see “AMP: Adaptive Multi-stream Prefetching in a Shared Cache” by Binny Gill, et al., in USENIX File and Storage Technologies (FAST), 13 – 16 February 2007, San Jose, CA. For a more detailed description, see “Optimal Multistream Sequential Prefetching in a Shared Cache” by Binny Gill, et al., in the ACM Journal of Transactions on Storage, October 2007.


7.4.3 Intelligent Write Caching

Another cache algorithm, referred to as Intelligent Write Caching (IWC), was implemented in the DS8000 series. IWC improves performance through better write cache management and a better destaging order of writes. This algorithm is a combination of CLOCK, a predominantly read cache algorithm, and CSCAN, an efficient write cache algorithm. Out of this combination, IBM produced a powerful and widely applicable write cache algorithm.

The CLOCK algorithm uses temporal ordering. It keeps a circular list of pages in memory, with a hand that points to the oldest page in the list. When a page must be inserted into the cache, the R (recency) bit at the hand's location is inspected. If R is zero, the new page is put in place of the page that the hand points to and its R bit is set to 1; otherwise, the R bit is cleared (set to zero). Then, the clock hand moves one step clockwise and the process is repeated until a page is replaced.

The CSCAN algorithm uses spatial ordering. The CSCAN algorithm is the circular variation of the SCAN algorithm. The SCAN algorithm tries to minimize the disk head movement when servicing read and write requests. It maintains a sorted list of pending requests with the position on the drive of the request. Requests are processed in the current direction of the disk head until it reaches the edge of the disk. At that point, the direction changes. In the CSCAN algorithm, the requests are always served in the same direction. After the head arrives at the outer edge of the disk, it returns to the beginning of the disk and services the new requests in this one direction only. This process results in more equal performance for all head positions.

The basic idea of IWC is to maintain a sorted list of write groups, as in the CSCAN algorithm. The smallest and the highest write groups are joined, forming a circular queue. The new idea is to maintain a recency bit for each write group, as in the CLOCK algorithm. A write group is always inserted in its correct sorted position, and its recency bit is initially set to zero. When a write hit occurs, the recency bit is set to one. For destage operations, a destage pointer is maintained that scans the circular list and looks for destage victims. The algorithm destages only write groups whose recency bit is zero. Write groups with a recency bit of one are skipped, and their recency bit is reset to zero, which gives an extra life to those write groups that were hit since the last time the destage pointer visited them. This mechanism is illustrated in Figure 7-10 on page 182.
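
A minimal Python sketch of this destage scan follows, keeping write groups in sorted (CSCAN) order with a CLOCK-style recency bit. The data structures and names are illustrative assumptions only, not the DS8870 implementation:

import bisect

class IwcSketch:
    """Toy model of IWC-style destage selection: CSCAN ordering plus CLOCK recency bits."""
    def __init__(self):
        self.positions = []     # cached write groups, sorted by disk position
        self.recency = {}       # position -> recency bit
        self.pointer = 0        # destage pointer into self.positions

    def write(self, position):
        if position in self.recency:
            self.recency[position] = 1            # write hit: set the recency bit
        else:
            bisect.insort(self.positions, position)
            self.recency[position] = 0            # new write group starts with bit 0

    def next_destage_victim(self):
        """Scan the circular list; skip (and clear) groups whose recency bit is 1."""
        while self.positions:
            if self.pointer >= len(self.positions):
                self.pointer = 0                  # wrap around: circular scan
            pos = self.positions[self.pointer]
            if self.recency[pos] == 0:
                self.positions.pop(self.pointer)
                del self.recency[pos]
                return pos                        # destage this write group
            self.recency[pos] = 0                 # give it one extra life
            self.pointer += 1
        return None

iwc = IwcSketch()
for p in (50, 10, 30, 70):
    iwc.write(p)
iwc.write(30)                                     # write hit: 30 earns an extra life
print([iwc.next_destage_victim() for _ in range(4)])   # [10, 50, 70, 30]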

In the DS8000 implementation, an IWC list is maintained for each rank. The dynamically adapted size of each IWC list is based on workload intensity on each rank. The rate of destage is proportional to the portion of NVS that is occupied by an IWC list (the NVS is shared across all ranks in a cluster). Furthermore, destages are smoothed out so that write bursts are not translated into destage bursts.

Another enhancement to IWC is an update to the cache algorithm that increases the residency time of data in NVS. This improvement focuses on maximizing throughput with good average response time.


Figure 7-10 shows the concept of IWC.

Figure 7-10 Intelligent Write Caching

In summary, IWC has better or comparable peak throughput to the best of CSCAN and CLOCK across a wide gamut of write cache sizes and workload configurations. In addition, even at lower throughputs, IWC has lower average response times than CSCAN and CLOCK.

7.5 Performance considerations for logical configuration

To determine the optimal DS8870 layout, the I/O performance requirements of the servers and applications should be defined up front because they play a large part in dictating the physical and logical configuration of the disk system. Before the disk system is designed, the disk space requirements of the application should be well-understood.

7.5.1 Workload characteristics

The answers to questions such as “How many host connections do I need?” and “How much cache do I need?” always depend on the workload requirements, such as how many I/Os per second per server and how many I/Os per second per gigabyte of storage.

The following information must be considered for a detailed modeling:

� Number of I/Os per second
� I/O density
� Megabytes per second
� Relative percentage of reads and writes
� Random or sequential access characteristics
� Cache-hit ratio
� Response time


7.5.2 Data placement in the DS8000

After you determine the disk subsystem throughput, disk space, and the number of disks that are required by your hosts and applications, you must make a decision regarding data placement.

As is common for data placement, and to optimize DS8870 resource utilization, use the following guidelines:

� Equally spread the workload across the DS8870 servers. Spreading the volumes equally on rank group 0 and 1 balances the load across the DS8870 units.

� Balance the ranks and extent pools between the two DS8870 servers to support the corresponding workloads on them.

� Use as many disks as possible. Avoid idle disks, even if all storage capacity is not to be initially used.

� Distribute capacity and workload across DA pairs.

� Use multi-rank extent pools.

� Stripe your logical volume across several ranks (the default for multi-rank extent pools).

� Consider placing specific database objects (such as logs) on separate ranks.

� For an application, use volumes from even and odd-numbered extent pools (even-numbered pools are managed by server 0, and odd numbers are managed by server 1).

� For large, performance-sensitive applications, consider the use of two dedicated extent pools (one managed by server 0, the other managed by server 1).

� Consider mixed extent pools with multiple tiers with solid-state flash as the highest tier, managed by IBM Easy Tier.

Generally speaking, in a typical DS8870 configuration with workloads distributed equally across the two servers, two extent pools (Extpool 0 and Extpool 1) are created, each containing half of the ranks, as shown in Figure 7-11. The ranks in each extent pool are spread equally across the DA pairs.

Figure 7-11 Ranks in a multi-rank extent pool configuration that is balanced across DS8000 servers

All disks in the storage disk system should have roughly equivalent utilization. Any disk that is used more than the other disks becomes a bottleneck to performance. A practical method is to use IBM Easy Tier auto-rebalancing. Alternatively, make extensive use of volume-level striping across disk drives.

Data Striping
For optimal performance, your data should be spread across as many hardware resources as possible. RAID 5, RAID 6, or RAID 10 already spreads the data across the drives of an array, but this configuration is not always enough. The following approaches can be used to spread your data across even more disk drives:

� Storage-Pool Striping (usually combined with automated intra-tier auto-rebalancing)
� Striping at the host level

Intra-tier auto-rebalancing or auto-rebalance is a capability of IBM Easy Tier that automatically rebalances the workload across all ranks of a storage tier within a managed extent pool. Auto-rebalance migrates extents across ranks within a storage tier to achieve a balanced workload distribution across the ranks and avoid hotspots. By doing so, auto-rebalance reduces performance skew within a storage tier and provides the best available I/O performance from each tier. Furthermore, auto-rebalance also automatically populates new ranks that are added to the pool when the workload is rebalanced within a tier. Auto-rebalance can be enabled for hybrid and homogeneous extent pools.

Figure 7-12 Select the All Pools option to balance not only mixed storage pools

Important: It is suggested to use IBM Easy Tier to balance workload across all ranks even when solid-state flash drives are not installed. Use the option that is shown in Figure 7-12 to balance all of the storage pools.


Storage-Pool Striping: Extent Rotation
Storage-Pool Striping is a technique for spreading the data across several disk arrays. The I/O capability of many disk drives can be used in parallel to access data on the logical volume.

The easiest way to stripe is to use extent pools with more than one rank and use Storage-Pool Striping when a new volume (see Figure 7-13) is allocated. This striping method is independent of the operating system.

Figure 7-13 Storage Pool Striping

In 7.3, “Performance considerations for disk drives” on page 175, we describe how many random I/Os can be performed for a standard workload on a rank. If a volume is on just one rank, the I/O capability of this rank also applies to the volume. However, if this volume is striped across several ranks, the I/O rate to this volume can be much higher.

The total number of I/Os that can be performed on a set of ranks does not change with Storage-Pool Striping.
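
The following few lines show, with assumed names and a simple round-robin rotation, how the 1 GiB extents of a logical volume can be spread across the ranks of a multi-rank extent pool. The real allocation logic is more involved, so treat this only as a sketch of the concept:

# Simplified view of Storage-Pool Striping: successive 1 GiB extents of a volume
# are rotated across the ranks of the extent pool (illustration only).

def extent_layout(volume_gib, ranks):
    """Return the rank that holds each 1 GiB extent of the volume."""
    return [ranks[i % len(ranks)] for i in range(volume_gib)]

# An 8 GiB LUN in a four-rank extent pool touches every rank twice.
print(extent_layout(8, ["R1", "R2", "R3", "R4"]))
# ['R1', 'R2', 'R3', 'R4', 'R1', 'R2', 'R3', 'R4']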

Important: Use Storage-Pool Striping and extent pools with a minimum of four to eight ranks to avoid hot spots on the disk drives. In addition to this configuration, consider combining it with Easy Tier auto-rebalancing.


A good configuration is shown in Figure 7-14. The ranks are attached to DS8870 server 0 and server 1 in a half-and-half configuration, and ranks on separate device adapters are used in a multi-rank extent pool.

Figure 7-14 Balanced extent pool configuration

Striping at the host level
Many operating systems include the option to stripe data across several (logical) volumes. An example is the AIX Logical Volume Manager (LVM).

LVM striping is a technique for spreading the data in a logical volume across several disk drives in such a way that the I/O capacity of the disk drives can be used in parallel to access data on the logical volume. The primary objective of striping is high-performance reading and writing of large sequential files, but there are also benefits for random access.

Other examples for applications that stripe data across the volumes include the SAN Volume Controller and IBM System Storage N series Gateway.


If you use a Logical Volume Manager (such as LVM on AIX) on your host, you can create a host logical volume from several DS8000 series logical volumes (LUNs). You can select LUNs from different DS8870 servers and device adapter pairs, as shown in Figure 7-15. By striping your host logical volume across the LUNs, the best performance for this LVM volume is realized.

Figure 7-15 Optimal placement of data

Figure 7-15 shows an optimal distribution of eight logical volumes within a DS8870. You could have more extent pools and ranks, but when you want to distribute your data for optimal performance, you should make sure that you spread it across the two servers, across different device adapter pairs, and across several ranks.

Striping at the host level can work together with Storage-Pool Striping to spread the workload across even more ranks and disks. Alternatively, you can create extent pools that each consist of a single rank and then use LVM to stripe across LUNs that were created in each of these extent pools. This is another balanced method of spreading data evenly across the DS8870 without the use of extent pool striping, as shown on the left side of Figure 7-16 on page 188.

If you use multi-rank extent pools and you use neither Storage-Pool Striping nor Easy Tier auto-rebalance, you must be careful where you put your data, or you can easily unbalance your system (as shown on the right side of Figure 7-16 on page 188).


Figure 7-16 Spreading data across ranks (with Easy Tier auto-rebalance not used)

Each striped logical volume that is created by the host’s logical volume manager has a stripe size that specifies the fixed amount of data that is stored on each DS8000 LUN at one time.

The stripe size must be large enough to keep sequential data relatively close together, but not so large that the data stays on a single array.

We suggest that you define stripe sizes by using your host’s logical volume manager in the range of 4 MB – 64 MB. You should choose a stripe size close to 4 MB if you have many applications that share the arrays and a larger size when you have few servers or applications that share the arrays.

Combining extent pool striping and Logical Volume Manager striping
Striping by a Logical Volume Manager (LVM) is done with a stripe size in the MB range (about 64 MB). Extent pool striping is done at a 1 GiB stripe size. Both methods can be combined. LVM striping can stripe across extent pools and use volumes from extent pools that are attached to server 0 and server 1 of the DS8000 series. If you already use LVM physical partition (PP) wide striping, you might want to continue to use that striping.
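
As a rough illustration of how the two layers combine, the sketch below maps a host logical offset first to a LUN through an assumed 64 MB LVM stripe, and then to a 1 GiB extent (and a rank) behind that LUN. The sizes, LUN count, and rank rotation are assumptions chosen for the example, not a prescription:

# Illustration of two-level striping: the host LVM stripes across LUNs (MB range),
# while Storage-Pool Striping rotates each LUN's 1 GiB extents across ranks.
# All sizes and the rank rotation below are example assumptions.

MIB = 1024 * 1024
GIB = 1024 * MIB
LVM_STRIPE = 64 * MIB        # host-level stripe size
NUM_LUNS = 4                 # LUNs taken from pools on server 0 and server 1
RANKS_PER_POOL = 4           # ranks in the extent pool behind each LUN

def locate(host_offset):
    stripe_no = host_offset // LVM_STRIPE
    lun = stripe_no % NUM_LUNS                              # LUN that the stripe lands on
    lun_offset = (stripe_no // NUM_LUNS) * LVM_STRIPE + host_offset % LVM_STRIPE
    extent_no = lun_offset // GIB                           # 1 GiB extent within that LUN
    rank = extent_no % RANKS_PER_POOL                       # extent rotation across ranks
    return lun, extent_no, rank

for offset in (0, 64 * MIB, 5 * GIB):
    print(offset, locate(offset))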

Important: Striping at the host layer contributes to an equal distribution of I/Os to the disk drives to reduce hot spots. But, if you are using tiered extent pools with solid-state flash drives, IBM Easy Tier can work best if there are hot extents that can be moved to flash drives.


7.6 I/O Priority Manager

It is common practice to have large extent pools and to stripe data across all disks. However, when production workloads and, for example, test systems share the same physical disk drives, the test systems can negatively affect production performance.

DS8000 series I/O Priority Manager is a licensed function feature that is available for the DS8870. It enables more effective storage consolidation and performance management and the ability to align quality of service (QoS) levels to separate workloads in the system, which are competing for the same shared and possibly constrained storage resources.

I/O Priority Manager constantly monitors system resources to help applications meet their performance targets automatically, without operator intervention. The DS8870 storage hardware resources that are monitored by the I/O Priority Manager for possible contention are the RAID ranks and device adapters.

I/O Priority Manager uses QoS to assign priorities for different volumes and applies network QoS principles to storage by using a particular algorithm that is called Token Bucket Throttling for traffic control. I/O Priority Manager is designed to understand the load on the system and modify it by using dynamic workload control.

The I/O of less important workload is slowed down to give the higher priority workload a higher share of the resources.
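
The following sketch shows the general token-bucket idea applied to I/O throttling: a lower-priority I/O is admitted only when its bucket holds a token, and an added delay grows (up to the 200 ms cap mentioned later in this section) while higher-priority work misses its QoS goal. The rates, step size, and control loop are illustrative assumptions, not the DS8870 implementation:

import time

class TokenBucket:
    """Illustrative token bucket for pacing lower-priority I/O."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_consume(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def admit_low_priority_io(bucket, high_priority_meets_goal, extra_delay_ms):
    """Admit a low-priority I/O, delaying it while high-priority work misses its goal."""
    if not high_priority_meets_goal:
        extra_delay_ms = min(200, extra_delay_ms + 10)     # grow the delay, capped at 200 ms
    else:
        extra_delay_ms = max(0, extra_delay_ms - 10)       # relax once the goal is met again
    while not bucket.try_consume():
        time.sleep(0.001)                                  # wait for a token
    time.sleep(extra_delay_ms / 1000.0)
    return extra_delay_ms

bucket = TokenBucket(rate_per_sec=200, burst=20)           # assumed pace for the low tier
delay = 0
for goal_met in (True, False, False, True):
    delay = admit_low_priority_io(bucket, goal_met, delay)
    print("added delay (ms):", delay)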

Figure 7-17 shows a three-step example of how I/O Priority Manager uses dynamic workload control.

Figure 7-17 Automatic control of disruptive workload

In step 1, critical application A works normally. In step 2, a non-critical application B begins to work, causing performance degradation for application A. In step 3, I/O Priority Manager automatically detects the QoS impact on critical application A and dynamically restores the performance for application A.

Important: If you separated production and non-production data by using different extent pools and different device adapters, you do not need the I/O Priority Manager.

7.6.1 Performance policies for open systems

When I/O Priority Manager is enabled, each volume is assigned to a performance group when the volume is created. Each performance group has a QoS target. This QoS target is used to determine whether a volume is experiencing appropriate response times.

A performance group associates the I/O operations of a logical volume with a performance policy that sets the priority of a volume relative to other volumes. All volumes fall into one of the performance policies.

For open systems, I/O Priority Manager includes four defined performance policies: Default (unmanaged), high priority, medium priority, and low priority. I/O Priority Manager includes 16 performance groups: five performance groups each for the high, medium, and low performance policies, and one performance group for the default performance policy.

The following performance policies are available:

� Default performance policy

The default performance policy does not have a QoS target that is associated with it. I/Os to volumes that are assigned to the default performance policy are never delayed by I/O Priority Manager.

� High priority performance policy

The high priority performance policy has a QoS target of 70. I/Os from volumes that are associated with the high performance policy attempt to stay under approximately 1.5 times the optimal response time of the rank. I/Os in the high performance policy are never delayed.

� Medium priority performance policy

The medium priority performance policy has a QoS target of 40. I/Os from volumes with the medium performance policy attempt to stay under 2.5 times the optimal response time of the rank.

� Low performance policy

Volumes with a low performance policy have no QoS target and have no goal for response times. If there is no bottleneck for a shared resource, low priority workload is not pruned. However, if a higher priority workload does not achieve its goal, the I/O of low priority workload is slowed down first by delaying the response to the host. This delay is increased until the higher-priority I/O meets its goal. The maximum delay added is 200 ms.

7.6.2 Performance policies for System z

With System z, there are 14 performance groups: Three performance groups for high-performance policies, four performance groups for medium-performance policies, six performance groups for low-performance policies, and one performance group for the default performance policy.

Two operation modes are available for I/O Priority Manager only with System z: without software support or with software support.

Important: Only z/OS operating systems use the I/O Priority Manager with software support.


I/O Priority Manager count key data support
In a System z environment, I/O Priority Manager includes the following characteristics:

� User assigns a performance policy to each count key data (CKD) volume that applies in the absence of more software support.

� z/OS can optionally specify parameters that determine priority of each I/O operation and allow multiple workloads on a single CKD volume to have different priorities.

� Supported on z/OS V1.11, V1.12, V1.13, and above

� Without z/OS software support, on ranks in saturation, the volume’s I/O is managed according to the volume’s performance group performance policy

� With z/OS software support:

– User assigns application priorities via eWLM

– z/OS assigns an importance value to each I/O based on eWLM inputs

– z/OS assigns an achievement value to each I/O based on prior history of I/O response times for I/O with the same importance and based on eWLM expectations for response time

– Importance and achievement value on I/O associates this I/O with a performance policy (independently of the volume’s performance group/performance policy)

– On ranks in saturation, I/O is managed according to the I/O’s performance policy

If there is no bottleneck for a shared resource, low priority workload is not pruned. However, if a higher priority workload does not achieve its goal, the I/O of low priority workload is slowed down first by delaying the response to the host. This delay is increased until higher-priority I/O meets its goal. The maximum delay added is 200 ms.

7.7 IBM Easy Tier

IBM Easy Tier on the DS8000 can enhance performance and balance workloads through the following capabilities:

� Automated hot spot management and data relocation
� Auto-rebalancing
� Manual volume rebalancing and volume migration
� Rank depopulation
� Extent pool merging
� Cooperative caching between DS8870 and AIX server direct-attached SSDs
� Directive data placement from applications
� On FlashCopy activities, adequately assigning workloads to source and target volumes for best production optimization
� Heat map transfer from PPRC source to target

For more detailed information about IBM Easy Tier, see IBM DS8870 Easy Tier, REDP-4667.

For more information: For more information about I/O Priority Manager, see DS8000 I/O Priority Manager, REDP-4760.


7.7.1 Easy Tier generations

Figure 7-18 shows the evolution of IBM Easy Tier.

Figure 7-18 Easy Tier generations

The first generation of IBM Easy Tier introduced automated storage performance management by efficiently boosting Enterprise-class performance with SSDs. It also automated storage tiering from Enterprise-class drives to SSDs, thus optimizing SSD deployments with minimal costs. It also introduced dynamic volume relocation and dynamic extent pool merge.

The second generation of IBM Easy Tier added automated storage economics management by combining Enterprise-class drives with Nearline drives to maintain Enterprise-tier performance while shrinking the footprint and reducing costs with large capacity Nearline drives. The second generation also introduced intra-tier performance management (auto-rebalance) for hybrid pools and manual volume rebalance and rank depopulation.

The third generation of IBM Easy Tier introduced further enhancements, which provided automated storage performance and storage economics management across all three drive tiers. With these enhancements, you can consolidate and efficiently manage more workloads on a single DS8000 system. It also introduced support for auto-rebalance in homogeneous pools and support for thin provisioned extent space-efficient (ESE) volumes.

The fourth generation brought support for Full Disk Encryption (FDE) drives. IBM Easy Tier can perform volume migration, auto performance rebalancing in homogeneous and hybrid pools, hot spot management, rank depopulation, and thin provisioning (ESE volumes only) on both encrypted and non-encrypted drives.

Full Disk Encryption support: All drive types in the DS8870 support Full Disk Encryption. Encryption usage is optional. Whether you use encryption or not, there is no difference in performance or in Easy Tier functionality.

Easy Tier functions by release (as summarized in Figure 7-18):

� Easy Tier 1 (DS8700, microcode R5.1)
– Tier support: two tiers (SSD + ENT, SSD + NL)
– Auto Mode (sub-volume): promote, demote, swap
– Manual Mode (full volume): dynamic extent pool merge, dynamic volume relocation

� Easy Tier 2 (DS8700 and DS8800, microcode R6.1)
– Tier support: any two tiers (SSD + ENT, SSD + NL, ENT + NL)
– Auto Mode (sub-volume): promote, demote, swap, auto-rebalance (hybrid pools only)
– Manual Mode (full volume): rank depopulation, manual volume rebalance

� Easy Tier 3 (DS8700 and DS8800, microcode R6.2)
– Tier support: any three tiers (SSD + ENT + NL)
– Auto Mode (sub-volume): auto-rebalance (homogeneous pools), ESE volume support

� Easy Tier 4 (DS8700 R6.3, DS8800 R6.3, DS8870 R7.0)
– Full support for FDE (encryption) drives
– Automatic data relocation capabilities for all FDE disk environments
– Support for all manual mode commands for FDE environments

� Easy Tier 5 (DS8870, microcode R7.1)
– Easy Tier Application: storage administrators can control data placement via the CLI; provides a directive data placement API to enable software integration
– Easy Tier Heat Map Transfer: learning data capture and apply for heat map transfer for remote copy environments
– Easy Tier Server: unified storage caching and tiering capability for AIX servers


Detailed information about Easy Tier in general is available in the IBM Redpaper publication IBM DS8870 Easy Tier, REDP-4667.

The IBM Easy Tier fifth generation brings three new features:

� Easy Tier Server is a unified storage caching and tiering capability that can be used when attaching to AIX servers with any supported direct-attached flash. Easy Tier can manage the data placement across direct-attached flash within scale-out servers and DS8870 storage tiers by placing a copy of the “hottest” data on the direct attached flash, while maintaining the persistent copy of data on DS8870 and supporting DS8870 advanced functions.

� Easy Tier Application is designed to be an application-aware storage interface to help deploy storage more efficiently through enabling applications and middleware to direct more optimal placement of data. The new Easy Tier Application feature enables administrators to assign distinct application volumes to a particular tier in the Easy Tier pool. This provides a flexible option for customers that want certain applications to remain on a particular tier to meet performance and cost requirements.

� Easy Tier Heat Map Transfer captures whatever data placement the Easy Tier algorithm established at the Metro Mirror/Global Copy/Global Mirror (MM/GC/GM) primary site and reapplies it at the MM/GC/GM secondary site through the Easy Tier Heat Map Transfer utility. With this capability, DS8000 systems can maintain application-level performance at the secondary site when it takes over the workload after a failover from the primary to the secondary site.

For more information about the new features of the fifth generation of IBM Easy Tier, see the following IBM Redpaper publications:

� IBM System Storage DS8000 Easy Tier Server, REDP-5013

� IBM System Storage DS8000 Easy Tier Application, REDP-5014

� IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015

7.7.2 Easy Tier license

There are two optional and no-charge licenses in IBM Easy Tier scope on DS8870.

Easy Tier: feature code 0713 (Function Authorization 7083)
Easy Tier Server: feature code 0715 (Function Authorization 7084)

Note: The fifth generation of IBM Easy Tier requires DS8870 with Licensed Machine Code (LMC) 7.7.10.xx (bundle version 87.10.xxx.xx) or later.


The Easy Tier license (7083) covers all the IBM Easy Tier capabilities except Easy Tier Server. If the Easy Tier Server function needs to be enabled, both Easy Tier (7083) and Easy Tier Server (7084) licenses are required.

7.7.3 Easy Tier basic concepts

IBM Easy Tier monitors I/O access at the Extent level and keeps a history of access density, read/write ratio, sequential or random access, and cache hit ratio. Because IBM Easy Tier is used to optimize drive usage, it looks only at I/Os to the drives and ignores cache hits. The history of access pattern is exponentially weighted, which gives a higher weight to the last 24 hours than to the older observations.

IBM Easy Tier assumes that SSDs do not benefit much from sequential workloads and that Nearline disks are good candidates for data that is primarily accessed sequentially. Do not worry that IBM Easy Tier might shift around all data when the nightly batch processing starts or data backup jobs run. These processes are sequential in nature and despite high I/O activity, IBM Easy Tier does not move this data to SSDs immediately.

IBM Easy Tier also considers that data movement puts some load on the disk backend. Therefore, there must be a real benefit in moving extents around; otherwise, if there is only a small difference in I/O activity between extents, Easy Tier does not move an extent.
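
A toy sketch of the exponential weighting and of the "is it worth moving" check might look like the following; the decay factor, the 24-hour window, and the benefit threshold are invented for illustration and are not the Easy Tier internals:

# Toy sketch of exponentially weighted extent heat and a migration-benefit check.
# The decay factor and threshold are invented for illustration (not Easy Tier internals).

DECAY = 0.5          # weight applied to older history per 24-hour window (assumed)
MIN_BENEFIT = 50.0   # minimum heat difference that justifies a move (assumed)

def update_heat(old_heat, backend_small_block_ios_last_window):
    """Backend (non cache-hit) I/O counts are decayed so that the last 24 hours weigh most."""
    return old_heat * DECAY + backend_small_block_ios_last_window

def worth_swapping(hot_extent_heat, cold_extent_heat):
    """Move extents only if the expected gain outweighs the backend cost of moving them."""
    return (hot_extent_heat - cold_extent_heat) > MIN_BENEFIT

heat_a = heat_b = 0.0
for ios_a, ios_b in ((400, 40), (350, 30), (20, 25)):   # extent A cools down in the last window
    heat_a = update_heat(heat_a, ios_a)
    heat_b = update_heat(heat_b, ios_b)
print(round(heat_a, 1), round(heat_b, 1), worth_swapping(heat_a, heat_b))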

Important: Easy Tier (7083) and Easy Tier Server (7084) are both no-charge features on DS8870. However, as with any other acquired licensed function, they must first be ordered from IBM. The necessary activation codes can then be obtained from the IBM data storage feature activation (DSFA) website:

http://www.ibm.com/storage/dsfa

These codes are then applied using the respective DS command-line interface (CLI) or graphical user interface (GUI) command. See Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259 for more information about how to obtain and activate DS8000 license keys.

Important: To move extents, IBM Easy Tier needs at least a few unused extents in each extent pool. As a guideline, consider at least one to three extents for each rank in an extent pool.


The basic IBM Easy Tier migration cycle is shown in Figure 7-19. Tier 2 contains the slowest drives (Nearline-SAS), tier 1 the next performance level (Enterprise SAS), and tier 0 the fastest drives (flash SSD).

Figure 7-19 IBM Easy Tier migration cycle

The following tasks are part of the basic cycle:

1. IBM Easy Tier monitors the performance of each extent to determine the data temperature (I/O Activity).

2. An extent migration plan is created for optimal data placement every 24 hours that is based on performance statistics.

3. Extents are migrated within an extent pool according to the plan over a 24-hour period. A limited number of extents are chosen for migration every 5 minutes.

7.7.4 IBM Easy Tier operating modes

IBM Easy Tier has two different operating modes to optimize the data placement inside a DS8000: automatic and manual. In this section, we describe these modes of operation.

IBM Easy Tier Manual ModeIBM Easy Tier Manual Mode provides the following extended capabilities for logical configuration management:

� Dynamic volume relocation
� Dynamic extent pool merge
� Rank depopulation capabilities


Volume-Based Data Relocation (Dynamic Volume Relocation)
As shown in Figure 7-20, IBM Easy Tier is a DS8000 built-in dynamic data relocation feature that allows host-transparent movement of data among the storage system resources. This feature significantly improves configuration flexibility and performance tuning and planning. It allows a user to initiate a volume migration from its current extent pool (source extent pool) to another extent pool (target extent pool). During the volume relocation process, the volume remains accessible to hosts.

Figure 7-20 Volume-Based Data Relocation (Dynamic volume relocation)

Dynamic extent pool merge
Dynamic extent pool merge is an IBM Easy Tier Manual Mode capability that allows the initiation of a merging process of one extent pool (source extent pool) into another extent pool (target extent pool). During this process, all of the volumes in the source and target extent pools remain accessible to the hosts.

Limitations: Dynamic volume relocation is allowed only among extent pools with the same server affinity or rank group. Additionally, the dynamic volume relocation is not allowed in the following circumstances:

� If source and target pools feature different storage types (FB and CKD)
� If the volume to be migrated is a track space-efficient (TSE) volume

Limitations: Dynamic extent pool merge is allowed only among extent pools with the same server affinity or rank group. Additionally, the dynamic extent pool merge is not allowed in the following circumstances:

� If source and target pools feature different storage types (FB and CKD)
� If both extent pools contain track space-efficient (TSE) volumes
� If there are TSE volumes on the SSD ranks
� If you selected an extent pool that contains volumes that are being migrated
� If the combined extent pools include 2 PB or more of ESE logical (virtual) capacity

Important: No actual data movement is performed during a dynamic extent pool merge; only logical definition updates occur.


Rank Depopulation
Rank depopulation is an IBM Easy Tier Manual Mode capability that allows a user to unassign a rank from an extent pool, even if the rank includes extents that are allocated by volumes in the pool. For the rank to be unassigned, IBM Easy Tier automatically attempts to migrate all of the allocated extents to other ranks within the same extent pool. During this process, the affected volumes remain accessible to hosts.

IBM Easy Tier Automatic Mode
In Automatic Mode, IBM Easy Tier dynamically manages the capacity in single-tier (homogeneous) extent pools (auto-rebalance) and in multitier (hybrid) extent pools that contain up to three different disk tiers.

IBM Easy Tier Automatic Mode can be enabled for all extent pools (including single-tier pools), for multi-tier pools only, or for no extent pools at all, which means that it is disabled. Extent pools that are handled by Easy Tier are referred to as managed pools. Extent pools that are not handled by IBM Easy Tier Automatic Mode are referred to as non-managed pools.

IBM Easy Tier Automatic Mode manages the data relocation across different tiers (inter-tier or cross-tier management) and within the same tier (intra-tier management). The cross-tier or inter-tier capabilities deal with the Automatic Data Relocation (ADR) feature that aims to relocate the extents of each logical volume to the most appropriate storage tier within the extent pool to improve the overall storage cost-to-performance ratio. This task is done without any user intervention and is fully transparent to the application host. Logical volume extents with high latency in the rank are migrated to storage media with higher performance characteristics. Extents with low latency in the rank are kept in storage media with lower performance characteristics.

After a migration of extents is finished, the degree of hotness of the extents does not stay the same over time. Eventually, certain extents on a higher performance tier become cold and other extents on a lower-cost tier become hotter compared to cold extents on the higher performance tier. When this event occurs, cold extents on a higher performance tier are eventually demoted or swapped to a lower-cost tier and replaced by new hot extents from the lower-cost tier. IBM Easy Tier always evaluates first if the cost of moving an extent to a higher performance tier is worth the expected performance gain. This migration scenario is shown in Figure 7-21.

Figure 7-21 Automatic Mode


For more information about IBM Easy Tier, see IBM DS8870 Easy Tier, REDP-4667.

7.8 Performance and sizing considerations for open systems

In the following sections, we describe topics that are relevant to open systems.

7.8.1 Determining the number of paths to a LUN

When configuring a DS8000 series for an open systems host, a decision must be made regarding the number of paths to a particular LUN because the multipathing software allows (and manages) multiple paths to a LUN. The following opposing factors must be considered when you are deciding on the number of paths to a LUN:

� Increasing the number of paths increases availability of the data, which protects against outages.

� Increasing the number of paths increases the amount of CPU that is used because the multipathing software must choose among all available paths each time an I/O is issued.

A good compromise is between two and four paths per LUN; eight paths can be considered if a high data rate is required.

7.8.2 Dynamic I/O load-balancing: Subsystem Device Driver

The Subsystem Device Driver (SDD) is an IBM-provided pseudo-device driver that is designed to support the multipath configuration environments of the DS8000. It resides in the host system, alongside the native disk device driver.

The dynamic I/O load-balancing option (default) of SDD is suggested to ensure better performance for the following reasons:

� SDD automatically adjusts data routing for optimum performance. Multipath load balancing of data flow prevents a single path from becoming overloaded, causing I/O congestion that occurs when many I/O operations are directed to common devices along the same I/O path.

� The path to use for an I/O operation is chosen by estimating the load on each adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths, as sketched below.
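
The selection rule can be sketched in a few lines; the adapter names and the structure are illustrative only and do not reproduce the actual SDD code:

import random

def choose_path(paths_outstanding):
    """paths_outstanding maps an adapter/path name to its count of in-flight I/Os."""
    least = min(paths_outstanding.values())
    candidates = [p for p, load in paths_outstanding.items() if load == least]
    return random.choice(candidates)          # random choice among equally loaded paths

paths = {"fscsi0": 3, "fscsi1": 1, "fscsi2": 1}
print(choose_path(paths))                     # fscsi1 or fscsi2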

IBM SDD is available for most operating environments. On some operating systems, SDD offers an installable package to work with their native multipathing software as well. For example, there is the SDDPCM available for AIX and SDDDSM available for Windows.

For more information about the multipathing software that might be required for various operating systems, see the IBM System Storage Interoperability Center (SSIC) at this website:

http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

SDD is covered in more detail in the following IBM publications:

� IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887
� IBM System Storage DS8000 Host Systems Attachment Guide, SC27-4210


7.8.3 Automatic port queues

When there is I/O between a server and a DS8870 Fibre Channel port, both the server host bus adapter (HBA) and the DS8870 host adapter support queuing of I/Os. The maximum length of this queue is called the queue depth. Because several servers can, and usually do, communicate with only a few DS8870 ports, the queue depth of a storage host adapter should be larger than the one on the server side. This is the case for the DS8870, which supports 2048 FC commands queued on a port. However, sometimes the port queue in the DS8870 host adapter can still be flooded.

When the number of commands that are sent to the DS8000 port exceeds the maximum number of commands that the port can queue, the port discards these additional commands.

This operation is a normal error recovery operation in the Fibre Channel protocol to allow for over provisioning on the SAN. The normal recovery is a 30-second timeout for the server. After that time, the command is resent. The server includes a command retry count before it fails the command. Command Timeout entries are seen in the server logs.

Automatic Port Queues is a mechanism that the DS8870 uses to self-adjust the queue that is based on the workload. This mechanism allows higher port queue oversubscription while maintaining a fair share for the servers and the accessed LUNs.

A port whose queue is filling up goes into SCSI Queue Full mode, in which it accepts no additional commands, to slow down the I/Os.

By avoiding error recovery and the 30-second blocking SCSI Queue Full recovery interval, the overall performance is better with Automatic Port Queues.


7.8.4 Determining where to attach the host

When you are determining where to attach multiple paths from a single host system to I/O ports on a host adapter to the storage system, the following considerations apply:

� Choose the attached I/O ports on separate host adapters.
� Spread the attached I/O ports evenly between the I/O enclosures.

The DS8000 series host adapters have no server affinity, but the device adapters and the rank have server affinity. Figure 7-22 shows a host that is connected through two FC adapters to two DS8000 host adapters in separate I/O enclosures.

Figure 7-22 Dual-port host attachment

The host has access to LUN 0, which is created in the extent pool 0 that is controlled by the DS8000 server 0. The host system sends read commands to the storage server.

When a read command is executed, one or more logical blocks are transferred from the selected logical drive through a host adapter over an I/O interface to a host. In this case, the logical device is managed by server 0, and the data is handled by server 0.

Options for four-port and eight-port host adapters are available in the DS8870. These eight-port cards provide only more connectivity, not more total throughput. If you want the maximum throughput of the DS8870, consider doubling the number of host adapters and use only two ports of a four-port host adapter.


7.9 Performance and sizing considerations for System z

Here we describe several System z specific topics regarding the performance potential of the DS8000 series. We also describe the considerations that apply when you configure and size a DS8000 that replaces older storage hardware in System z environments.

7.9.1 Host connections to System z servers

Each I/O enclosure can hold up to two host adapters. You can configure each port of a host adapter to operate as a Fibre Channel port (for example, for mirroring) or as a FICON port. You can mix ports on an adapter (some to operate as FICON ports, some as Fibre Channel ports), but you might also want to use dedicated adapters for FICON.

FICON ports can be directly attached to a System z host or through a FICON capable SAN switch or director. It is suggested that the switch/director supports the Control Unit Port (CUP) feature to enable switch management through z/OS. Distribute the FICON ports across the host adapters and across the I/O enclosures.

DS8870 host adapters are available as four-port and eight-port cards. These eight-port cards provide more connectivity, not more total throughput. If you want the maximum throughput of the DS8870, consider doubling the number of host adapters and use only two ports of a four-port host adapter.

The DS8870 FICON ports support zHPF I/O from z/OS if the zHPF feature is present.

7.9.2 Parallel access volume

Parallel access volume (PAV) is an optional licensed function of the DS8000 for the z/OS and z/VM operating systems, which helps the System z servers that are running applications to concurrently share logical volumes.

The ability to handle multiple I/O requests to the same volume nearly eliminates I/O supervisor queue (IOSQ) delay time, one of the major components in z/OS response time. Traditionally, access to highly active volumes involved manual tuning, splitting data across multiple volumes, and more. With PAV and the Workload Manager (WLM), you can almost forget about manual performance tuning. WLM manages PAVs across all the members of a sysplex, too.

Traditional z/OS behavior without PAV
Traditional storage disk subsystems allowed for only one channel program to be active to a volume at a time to ensure that data that is accessed by one channel program cannot be altered by the activities of another channel program.


The traditional z/OS behavior without PAV, where subsequent simultaneous I/Os to volume 100 are queued while volume 100 is still busy with a preceding I/O, is shown in Figure 7-23.

Figure 7-23 Traditional z/OS behavior

From a performance standpoint, it did not make sense to send more than one I/O at a time to the storage system because the hardware could process only one I/O at a time. Knowing this fact, the z/OS systems did not try to issue another I/O to a volume, which, in z/OS, is represented by a unit control block (UCB), while an I/O was already active for that volume, as indicated by a UCB busy flag (see Figure 7-23).

Not only were the z/OS systems limited to processing only one I/O at a time, but the storage subsystems accepted only one I/O at a time from different system images to a shared volume, for the same reasons that were previously mentioned (see Figure 7-23).

Parallel I/O capability: z/OS behavior with PAV
The DS8000 can perform more than one I/O to a CKD volume at a time. By using the alias address and the conventional base address, a z/OS host can use several UCBs for the same logical volume instead of one UCB per logical volume. For example, base address 100 might include alias addresses 1FF and 1FE, which allows for three parallel I/O operations to the same volume, as shown in Figure 7-24 on page 203.

PAV allows parallel I/Os to a volume from one host. The following basic concepts are featured in PAV functionality:

� Base address

The base device address is the conventional unit address of a logical volume. There is only one base address that is associated with any volume.

� Alias address

An alias device address is mapped to a base address. I/O operations to an alias run against the associated base address storage space. No physical space is associated with an alias address. You can define more than one alias per base.


Figure 7-24 z/OS behavior with PAV

Alias addresses must be defined to the DS8000 and to the I/O definition file (IODF). This association is predefined, and you can add new aliases nondisruptively. Still, the association between base and alias is not fixed; the alias address can be assigned to another base address by the z/OS Workload Manager (WLM).

For more information about PAV definition and support, see DS8000: Host Attachment and Interoperability, SG24-8887.

7.9.3 z/OS Workload Manager: Dynamic PAV tuning

It is not always easy to predict which volumes should have an alias address assigned, and how many. Your software can automatically manage the aliases according to your goals. z/OS can use automatic PAV tuning if you are using the z/OS WLM in Goal mode. The WLM can dynamically tune the assignment of alias addresses. The Workload Manager monitors the device performance and is able to dynamically reassign alias addresses from one base to another if predefined goals for a workload are not met.

z/OS recognizes the aliases that are initially assigned to a base during the nucleus initialization program (NIP) phase. If dynamic PAVs are enabled, the WLM can reassign an alias to another base by instructing the I/Os to do so when necessary, as shown in Figure 7-25 on page 204.

Optional licensed function: PAV is an optional licensed function on the DS8000 series. PAV also requires the purchase of the FICON Attachment feature.


Figure 7-25 WLM assignment of alias addresses

z/OS Workload Manager in Goal mode tracks system workloads and checks whether workloads are meeting their goals as established by the installation.

WLM also tracks the devices that are used by the workloads, accumulates this information over time, and broadcasts it to the other systems in the same sysplex. If WLM determines that any workload is not meeting its goal because of IOSQ time, WLM attempts to find an alias device that can be reallocated to help this workload achieve its goal, as shown in Figure 7-26.

Figure 7-26 Dynamic PAVs in a sysplex


7.9.4 HyperPAV

Dynamic PAV requires the WLM to monitor the workload and goals. It takes time until the WLM detects an I/O bottleneck. Then, the WLM must coordinate the reassignment of alias addresses within the sysplex and the DS8000. This process takes time, and if the workload is fluctuating or characterized by bursts, the job that caused the overload of one volume could end before the WLM reacts. In these cases, the IOSQ time is not eliminated completely.

With HyperPAV, an on demand proactive assignment of aliases is possible, as shown in Figure 7-27.

Figure 7-27 HyperPAV: Basic operational characteristics

With HyperPAV, the WLM is no longer involved in managing alias addresses. For each I/O, an alias address can be picked from a pool of alias addresses within the same logical control unit (LCU).

This capability also allows multiple HyperPAV hosts to use one alias to access different bases, which reduces the number of alias addresses that are required to support a set of bases in an IBM System z environment, with no latency in assigning an alias to a base. This functionality is also designed to enable applications to achieve better performance than is possible with the original PAV feature alone, while the same or fewer operating system resources are used.



Benefits of HyperPAV
HyperPAV includes the following benefits:

- Provide an even more efficient PAV function.
- Help clients who implement larger volumes to scale I/O rates without the need for more PAV alias definitions.
- Use the FICON architecture to reduce impact, improve addressing efficiencies, and provide the following storage capacity and performance improvements:
  – More dynamic assignment of PAV aliases improves efficiency.
  – The number of PAV aliases that are needed might be reduced, which takes fewer addresses from the 64-K device limit and leaves more of them available for storage capacity.
- Enable a more dynamic response to changing workloads.
- Simplify alias management.
- Make it easier for users to decide to migrate to larger volume sizes.

Optional licensed function
HyperPAV is an optional licensed function of the DS8000 series. It is required in addition to the normal PAV license (which is capacity-dependent) and the FICON Attachment feature. The HyperPAV license is independent of the capacity.

HyperPAV alias consideration on EAV
HyperPAV provides a far more agile alias management algorithm, as aliases are dynamically bound to a base during the I/O for the z/OS image that issued the I/O. When I/O completes, the alias is returned to the pool in the LCU. It then becomes available to subsequent I/Os.

Our rule of thumb is that the number of aliases that are required can be approximated by the peak value of the following product: the I/O rate multiplied by the average response time. For example, if that peak occurs when the I/O rate is 2000 I/Os per second and the average response time is 4 ms (0.004 sec), the calculation is:

2000 IO/sec x 0.004 sec/IO = 8

This result means that the average number of I/O operations that are executing at one time for that LCU during the peak period is eight. Therefore, eight aliases should be able to handle the peak I/O rate for that LCU. However, because this calculation is based on the average during the IBM Resource Measurement Facility™ (RMF™) period, multiply the result by two to accommodate higher peaks within that RMF interval. So, in this case, the recommended number of aliases would be 16 (2 x 8 = 16).
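The rule of thumb translates directly into a short calculation. The following Python sketch is only an illustration (the helper name and the doubling factor applied here follow the reasoning above; it is not an IBM-provided tool): it takes RMF-style samples of I/O rate and average response time for an LCU and returns the suggested number of HyperPAV aliases.

import math

def recommended_aliases(samples):
    """samples: iterable of (io_rate_per_sec, avg_response_time_sec) pairs.

    The peak of (I/O rate x response time) is the average number of I/Os
    in flight for the LCU; it is doubled to cover peaks within an RMF
    interval and rounded up to a whole number of aliases.
    """
    peak_in_flight = max(rate * resp for rate, resp in samples)
    return math.ceil(2 * peak_in_flight)

# The example from the text: 2000 I/Os per second at 4 ms response time
print(recommended_aliases([(2000, 0.004)]))   # -> 16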

Depending on the workload, there is a huge reduction in PAV-alias UCBs with HyperPAV. The combination of HyperPAV and EAV allows you to significantly reduce the constraint on the 64-K device address limit and, in turn, increase the amount of addressable storage that is available on z/OS. With Multiple Subchannel Sets (MSS) on IBM zEnterprise zEC12, zBC12, z196, and z114, and on System z10 and z9, you have even more flexibility in device configuration. EAV volumes are supported only on IBM z/OS V1.10 and later. For more information about EAV specifications and considerations, see IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

For more information: For more information about MSS, see Multiple Subchannel Sets: An Implementation View, REDP-4387, which is found at this website:

http://www.redbooks.ibm.com/abstracts/redp4387.html?Open


HyperPAV implementation and system requirements
For more information about support and implementation guidance, see DS8000: Host Attachment and Interoperability, SG24-8887.

Resource Measurement Facility reporting on PAV
Resource Measurement Facility (RMF) reports the number of exposures for each device in its Monitor/DASD Activity report and in its Monitor II and Monitor III Device reports. If the device is a HyperPAV base device, the number is followed by an H (for example, 5.4H). This value is the average number of HyperPAV volumes (base and alias) in that interval. RMF reports all I/O activity against the base address, not by the individual base and associated aliases. The performance information for the base includes all base and alias I/O activity.

HyperPAV helps minimize the IOSQ Time. You still see IOSQ Time for one of the following reasons:

- More aliases are required to handle the I/O load than the number of aliases that are defined in the LCU.
- A Device Reserve is issued against the volume. A Device Reserve makes the volume unavailable to the next I/O, which causes the next I/O to be queued. This delay is recorded as IOSQ Time.

7.9.5 PAV in z/VM environments

z/VM provides PAV support in the following ways:

- As traditionally supported, for VM guests as dedicated guests through the CP ATTACH command or DEDICATE user directory statement.
- Starting with z/VM 5.2.0, with APAR VM63952, VM supports PAV minidisks.

PAV in a z/VM environment is shown in Figure 7-28 and Figure 7-29.

Figure 7-28 z/VM support of PAV volumes that are dedicated to a single guest virtual machine



Figure 7-29 Linkable minidisks for guests that use PAV

In this way, PAV provides z/VM environments with the benefit of greater I/O performance (throughput) by reducing I/O queuing.

With the small programming enhancement (SPE) that was introduced with z/VM 5.2.0 and APAR VM63952, other enhancements are available when PAV is used with z/VM. For more information, see 10.4, “z/VM considerations” in DS8000: Host Attachment and Interoperability, SG24-8887.

7.9.6 Multiple Allegiance

If any System z host image (server or LPAR) performs an I/O request to a device address for which the storage disk subsystem is already processing an I/O that came from another System z host image, the storage disk subsystem sends back a device busy indication, as shown in Figure 7-23 on page 202. This result delays the new request and adds to the overall response time of the I/O. This delay is shown in the Device Busy Delay (AVG DB DLY) column in the RMF DASD Activity Report. Device Busy Delay is part of the Pend time.

The DS8000 series accepts multiple I/O requests from different hosts to the same device address, which increases parallelism and reduces channel impact. In older storage disk systems, a device had an implicit allegiance, that is, a relationship that was created in the control unit between the device and a channel path group when an I/O operation was accepted by the device. The allegiance caused the control unit to guarantee access (no busy status presented) to the device for the remainder of the channel program over the set of paths that are associated with the allegiance.



With Multiple Allegiance, the requests are accepted by the DS8000 and all requests are processed in parallel, unless there is a conflict when writing to the same data portion of the CKD logical volume, as shown in Figure 7-30.

Figure 7-30 Parallel I/O capability with Multiple Allegiance

Nevertheless, good application software access patterns can improve global parallelism by avoiding reserves, limiting the extent scope to a minimum, and setting an appropriate file mask; for example, if no write is intended.

In systems without Multiple Allegiance, all requests to a shared volume other than the first I/O request are rejected, and the I/Os are queued in the System z channel subsystem. The requests show up in Device Busy Delay and PEND time in the RMF DASD Activity reports. Multiple Allegiance allows multiple I/Os to a single volume to be serviced concurrently. However, a device busy condition can still happen. This condition occurs when an active I/O is writing a certain data portion on the volume and another I/O request comes in and tries to read or write to that same data. To ensure data integrity, those subsequent I/Os get a busy condition until the previous I/O finishes its write operation.

Multiple Allegiance provides significant benefits for environments that are running a sysplex, or System z systems that are sharing access to data volumes. Multiple Allegiance and PAV can operate together to handle multiple requests from multiple hosts.



7.9.7 I/O priority queuing

The concurrent I/O capability of the DS8000 allows it to execute multiple channel programs concurrently, as long as the data accessed by one channel program is not altered by another channel program.

Queuing of channel programs
When the channel programs conflict with each other and must be serialized to ensure data consistency, the DS8000 internally queues channel programs. This subsystem I/O queuing capability provides the following significant benefits:

- Compared to the traditional approach of responding with a device busy status to an attempt to start a second I/O operation to a device, I/O queuing in the storage disk subsystem eliminates the effect that is associated with posting status indicators and redriving the queued channel programs.
- Contention in a shared environment is eliminated. Channel programs that cannot run in parallel are processed in the order that they are queued. A fast system cannot monopolize access to a volume that also is accessed from a slower system. Each system receives a fair share.

Priority queuing
I/Os from different z/OS system images can be queued in a priority order. The z/OS Workload Manager uses this priority to prioritize I/Os from one system over those from other systems. You can activate I/O priority queuing in the WLM Service Definition settings. WLM must run in Goal mode.

When a channel program with a higher priority comes in and is put in front of the queue of channel programs with lower priority, the priority of the low-priority programs is increased, as shown in Figure 7-31 on page 210. This configuration prevents high-priority channel programs from dominating lower priority programs and gives each system a fair share.

Figure 7-31 I/O priority queuing
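A small sketch can make the aging rule concrete. This toy model is only an illustration of the behavior described above (the class and the priority values are chosen for this example; it is not the DS8000 implementation): when a higher-priority channel program is queued ahead of waiting lower-priority programs, the waiting programs have their priority raised so they cannot be starved.

class ToyIOQueue:
    """Toy model of I/O priority queuing with aging (illustration only)."""

    def __init__(self):
        self.waiting = []                      # list of [priority, name]

    def enqueue(self, name, priority):
        # Age every waiting program that the new, higher-priority program
        # overtakes, so that low-priority work still makes progress.
        for entry in self.waiting:
            if entry[0] < priority:
                entry[0] += 1
        self.waiting.append([priority, name])
        self.waiting.sort(key=lambda entry: entry[0], reverse=True)

    def next_to_execute(self):
        return self.waiting.pop(0) if self.waiting else None

q = ToyIOQueue()
q.enqueue("I/O from B", 0x21)
q.enqueue("I/O from B", 0x9C)
q.enqueue("I/O from A", 0xFF)                  # overtakes both waiting programs
print(q.waiting)                               # the earlier I/Os were each aged by one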



7.9.8 Performance considerations on Extended Distance FICON

The function that is known as Extended Distance FICON (EDF) produces performance results similar to z/OS Global Mirror (zGM) Emulation/XRC Emulation at long distances. Extended Distance FICON does not actually extend the distance that is supported by FICON. However, it can provide the same benefits as XRC Emulation. With Extended Distance FICON, there is no need to have XRC Emulation on Channel extenders, which saves costs.

For more information about support and implementation, see Chapter 12.5.4, “Extended Distance FICON” in IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

Figure 7-32 on page 211 shows EDF performance comparisons for a sequential write workload. The workload consists of 64 jobs that are performing 4-KB sequential writes to 64 data sets with 1113 cylinders each, which all are on one large disk volume. There is one SDM configured with a single, non-enhanced reader to handle the updates. When the XRC Emulation (Brocade emulation in the diagram) is turned off, the performance drops significantly, especially at longer distances. However, after the Extended Distance FICON (Persistent IU Pacing) function is installed, the performance returns to where it was with XRC Emulation on.

Figure 7-32 Extended Distance FICON with small data blocks sequential writes on one SDM reader

Important: Do not confuse I/O priority queuing with I/O Priority Manager. I/O priority queuing works on a host adapter level and is available at no charge. I/O Priority Manager works on the device adapter and array levels and is a licensed function.


Figure 7-33 shows EDF performance, which is used this time with Multiple Reader support. There is one SDM configured with four enhanced readers.

Figure 7-33 Extended Distance FICON with small data blocks sequential writes on four SDM readers

These results again show that when the XRC Emulation is turned off, performance drops significantly at long distances. When the Extended Distance FICON function is installed, the performance again improves significantly.

7.9.9 High Performance FICON for z

The FICON protocol involves several exchanges between the channel and the control unit, which leads to unnecessary I/O overhead. With High Performance FICON, the protocol is streamlined and the number of exchanges is reduced, as shown in Figure 7-34.

Figure 7-34 zHPF protocol



High Performance FICON for z (zHPF) is an enhanced FICON protocol and system I/O architecture that results in improvements in response time and throughput. Instead of channel command words (CCWs), transport control words (TCWs) are used. I/O that uses the Media Manager, such as DB2, PDSE, VSAM, zFS, VTOC Index (CVAF), Catalog BCS/VVDS, or Extended Format SAM, benefits from zHPF.

zHPF is an optional licensed feature.

In situations where zHPF is the exclusive access in use, it can improve FICON I/O throughput on a single DS8000 port by 100%. Realistic workloads with a mix of data set transfer sizes can see a 30% – 70% increase in FICON I/Os that use zHPF, which results in a 10% to 30% channel usage savings.

Although clients can see I/Os complete faster as a result of implementing zHPF, the real benefit is expected to be obtained by using fewer channels to support existing disk volumes, or increasing the number of disk volumes that are supported by existing channels.

Additionally, the changes in architecture offer end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).

IBM zEC12, zBC12, z196, z114, and z10 processors support zHPF. FICON Express8S cards on the host provide the most benefit, but older cards are also supported; however, the original FICON Express adapters are not supported. The required software is z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7 (5637-A01), z/OS V1.8, z/OS V1.9, or z/OS V1.10 with program temporary fixes (PTFs), or z/OS V1.11 and higher.

IBM Laboratory testing and measurements are available at the following website:

http://www.ibm.com/systems/z/hardware/connectivity/ficon_performance.html

zHPF is transparent to applications. However, z/OS configuration changes are required. Hardware configuration definition (HCD) must have channel-path identifier (CHPID) type FC defined for all the CHPIDs that are defined to the 2107 control unit, which also supports zHPF. For the DS8000, installation of the Licensed Feature Key for the zHPF feature is required. After these items are addressed, existing FICON port definitions in the DS8000 function in FICON or zHPF protocols in response to the type of request that is being performed. These changes are nondisruptive.

For z/OS, after the PTFs are installed in the LPAR, you must set ZHPF=YES in IECIOSxx in SYS1.PARMLIB or issue the SETIOS ZHPF=YES command. ZHPF=NO is the default setting. IBM suggests that clients use the ZHPF=YES setting after the required configuration changes and prerequisites are met.

Over time, more access methods were changed in z/OS to support zHPF. Although the original zHPF implementation supported the new TCWs only for I/O that did not span more than a track, the DS8870 also supports TCW for I/O operations on multiple tracks. zHPF is also supported for DB2 List-prefetch, Format Writes, and sequential access methods.

To use zHPF for QSAM/BSAM/BPAM, you might need to enable it. It can be enabled dynamically by the SETSMS command or by the entry SAM_USE_HPF(YES | NO) in the IGDSMSxx parmlib member. The default for z/OS 1.13 is YES, and the default for z/OS 1.11 and 1.12 is NO.
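As a brief recap of the settings that are named above (shown here as a sketch only; verify the exact member names and command syntax for your z/OS level in the z/OS documentation), enabling zHPF and its QSAM/BSAM/BPAM exploitation involves a parmlib entry and, optionally, a dynamic command:

IECIOSxx member of SYS1.PARMLIB (enable zHPF at IPL):
    ZHPF=YES

Console command (enable zHPF dynamically):
    SETIOS ZHPF=YES

IGDSMSxx member of SYS1.PARMLIB (let QSAM/BSAM/BPAM use zHPF):
    SAM_USE_HPF(YES)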

For more information about zHPF, see this website:

http://www.ibm.com/systems/z/resources/faq/index.html


Part 2 Planning and installation

In this part of the book, we discuss matters related to the installation planning process for the IBM DS8870.

The following topics are included:

- DS8870 physical planning and installation
- DS8870 HMC planning and setup
- IBM System Storage DS8000 features and licensed functions


Chapter 8. DS8870 physical planning and installation

This chapter describes the various steps that are involved in the planning and installation of the IBM DS8870. It includes a reference listing of the information that is required for the setup and where to find detailed technical reference material.

This chapter covers the following topics:

- Considerations before installation
- Planning for the physical installation
- Network connectivity planning
- Secondary HMC, IBM Tivoli Storage Productivity Center, IBM Security Key Lifecycle Manager, LDAP, and AOS planning
- Remote mirror and copy connectivity
- Disk capacity considerations
- Planning for growth

For more information about the configuration and installation process, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.


8.1 Considerations before installation
Start by developing and following a project plan to address the many topics that are needed for a successful implementation. In general, the following items should be considered for your installation planning checklist:

- Plan for growth to minimize disruption to operations. Expansion frames can be placed only to the right (from the front) of the DS8870 base frame.
- Consider location suitability, floor loading, access constraints, elevators, doorways, and so on.
- Analyze power requirements, such as redundancy and the use of an uninterruptible power supply (UPS).
- Examine environmental requirements, such as adequate cooling capacity.
- Determine a place and connection for the secondary Hardware Management Console (HMC).
- Full disk encryption drives are a standard feature of the DS8870. If you want to activate encryption, plan the placement and connection needs of the IBM Tivoli Key Lifecycle Manager v2.0.1 servers or, preferably, of its follow-on product, IBM Security Key Lifecycle Manager v2.5.
- Consider integration of Lightweight Directory Access Protocol (LDAP) to allow single user ID and password management.
- Consider Assist On-site (AOS) installation to provide a continued secure connection to the IBM support center.
- Plan for the type of disks, such as solid-state drives (SSDs), Enterprise, and Nearline.
- Create a plan that details the wanted logical configuration of the storage.
- Consider IBM Tivoli Storage Productivity Center for monitoring and for DS8000 storage management, including Copy Services, in your environment.
- Consider the use of the I/O Priority Manager feature to prioritize specific applications.
- Consider implementing Easy Tier, which is available at no charge, to increase data placement flexibility and performance.
- Review the available services from IBM for microcode compatibility and configuration checks.
- Consider available Copy Services and backup technologies.
- Consider the new Resource Groups feature for the IBM System Storage DS8000.
- Plan for staff education and availability to implement the storage plan. Alternatively, you can use IBM or IBM Business Partner services.


Client responsibilities for the installation
The DS8870 is specified as an IBM or IBM Business Partner installation and setup system. However, at a high level, the client is responsible for the following planning and installation activities:

- Physical configuration planning. Your Storage Marketing Specialist can help you plan and select the DS8870 model physical configuration and features.
- Installation planning.
- Integration of LDAP. IBM can help in planning and implementation upon client request.
- Installation of AOS, if wanted. IBM can help in planning and implementation upon client request.
- Integration of Tivoli Storage Productivity Center and Simple Network Management Protocol (SNMP) into the client environment for monitoring of performance and configuration. IBM can provide services to set up and integrate these components.
- Configuration and integration of Tivoli Key Lifecycle Manager or Security Key Lifecycle Manager servers and DS8000 Encryption for extended data security. IBM provides services to set up and integrate these components. You can order FC #1760, which is an x3350 server running the key management software under SUSE 11 SP3 Linux. For a z/OS environment, Security Key Lifecycle Manager is also available to manage encryption keys. However, if the Security Key Lifecycle Manager server depends on a DS8000 that uses encryption, an encryption deadlock can occur at power-on: the DS8000 must contact a key server, but Security Key Lifecycle Manager cannot start until the DS8000 is available. Therefore, a stand-alone Key Lifecycle Manager server is always required.
- Logical configuration planning and application. Logical configuration refers to the creation of Redundant Array of Independent Disks (RAID) ranks, volumes, and logical unit numbers (LUNs), and to the assignment of the configured capacity to servers. Application of the initial logical configuration and all subsequent modifications to the logical configuration also are client responsibilities. The logical configuration can be created, applied, and modified by using the DS Storage Manager, DS command-line interface (CLI), or DS Open application programming interface (API).

IBM Global Services also can apply or modify your logical configuration, which is a fee-based service.

8.1.1 Who should be involved
Have a project manager coordinate the many tasks that are necessary for a successful installation. Installation requires close cooperation with the user community, the IT support staff, and the technical resources that are responsible for floor space, power, and cooling.

A storage administrator should also coordinate requirements from the user applications and systems to build a storage plan for the installation. This plan is needed to configure the storage after the initial hardware installation is complete.

The following people should be briefed and engaged in the planning process for the physical installation:

- Systems and storage administrators
- Installation planning engineer
- Building engineer for floor loading, air conditioning, and electrical considerations
- Security engineers for virtual private network (VPN), LDAP, Tivoli Key Lifecycle Manager or Security Key Lifecycle Manager, and encryption
- Administrator and operator for monitoring and handling considerations
- IBM Service Representative or IBM Business Partner installation engineer


8.1.2 What information is required
A validation list to help the installation process should include the following items:

- Drawings that detail the DS8000 placement as specified and agreed upon with a building engineer, which ensures that the weight is within limits for the route to the final installation position.
- Approval to use elevators if the weight and size are acceptable.
- Connectivity information, servers, storage area network (SAN), and mandatory local area network (LAN) connections.
- Agreement on the security structure of the installed DS8000 with all security engineers.
- Ensure that you have a detailed storage plan agreed upon. Ensure that the configuration specialist has all the information to configure all of the arrays and set up the environment as required.
- Activation codes for the Operating Environment License (OEL), which are mandatory, and any optional feature activation codes.

8.2 Planning for the physical installation
This section describes the physical installation planning process and gives important tips and considerations.

8.2.1 Delivery and staging area
The shipping carrier is responsible for delivering and unloading the DS8870 as close to its final destination as possible. Inform your carrier of the weight and size of the packages to be delivered and inspect the site and the areas where the packages will be moved (for example, hallways, floor protection, elevator size, and loading).

Table 8-1 lists the final packaged dimensions and maximum packaged weight of the DS8870 storage unit ship group.

Table 8-1 Packaged dimensions and weight for DS8870 models

Model 961 (pallet or crate)
  Packaged dimensions: Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.)
  Maximum packaged weight: 1325 kg (2920 lb)

Model 96E expansion unit (pallet or crate)
  Packaged dimensions: Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.)
  Maximum packaged weight: 1311 kg (2810 lb)

Ship group (height might be lower and weight might be less)
  Packaged dimensions: Height 105.0 cm (41.3 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.)
  Maximum packaged weight: Up to 90 kg (199 lb)

External HMC container (if ordered as MES)
  Packaged dimensions: Height 69.0 cm (27.2 in.), Width 80.0 cm (31.5 in.), Depth 120.0 cm (47.3 in.)
  Maximum packaged weight: 75 kg (71 lb)

Important: A fully configured model in the packaging can weigh over 1406 kg (3100 lbs). The use of fewer than three persons to move it can result in injury.


By using the shipping weight reduction option, you receive delivery of a DS8870 model in multiple shipments that do not exceed 909 kg (2000 lb) each.

For more information about the Shipping Weight Reduction option, see Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259.

8.2.2 Floor type and loading
The DS8870 can be installed on a raised or nonraised floor. Installing the unit on a raised floor is preferable because with this option, you can operate the storage unit with better cooling efficiency and cabling layout protection.

The total weight and space requirements of the storage unit depend on the configuration features that you ordered. You might consider calculating the weight of the unit and the expansion box (if ordered) in their maximum capacity to allow for the addition of new features.

Table 8-2 lists the weights of the various DS8870 models.

Table 8-2 DS8870 weights

Model                                       Maximum weight
Model 961 (two-core)                        1172 kg (2585 lb)
Model 961 (four-core)                       1324 kg (2920 lb)
Model 96E expansion model                   1268 kg (2790 lb)
Model 961 and one 96E expansion model       2601 kg (5735 lb)
Model 961 and two 96E expansion models      3923 kg (8650 lb)

Figure 8-1 for DS8870 shows the location of the cable cutouts. You can use the following measurements when you cut the floor tile:

- Width: 45.7 cm (18.0 in.)
- Depth: 16 cm (6.3 in.)

Figure 8-1 Floor tile cable cutout for DS8870

Important: You must check with the building engineer or other appropriate personnel to ensure that the floor loading was properly considered.


8.2.3 Overhead cabling features
The overhead cabling (top exit) feature, as shown in Figure 8-2, is available for the DS8870 as an alternative to the standard rear cable exit. Verify whether you ordered the top exit feature before the tiles for a raised floor are cut.

This feature requires the following items:

- Feature Code (FC) 1400, Top exit bracket for overhead cabling
- FC 1101, Safety-approved fiberglass ladder
- Multiple FCs for power cords, depending on the AC power characteristics of your geography

For more information, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Figure 8-2 Overhead cabling for DS8870


8.2.4 Room space and service clearance
The total amount of space that is needed by the storage units can be calculated by using the dimensions that are shown in Table 8-3.

Table 8-3 DS8870 dimensions

Dimension with covers    Model 961
Height                   193.4 cm
Width                    84.8 cm
Depth                    122.8 cm

The storage unit location area also should cover the service clearance that is needed by IBM service representatives when the front and rear of the storage unit are accessed. You can use the following minimum service clearances. Verify your configuration and the maximum configuration for your needs, keeping in mind that the DS8870 has a maximum of three expansion frames (for a total of four frames).

An example of the dimensions for a DS8870 with two expansion frames is shown in Figure 8-3:

- For the front of the unit, allow a minimum of 121.9 cm (48 in.) for the service clearance.
- For the rear of the unit, allow a minimum of 76.2 cm (30 in.) for the service clearance.
- For the sides of the unit, allow a minimum of 5.1 cm (2 in.) for the service clearance.

Figure 8-3 DS8870 three frames service clearance requirements


8.2.5 Power requirements and operating environment
Consider the following basic items when you plan for the DS8870 power requirements:

- Power connectors
- Input voltage
- Power consumption and environment
- Power control features
- Extended Power Line Disturbance (ePLD) feature

Power connectors
Each DS8870 base and expansion unit features redundant power supply systems. The two power cords to each frame should be supplied by separate AC power distribution systems. Use a 60-A rating for the low voltage feature and a 32-A rating for the high-voltage feature. For more information about power connectors and power cords, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Input voltage
The DS8870 supports a three-phase input voltage source. Table 8-4 lists the power specifications for each feature code. The DC Supply Unit (DSU) is designed to operate with three-phase delta, three-phase wye, or one-phase input power.

Table 8-4 DS8870 input voltages and frequencies

Characteristic                               Low voltage                      High voltage
Nominal input voltage                        200, 208, 220, or 240 RMS Vac    380, 400, 415, 440, or 480 RMS Vac
Minimum input voltage                        180 RMS Vac                      333 RMS Vac
Maximum input voltage                        264 RMS Vac                      456 RMS Vac
Customer wall breaker rating (1-ph, 3-ph)    50-60 Amps                       30-35 Amps
Steady-state input frequency                 50 ± 3 or 60 ± 3.0 Hz            50 ± 3 or 60 ± 3.0 Hz
PLD input frequencies (<10 seconds)          50 ± 3 or 60 ± 3.0 Hz            50 ± 3 or 60 ± 3.0 Hz

Power consumption
Table 8-5 lists the power consumption specifications of the DS8870. The power estimates presented here are conservative and assume a high transaction rate workload.

Table 8-5 DS8870 power consumption

Measurement               Model 961 (four-way)    Model 96E with I/O enclosure
Peak electric power       6 kVA                   5.8 kVA
Thermal load (BTU/hr)     20,612                  19,605

The values represent data that was obtained from the following configured systems:

- Model 961 base models that contain 15 disk drive sets (240 disk drives) and Fibre Channel adapters
- Model 96E first expansion models that contain 21 disk drive sets (336 disk drives) and Fibre Channel adapters


Cooling the storage complex
In this section, we describe the DS8000 cooling system.

DS8870 cooling
Air circulation for the DS8870 is provided by the various fans that are installed throughout the frame. All of the fans in the DS8870 direct air flow from the front of the frame to the rear of the frame. No air exhausts through the top of the frame. The use of a directional air flow in this manner allows for cool aisles to the front and hot aisles to the rear of the systems, as shown in Figure 8-4.

Figure 8-4 DS8870 air flow

The operating temperature for the DS8870 is 16°C - 32°C (60°F - 90°F) at a relative humidity range of 40% - 50%.

Power control features
The DS8870 has remote power control features that you use to control the power of the storage complex through the HMC. Another power control feature is available for the System z environment.

For more information about power control features, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Extended Power Line Disturbance feature
The extended Power Line Disturbance (ePLD) feature stretches the available uptime of the DS8870 to 50 seconds during a PLD event (the uptime is 4 seconds without ePLD). This feature is recommended in environments without a UPS. No additional physical connection planning is needed for the client with or without the ePLD feature.

Important: The following factors must be considered when the DS8870 is installed:

- Ensure that air circulation for the DS8870 base unit and expansion units is maintained free from obstruction to keep the unit operating in the specified temperature range.
- Although the DS8870 does not vent through the top of the rack, do not store anything on top of the DS8870 for safety reasons.

(Figure 8-4 also notes that the new front-to-back airflow design is more energy efficient: more data centers are moving to hot-aisle/cold-aisle designs to optimize energy efficiency, and the DS8870 is now designed with complete front-to-back airflow, which contributes to lower energy costs.)


8.2.6 Host interface and cables
The DS8870 can support the number of host adapters, as shown in Table 8-6.

Table 8-6 Maximum host adapter

Base model                                 Attached expansion model    Host adapters
961 two-core                               None (single rack)          2 - 4
961 four-core                              None (single rack)          2 - 8
961 (8-way and 16-way) enterprise class    96E models (1 - 3)          2 - 16

The DS8870 Model 961/96E supports four-port and eight-port host adapter cards. All ports are 8 Gb capable; therefore, the DS8870 has a maximum of 128 ports at 8 Gb.

Fibre Channel and Fibre Channel connection
The DS8870 Fibre Channel and Fibre Channel connection (FICON) adapter has four or eight ports per card. Each port supports Fibre Channel Protocol (FCP) or FICON, but not simultaneously. Fabric components from various vendors, including IBM, QLogic, Brocade, and Cisco, are supported by both environments.

The Fibre Channel and FICON shortwave host adapter, feature 3153, when used with 50 micron multi-mode fibre cable, supports point-to-point distances of up to 500 meters at an 8-Gbps link speed with four ports. The Fibre Channel and FICON shortwave host adapter, feature 3157, when used with 50 micron multi-mode fibre cable, supports point-to-point distances of up to 500 meters at an 8-Gbps link speed with eight ports.

The Fibre Channel and FICON longwave host adapter, when used with 9 micron single-mode fibre cable, extends the point-to-point distance to 10 km for feature 3253 (8-Gb 10-km LW host adapter with four ports). Feature 3257 (8-Gb LW host adapter with eight ports) also supports point-to-point distances of up to 10 km.

A 31-meter fiber optic cable or a 2-meter jumper cable can be ordered for each Fibre Channel adapter port.

Table 8-7 lists the fiber optic cable features for the FCP/FICON adapters.

Table 8-7 FCP/FICON cable features

Feature    Length           Connector           Characteristic
1410       40 m (131 ft)    LC/LC               50 micron, multimode
1411       31 m (102 ft)    LC/SC               50 micron, multimode
1412       2 m (6.5 ft)     SC to LC adapter    50 micron, multimode
1420       31 m (102 ft)    LC/LC               9 micron, single mode
1421       31 m (102 ft)    LC/SC               9 micron, single mode
1422       2 m (6.5 ft)     SC to LC adapter    9 micron, single mode

Important: The Remote Mirror and Copy functions use FCP as the communication link between the IBM System Storage DS8000 series, DS6000s, and ESS Models 800 and 750, also in z/OS environments.


For more information about IBM supported attachments, see IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917 and IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

For the latest information about host types, models, adapters, and operating systems that are supported by the DS8870, see the DS8000 System Storage Interoperability Center at this website:

http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

8.2.7 Host adapter Fibre Channel specifics for open environments
Each storage unit host adapter has four or eight ports, and each port has a unique worldwide port name (WWPN). You can configure a port to operate with the SCSI-FCP upper-layer protocol by using the DS Storage Manager or the data storage command-line interface (DS CLI). You can add Fibre Channel shortwave and longwave adapters to I/O enclosures of an installed DS8870.

With host adapters that are configured as FC, the DS8870 provides the following configuration capabilities:

- A maximum of 128 Fibre Channel ports
- A maximum of 509 logins per Fibre Channel port, which includes host ports and Peer-to-Peer Remote Copy (PPRC) target and initiator ports
- Access to 63750 logical unit numbers (LUNs) per target (one target per host adapter), depending on host type
- Either arbitrated loop, switched-fabric, or point-to-point topologies

8.2.8 FICON specifics on z/OS environment
With host adapters that are configured for FICON, the DS8870 provides the following configuration capabilities:

- Fabric or point-to-point topologies
- A maximum of 128 host adapter ports, depending on the DS8870 processor feature
- A maximum of 509 logins per Fibre Channel port
- A maximum of 8192 logins per storage unit
- A maximum of 1280 logical paths on each Fibre Channel port
- Access to all control-unit images over each FICON port
- A maximum of 512 logical paths per control unit image

FICON host channels limit the number of devices per channel to 16,384. To fully access 65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host channels to the storage unit. By using a switched configuration, you can expose 64 control-unit images (16,384 devices) to each host channel.
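The arithmetic behind the four-channel minimum can be checked quickly. The following Python fragment is only an illustration of the statement above (the constant names are invented for this sketch):

import math

DEVICES_PER_CHANNEL = 16384     # FICON limit: 64 control-unit images x 256 devices
TOTAL_DEVICES = 65280           # devices to be reached on the storage unit

print(math.ceil(TOTAL_DEVICES / DEVICES_PER_CHANNEL))   # -> 4 FICON host channels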


8.2.9 Best practice for host adapters
For optimum availability and performance, the following best practices are recommended:

- To obtain the best ratio of availability and performance, install one host adapter (HA) card on each available I/O enclosure before you install a second HA card on the same I/O enclosure.
- If there is no real need for a maximum port configuration, use four-port, 8-Gbps cards.
- The best Copy Services performance is obtained by using dedicated HA cards for the Copy Services links.

8.2.10 WWNN and WWPN determination
The incoming and outgoing data of the DS8870 is tracked by using the worldwide node name (WWNN) and worldwide port name (WWPN). These names identify the ports to the Fibre Channel fabric, much as a MAC address identifies an adapter in the Ethernet protocol. The addresses can be determined by using the DS CLI or the GUI. To determine these addresses, we analyze how they are composed.

Determining a WWNN by using DS CLI

The DS8870 WWNN is an address similar to one of the following strings:

50:05:07:63:0z:FF:Cx:xx or 50:05:07:63:0z:FF:Dx:xx

The z and x:xx values are unique combinations for each system and each Storage Facility Image (SFI) that is based on a machine serial number.

Each SFI has its own WWNN. The storage unit itself also has a unique WWNN, but only the SFI WWNN is used for configuration because the SFI is the machine that the host sees.

After you connect to the storage system through the DS CLI, use the lssi command to determine the SFI WWNN, as shown in Example 8-1.

Example 8-1 SFI WWNN determination

dscli> lssi
Name   ID               Storage Unit     Model WWNN             State  ESSNet
==============================================================================
ATS_02 IBM.2107-75ZA571 IBM.2107-75ZA571 961   5005076303FFD5AA Online Enabled

Do not use the lssu command because it returns the machine WWNN. It is not used as a reference because hosts can see only the SFI, as shown in Example 8-2.

Example 8-2 Machine WWNN

dscli> lssu
Name         ID               Model WWNN             pw state
=============================================================
DS8870_ATS02 IBM.2107-75ZA570 961   5005076303FFEDAA On

Determining a WWPN by using DS CLI
Similar to the WWNN, we have a WWPN in the DS8870 that looks like the following address:

50:05:07:63:0z:YY:Yx:xx

228 IBM DS8870 Architecture and Implementation

Page 247: IBM 8870 Archt

However, the DS8870 WWPN is a child of the SFI WWNN: the WWPN keeps the z and x:xx values from the SFI WWNN, and it includes YY:Y from the logical port naming, which is derived from where the HA card is physically installed.

After you are connected to the storage system through the DS CLI, use the lsioport command to determine the SFI WWPN, as shown in Example 8-3.

Example 8-3 WWPN determination

dscli> lsioport IBM.2107-75ZA571
ID    WWPN             State  Type             topo     portgrp
===============================================================
I0000 50050763030015AA Online Fibre Channel-SW SCSI-FCP 0
I0001 50050763030055AA Online Fibre Channel-SW SCSI-FCP 0
I0002 50050763030095AA Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300D5AA Online Fibre Channel-SW SCSI-FCP 0
I0030 50050763030315AA Online Fibre Channel-SW SCSI-FCP 0
I0031 50050763030355AA Online Fibre Channel-SW SCSI-FCP 0
I0032 50050763030395AA Online Fibre Channel-SW FICON    0
I0033 500507630303D5AA Online Fibre Channel-SW SCSI-FCP 0
I0034 50050763034315AA Online Fibre Channel-SW SCSI-FCP 0
I0035 50050763034355AA Online Fibre Channel-SW SCSI-FCP 0
I0036 50050763034395AA Online Fibre Channel-SW SCSI-FCP 0
I0037 500507630343D5AA Online Fibre Channel-SW SCSI-FCP 0
I0100 50050763030815AA Online Fibre Channel-SW SCSI-FCP 0
I0101 50050763030855AA Online Fibre Channel-SW SCSI-FCP 0
I0102 50050763030895AA Online Fibre Channel-SW FICON    0
I0103 500507630308D5AA Online Fibre Channel-SW SCSI-FCP 0
I0104 50050763034815AA Online Fibre Channel-SW SCSI-FCP 0
I0105 50050763034855AA Online Fibre Channel-SW SCSI-FCP 0
I0106 50050763034895AA Online Fibre Channel-SW SCSI-FCP 0
I0107 500507630348D5AA Online Fibre Channel-SW SCSI-FCP 0
I0130 50050763030B15AA Online Fibre Channel-SW SCSI-FCP 0
I0131 50050763030B55AA Online Fibre Channel-SW SCSI-FCP 0
I0132 50050763030B95AA Online Fibre Channel-SW SCSI-FCP 0
I0133 50050763030BD5AA Online Fibre Channel-SW SCSI-FCP 0
I0200 50050763031015AA Online Fibre Channel-SW SCSI-FCP 0
I0201 50050763031055AA Online Fibre Channel-SW SCSI-FCP 0
I0202 50050763031095AA Online Fibre Channel-SW FICON    0
I0203 500507630310D5AA Online Fibre Channel-SW SCSI-FCP 0
I0204 50050763035015AA Online Fibre Channel-SW SCSI-FCP 0
I0205 50050763035055AA Online Fibre Channel-SW SCSI-FCP 0
I0206 50050763035095AA Online Fibre Channel-SW SCSI-FCP 0
I0207 500507630350D5AA Online Fibre Channel-SW SCSI-FCP 0
I0230 50050763031315AA Online Fibre Channel-LW FICON    0
I0231 50050763031355AA Online Fibre Channel-LW FICON    0
I0232 50050763031395AA Online Fibre Channel-LW FICON    0
I0233 500507630313D5AA Online Fibre Channel-LW FICON    0
I0300 50050763031815AA Online Fibre Channel-LW FICON    0
I0301 50050763031855AA Online Fibre Channel-LW FICON    0
I0302 50050763031895AA Online Fibre Channel-LW FICON    0
I0303 500507630318D5AA Online Fibre Channel-LW FICON    0
I0330 50050763031B15AA Online Fibre Channel-SW SCSI-FCP 0
I0331 50050763031B55AA Online Fibre Channel-SW SCSI-FCP 0
I0332 50050763031B95AA Online Fibre Channel-SW FICON    0


I0333 50050763031BD5AA Online Fibre Channel-SW SCSI-FCP 0
I0334 50050763035B15AA Online Fibre Channel-SW SCSI-FCP 0
I0335 50050763035B55AA Online Fibre Channel-SW SCSI-FCP 0
I0336 50050763035B95AA Online Fibre Channel-SW SCSI-FCP 0
I0337 50050763035BD5AA Online Fibre Channel-SW SCSI-FCP 0
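Based on the naming patterns described above and on the values in Example 8-1 and Example 8-3, a small script can check whether a listed WWPN belongs to a particular SFI WWNN by comparing the fixed IBM prefix, the z nibble, and the trailing x:xx digits. This is only an illustration of the pattern (the function is hypothetical, not an IBM tool):

def belongs_to_sfi(wwpn, sfi_wwnn):
    """Return True when the WWPN matches the SFI WWNN naming pattern.

    Pattern from the text: WWNN = 50:05:07:63:0z:FF:Cx:xx or ...Dx:xx,
    WWPN = 50:05:07:63:0z:YY:Yx:xx, so the prefix 50050763, the 0z byte,
    and the final x:xx digits must be the same.
    """
    wwpn, wwnn = wwpn.upper(), sfi_wwnn.upper()
    return (wwpn[:8] == wwnn[:8] == "50050763"
            and wwpn[8:10] == wwnn[8:10]       # the 0z byte
            and wwpn[-3:] == wwnn[-3:])        # the x:xx digits

# Values from Example 8-1 (SFI WWNN) and Example 8-3 (port I0000)
print(belongs_to_sfi("50050763030015AA", "5005076303FFD5AA"))   # -> True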

Determine WWNN by using a web GUI
Use the following guidelines to determine the WWNN by using the DS8000 GUI from the HMC:

1. Connect via a web browser to this website:

https://< hmc ip address >:8452/DS8000/Login

Select System Status.

2. Right-click the SFI in the status column and select Storage Image.

3. Select Properties.

4. Select Advanced and you can find the WWNN value, as shown in Figure 8-5.

Figure 8-5 SFI WWNN value

Determine WWPN by using a web GUI
Just as we used the GUI options from the HMC to determine the WWNN, we can use it to find the WWPN:

1. Connect via a web browser to this website:

https://< hmc ip address >:8452/DS8000/Login

Select System Status.

2. Right-click the SFI in the status column and select Storage Image.


3. Select Configure I/O ports and you receive the full list of each installed I/O port with its WWPN and its physical location, as shown in Figure 8-6.

Figure 8-6 I/O ports WWPN determination

8.3 Network connectivity planning
Implementing the DS8870 requires that you consider the physical network connectivity of the storage adapters and the HMC within your local area network.

Check your local environment for the following DS8870 unit connections:

- HMC and network access
- Tivoli Storage Productivity Center Basic Edition (if used) and network access
- DS CLI
- Remote support connection
- Remote power control
- SAN connection
- Key Lifecycle Manager connection
- LDAP connection

For more information about physical network connectivity, see IBM System Storage DS8000 User's Guide, SC26-7915, and IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

8.3.1 Hardware Management Console and network access
HMCs are the focal point for configuration, Copy Services management, and maintenance for a DS8870 unit. The internal HMC that is included with every base frame is mounted in a pull-out tray for convenience and security. The HMC consists of a notebook with adapters for the modem and 10/100/1000 Mb Ethernet. Ethernet cables connect the HMC to the storage unit in a redundant configuration.

A second, redundant, external HMC is orderable. Having a second HMC is a good idea for environments that use key encryption management and Advanced Copy Services functions. The second HMC is external to the DS8870 rack and consists of a mobile workstation similar to the primary HMC. A secondary HMC is also suggested for environments that perform frequent logical reconfiguration tasks. Valuable time can be saved if there are problems with the primary HMC.

The HMC can be connected to your network (eth2 - client network) for the following tasks:

- Remote management of your system by using the DS CLI
- Remote DS Storage Manager GUI management of your system that connects directly from the client's notebook by opening a browser and navigating to the following website:

https://< HMC IP address>:8452/DS8000/Login

To connect the HMCs (internal and external, if present) to your network, you must provide the following settings to your IBM service representative so the management consoles can be configured for attachment to your LAN:

- Management console network IDs, host names, and domain name.
- Domain name server (DNS) settings. If you plan to use DNS to resolve network names, verify that the DNS servers are reachable from the HMC, to avoid slowing down the HMC internal network with timeouts on the external network.
- Gateway routing information.

For more information about HMC planning, see Chapter 9, “DS8870 HMC planning and setup” on page 241.

8.3.2 IBM Tivoli Storage Productivity Center
The IBM Tivoli Storage Productivity Center is an integrated software solution that can help you improve and centralize the management of your storage environment through the integration of products. With the Tivoli Storage Productivity Center, it is possible to manage and configure multiple DS8000 storage systems from a single point of control.

The IBM System Storage DS8000 Storage Manager is also accessible by using the IBM Tivoli Storage Productivity Center. IBM Tivoli Storage Productivity Center provides a DS8000 management interface. You can use this interface to add and manage multiple DS8000 series storage units from one console.

8.3.3 DS command-line interface
The IBM System Storage DS® command-line interface (CLI) can be used to create, delete, modify, and view Copy Services functions and for the logical configuration of a storage unit. These tasks can be performed interactively, in batch processes (operating system shell scripts), or in DS CLI script files. A DS CLI script file is a text file that contains one or more DS CLI commands and can be issued as a single command. DS CLI can be used to manage logical configuration, Copy Services configuration, and other functions for a storage unit, including managing security settings, querying point-in-time performance information or status of physical resources, and exporting audit logs.
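As a simple illustration of the script file concept (the file name is invented for this example, the two commands are the ones used earlier in this chapter, and the exact connection parameters should be checked in the DS CLI User's Guide referenced below), a script file can bundle several commands and be run with a single dscli invocation:

Contents of listports.cli:
    lssi
    lsioport

Invocation (connection details are environment-specific):
    dscli -script listports.cli -hmc1 <hmc ip address> -user <user id> -passwd <password>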

Important: The external HMC must be directly connected to the private DS8870 ETH switches. An Ethernet connection to the client network for ETH2 also should be provided.

Important: The DS8870 uses 172.16.y.z and 172.17.y.z private network addresses. If the client network uses the same addresses, IBM must be informed as early as possible to avoid conflicts.


The DS CLI can be installed on and used from a LAN-connected system, such as the storage administrator’s workstation or any separate server that is connected to the LAN of the storage unit.

For more information about the hardware and software requirements for the DS CLI, see IBM System Storage DS Command-Line Interface User’s Guide for DS8000 series, SC53-1127.

8.3.4 Remote support connection
Remote support connection is available from the HMC by using a modem (dial-up) or a VPN over the Internet through the client LAN. Tivoli AOS is also available and allows the IBM Support Center to establish a connection tunnel to the DS8000 through a secure VPN.

You can take advantage of the DS8000 remote support feature for outbound calls (call home function) or inbound calls (remote service access by an IBM technical support representative). If you are using a modem for remote support, you must provide an analog telephone line for the HMC.

For more information, see Chapter 16, “Remote support” on page 421.

A typical remote support connection is shown in Figure 8-7.

Figure 8-7 DS8000 HMC remote support connection

Complete the following steps to prepare for attaching the DS8870 to the client’s LAN:

1. Assign a TCP/IP address and host name to the HMC in the DS8870.

2. If email notification for service alerts is allowed, enable the support on the mail server for the TCP/IP addresses that are assigned to the DS8870.

3. Use the information that was entered on the installation worksheets during your planning.

Generally, use a service connection through the high-speed VPN network by using a secure Internet connection. You must provide the network parameters for your HMC before the console is configured. For more information, see Chapter 9, “DS8870 HMC planning and setup” on page 241.


Your IBM service support representative (SSR) needs the configuration worksheet during the configuration of your HMC. A worksheet is available in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

For more information about remote support connection, see Chapter 16, “Remote support” on page 421.

8.3.5 Remote power control
By using the System z remote power control setting, you can power on and off the storage unit from a System z interface. If you plan to use the System z power control feature, be sure that you order the System z power control feature. This feature comes with four power control cables.

When you use this feature, you must specify the System z power control setting in the Power Control Pane menu, then select the option IBM zSeries Power Mode in the HMC GUI.

In a System z environment, the host must have the Power Sequence Controller (PSC) feature installed to turn on and off specific control units, such as the DS8870. The control unit is controlled by the host through the power control cable. The power control cable comes with a standard length of 31 meters, so be sure to consider the physical distance between the host and DS8870.

8.3.6 Storage area network connection
The DS8870 can be attached to a storage area network (SAN) environment through its Fibre Channel ports. SANs provide the capability to interconnect open systems hosts, System z hosts, and other storage systems.

A SAN allows your single Fibre Channel host ports to have physical access to multiple Fibre Channel ports on the storage unit. You might need to implement zoning to limit the access (and provide access security) of host ports to your storage ports. Shared access to a storage unit Fibre Channel port might come from hosts that support a combination of bus adapter types and operating systems.

Important: A SAN administrator should verify periodically that the SAN is working correctly before any new devices are installed. SAN bandwidth should also be evaluated to handle the new workload.


8.3.7 IBM Security Key Lifecycle Manager server for encryption
The DS8870 is configured with Full Disk Encryption (FDE) drives and can have encryption activated. If encryption is wanted, Feature Code #1750 should be included in the order. This feature enables the client to download the function authorization from the DSFA website and to elect to activate encryption. Feature Code #1754 is used to deactivate encryption.

When you activate encryption, you also require an isolated Tivoli Key Lifecycle Manager or Security Key Lifecycle Manager server. A Security Key Lifecycle Manager license is required for use with the Security Key Lifecycle Manager software.

The software is purchased separately from the isolated key server hardware. IBM System Storage DS8000 series offers IBM Security Key Lifecycle Manager server with hardware Feature Code #1760.

For more information about the required IBM Security Key Lifecycle Manager server and other requirements and guidelines, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500. For more information about supported products and platforms, see this website:

http://www.ibm.com/support/docview.wss?uid=swg21386446

For more information about the IBM Security Key Lifecycle Manager product and features, see this website:

http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=/com.ibm.tklm.doc_2.0/welcome.htm

Encryption planning
Encryption planning is a client responsibility. The following major planning components are part of the implementation of an encryption environment. Review all planning requirements and include them in your installation considerations:

- Key Server Planning

  Introductory information, including required and optional features, can be found in the IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

- IBM Security Key Lifecycle Manager Planning

  The DS8000 series supports IBM Tivoli Key Lifecycle Manager V2.0.1 and IBM Security Key Lifecycle Manager v2.5 or later.

  – For encryption, at a minimum, Tivoli Key Lifecycle Manager v2.0.1 or later or IBM Security Key Lifecycle Manager for z/OS v1.0.1 or later is required.

  – For encryption with a NIST SP 800-131a compliant connection between the HMC and the key server, IBM Security Key Lifecycle Manager v2.5 or later is required. Starting from v2.5, Tivoli Key Lifecycle Manager changed its name to IBM Security Key Lifecycle Manager. This is not to be confused with IBM Security Key Lifecycle Manager v1.0, which was available for z/OS only.

Isolated key servers that are ordered with Feature Code #1760 include a Linux operating system and IBM Tivoli Key Lifecycle Manager or IBM Security Key Lifecycle Manager software that is preinstalled (you are advised to upgrade to the latest version of the IBM Security Key Lifecycle Manager). Clients must acquire an IBM Security Key Lifecycle Manager license for use of the IBM Security Key Lifecycle Manager software, which is ordered separately from the stand-alone server hardware.

Important: No other hardware or software is allowed on the isolated key server. An isolated server must use an internal disk for all files that are necessary to boot and have the Security Key Lifecycle Manager key server become operational.



� Encryption Activation Review Planning

IBM Encryption offerings must be activated before use. This activation is part of the installation and configuration steps that are required to use the technology.

Security Key Lifecycle Manager connectivity and routing information

To connect the IBM Security Key Lifecycle Manager to your network, you must provide the following settings to your IBM service representative:

� IBM Security Key Lifecycle Manager server network IDs, host names, and domain name
� DNS settings (if you plan to use DNS to resolve network names)

Two network ports must be opened on a firewall to allow DS8870 connection and have an administration management interface to the IBM Security Key Lifecycle Manager server. These ports are defined by the IBM Security Key Lifecycle Manager administrator.

8.3.8 Lightweight Directory Access Protocol server for single sign-on

A Lightweight Directory Access Protocol (LDAP) server can be used to provide directory services to the DS8870 through Tivoli Storage Productivity Center. This configuration can enable a single sign-on interface to all DS8000s in the client environment.

Typically, one LDAP server is installed in the client environment to provide directory services. For more information, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.

For information on Configuring Jazz for Service Management and DS8000 for LDAP authentication, refer to the IBM Tivoli Storage Productivity Center V5.2 InfoCenter at:

http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/topic/com.ibm.tpc_V52.doc/fqz0_r_config_ds8000_ldap.html

LDAP connectivity and routing information

To connect the LDAP server to the Tivoli Storage Productivity Center, you must provide the following settings to your IBM service representative:

� LDAP server network IDs, host names, domain name, and port
� User ID and password of the LDAP server

If the LDAP server is separated from the Tivoli Storage Productivity Center by a firewall, the LDAP port (which is verified during the Tivoli Storage Productivity Center installation) must be opened in that firewall. There also might be a firewall between the Tivoli Storage Productivity Center and the HMC that must be opened to allow LDAP traffic between them.

8.4 Remote Mirror and Copy connectivity

The DS8000 uses the high-speed Fibre Channel Protocol (FCP) for Remote Mirror and Copy connectivity.

Ensure that enough FCP paths are assigned for your remote mirroring between your source and target sites to address performance and redundancy requirements. When you plan to use Metro Mirror and Global Copy modes between a pair of storage units, use separate logical and physical paths for the Metro Mirror, and another set of logical and physical paths for the Global Copy.
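
As an illustration, the following DS CLI commands show how Remote Mirror and Copy paths might be established between logical subsystem (LSS) 10 on two DS8870 systems. The storage image IDs, the remote WWNN, and the I/O port pairs are hypothetical placeholders; lsavailpprcport can be used first to list the FCP ports that are actually available for path creation in your configuration:

lsavailpprcport -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75WXYZ1 -remotewwnn 5005076303FFD123 10:10
mkpprcpath -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75WXYZ1 -remotewwnn 5005076303FFD123 -srclss 10 -tgtlss 10 I0030:I0100 I0131:I0201

Defining at least two port pairs per LSS pair, spread over different host adapters, addresses both the performance and the redundancy considerations that are described above.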


Plan the distance between the primary and auxiliary storage units to properly acquire the necessary length of fiber optic cables that you need. If necessary, your Copy Services solution can use hardware such as channel extenders or dense wavelength division multiplexing (DWDM).

For more information, see IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788, and IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787.

8.5 Disk capacity considerations

The effective capacity of the DS8000 is determined by the following factors, which apply equally to standard and encrypted storage drives:

� The spares configuration
� The size of the installed disk drives
� The selected RAID configuration: RAID 5, RAID 6, or RAID 10, in two sparing combinations
� The storage type: fixed block (FB) or count key data (CKD)

8.5.1 Disk sparing

On internal storage, RAID arrays automatically attempt to recover from a DDM failure by rebuilding the data for the failed DDM on a spare DDM. For sparing to occur, a DDM with a capacity equal to or greater than that of the failed disk must be available on the same device adapter pair. After sparing is initiated, the spare and the failing DDMs are swapped between their respective array sites, so that the spare DDM becomes part of the array site that is associated with the array that contained the failed DDM. The failing DDM becomes a failed spare DDM in the array site from which the spare came.

The DS8000 assigns spare disks automatically. The first four array sites (a set of eight disk drives) on a device adapter (DA) pair normally each contribute one spare to the DA pair. A minimum of one spare is created for each array site that is defined until the following conditions are met:

� A minimum of four spares per DA pair
� A minimum of four spares of the largest capacity array site on the DA pair
� A minimum of two spares of capacity and rpm greater than or equal to the fastest array site of any capacity on the DA pair

The DDM sparing policies support enhanced sparing. This feature allows the repair of some DDM failures to be deferred until a later repair action is required. For more information about the DS8000 sparing concepts, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209, and 4.6.9, “Spare creation” on page 90.

8.5.2 Disk capacity

The DS8870 operates in a RAID 5, RAID 6, or RAID 10 configuration. The following RAID configurations are possible:

� 6+P RAID 5 configuration: The array consists of six data drives and one parity drive. The remaining drive on the array site is used as a spare.

Important: Starting with Release 6.2, an improved sparing management function that is named Smart Rebuild was added to the microcode to better manage errors on disks and on arrays during sparing.


� 7+P RAID 5 configuration: The array consists of seven data drives and one parity drive.

� 5+P+Q RAID 6 configuration: The array consists of five data drives and two parity drives. The remaining drive on the array site is used as a spare.

� 6+P+Q RAID 6 configuration: The array consists of six data drives and two parity drives.

� 3+3 RAID 10 configuration: The array consists of three data drives that are mirrored to three copy drives. Two drives on the array site are used as spares.

� 4+4 RAID 10 configuration: The array consists of four data drives that are mirrored to four copy drives.

Table 8-8 shows the effective capacity of one rank in the various possible configurations. A disk drive set contains 16 drives, which form two array sites. Hard disk drive (HDD) capacity is added in increments of one disk drive set. SSD capacity can be added in increments of a half disk drive set (eight drives). The capacities in the table are expressed in decimal gigabytes and as the number of extents.

Table 8-8   DS8870 disk drive set capacity for open systems and System z environments

Important: Because of the larger metadata area, the net capacity of the ranks is lower than in previous DS8000 models.

Effective capacity of one rank in decimal GB (number of extents)

Disk size /            Rank of RAID 10 arrays                   Rank of RAID 6 arrays                    Rank of RAID 5 arrays
Rank type              3 + 3              4 + 4                 5 + P + Q           6 + P + Q            6 + P               7 + P
146 GB / FB            379.03 (353)       517.55 (482)          640 (596)           777.39 (724)         794.6 (740)         933.1 (869)
146 GB / CKD           378.16 (395)       516.97 (540)          639.52 (668)        777.38 (812)         793.66 (829)        931.5 (973)
300 GB / FB            809.60 (754)       1092.35 (1017)        1341.10 (1249)      1621.35 (1510)       1655.7 (1542)       1936 (1803)
300 GB / CKD           808.01 (844)       1090.43 (1139)        1340.30 (1400)      1618.89 (1691)       1653.36 (1727)      1934 (2020)
600 GB / FB            1667.52 (1553)     2236.60 (2083)        2738.04 (2550)      3301.75 (3075)       3372.62 (3141)      3936.33 (3666)
600 GB / CKD           1665.80 (1740)     2232.56 (2332)        2736.13 (2858)      3298.09 (3445)       3367.99 (3518)      3931.87 (4107)
1.2 TB / FB            3360.09 (3129)     4499.69 (4190)        5500.42 (5122)      6628.57 (6173)       6770.30 (6305)      7900 (7357)
1.2 TB / CKD           2516.89 (3585)     4490.66 (4690)        5496.50 (5741)      6622.36 (6917)       6761.50 (7062)      7889.90 (8241)
400 GB (SSD) / FB      N/A                N/A                   N/A                 N/A                  2277.4 (2121)       2660.7 (2478)
400 GB (SSD) / CKD     N/A                N/A                   N/A                 N/A                  2274.77 (2376)      2658.6 (2777)
4 TB (NL) / FB         N/A                N/A                   18454.04 (17186)    22216.44 (20690)     N/A                 N/A
4 TB (NL) / CKD        N/A                N/A                   18442.53 (19264)    22192.84 (23181)     N/A                 N/A


An updated version of Capacity Magic (see “Capacity Magic” on page 466) aids you in determining the raw and net storage capacities, and the numbers for the required extents for each available type of RAID.

8.5.3 DS8000 solid-state drive considerations

SSDs (flash drives) are a higher performance option when compared to HDDs. For the DS8870, SSDs are available in 400 GB capacity with FDE capability.

All disks that are installed in a storage enclosure pair must be of the same capacity and speed.

SSDs can be ordered and installed in eight-drive installation groups (half drive sets) or 16-drive installation groups (full drive sets). A half drive set (8) is always upgraded to a full drive set (16) when SSD capacity is added. A frame can contain one SSD half drive set at most.

To achieve the optimal price-to-performance ratio in DS8000, SSDs have limitations and considerations that differ from HDDs.

Limitations

The following limitations apply to SSDs:

� Drives of different capacities and speeds cannot be intermixed in a storage enclosure pair.

� A DS8870 system is limited to 48 SSDs per DA pair. The maximum number of SSDs in a DS8870 system is 384 drives that are spread over eight DA pairs.

� RAID 5 is the main supported implementation for SSDs (RAID 6 is not supported). SSDs follow normal sparing rules. The array configuration is 6+P+S or 7+P.

� RAID 10 is supported only with a client-requested request for price quotation (RPQ).

SSD placement

The following rules apply to SSD placement:

� SSD sets have a default location when a new machine is ordered and configured.

� SSDs are installed in default locations from manufacturing, which is the first storage enclosure pair on each DA pair. This installation is done to spread the SSDs over as many DA pairs as possible to achieve an optimal price-to-performance ratio.

� The default locations for SSDs in a DS8870 are split among eight DA pairs (if installed) in the first two frames: four in the first frame and four in the second frame.


Important: When reviewing Table 8-8 on page 238, keep in mind the following points:

� Effective capacities are in decimal gigabytes (GB); 1 GB is 1,000,000,000 bytes.

� Although disk drive sets contain 16 drives, arrays use only eight drives. The effective capacity assumes that you have two arrays for each disk drive set.
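
As a worked example of the arithmetic behind the fixed block (FB) values in Table 8-8: an FB extent is 1 GiB, which is approximately 1.0737 decimal GB, so a 6+P rank of 146 GB drives with 740 extents provides about 740 x 1.0737 GB = 794.6 decimal GB, which is the value that is shown in the table. The CKD columns use a different, smaller extent size (based on a 3390 Model 1 unit), which is why a CKD rank shows more extents but slightly less decimal capacity than the corresponding FB rank.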

Important: An eight-drive installation increment means that the SSD rank that is added is assigned to only one DS8000 server (central electronics complex (CEC)). This configuration is not preferred for performance reasons.


Chapter 9. DS8870 HMC planning and setup

This chapter describes the planning tasks that are involved in the setup of the required DS8870 Hardware Management Console.

This chapter covers the following topics:

� Hardware Management Console overview
� Hardware Management Console software
� HMC activities
� HMC and IPv6
� HMC user management
� External HMC
� Configuration flow


9.1 Hardware Management Console overview

The Hardware Management Console (HMC) is a multi-purpose piece of equipment that provides the services that the client needs to configure and manage the storage and manage some of the operational aspects of the storage system. It also provides the interface where service personnel perform diagnostic and repair tasks. The HMC does not process any of the data from hosts; it is not even in the path that the data takes from a host to the storage. The HMC is a configuration and management station for the DS8870.

The HMC is the focal point for DS8870 management that includes the following functions:

� DS8870 power control

� Storage provisioning

� Advanced Copy Services management

� Interface for onsite service personnel

� Collection of diagnostic data and call home

� Problem management

� Remote support access through various LAN options or by modem

� Storage management through Storage Manager graphical user interface (GUI)

� Connection to Tivoli Key Lifecycle Manager or IBM Security Key Lifecycle Manager (ISKLM) for encryption management functions, if required

� Interface for microcode and other firmware updates

Every DS8870 installation includes an HMC that is in the base frame. A second HMC, which is external to the DS8870, is available as an option to provide redundancy.

9.1.1 Storage HMC hardware

The HMC consists of a notebook. The use of a notebook makes the HMC efficient in many ways, including power consumption. The HMC is mounted on a slide-out tray that pulls forward when the door is fully open. Because of width constraints, the HMC is seated sideways on the tray, on a platter that can slip to the side. When the tray is extended forward, there is a latch on the platter in front of the notebook HMC. Lift this latch to allow the workstation to slide forward, then lift and release it onto the service rail. The service rail is on top of the direct current-uninterruptible power supply (dc-UPS). The DS8870 HMC location is shown in Figure 9-1 on page 243.



Figure 9-1 HMC location in DS8870

The mobile workstation is equipped with adapters for a modem and 10/100/1000 Mb Ethernet. These adapters are routed to special connectors in the rear of the DS8870 frame, as shown in Figure 9-2. These connectors are only in the base frame of a DS8870 and not in any of the expansion frames.

Figure 9-2 DS8870 HMC modem and Ethernet connections (rear)

A second, redundant mobile HMC workstation is orderable and should be used in environments that use encryption management or Advanced Copy Services functions. The second HMC is external to the DS8870 frame. For more information about adding an external HMC, see 9.6, “External HMC” on page 254.


9.1.2 Private Ethernet networks

The HMC communicates with the storage facility through a pair of redundant Ethernet networks, which are designated as the Black Network and Gray Network. Two switches are included in the rear of the DS8870 base frame. Each HMC and each central electronics complex (CEC) is connected to both switches. Figure 9-3 shows how each port is used on the pair of DS8870 Ethernet switches. Do not connect the client network (or any other equipment) to these switches because they are for the DS8870 internal use only.

Figure 9-3 DS8870 Internal Ethernet Switches

In most DS8870 configurations, two or three ports might be unused on each switch.

9.2 Hardware Management Console software

The Linux-based HMC includes the following application servers:

� DS Storage Management server

The DS Storage Management server is the logical server that communicates with the outside world to perform DS8870-specific tasks.

� IBM Enterprise Storage Server Network Interface server (ESSNI)

ESSNI is the logical server that communicates with the DS Storage Management server and interacts with the two CECs of the DS8870. It is also referred to as the DS Network Interface or DSNI.

The DS8870 HMC provides the following management interfaces:

� DS Storage Manager graphical user interface (GUI)
� Data storage command-line interface (DS CLI)
� DS Open application programming interface (DS Open API)
� Web-based user interface (WebUI)

The GUI and the DS CLI are comprehensive, easy-to-use interfaces for a storage administrator to perform DS8870 management tasks to provision the storage arrays, manage application users, and change HMC options. The interfaces can be used interchangeably, depending on the particular task.

The DS Open API provides an interface for external storage management programs, such as Tivoli Storage Productivity Center, to communicate with the DS8870. It channels traffic through the IBM System Storage Common Information Model (CIM) agent, a middleware application that provides a CIM-compliant interface.

Important: The internal Ethernet switches that are shown in Figure 9-3 are for the DS8870 private network only. No client network connection should ever be made directly to these internal switches.



The DS8870 uses a slim, lightweight, and fast interface that is called WebUI that can be used remotely over a VPN by support personnel to check the health status and perform service tasks.

9.2.1 DS Storage Manager GUI

Although the DS Storage Manager (DS SM) runs on the HMC, it is not possible to access it when logged in at the HMC console. It can be accessed remotely by using either a web browser on a workstation that is attached to the same client network as the HMC, or through Tivoli Storage Productivity Center. For details, see 12.1.1, “Accessing the DS GUI” on page 294.

9.2.2 DS command-line interface

The DS command-line interface (CLI), which must be executed in the command environment of an external workstation, is a second option to communicate with the HMC. The DS CLI might be a good choice for configuration tasks when there are many updates to be done. This option avoids the web page load time for each window in the DS Storage Manager GUI.

For more information about DS CLI use and configuration, see Chapter 13, “Configuration with the DS command-line interface” on page 353. For a complete list of DS CLI commands, see IBM System Storage DS8000 Series: Command-Line Interface User’s Guide, GC53-1127.
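
As a simple illustration (the HMC host name, user ID, and password are placeholders), the DS CLI is typically started in interactive mode against the HMC and can then be used to run commands such as lssi, which displays the storage image:

dscli -hmc1 hmc1.example.com -user admin -passwd passw0rd
dscli> lssi

The same -hmc1, -user, and -passwd parameters can also be used in single-shot or script mode, which is convenient for the repetitive configuration tasks that are mentioned above.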

9.2.3 DS Open application programming interface

Calling DS Open application programming interfaces (DS Open APIs) from within a program is a third option to implement communication with the HMC. DS CLI and DS Open API communicate directly with the ESSNI server software that is running on the HMC.

For the DS8000, the CIM agent is preinstalled with the HMC code and is started when the HMC boots. An active CIM agent allows access only to the DS8000s that are managed by the HMC on which it is running. Configuration of the CIM agent must be performed by an IBM service representative by using the DS CIM command-line interface (DSCIMCLI). For more information about the CIM agent, see this website:

http://www.snia.org/forums/smi/tech_programs/lab_program

9.2.4 Web-based user interface

The web-based user interface (WebUI) is a browser-based interface that is used for remote access to system utilities. If a VPN connection is set up, WebUI can be used by support personnel for DS8870 diagnostic tasks, data offloading, and many service actions. The connection uses port 443 over Secure Sockets Layer (SSL) or Transport Layer Security (TLS), which provides a secure and full interface to utilities that are running on the HMC.

Important: Use of a secure VPN or Assist On-site (AOS) VPN allows service personnel to quickly respond to client needs by using the WebUI.


Complete the following steps to log in to the HMC by using the WebUI:

1. Log on at the HMC, as shown in Figure 9-4. Click Log on and launch the Hardware Management Console web application to open the login window and log in. The default user ID is customer and the default password is cust0mer.

Figure 9-4 Hardware Management Console

2. If you are successfully logged in, you see the HMC window, in which you can select Status Overview to see the status of the DS8870. Other areas of interest are shown in Figure 9-5.

Figure 9-5 WebUI main window



Because the HMC WebUI is mainly a services interface, it is not covered here. More information can be found in the Help menu.

9.3 HMC activities

This section covers planning and maintenance tasks for the DS8870 HMC. For more information about overall planning, see Chapter 8, “DS8870 physical planning and installation” on page 217. If a second external HMC was ordered for the DS8870, information about planning that installation is included. If a second, external HMC was not ordered, the information can be safely ignored.

9.3.1 HMC planning tasks

The following tasks are needed to plan the installation or configuration:

� A connection to the client network is needed at the base frame for the internal HMC. Another connection also is needed at the location of the second, external HMC. The connections should be standard CAT5/6 Ethernet cabling with RJ45 connectors.

� IP addresses for the internal and external HMCs are needed. The DS8870 can work with IPv4 and IPv6 networks. For more information about procedures to configure the DS8870 HMC for IPv6, see 9.4, “HMC and IPv6” on page 250.

� If modem access is allowed, a phone line is needed at the base frame for the internal HMC. If ordered, another line also is needed at the location of the second, external HMC. The connections should be standard phone cabling with RJ11 connectors.

� Most users access the DS SM GUI remotely through a browser. You can also use Tivoli Storage Productivity Center in your environment to access the DS SM GUI.

� The web browser to be used on any administration workstation must be supported, as described in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

� The IP addresses of SNMP recipients must be identified if the client wants the DS8870 HMC to send SNMP traps to a monitoring station.

� Email accounts must be identified if the client wants the DS8870 HMC to send email messages for problem conditions.

� The IP addresses of NTP servers must be identified if the client wants the DS8870 HMC to use Network Time Protocol for time synchronization.

� When a DS8870 is ordered, the license and certain optional features must be activated as part of the customization of the DS8870. For more information, see Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259.

� The installation tasks for the optional external HMC must be identified as part of the overall project plan and agreed upon with the responsible IBM personnel.

Important: While R7.2 does not require NIST SP 800-131a compliance, if your account requires it, additional setup tasks are necessary.

Important: Applying increased feature activation codes is a concurrent action.


9.3.2 Planning for microcode upgrades

The following tasks must be considered regarding the microcode upgrades on the DS8870:

� Microcode changes

IBM might release changes to the DS8870 series Licensed Machine Code.

� Microcode installation

An IBM service representative can install the changes. Check whether the new microcode requires new levels of DS Storage Manager, DS CLI, and DS Open API. Plan on upgrading them on the relevant workstations, if necessary.

� Host prerequisites

When you are planning for initial installation or for microcode updates, make sure that all prerequisites for the hosts are identified correctly. Sometimes a new level also is required for the SDD. DS8870 interoperability information can be found at the IBM System Storage Interoperability Center (SSIC) at this website:

http://www.ibm.com/systems/support/storage/config/ssic

To prepare for downloading the drivers, see the host bus adapter (HBA) Support Matrix that is referenced in the Interoperability Matrix and make sure that drivers are downloaded from the IBM Internet site. This requirement is necessary to make sure that drivers are used with the settings that correspond to the DS8870 and not some other IBM storage subsystem.

� Maintenance windows

Normally, the microcode update of the DS8870 is a nondisruptive action. However, any prerequisites that are identified for the hosts (for example, patches, new maintenance levels, or new drivers) could make it necessary to schedule a maintenance window. The host environments can then be upgraded to the level needed in parallel to the microcode update of the DS8870 taking place.

For more information about microcode upgrades, see Chapter 14, “Licensed machine code” on page 399.

9.3.3 Time synchronization

For proper error analysis, it is important to have the date and time information synchronized as much as possible on all components in the DS8870 environment. The components include the DS8870 HMC, the DS Storage Manager, and DS CLI workstations.

With the DS8870, the HMC can use the Network Time Protocol (NTP) service. Customers can specify NTP servers on their internal or external network to provide the time to the HMC. It is a client responsibility to ensure that the NTP servers are working, stable, and accurate. An IBM service representative enables the HMC to use NTP servers (ideally at the time of the initial DS8870 installation). Changes can be made by the customer using the Change Date and Time action under HMC Management on the HMC.

Important: The Interoperability Center includes information about the latest supported code levels. This availability does not necessarily mean that former levels of HBA firmware or drivers are no longer supported. If you are in doubt about any supported levels, contact your IBM representative.


9.3.4 Monitoring DS8870 with the HMC

A client can receive notifications from the HMC through SNMP traps and email messages. Notifications contain information about your storage complex, such as open serviceable events. You can choose one or both of the following notification methods:

� Simple Network Management Protocol (SNMP) traps

For monitoring purposes, the DS8870 uses SNMP traps. An SNMP trap can be sent to a server in the client’s environment, perhaps with System Management Software, which handles the trap based on the MIB that was delivered with the DS8870 software. A MIB that contains all of the traps can be used for integration purposes into System Management Software. The supported traps are described in the documentation that comes with the microcode on the CDs that are provided by the IBM service representative. The IP address to which the traps should be sent must be configured during the initial installation of the DS8870. For more information about the DS8870 and SNMP, see Chapter 15, “Monitoring with Simple Network Management Protocol” on page 407.

� Email

When you choose to enable email notifications, email messages are sent to all the addresses that are defined on the HMC whenever the storage complex encounters a serviceable event or must alert you to other information.

During the planning process, create a list of who must be notified.

Additionally, when the DS8870 is attached to a System z server, a service information message (SIM) notification occurs automatically and therefore requires no setup. A SIM message is displayed on the operating system console if there is a serviceable event. These messages are not sent from the HMC, but from the DS8870 by using the channel connections (usually FICON) that run between the server and the DS8870.

SNMP and email notification options for the DS8870 will require setup on the HMC.

9.3.5 Call home and remote support

The HMC uses outbound (call home) and inbound (remote service) support.

Call home is the capability of the HMC to contact the IBM support center to report a serviceable event. Remote support is the capability of IBM support representatives to connect to the HMC to perform service tasks remotely. If allowed to do so by the setup of the client’s environment, an IBM service support representative could connect to the HMC to perform detailed problem analysis. The IBM service support representative can view error logs and problem logs and initiate trace or dump retrievals.

Remote support can be configured for dial-up connection through a modem, VPN Internet connection, or IBM Tivoli Assist On-site (AOS). Setup of the remote support environment is done by the IBM service representative during initial installation. For more information, see Chapter 16, “Remote support” on page 421.


9.4 HMC and IPv6

The DS8870 HMC can be configured for an IPv6 client network. IPv4 also is still supported.

Configuring the HMC in an IPv6 environment

Usually, the configuration is done by the IBM service representative during the DS8870 initial installation. Complete the following steps to configure the DS8870 HMC client network port for IPv6:

1. Launch and log in to WebUI. For more information, see 9.2.4, “Web-based user interface” on page 245. The HMC Welcome window opens, as shown in Figure 9-6.

Figure 9-6 WebUI Welcome window


2. In the HMC Management window, select Change Network Settings, as shown in Figure 9-7.

Figure 9-7 WebUI HMC Management window

3. Click the LAN Adapters tab.

4. Only eth2 is shown. The private network ports are not editable. Click Details.

5. Click the IPv6 Settings tab.

6. Click Add to add a static IP address to this adapter. Figure 9-8 shows the LAN Adapter Details window where you can configure the IPv6 values.

Figure 9-8 WebUI IPv6 settings window


9.5 HMC user management

User management can be performed by using the DS CLI or the DS GUI. An administrator user ID is preconfigured during the installation of the DS8870 and uses the following defaults:

� User ID: admin
� Password: admin

The password of the admin user ID must be changed before it can be used. The GUI forces you to change the password when you first log in. By using the DS CLI, you log in but you cannot issue any other commands until you change the password. For example, to change the admin user’s password to passw0rd, use the following DS CLI command:

chuser -pw passw0rd admin

After you issue that command, you can issue other commands.

User roles

During the planning phase of the project, a worksheet or a script file was established with a list of all users who need access to the DS GUI or DS CLI. A user can be assigned to more than one group. At least one user (user_id) should be assigned to each of the following roles; a DS CLI example of creating users in these groups follows the list:

� The Administrator (admin) has access to all HMC service methods and all storage image resources, except for encryption functionality. This user authorizes the actions of the Security Administrator during the encryption deadlock prevention and resolution process.

� The Security Administrator (secadmin) has access to all encryption functions. This role requires an Administrator user to confirm the actions that are taken during the encryption deadlock prevention and resolution process.

� The Physical operator (op_storage) has access to physical configuration service methods and resources, such as managing storage complex, storage image, rank, array, and extent pool objects.

� The Logical operator (op_volume) has access to all service methods and resources that relate to logical volumes, hosts, host ports, logical subsystems, and Volume Groups, excluding security methods.

� The Monitor group has access to all read-only, nonsecurity HMC service methods, such as list and show commands.

� The Service group has access to all HMC service methods and resources, such as performing code loads and retrieving problem logs. This group also has the privileges of the Monitor group, excluding security methods.

Important: The DS8870 supports the capability to use a Single Point of Authentication function for the GUI and CLI through a proxy to contact the external repository (for example, LDAP Server). Proxy used is a Tivoli Component (Embedded Security Services, also known as Authentication Service). This capability requires a minimum Tivoli Storage Productivity Center Version 5.1 server.

For more information about LDAP-based authentication, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.

For information on Configuring Jazz for Service Management and DS8000 for LDAP authentication, refer to the IBM Tivoli Storage Productivity Center V5.2 InfoCenter at:

http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/topic/com.ibm.tpc_V52.doc/fqz0_r_config_ds8000_ldap.html


� The Copy Services operator has access to all Copy Services methods and resources, and the privileges of the Monitor group, excluding security methods.

� No access prevents access to any service method or storage image resources. This group is used by an administrator to temporarily deactivate a user ID. By default, this user group is assigned to any user account in the security repository that is not associated with any other user group.
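
As a sketch only (the user names and the temporary password are placeholders), the DS CLI mkuser command creates a user and assigns it to one of these groups:

mkuser -pw tempw0rd -group op_storage itso_storage_admin
mkuser -pw tempw0rd -group monitor itso_monitor

Each new user must change the temporary password at the first login, as described in “Password policies” below.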

Password policies

Whenever a user is added, a password is entered by the administrator. During the first login, this password must be changed. Password settings include the time period (in days) after which passwords expire and a number that identifies how many failed logins are allowed. The user ID is deactivated if an invalid password is entered more times than the limit. Only a user with administrator rights can then reset the user ID with a new initial password.
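
For example (the values are illustrative only; see the Command-Line Interface User’s Guide for the full set of options), the DS CLI chpass command sets these password policy values, here a 90-day expiration period and a limit of five failed login attempts:

chpass -expire 90 -fail 5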

If access is denied for the Administrator because of the number of invalid login attempts, a procedure can be obtained from your IBM support representative to reset the Administrator’s password.

The password for each user account is forced to adhere to the following rules:

� Passwords must contain one character from at least two groups and must be 8 - 16 characters. In addition, the following changes were made:

– Groups now include alphabetic, numeric, and punctuation
– Old rules required at least five alphabetic and one numeric character
– Old rules required first and last characters to be alphabetic

� Passwords cannot contain the user’s ID.

� Initial passwords on new user accounts are expired.

� Passwords that are reset by an administrator are expired.

� Users must change expired passwords at the next logon.

The following password security implementations also are included:

� Password rules are checked when passwords are changed.

� Valid character set, embedded user ID, age, length, and history also are checked.

� Passwords that are invalidated by a change to the password rules remain usable until the next password change.

� Users with invalidated passwords are not automatically disconnected from DS8870.

Important: The available Resource Groups function offers an enhanced security capability that supports the hosting of multiple customers with Copy Services requirements. It also supports a single client with requirements to isolate the data of multiple operating system environments. For more information, see IBM Systems Storage DS8000: Copy Services Scope Management and Resource Groups, REDP-4758.

General rule: Do not set the values of chpass to 0 because this setting indicates that passwords never expire and unlimited login attempts are allowed.

Important: Upgrading an existing storage system to the latest code release does not change old default user acquired rules. Existing default values are retained to prevent disruption. The user might opt to use the new defaults with the chpass –reset command. The command resets all default values to new defaults immediately.


� The following password rules are checked when a user logs on:

– Password expiration, locked out, and failed attempts

– Users with passwords that expire or are locked out by the administrator while logged on are not automatically disconnected from DS8870.

9.6 External HMC

An external, secondary HMC (for redundancy) can be ordered for the DS8870. The external HMC is an optional purchase, but is highly useful. The internal HMC is referred to as HMC1 and the external HMC as HMC2. The two HMCs run in a dual-active configuration, so either HMC can be used at any time. Each HMC will be assigned a role of either primary (normally the internal one) or secondary (normally the external one). Some service functions can only be performed on the primary HMC. For this book, the distinction between the internal and external HMC is only for the purposes of clarity and explanation because they are identical in functionality.

An alternate solution to support HMC redundancy is to use a second internal HMC. This requires a second DS8870 in the account, and tying the two internal HMC networks together. This is known as a cross-coupled configuration, and will allow either HMC to manage either DS8870, thereby providing HMC redundancy.

The DS8870 can perform all storage duties while the HMC is down or offline, but configuration, error reporting, and maintenance capabilities become severely restricted. Any organization with high availability requirements should consider deploying an HMC redundant configuration.

9.6.1 HMC redundancy benefits

HMC redundancy provides the following advantages:

� Enhanced maintenance capability

Because the HMC is the only interface that is available for service personnel, an alternate HMC provides maintenance operational capabilities if the internal HMC fails.

� Greater availability for power management

The use of the HMC is the only way to safely power on or power off the DS8870. An external HMC is necessary to shut down the DS8870 in the event of a failure with the internal HMC.

� Greater availability for remote support over modem

A second HMC with a phone line on the modem provides IBM with a way to perform remote support if an error occurs that prevents access to the first HMC. If network offload (FTP) is not allowed, one HMC can be used to offload data over the modem line while the other HMC is used for troubleshooting. For more information about the HMC modem, see Chapter 16, “Remote support” on page 421.

Important: User names and passwords are case-sensitive. For example, if you create a user name that is called Anthony, you cannot log in by using the user name anthony.

Important: The internal and external HMCs are not available to be used as general purpose computing resources.


� Greater availability of encryption deadlock recovery

If the DS8870 is configured for full disk encryption and an encryption deadlock situation occurs, the use of the HMC is the only way to input a Recovery Key to allow the DS8870 to become operational.

� Greater availability for Advanced Copy Services

Because all Copy Services functions are driven by the HMC, any environment that uses Advanced Copy Services should include dual HMCs for operations continuance.

� Greater availability for configuration operations

All configuration commands must go through the HMC. This requirement is true regardless of whether access is through the Tivoli Storage Productivity Center, Tivoli Storage Productivity Center Enterprise Manager, DS CLI, the DS Storage Manager, or DS Open API with another management program. An external HMC allows these operations to continue in the event of a failure with the internal HMC.

When a configuration or Copy Services command is issued, the DS CLI or DS Storage Manager sends the command to the first HMC. If the first HMC is not available, it automatically sends the command to the second HMC instead. Typically, you do not have to reissue the command.

Any changes that are made by using one HMC are instantly reflected in the other HMC. There is no caching of host data that is done within the HMC, so there are no cache coherency issues.

9.7 Configuration worksheets

During the installation of the DS8870, your IBM service representative customizes the setup of your storage complex based on information that you provide in a set of customization worksheets. Each time that you install a new storage unit or management console, you must complete the customization worksheets before the IBM service representatives can perform the installation.

The customization worksheets are important and must be completed before the installation. It is important that this information is entered into the machine so that preventive maintenance and high availability of the machine are maintained. You can find the customization worksheets in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

By using the customization worksheets, you specify the initial setup for the following items:

� Company information: This information allows IBM service representatives to contact you quickly when they must access your storage complex.

� Management console network settings: You specify the IP address and LAN settings for your management console (MC).

� Remote support (includes call home and remote support settings): You specify whether you want outbound or inbound remote support.

� Notifications (include SNMP trap and email notification settings): You specify the types of notifications that you and others might want to receive.

� Power control: You select and control the various power modes for the storage complex.

� Control Switch settings: You specify certain DS8870 settings that affect host connectivity. You must enter these choices on the control switch settings worksheet so that the service representative can set them during the installation of the DS8870.


9.8 Configuration flow

Complete the following tasks to configure storage in the DS8870. The tasks do not have to be completed in exactly the order that is shown here:

� Install license keys: Activate the license keys for the DS8870.

� Create arrays: Configure the installed disk drives as RAID 5, RAID 6, or RAID 10 arrays.

� Create ranks: Assign each array to a fixed block (FB) rank or a count key data (CKD) rank.

� Create extent pools: Define extent pools, associate each one with Server 0 or Server 1, and assign at least one rank to each extent pool. If you want to take advantage of Storage Pool Striping, you must assign multiple ranks to an extent pool. With current versions of the DS GUI, you can start directly with the creation of extent pools (arrays and ranks are automatically and implicitly defined).

� Create a repository for space efficient volumes.

� Configure I/O ports: Define the type of the Fibre Channel/ Fibre Channel connection (FICON) ports. The port type can be Switched Fabric, Arbitrated Loop, or FICON.

� Create host connections for open systems: Define open systems hosts and their Fibre Channel (FC) host bus adapter (HBA) worldwide port names.

� Create volume groups for open systems: Create volume groups where FB volumes are assigned and select the host attachments for the volume groups.

� Create open systems volumes: Create striped open systems FB volumes and assign them to one or more volume groups.

� Create System z logical control units (LCUs): Define the LCU type and other attributes, such as subsystem identifiers (SSIDs).

� Create System z volumes: Create System z CKD base volumes and (optionally) parallel access volume (PAV) aliases for them to implement the PAV feature.

The actual configuration can be done by using the DS Storage Manager GUI, DS Command-Line Interface, or a combination of both. A novice user might prefer to use the GUI, whereas a more experienced user might use the CLI, particularly for the more repetitive tasks, such as creating large numbers of volumes.
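
The following DS CLI sequence is a minimal sketch of the open systems part of this flow; the object IDs, capacity, worldwide port name, and names are hypothetical placeholders, and a real configuration typically involves many more arrays, volumes, and host connections:

mkarray -raidtype 5 -arsite S1
mkrank -array A0 -stgtype fb
mkextpool -rankgrp 0 -stgtype fb ITSO_FB_0
chrank -extpool P0 R0
setioport -topology scsi-fcp I0030
mkfbvol -extpool P0 -cap 100 -name itso_vol 1000-1003
mkvolgrp -type scsimask -volume 1000-1003 ITSO_VG
mkhostconnect -wwname 10000000C9123456 -profile "IBM pSeries - AIX" -volgrp V0 itso_aix01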

For more information about how to perform the specific tasks, see the following chapters:

� Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259
� Chapter 12, “Configuration by using the DS Storage Manager GUI” on page 293
� Chapter 13, “Configuration with the DS command-line interface” on page 353

Important: IBM service representatives cannot install a DS8870 system or management console until you provide them with the completed customization worksheets.

Important: The configuration flow changes when you activate the Full Disk Encryption feature for the DS8870.


General guidelines for configuring storage

Remember the following general guidelines when you are configuring storage on the DS8870:

� To achieve a well-balanced load distribution, use at least two extent pools, each assigned to one DS8870 internal server (server 0 and server 1). If CKD and FB volumes are required, use at least four extent pools with two CKD and two FB types assigned to each server.

� Address groups of 16 LCUs or logical subsystems (LSSs) will be used for CKD or FB. An address group will have IDs from x0 to xF. The first LCU or LSS used in the address group to support either CKD or FB volumes will determine the usage of the rest of the LCUs or LSSs in the group.

� Volumes of one LCU or LSS can be allocated on multiple extent pools.

� Each extent pool pair should have the same characteristics in terms of RAID type, rpm, and distributed data management (DDM) size.

� Ranks in one extent pool should belong to separate device adapters.

� Assign multiple ranks to extent pools to take advantage of Storage Pool Striping.

� CKD 3380 and 3390 type volumes can be intermixed in an LCU and an extent pool.

� FB guidelines:

– Create a volume group for each server (host), unless LUN sharing is required.

– Place all ports for a single server in one volume group.

– If LUN sharing is required, the following options are available:

• Use separate volumes for servers and place LUNs in multiple volume groups.

• Place servers (clusters) and volumes to be shared in a single volume group.

� I/O port guidelines:

– Distribute host connections of each type (FICON and FCP) evenly across the I/O enclosure.

– Typically for FCP ports only, the access any parameter (when using the DSCLI commands) is used for I/O ports with access to HA ports in the DS8870 that are controlled by SAN zoning. If using the DS SM GUI instead, select the Automatic (any valid I/O port) option on the Defining I/O Ports window.


Chapter 10. IBM System Storage DS8000 features and licensed functions

The activation of licensed functions is described in this chapter.

This chapter covers the following topics:

� IBM DS8870 licensed functions
� Activation of licensed functions
� Licensed scope considerations


10.1 IBM System Storage DS8000 licensed functions

Many of the functions of the DS8000 that we described so far are optional licensed functions that must be enabled for use. The licensed functions are enabled through a 242x licensed function indicator feature, plus a 239x licensed function authorization feature number, in the following manner:

� The licensed functions for DS8870 are enabled through a pair of 242x-961 licensed function indicator feature numbers (FC 07xx and FC 7xxx), plus a Licensed Function Authorization (239x-LFA) feature number (FC 7xxx). These functions and numbers are listed in Table 10-1.

Table 10-1   DS8000 licensed functions

Licensed function for DS8000 with              IBM 242x indicator     IBM 239x function authorization
Enterprise Choice warranty                     feature numbers        model and feature numbers
Encrypted Drive Activation                     1750                   Model LFA, 1750
Encrypted Drive De-Activation                  1754                   Model LFA, 1754
Operating Environment License                  0700 and 70xx          Model LFA, 7030-7060
FICON Attachment                               0703 and 7091          Model LFA, 7091
Thin Provisioning                              0707 and 7071          Model LFA, 7071
Database Protection                            0708 and 7080          Model LFA, 7080
High Performance FICON                         0709 and 7092          Model LFA, 7092
IBM System Storage Easy Tier                   0713 and 7083          Model LFA, 7083
Easy Tier Server                               0715 and 7084          Model LFA, 7085
z/OS Distributed Data Backup                   0714 and 7094          Model LFA, 7094
FlashCopy                                      0720 and 72xx          Model LFA, 7250-7260
Space Efficient FlashCopy                      0730 and 73xx          Model LFA, 7350-7360
Metro/Global Mirror                            0742 and 74xx          Model LFA, 7480-7490
Metro Mirror                                   0744 and 75xx          Model LFA, 7500-7510
Global Mirror                                  0746 and 75xx          Model LFA, 7520-7530
z/OS Global Mirror                             0760 and 76xx          Model LFA, 7650-7660
z/OS Metro/Global Mirror - Incremental Resync  0763 and 76xx          Model LFA, 7680-7690
Parallel Access Volumes                        0780 and 78xx          Model LFA, 7820-7830
HyperPAV                                       0782 and 7899          Model LFA, 7899
I/O Priority Manager                           0784 and 784x          Model LFA, 7840-7850

� The DS8000 provides Enterprise Choice warranty options that are associated with a specific machine type. The x in 242x designates the machine type according to its warranty period, where x can be 1, 2, 3, or 4. For example, a 2424-961 machine type designates a DS8870 storage system with a four-year warranty period.


� The x in 239x can be 6, 7, 8, or 9, according to the associated 242x base unit model. The 2396 function authorizations apply to 2421 base units, 2397–2422, and so on. For example, a 2399-LFA machine type designates a DS8000 Licensed Function Authorization for a 2424 machine with a four-year warranty period.

� The 242x licensed function indicator feature numbers enable the technical activation of the function, subject to a feature activation code that is made available by IBM and applied by the client. The 239x licensed function authorization feature numbers establish the extent of authorization for that function on the 242x machine for which it was acquired.
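
For example, after the feature activation codes are obtained from the IBM data storage feature activation (DSFA) website, they are applied and verified with the DS CLI; the storage image ID and key value below are placeholders only:

applykey -key 1234-5678-9ABC-DEF0-1234-5678-9ABC-DEF0 IBM.2107-75ABCD1
lskey IBM.2107-75ABCD1

The lskey output lists the activated licensed functions with their authorized level and scope.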

10.1.1 Licensing

Some of the orderable feature codes must be activated through the installation of a corresponding license key. These codes are listed in Table 10-1 on page 260. Some features can be directly configured for the client through the IBM marketing representative during the ordering process.

Feature codes that work with a license key

The following features are also available:

� Metro Mirror (MM) is a synchronous way to perform remote replication. Global Mirror (GM) enables asynchronous replication, which is useful for larger distances and lower bandwidth. For more information about Copy Services, see Chapter 6, “IBM DS8000 Copy Services overview” on page 137.

� Metro/Global Mirror (MGM) enables cascaded three-site replications, which combine synchronous mirroring to an intermediate site with asynchronous mirroring from that intermediate site to a third site at a large distance. Combinations with other Copy Services features are possible, and sometimes even needed. Usually, the three-site MGM installation also requires an MM license on the A site with the MGM license (and even a GM license, if after a B breakdown you want to re-synchronize between A and C). At the B site, on top of the MGM, you also need the MM and GM licenses. At the C site, you then need licenses for MGM, GM, and FlashCopy.

� There are two possibilities for FlashCopy:

– The standard FlashCopy Point-in-Time Copy (PTC) license FC72xx, which works with thick (standard) volumes or thin provisioned extent space efficient (ESE) volumes, if you also have the thin provisioning license FC7071.

– The FlashCopy SE FC73xx, with which you make FlashCopies with track space efficient (TSE) target volumes.

TSE volumes are thin volumes with a fine granularity, which saves space. However, they are supported only as FlashCopy targets and are not meant for direct server attachments. Because writes are slower on the small granularity of TSE volumes, the sizing for FlashCopy SE target repositories must be done with sufficient care under performance and capacity aspects. TSE volumes are not handled by Easy Tier rebalancing algorithms.

The more modern way to perform thin provisioning is to use the ESE volumes, which require the thin provisioning license FC7071 that can be combined with the classic PTC FlashCopy license, if needed. The ESE thin volumes also can go into remote-mirroring relations. Because of their larger granularity, ESE volumes are handled with the same good performance as standard (thick) volumes, and they are managed by Easy Tier algorithms. However, ESE thin volumes are not available for System z count key data (CKD) clients.

� The z/OS Global Mirror (zGM) license, which is also known as Extended Remote Copy (XRC), enables z/OS clients to copy data by using System Data Mover (SDM). This copy is an asynchronous copy. For more information, see 6.3.8, “z/OS Global Mirror” on page 156.



� For System z clients, parallel access volumes (PAVs) allow multiple concurrent I/O streams to the same CKD volume. HyperPAV also reassigns the alias addresses dynamically to the base addresses of the volumes that are based on the needs of a dynamically changing workload. Both features result in such large performance gains that for many years they are configured as a de facto standard for mainframe clients, much like the Fibre Channel connection (FICON), which is required for z/OS.

� High-Performance FICON (zHPF, FC#7092) is a feature that uses a protocol extension for FICON that allows the data for multiple commands to be grouped in a single data transfer. This grouping increases the channel throughput for many workload profiles because of the decreased overhead. It works on newer zEnterprise systems such as zEC12, zBC12, z196, z114, or z10, and is recommended for these systems because of the performance gains it offers.

� z/OS Distributed Data Backup (zDDB) is a feature for clients with a mix of mainframe and distributed workloads to use their powerful System z host facilities to back up and restore open systems data. For more information, see IBM System Storage DS8000: z/OS Distributed Data Backup, REDP-4701.

� Easy Tier is available in the following modes:

– Automatic mode, which works on subvolume level (extent level), and allows for auto-tiering in hybrid extent pools. The most-accessed volume parts go to the upper tiers. In single-tier pools, it allows auto-rebalancing if turned on.

– Manual dynamic volume relocation mode works on the level of full volumes and allows volumes to be relocated or restriped to other places in the DS8000 online. It also allows ranks to be moved out of pools. Because this feature is available at no charge, it is usually configured on all DS8000s. For more information, see IBM System Storage DS8000 Easy Tier, REDP-4667.

As part of Easy Tier generation 5 features and based on the same license code, new functions can be implemented as described below:

– Easy Tier Application provides a new application programming interface (API) for software developers to use to have applications direct Easy Tier data placement on the DS8870. This also enables clients to assign (“pin”) volumes to a particular tier within an Easy Tier pool to meet performance and cost requirements. For more information, see IBM System Storage DS8000 - Easy Tier Application, REDP-5014.

– Easy Tier Heat Map Transfer automatically replicates heat map to remote systems to ensure that they are also optimized for performance and cost after a planned or unplanned outage. For more information, see IBM Easy Tier Heat Map Transfer Utility, REDP-5015.

– Easy Tier Server is a feature that can automatically move a copy of the hottest data to an IBM Power Systems server direct-attached local flash or solid-state drive (SSD) drawer, improving performance up to five times by caching the most frequently accessed data to the SSD read cache on the Power Systems servers. For more information, see IBM System Storage DS8000 Easy Tier Server, REDP-5013.

� I/O Priority Manager is the quality of service (QoS) feature for the System Storage DS8000 series. When larger extent pools are used that include many servers that are competing for the same rank and device adapter resources, clients can define Performance Groups of higher-priority and lower-priority servers and volumes. In overload conditions, the I/O Priority Manager throttles the lower-priority Performance Groups to maintain service on the higher-priority groups. For more information, see DS8000 I/O Priority Manager, REDP-4760.


� IBM Database Protection, FC7080: With this feature, you receive the highest level of protection for Oracle databases through additional end-to-end checks that detect data corruption on the way through the different storage area network (SAN) and storage hardware layers. This feature complies with the Oracle Hardware-Assisted Resilient Data (HARD) initiative. For more information about this feature, see IBM Database Protection User’s Guide, GC27-2133-02, which is available at this website:

http://www.ibm.com/support/docview.wss?uid=ssg1S7003786

Feature code ordering options without the need of a license key

The following ordering options of the DS8870 do not require the client to install a license key:

� Earthquake Resistance Kit, FC1906: The Earthquake Resistance Kit is an optional seismic kit for stabilizing the storage unit racks so that the racks comply with IBM earthquake resistance standards. It includes cross-braces on the front and rear of the racks, and the racks are secured to the floor. These stabilizing features limit potential damage to critical DS8000 machine components and help to prevent human injury.

� Overhead cabling: For more information about FC1400 (top-exit bracket) and FC1101 (ladder), see 8.2.3, “Overhead cabling features” on page 222. One ladder per site is sufficient.

� Shipping Weight Reduction, FC0200: If your site has delivery weight constraints, IBM offers this option, which limits the maximum shipping weight of the individually packed components to 909 kg (2000 lb). Because this feature increases installation time, it should be ordered only when required.

� Extended Powerline Disturbance Feature, FC1055: This feature extends the available uptime in case both power cords lose the external power, as described in “Extended Power Line Disturbance feature” on page 225.

� Tivoli Key Lifecycle Manager server, FC1760: This feature is used for Full Disk Encryption (FDE). It consists of System x server hardware running SUSE Linux, which can run one instance of the Tivoli Key Lifecycle Manager software to manage the encryption keys.

� Epic (FC0964), VMware VAAI (FC0965): For clients who want to use the Epic healthcare software or VMware VAAI, these features should be selected by the IBM marketing representative. For the VAAI XCOPY/Clone primitive, the PTC (FlashCopy) license also is needed.

� IBM ProtecTIER® indicator (FC0960), SVC (FC0963), IBM Storwize® V7000 virtualization (FC0961), N series Gateway (FC0962) indicator: If the DS8870 is used or virtualized behind any of these deduplication or virtualization devices or a NAS gateway, select the respective feature to indicate this.

For more information about these features, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Encryption

If encryption is wanted, Feature Code FC1750 should be included in the order. This feature enables the client to download the function authorization from the IBM data storage feature activation (DSFA) website (see 10.2, “Activating licensed functions” on page 266) and to elect to turn on encryption. Feature Code FC1754 is used to disable encryption. Machines with the encryption enablement key FC1750 ordered, or even applied, can still be run unencrypted. However, if encryption is wanted, it should be enabled at first use. For more information about disk encryption, see IBM System Storage DS8000 Disk Encryption, REDP-4500.


10.1.2 Licensing: cost structure

IBM offers value-based licensing for the Operating Environment License (OEL). It is priced based on disk drive performance, capacity, speed, and other characteristics to provide more flexible and optimal price and performance configurations. As shown in Table 10-2, each feature indicates a number of value units.

Table 10-2 Operating Environment License: value unit indicators

Feature number   Description
7050             OEL – Value unit inactive indicator
7051             OEL – 1 value unit indicator
7052             OEL – 5 value unit indicator
7053             OEL – 10 value unit indicator
7054             OEL – 25 value unit indicator
7055             OEL – 50 value unit indicator
7060             OEL – 100 value unit indicator
7065             OEL – 200 value unit indicator

These features are required in addition to the per-TB OEL features (#703x–704x). For each disk drive set, the corresponding number of value units must be configured, as shown in Table 10-3.

Table 10-3 DS8870 value unit requirements that are based on drive size, type, and speed

Drive set feature number   Drive size   Drive type               Drive speed   Encryption capable   Value units required
6156                       400 GB       SSD (half drive set)     N/A           Yes                  18.2
6158                       400 GB       SSD                      N/A           Yes                  36.4
5108                       146 GB       SAS                      15 K rpm      Yes                  4.8
5308                       300 GB       SAS                      15 K rpm      Yes                  6.8
5708                       600 GB       SAS                      10 K rpm      Yes                  11.5
5758                       900 GB       SAS                      10 K rpm      Yes                  16.0
5768                       1.2 TB       SAS                      10 K rpm      Yes                  20.0
5858                       3 TB         NL SAS (half drive set)  7.2 K rpm     Yes                  13.5
5868                       4 TB         NL SAS (half drive set)  7.2 K rpm     Yes                  16.2
5209 (CoD)                 146 GB       SAS                      15 K rpm      Yes                  4.8
5309 (CoD)                 300 GB       SAS                      15 K rpm      Yes                  6.8
5709 (CoD)                 600 GB       SAS                      10 K rpm      Yes                  11.5
5759 (CoD)                 900 GB       SAS                      10 K rpm      Yes                  16.0
5769 (CoD)                 1.2 TB       SAS                      10 K rpm      Yes                  20.0
5859 (CoD)                 3 TB         NL SAS (half drive set)  7.2 K rpm     Yes                  13.5
5869 (CoD)                 4 TB         NL SAS (half drive set)  7.2 K rpm     Yes                  16.2

CoD denotes drive types that are available in a Capacity on Demand model.

The HyperPAV license is a flat-fee, add-on license that requires the PAV license to be installed. High-Performance FICON is also a flat-fee license.

Easy Tier is a licensed feature that is available at no charge. Therefore, it is usually configured by default. Easy Tier Server is also a no-cost feature, but it should be configured only when it is used.

The Database Protection and the IBM z/OS Distributed Data Backup features are also available at no charge.

The license for Space-Efficient FlashCopy does not require the FlashCopy (PTC) license. As with ordinary FlashCopy, FlashCopy SE is licensed in tiers by the gross amount of TB that is installed. FlashCopy (PTC) and FlashCopy SE can be complementary licenses. FlashCopy SE performs FlashCopies with track space efficient (TSE) target volumes. When FlashCopies to standard target volumes are done, the PTC license is also required.

If you want to work with ESE thin volumes, the thin provisioning license is needed in addition to PTC.

The MM and GM licenses also can be complementary features.
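To illustrate how the value-unit requirements in Table 10-3 add up for a specific drive configuration, the following minimal Python sketch is an illustration only (the helper name and the example order are invented here, and the per-set values are copied from Table 10-3). It computes the total number of OEL value units that an order requires:

# Value units required per drive set, copied from Table 10-3 (feature code: value units).
VALUE_UNITS_PER_SET = {
    "6156": 18.2,  # 400 GB SSD (half drive set)
    "6158": 36.4,  # 400 GB SSD
    "5108": 4.8,   # 146 GB SAS 15 K rpm
    "5308": 6.8,   # 300 GB SAS 15 K rpm
    "5708": 11.5,  # 600 GB SAS 10 K rpm
    "5758": 16.0,  # 900 GB SAS 10 K rpm
    "5768": 20.0,  # 1.2 TB SAS 10 K rpm
    "5858": 13.5,  # 3 TB NL SAS (half drive set)
    "5868": 16.2,  # 4 TB NL SAS (half drive set)
}

def total_value_units(order):
    """Sum the value units for a dict of {drive set feature code: number of sets ordered}."""
    return sum(VALUE_UNITS_PER_SET[feature] * quantity for feature, quantity in order.items())

# Hypothetical order: four 600 GB SAS drive sets and one 3 TB NL SAS half drive set.
order = {"5708": 4, "5858": 1}
print(total_value_units(order))  # 4 x 11.5 + 13.5 = 59.5 value units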

Important: Check with an IBM representative or consult the IBM website for an up-to-date list of available drive types.

Information: New storage systems and future expansions for DS8870 will be delivered only with FDE disks. Existing DS8800 non-FDE disks are supported in a DS8800–DS8870 model conversion.

Tip: For more information about the features and the considerations you must have when DS8000 licensed functions are ordered, see the following announcement letters:

� IBM System Storage DS8870 (IBM 242x)
� IBM System Storage DS8870 (M/T 239x) high performance flagship – Function Authorizations

IBM announcement letters are available at this website:

http://www.ibm.com/common/ssi/index.wss

Use the DS8870 keyword as a search criterion in the Contents field.


10.2 Activating licensed functions

Activating the license keys of the DS8000 can be done after the IBM service representative completes the storage complex installation. Based on your 239x licensed function order, you must obtain the necessary keys from the IBM DSFA website at this location:

http://www.ibm.com/storage/dsfa

You can activate all license keys at the same time (for example, on initial activation of the storage unit), or you can activate them individually (for example, when additional keys are ordered later).

Before you connect to the IBM DSFA website to obtain your feature activation codes, ensure that you have the following items:

� The IBM License Function Authorization documents. If you are activating codes for a new storage unit, these documents are included in the shipment of the storage unit. If you are activating codes for an existing storage unit, IBM sends the documents to you in an envelope.

� A USB memory device can be used to transfer your activation codes if you cannot access the DS Storage Manager from the system that you are using to access the DSFA website. Instead of downloading the activation codes in softcopy format, you can also print the activation codes and enter them manually by using the DS Storage Manager GUI or the data storage command-line interface (DS CLI). However, this process is slow and error-prone because the activation keys are 32-character strings.

10.2.1 Obtaining DS8000 machine information

To obtain license activation keys from the DSFA website, you need to know the serial number and machine signature of your DS8000 unit.

You can obtain the required information by using the DS Storage Manager GUI or DS CLI. These options are described next.

DS Storage Manager GUI

Complete the following steps to obtain the required information by using the DS Storage Manager GUI:

1. Start the DS Storage Manager application. Log in by using a user ID with administrator access. If you are accessing the machine for the first time, contact your IBM service representative for the user ID and password. After a successful login, the DS8000 Storage Manager Overview window opens. Move your cursor to the top left icon so that a pop-up window opens. Select System Status, as shown in Figure 10-1 on page 267.


Figure 10-1 DS8000 Storage Manager GUI: Overview window

2. Click the Serial Number under the Storage Image header, then click Action. Move your cursor to Storage Image and select Add Activation Key, as shown in Figure 10-2.

Figure 10-2 DS8000 Storage Manager: Add Activation Key


3. The Add Activation Key window shows the Serial number and the Machine signature information about your DS8000 Storage Image, as shown in Figure 10-3.

Figure 10-3 DS8000 machine signature and serial number

Gather the following information about your storage unit:

– The Machine Type – Model Number – Serial Number (MTMS) is a string that contains the machine type, model number, and serial number. The machine type is 242x and the machine model is 961. The last seven characters of the string are the machine's serial number (XYABCDE). The serial number always ends with 0 (zero).

– The machine signature, which is found in the Machine signature field and uses the following format: ABCD-EF12-3456-7890.

DS command-line interface

To obtain the required information by using the DS CLI, log on to the DS CLI and issue the lssi and showsi commands, as shown in Example 10-1.

Example 10-1 Obtain DS8000 information by using DS CLI

dscli> lssi
Date/Time: 23 October 2013 14:19:27 CEST IBM DSCLI Version: 7.7.20.555 DS: -
Name         ID               Storage Unit     Model WWNN             State  ESSNet
====================================================================================
DS8870_ATS02 IBM.2107-75ZA571 IBM.2107-75ZA570 961   5005076303FFD5AA Online Enabled

dscli> showsi ibm.2107-75za571
Date/Time: 23 October 2013 14:52:25 CEST IBM DSCLI Version: 7.7.20.555 DS: ibm.2107-75za571
Name             DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD5AA
Signature        3f90-1234-5678-9002
State            Online
ESSNet           Enabled
Volume Group     V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.3 GB
Processor Memory 253.6 GB
MTS              IBM.2421-75ZA570
numegsupported   1
ETAutoMode       all
ETMonitor        all
IOPMmode         Managed
ETCCMode         Enabled
ETHMTMode        Enabled

Gather the following information about your storage unit:

– The Machine Type – Serial Number (MTS), which is a string that contains the machine type and the serial number. The machine type is 242x and the last seven characters of the string are the machine's serial number (XYABCDE), which always ends with 0 (zero).

– The model, which is always 961.

– The machine signature, which is found in the Machine signature field and uses the following format: ABCD-EF12-3456-7890.
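If you collect this information for several storage units, the fields can also be pulled out of the showsi output programmatically. The following Python sketch is only an illustration (the function name is invented, and it assumes the field layout shown in Example 10-1); it is not an IBM-provided tool:

def parse_showsi(output):
    """Extract machine type, serial number, and signature from showsi output text."""
    info = {}
    for line in output.splitlines():
        if line.startswith("MTS"):
            # For example, "MTS IBM.2421-75ZA570": machine type 2421, serial number 75ZA570.
            mts = line.split()[-1]
            info["machine_type"] = mts.split(".")[1].split("-")[0]
            info["serial_number"] = mts.split("-")[1]   # always ends with 0 (zero)
        elif line.startswith("Signature"):
            info["signature"] = line.split()[-1]        # format ABCD-EF12-3456-7890
    return info

sample = "Signature 3f90-1234-5678-9002\nMTS IBM.2421-75ZA570\n"
print(parse_showsi(sample))
# {'signature': '3f90-1234-5678-9002', 'machine_type': '2421', 'serial_number': '75ZA570'}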

Use Table 10-4 to document this information, which is entered in the IBM DSFA website to retrieve the activation codes.

Table 10-4 DS8000 machine information

Property                     Your storage unit’s information
Machine type
Machine’s serial number
Machine signature

Note: The showsi command can take the SFI serial number as a possible argument. The SFI serial number is identical to the storage unit serial number, except that it ends with 1 instead of 0 (zero).

10.2.2 Obtaining activation codes

Complete the following steps to obtain the activation codes:

1. As shown in Figure 10-4 on page 270, connect to the DSFA website at the following address:

http://www.ibm.com/storage/dsfa

Note: A DS8800 is shown in some of the following figures; however, the steps are identical for all models of the DS8000 family.


Figure 10-4 IBM DSFA website

2. Click DS8000 series. The “Select DS8000 series machine” window opens, as shown in Figure 10-5. Select the appropriate 242x Machine type.

Figure 10-5 DS8000 DSFA machine information entry window


3. Enter the machine information that was collected in Table 10-4 on page 269 and click Submit. The “View machine summary” window opens, as shown in Figure 10-6.

Figure 10-6 DSFA View machine summary window

The “View machine summary” window shows the total purchased licenses and how many of them are currently assigned. The example in Figure 10-6 shows a storage unit where all licenses are assigned. When assigning licenses for the first time, the Assigned field shows 0.0 TB.


4. Click Manage activations. The “Manage activations” window opens, as shown in Figure 10-7. For each license type and storage image, enter the following information that is assigned to the storage image:

– License scope: fixed block (FB) data, count key data (CKD), or All
– Capacity value (in TB) to assign to the storage image

The capacity values are expressed in decimal terabytes with 0.1-TB increments. The sum of the storage image capacity values for a license cannot exceed the total license value.

Figure 10-7 DSFA Manage activations window
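The constraint is simple arithmetic: the capacities are assigned in 0.1-TB steps and their sum per license cannot exceed what was purchased. A minimal Python sketch of that check, with hypothetical values, is shown below:

def can_assign(purchased_tb, assigned_tb_per_image):
    """Return True if the per-image capacity assignments fit within the purchased license."""
    # Capacities are decimal terabytes in 0.1-TB increments, as on the Manage activations page.
    return round(sum(assigned_tb_per_image), 1) <= purchased_tb

print(can_assign(65.0, [40.0, 25.0]))  # True: exactly the purchased 65 TB
print(can_assign(65.0, [40.0, 25.1]))  # False: 65.1 TB exceeds the 65-TB license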


5. After the values are entered, click Submit. Select Retrieve activation codes. The Retrieve activation codes window opens, which shows the license activation codes for the storage image, as shown in Figure 10-8. Print the activation codes or click Download to save the activation codes in an XML file that you can import into the DS8000.

Figure 10-8 DSFA Retrieve activation codes window

10.2.3 Applying activation codes by using the GUI

Use this process to apply the activation codes on your DS8000 storage images by using the DS Storage Manager GUI. After the codes are applied, you can begin to configure storage on a storage image.

Important: In most situations, the DSFA application can locate your 239x licensed function authorization record when you enter the DS8000 (242x) serial number and signature. However, if the 239x licensed function authorization record is not attached to the 242x record, you must assign it to the 242x record by using the Assign function authorization link on the DSFA application. In this case, you need the 239x serial number (which you can find on the License Function Authorization document).

Important: The initial enablement of any optional DS8000 licensed function is a concurrent activity (assuming the appropriate level of microcode is installed on the machine for the function).

The following activation activities are disruptive and require an initial machine load (IML) or reboot of the affected image:

� Removal of a DS8000 licensed function to deactivate the function.

� A lateral change or reduction in the license scope. A lateral change is defined as changing the license scope from FB to CKD or from CKD to FB. A reduction is defined as changing the license scope from all physical capacity (ALL) to only FB or only CKD capacity.


The easiest way to apply the feature activation codes is to download the activation codes from the IBM DSFA website to your local computer and import the file into the DS Storage Manager. If you can access the DS Storage Manager from the same computer that you use to access the DSFA website, you can copy the activation codes from the DSFA window and paste them into the DS Storage Manager window. The third option is to manually enter the activation codes in the DS Storage Manager from a printed copy of the codes.

Complete the following steps to apply the activation codes (this method applies the activation codes by using your local computer or a USB drive):

1. Click Action under “Activation keys information” and select Import Key File, as shown in Figure 10-9.

Figure 10-9 DS8000 Storage Manager GUI: select Import Key File

Attention: Before you begin this task, you must resolve any current DS8000 problems. Contact IBM support for assistance in resolving these problems.


2. Click Browse and locate the downloaded key file on your computer, as shown in Figure 10-10.

Figure 10-10 Apply activation codes by importing the key from the file

3. After the file is selected, click Next. The key name is shown in the Confirmation window. Click Finish to complete the new key activation procedure, as shown in Figure 10-11.

Figure 10-11 Apply activation codes: Confirmation window


Your license is now listed in the table. In the example, there is one OEL license active, as shown in Figure 10-12.

Figure 10-12 Apply Activation Codes window

4. Click OK to exit the Apply Activation Codes wizard.

Another way to enter the activation keys is to copy the activation keys from the DSFA window and paste them in the Storage Manager window, as shown in Figure 10-13.

Figure 10-13 Enter license keys manually

A third way to enter the activation keys is to enter the keys manually from a printed copy of the codes. Use Enter or the spacebar to separate the keys. Click Finish to complete the new key activation procedure.


5. The activation codes are displayed, as shown in Figure 10-14.

Figure 10-14 Activation codes that are applied

10.2.4 Applying activation codes by using the DS CLI

The license keys also can be activated by using the DS CLI. This option is available only if the machine OEL was activated and you have a console with a compatible DS CLI program installed.

Complete the following steps to apply activation codes by using the DS CLI:

1. Use the showsi command to display the DS8000 machine signature, as shown in Example 10-2.

Example 10-2 DS CLI showsi command

dscli> showsi ibm.2107-75za571
Date/Time: 23 October 2013 14:39:26 CEST IBM DSCLI Version: 7.7.20.555 DS: -
Name             DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD4D4
Signature        3f90-1234-5678-9002
State            Online
ESSNet           Enabled
Volume Group     V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.7 GB
Processor Memory 253.7 GB
MTS              IBM.2421-75ZA570
numegsupported   1
ETAutoMode       all
ETMonitor        all
IOPMmode         Managed
ETCCMode         Enabled
ETHMTMode        Enabled

2. Obtain your license activation codes from the IBM DSFA website, as described in 10.2.2, “Obtaining activation codes” on page 269.

3. Enter an applykey command at the following dscli command prompt. The -file parameter specifies the key file. The second parameter specifies the storage image:

dscli> applykey -file c:\2421_75ZA570.xml IBM.2107-75ZA571

4. Verify that the keys were activated for your storage unit by issuing the DS CLI lskey command, as shown in Example 10-3.

Example 10-3 Using lskey to list installed licenses

dscli> lskey ibm.2107-75za571
Date/Time: 23 October 2013 14:44:01 CEST IBM DSCLI Version: 7.7.20.555 DS: ibm.2107-75za571
Activation Key                              Authorization Level (TB) Scope
==========================================================================
Easy Tier Server                            on                       All
Encryption Authorization                    on                       All
Global mirror (GM)                          170                      All
High Performance FICON for System z (zHPF)  on                       CKD
I/O Priority Manager                        170                      All
IBM FlashCopy SE                            170                      All
IBM HyperPAV                                on                       CKD
IBM System Storage DS8000 Thin Provisioning on                       All
IBM System Storage Easy Tier                on                       All
IBM database protection                     on                       FB
IBM z/OS Distributed Data Backup            on                       FB
Metro/Global mirror (MGM)                   170                      All
Metro mirror (MM)                           170                      All
Operating environment (OEL)                 170                      All
Parallel access volumes (PAV)               170                      CKD
Point in time copy (PTC)                    170                      All
RMZ Resync                                  170                      CKD
Remote mirror for z/OS (RMZ)                170                      CKD

For more information about the DS CLI, see IBM System Storage DS: Command-Line Interface User’s Guide for DS8000 series, GC53-1127.


10.3 Licensed scope considerations

For the PTC function and the Remote Mirror and Copy functions, you can set the scope of these functions to be FB, CKD, or All. You must decide which scope to set, as shown in Figure 10-7 on page 272. In that example, the Storage Facility Image includes 65 TB of PTC (FlashCopy), and the user decided to set the scope to All. If the scope was set to FB, you cannot use FlashCopy with any CKD volumes that are configured later. However, it is possible to return to the DSFA website at any time and change the scope from CKD or FB to All, or from All to CKD or FB. In every case, a new activation code is generated, which you can download and apply.

10.3.1 Why you have a choice

Imagine a simple scenario in which a storage system has 20 TB of capacity. Of this capacity, 15 TB is configured as FB and 5 TB is configured as CKD. If you want to use PTC only for the CKD volumes, you can purchase only 5 TB of PTC and set the scope of the PTC activation code to CKD. If you later no longer need PTC for CKD but want to use it for FB instead, there is no need to buy a new PTC license; obtain a new activation code from the DSFA website by changing the scope to FB.

When you decide which scope to set, there are several scenarios to consider. Use Table 10-5 to guide you in your choice. This table applies to PTC and Remote Mirror and Copy functions.

Table 10-5 Deciding which scope to use

Scenario  PTC or Remote Mirror and Copy function usage consideration                                           Suggested scope setting
1         This function is only used by open systems hosts.                                                    Select FB.
2         This function is only used by System z hosts.                                                        Select CKD.
3         This function is used by open systems and System z hosts.                                            Select All.
4         This function is only needed by open systems hosts, but we might use it for System z at some point.  Select FB and change to scope All if and when the System z requirement occurs.
5         This function is only needed by System z hosts, but we might use it for open systems hosts.          Select CKD and change to scope All if and when the open systems requirement occurs.
6         This function is set to All.                                                                         Leave the scope set to All. Changing the scope to CKD or FB requires a disruptive outage.

Any scenario that changes from FB or CKD to All does not require an outage. If you choose to change from All to CKD or FB, you must have a disruptive outage. If you are certain that your machine will be used only for one storage type (for example, only CKD or only FB), you also can safely use the All scope.

License scope: Changing the license scope of the OEL license is a disruptive action that requires a power cycle of the machine.
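The guidance in Table 10-5 can also be expressed as a small decision helper. The following Python sketch is only an illustration of the table (the function and parameter names are invented for this example):

def suggested_scope(used_by_open_systems, used_by_system_z):
    """Suggest a license scope per Table 10-5 for PTC and Remote Mirror and Copy functions."""
    if used_by_open_systems and used_by_system_z:
        return "All"   # scenario 3; once set to All, leave it at All (scenario 6)
    if used_by_open_systems:
        return "FB"    # scenarios 1 and 4; change to All if a System z requirement occurs
    if used_by_system_z:
        return "CKD"   # scenarios 2 and 5; change to All if an open systems requirement occurs
    return None        # no usage defined yet; All is also safe if only one storage type will ever be used

print(suggested_scope(used_by_open_systems=True, used_by_system_z=False))  # FB
print(suggested_scope(used_by_open_systems=True, used_by_system_z=True))   # All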


10.3.2 Using a feature for which you are not licensed

In Example 10-4, there is a storage system where the scope of the PTC license is set to FB. This setting means that we cannot use PTC to create CKD FlashCopies. When we try, the command fails. However, we can still create CKD volumes because the OEL key scope is All.

Example 10-4 Trying to use a feature for which you are not licensed

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        FB       The FlashCopy scope is currently set to FB.

dscli> lsckdvol
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Name ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
=========================================================================================
-    0000 Online   Normal    Normal      3390-3    CKD Base -        P2      3339
-    0001 Online   Normal    Normal      3390-3    CKD Base -        P2      3339

dscli> mkflash 0000:0001       We are not able to create CKD FlashCopies
Date/Time: 05 October 2013 14:20:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
CMUN03035E mkflash: 0000:0001: Copy Services operation failure: feature not installed


10.3.3 Changing the scope to All

In Example 10-5, we logged on to DSFA and changed the scope for the PTC license to All. We then applied this new activation code. We can now perform a CKD FlashCopy.

Example 10-5 Changing the scope from FB to All

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        FB       The FlashCopy scope is currently set to FB

dscli> applykey -key 1234-5678-9FEF-C232-51A7-429C-1234-5678 IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image IBM.2107-7520391.

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        All      The FlashCopy scope is now set to All

dscli> lsckdvol
Date/Time: 05 October 2013 15:51:53 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Name ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
=========================================================================================
-    0000 Online   Normal    Normal      3390-3    CKD Base -        P2      3339
-    0001 Online   Normal    Normal      3390-3    CKD Base -        P2      3339

dscli> mkflash 0000:0001       We are now able to create CKD FlashCopies
Date/Time: 05 October 2013 16:09:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.


10.3.4 Changing the scope from All to FB

In Example 10-6, we decide to increase the storage capacity for the entire storage system. However, we do not want to purchase any more PTC licenses because PTC is used only by open systems hosts and this new capacity is to be used only for CKD storage. Therefore, we change the scope to FB: we log on to the DSFA website and create a new activation code. We apply the code, but discover that because this change is effectively a downward change (decreasing the scope), it does not take effect until we have a disruptive outage on the DS8000.

Example 10-6 Changing the scope from All to FB

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        All      The FlashCopy scope is currently set to All

dscli> applykey -key ABCD-EFAB-EF9E-6B30-51A7-429C-1234-5678 IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image IBM.2107-7520391.

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        FB       The FlashCopy scope is now set to FB

dscli> lsckdvol
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Name ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
=========================================================================================
-    0000 Online   Normal    Normal      3390-3    CKD Base -        P2      3339
-    0001 Online   Normal    Normal      3390-3    CKD Base -        P2      3339

dscli> mkflash 0000:0001
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.

In this scenario, we made a downward license feature key change. We must schedule an outage of the storage image. We should make the downward license key change only immediately before this outage is taken.

Consideration: Making a downward license change and then not immediately performing a reboot of the storage image is not supported. Do not allow your DS8000 to be in a position where the applied key is different from the reported key.


10.3.5 Applying an insufficient license feature key

In Example 10-7, there is a scenario in which a DS8000 has a 5-TB OEL, FlashCopy (PTC), and Metro Mirror license. We increased the storage capacity and, as a result, increased the license key for OEL and MM. However, we forgot to increase the license key for FlashCopy (PTC). In Example 10-7, you can see that the FlashCopy license is only 5 TB. However, we are still able to create FlashCopies.

Example 10-7 Insufficient FlashCopy license

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           10                       All
Operating environment (OEL) 10                       All
Point in time copy (PTC)    5                        All

dscli> mkflash 1800:1801
Date/Time: 05 October 2013 17:46:14 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 1800:1801 successfully created.

This configuration is still valid because the configured ranks on the machine total less than 5 TB of storage. In Example 10-8, we try to create a rank that brings the total rank capacity above 5 TB. This command fails.

Example 10-8 Creating a rank when we are exceeding a license key

dscli> mkrank -array A1 -stgtype CKD
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-7520391
CMUN02403E mkrank: Unable to create rank: licensed storage amount has been exceeded

10.3.6 Calculating how much capacity is used for CKD or FB

To calculate how much disk space is used for CKD or FB storage, we must combine the output of two commands. The following simple rules apply:

� License key values are decimal numbers. Therefore, 5 TB of license is 5000 GB.
� License calculations use the disk size number that is shown by the lsarray command.
� License calculations include the capacity of all DDMs in each array site.
� Each array site is eight DDMs.

To make the calculation, we use the lsrank command to determine which array each rank uses and whether the rank is used for FB or CKD storage. We use the lsarray command to obtain the disk size that is used by each array. Then, we multiply the disk size (146, 300, 600, 1200, or 4000 GB) by eight (for the eight DDMs in each array site).

Important: To configure the additional ranks, we must first increase the license key capacity of every installed license. In this example, these licenses include the FlashCopy license.


In Example 10-9, the lsrank command tells us that rank R0 uses array A0 for CKD storage. The lsarray command tells us that array A0 uses 300-GB disk drive modules (DDMs). Therefore, we multiply 300 (the DDM size) by 8, giving us 300 × 8 = 2400 GB, which means that we are using 2400 GB for CKD storage.

Rank R4 in Example 10-9 is based on array A6. Array A6 uses 146-GB DDMs. Therefore, multiply 146 by 8, giving us 146 × 8 = 1168 GB, which means that we are using 1168 GB for FB storage.

Example 10-9 Displaying array site and rank usage

dscli> lsrank
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-75ABTV1
ID Group State  datastate Array RAIDtype extpoolID stgtype
==========================================================
R0 0     Normal Normal    A0    5        P0        ckd
R4 0     Normal Normal    A6    5        P4        fb

dscli> lsarray
Date/Time: 05 October 2013 14:19:17 CEST IBM DSCLI Version: 7.7.20.220 DS: IBM.2107-75ABTV1
Array State      Data   RAIDtype  arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0    Assigned   Normal 5 (6+P+S) S1     R0   0       300.0
A1    Unassigned Normal 5 (6+P+S) S2     -    0       300.0
A2    Unassigned Normal 5 (6+P+S) S3     -    0       300.0
A3    Unassigned Normal 5 (6+P+S) S4     -    0       300.0
A4    Unassigned Normal 5 (7+P)   S5     -    0       146.0
A5    Unassigned Normal 5 (7+P)   S6     -    0       146.0
A6    Assigned   Normal 5 (7+P)   S7     R4   0       146.0
A7    Assigned   Normal 5 (7+P)   S8     R5   0       146.0

For CKD scope licenses, we use 2400 GB. For FB scope licenses, we use 1168 GB. For licenses with a scope of All, we use 3568 GB. By using the limits that are shown in Example 10-7 on page 283, we are within scope for all licenses.

If we combine Example 10-7 on page 283, Example 10-8 on page 283, and Example 10-9, we can also see why the mkrank command in Example 10-8 on page 283 failed. In Example 10-8 on page 283, we tried to create a rank by using array A1. Array A1 uses 300-GB DDMs. This configuration means that for FB scope and All scope licenses, we would use 300 × 8 = 2400 GB more licensed capacity.

In Example 10-7 on page 283, we had only 5 TB of FlashCopy license with a scope of All. This configuration means that the total configured capacity cannot exceed 5000 GB. Because we already use 3568 GB (2400 GB CKD + 1168 GB FB), the attempt to add 2400 GB fails because the total exceeds the 5 TB license. If we increase the size of the FlashCopy license to 10 TB, we can have 10,000 GB of total configured capacity, so the rank creation succeeds.
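The calculation that is described above is easy to script. The following Python sketch (an illustration only, not an IBM tool) reproduces the numbers of this section by combining the rank-to-array assignments from lsrank with the DDM sizes from lsarray and multiplying by the eight DDMs in each array site:

# Rank -> (array, storage type), as reported by lsrank in Example 10-9.
ranks = {"R0": ("A0", "ckd"), "R4": ("A6", "fb")}

# Array -> DDM capacity in decimal GB, as reported by lsarray in Example 10-9.
array_ddm_gb = {"A0": 300.0, "A1": 300.0, "A6": 146.0}

DDMS_PER_ARRAY_SITE = 8  # license calculations count all eight DDMs in each array site

def used_capacity_gb(ranks, array_ddm_gb):
    """Return the license-relevant capacity in GB, split by storage type."""
    used = {"ckd": 0.0, "fb": 0.0}
    for array, stgtype in ranks.values():
        used[stgtype] += array_ddm_gb[array] * DDMS_PER_ARRAY_SITE
    return used

used = used_capacity_gb(ranks, array_ddm_gb)
print(used)                                    # {'ckd': 2400.0, 'fb': 1168.0}
print("All scope usage:", sum(used.values()))  # 3568.0 GB, within a 5-TB (5000 GB) license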


Part 3 Storage configuration

In this part of the book, we describe the storage configuration tasks required on an IBM DS8870.

The following topics are included:

� Configuration flow
� Configuration by using the DS Storage Manager GUI
� Configuration with the DS command-line interface


Chapter 11. Configuration flow

This chapter provides a brief overview of the required sequence of tasks to configure the storage in an IBM DS8870.


11.1 Configuration worksheets

During the installation of the DS8870, your IBM marketing representative customizes the setup of your storage complex based on information that you provide in a set of customization worksheets. Each time that you install a new storage unit or management console, you must complete the customization worksheets before the installation can be done by the IBM marketing representatives.

It is important that the information from the customization worksheets is entered into the machine so that preventive maintenance and high availability of the machine are ensured. You can find the customization worksheets in IBM System Storage DS8870 Introduction and Planning Guide, GC27-2297-06.

By using the customization worksheets, you specify the initial setup for the following items:

� Company information: IBM marketing representatives use this information to contact you quickly when they need to access your storage complex.

� Management console network settings: You specify the IP address and LAN settings for your management console (MC).

� Remote support (which includes call home and remote service settings): You specify whether you want outbound (call home) or inbound (remote services) remote support.

� Notifications (including Simple Network Management Protocol (SNMP) trap and email notification settings): You specify the types of notifications that you want and that others might want to receive.

� Power control: You select and control the various power modes for the storage complex.

� Control Switch settings: You specify certain DS8870 settings that affect host connectivity. You need to enter these choices on the control switch settings worksheet so that the sales representative can set them during the DS8870 installation.

11.2 Disk Encryption

Because the configuration for a system that uses disk encryption differs from a system that does not, it is important to plan for disk encryption before performing the configuration.

The DS8870 provides disk-based encryption for data that resides within the storage system, for increased data security. This disk-based encryption is combined with an enterprise-scale key management infrastructure.

All disk drive modules (DDMs) that are installed in a DS8870 (other than model conversions) support Full Disk Encryption (FDE), which means that all DDMs that can be ordered are encryption-capable, including solid-state flash drives (SSDs). These disks have encryption hardware, and can perform symmetric encryption and decryption of data at full disk speed with no impact on performance. At the time of this writing, the DS8870 supports one encryption group. Managing the disk encryption environment is the responsibility of the client.

Although all DS8870s have certificates installed, encryption is optional and can be activated when feature number 1750 is ordered. Activation must be completed before performing any logical configuration. For more information about encryption license considerations, see “Encryption planning” on page 235.

Important: The IBM service representative cannot complete the installation of a DS8870 unless provided with completed configuration worksheets.


The current DS8870 encryption solution requires the use of Tivoli Key Lifecycle Manager (TKLM), its replacement IBM Security Key Lifecycle Manager v2.5 (ISKLM), or IBM Security Key Lifecycle Manager for z/OS. All of these products assist with generating, protecting, storing, and maintaining the encryption keys that are used to encrypt information being written to, and decrypt information being read from, devices.

For more information, including current considerations and best practices regarding DS8870 encryption, see the latest version of IBM DS8000 Disk Encryption, REDP-4500.

11.3 Network security

The security of the network that is used to communicate with the DS8000 (specifically the HMC) for management purposes can be very important, depending on the client requirements. This release of the DS8870 provides support for compliance with the NIST SP 800-131a standard, also known as Gen-2 security.

There are two components required to provide full network protection:

� The first is IPSec, and for Gen-2 security, IPsec-v3 is required. IPsec protects the network communication at the Internet layer, or the packets that are sent over the network. This ensures that a valid workstation or server is talking to the HMC and that the communication between them cannot be intercepted.

� The second component is TLS 1.2. It provides protection at the application layer, to ensure that valid software (external to the HMC or client) is communicating with the software (server) in the HMC.

Note: The details for implementing and managing Gen-2 security requirements are provided in the IBM Redpaper publication DS8870 and NIST SP 800-131a Compliance, REDP-5069.


11.4 Configuration flow

This section shows the list of tasks to perform when storage is configured in the DS8870. Depending on your environment and requirements, not all tasks necessarily need to be completed.

1. Install license keys: Activate the license keys for the storage unit.

2. Create arrays: Configure the installed disk drives as RAID 5, RAID 6, or RAID 10 arrays.

3. Create ranks: Assign each array to a fixed block (FB) rank or a count key data (CKD) rank.

4. Create extent pools: Define extent pools, associate each one with Server 0 or Server 1, and assign at least one rank to each extent pool. If you want to take advantage of Storage Pool Striping, you must assign multiple ranks to an extent pool. With current versions of the DS graphical user interface (GUI), you can start directly with the creation of extent pools (arrays and ranks are automatically and implicitly defined).

5. Create a repository for Space Efficient volumes. See the latest version of DS8000 Thin Provisioning REDP-4554 for details.

6. Configure I/O ports: Define the type of the Fibre Channel/Fibre Channel connection (FICON) ports. The port type can be Switched Fabric, Arbitrated Loop, or FICON.

7. Create volume groups for open systems: Create volume groups where FB volumes are assigned.

8. Create host connections for open systems: Define open systems hosts and their Fibre Channel (FC) host bus adapter (HBA) worldwide port names. Assign volume groups to the host connections.

9. Create open systems volumes: Create striped open systems FB volumes and assign them to one or more volume groups.

10.Create System z logical control units (LCUs): Define their type and other attributes, such as subsystem identifiers (SSIDs).

11.Create striped System z volumes: Create System z CKD base volumes and parallel access volume (PAV) aliases for them.

12.The actual configuration can be done by using the DS Storage Manager GUI or data storage command-line interface (DS CLI), or both. A novice user might prefer to use the GUI, whereas a more experienced user might use the DS CLI, particularly for the more repetitive tasks, such as creating large numbers of volumes. To support the latest thin provisioning configurations, you must use DS CLI, which is detailed in DS8000 Thin Provisioning REDP-4554.

For more information about these tasks, see the following chapters:

� Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259� Chapter 12, “Configuration by using the DS Storage Manager GUI” on page 293� Chapter 13, “Configuration with the DS command-line interface” on page 353

Important: The configuration flow changes when you use the Full Disk Encryption Feature for the DS8870. For more information, see IBM DS8000 Disk Encryption, REDP-4500, which also applies to DS8870.

Important: If you plan to use Easy Tier (in particular, in automatic mode), select the All ranks option to receive all of the benefits of Easy Tier data management.


11.4.1 General storage configuration guidelines

Remember the following general guidelines when storage is configured in the DS8870:

� To achieve a well-balanced load distribution, use at least two extent pools, each assigned to one of the internal servers (extent pool 0 and extent pool 1). If CKD and FB volumes are required, use at least four extent pools.

� The first volume in an address group determines the type of the address group (all CKD or all FB). An address group contains 16 LCUs or logical subsystems (LSSs), numbered x0 - xF (where x = 0 - E); address group F contains only 15 (F0 - FE). A short illustration follows this list.

� Volumes of one LCU/LSS can be allocated on multiple extent pools.

� An extent pool should contain only ranks with similar characteristics (for example, Redundant Array of Independent Disks (RAID) level, disk type). Exceptions apply to hybrid pools.

� Ranks in one extent pool should belong to separate device adapters (DAs).

� Assign multiple ranks to extent pools to take advantage of Storage Pool Striping.

� CKD: 3380 and 3390 type volumes can be intermixed in an LCU and an extent pool.

� FB:

– Create a volume group for each server unless logical unit number (LUN) sharing is required.

– Assign the volume group for one server to all its host connections.

– If LUN sharing is required, the following options are available (see Figure 11-1):

• Create one volume group for each server. Place the shared volumes in each volume group. Assign the individual volume groups to the corresponding server’s host connections. The advantage of this option is that you can assign private and shared volumes to a host.

• Create one common volume group for all servers. Place the shared volumes in the volume group and assign it to the host connections.

Figure 11-1 LUN configuration for shared access

� I/O ports:

– Distribute host connections of each type (FICON and FCP) evenly across the I/O enclosure.

– A port can be configured to be FICON or Fibre Channel Protocol (FCP).

– Ensure that each host is connected to at least two different host adapters in two different I/O enclosures for redundancy.


– Typically, the access any setting is used for I/O ports, with access to the ports controlled by storage area network (SAN) zoning.

Intermixing: Avoid intermixing host I/O with Copy Services I/O on the same ports.
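As a short illustration of the address group guideline earlier in this list, the following Python sketch (function names are invented for this example) shows how a two-digit hexadecimal LSS or LCU ID maps to its address group, and why address group F holds only 15 LSSs:

def address_group(lss_id):
    """Return the address group (0 - 15) of a two-digit hexadecimal LSS/LCU ID such as '1A'."""
    return int(lss_id, 16) // 16

def lss_ids_in_group(group):
    """List the LSS/LCU IDs in an address group; group 0xF holds only 15 IDs (F0 - FE)."""
    last = 0xE if group == 0xF else 0xF
    return ["%X%X" % (group, i) for i in range(last + 1)]

print(address_group("1A"))     # 1: the first volume created in group 1 fixes it as all CKD or all FB
print(lss_ids_in_group(0xF))   # ['F0', 'F1', ..., 'FE'] (15 IDs)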


Chapter 12. Configuration by using the DS Storage Manager GUI

The DS Storage Manager provides a graphical user interface (GUI) to configure the IBM DS8870. The DS Storage Manager GUI (DS GUI) is a browser-based tool that can be accessed in a number of different ways.

This chapter covers the following topics:

� DS Storage Manager GUI overview
� Logical configuration process
� Examples of configuring DS8870 storage
� Examples of exploring DS8870 storage status and hardware

For more information about Copy Services configuration in the DS8000 family by using the DS GUI, see the following IBM Redbooks publications:

� IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
� IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

For more information about DS GUI changes that are related to disk encryption, see IBM System Storage DS8700 Disk Encryption, REDP-4500.

For more information about DS GUI changes that are related to LDAP authentication, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.


Code version: Some of the figures in this chapter might not reflect the latest version of the DS GUI code.


12.1 DS Storage Manager GUI overview

In this section, we describe the design of the DS GUI access methods. The DS GUI code is on the DS8000 Hardware Management Console (HMC).

12.1.1 Accessing the DS GUI

You can access the DS GUI in any of the following ways:

� From a browser that is connected to the HMC
� From Tivoli Storage Productivity Center on a workstation that is connected to the HMC
� From a browser that is connected to Tivoli Storage Productivity Center on any server
� By using Microsoft Windows Remote Desktop

The DS Storage Manager is included with the HMC and communicates with the DS Network Interface Server (also known as the ESSNI), which is responsible for communicating with the two controllers of the DS8000.

Access to the DS8000 HMC is supported through the IPv4 and IPv6 Internet Protocol.

These access capabilities, which use basic authentication, are shown in Figure 12-1. In the illustration, Tivoli Storage Productivity Center Server connects to two HMCs that manage two DS8000 storage complexes.

Figure 12-1 Accessing the DS8000 GUI

The DS8000 supports the ability to use a Single Point of Authentication function for the GUI and data storage command-line interface (DS CLI) through a centralized Lightweight Directory Access Protocol (LDAP) server. This capability is supported by Tivoli Storage Productivity Center Version 4.2.1 (or later), which is preinstalled. If you have an earlier Tivoli Storage Productivity Center version, you must upgrade Tivoli Storage Productivity Center to V4.2.1 to take advantage of the Single Point of Authentication function for the GUI and CLI through a centralized LDAP server.

The access capabilities through LDAP authentication are shown in Figure 12-2. As shown, Tivoli Storage Productivity Center connects to two HMCs that are managing two DS8000 storage complexes.

Figure 12-2 LDAP authentication to access the DS8000 GUI and CLI

For information on Configuring Jazz for Service Management and DS8000 for LDAP authentication, refer to the IBM Tivoli Storage Productivity Center V5.2 InfoCenter at:

http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/topic/com.ibm.tpc_V52.doc/fqz0_r_config_ds8000_ldap.html

More information: For more information about LDAP-based authentication, see the Redpaper publication IBM System Storage DS8000: LDAP Authentication, REDP-4505.

Accessing the DS GUI directly through a browser

The DS Storage Manager GUI can be launched directly from any workstation or server with network connectivity to the HMC.

To connect to the DS GUI, different versions of Internet browsers can be used. Supported browsers include Mozilla Firefox 17 ESR and Microsoft Internet Explorer 9. Account security might require that another browser be used that meets the security requirements.

To connect to the DS Storage Manager GUI, enter the following URL in a supported browser:

https://<HMC-IP>:8452/DS8000

An example of the DS GUI login is shown in Figure 12-3.

Figure 12-3 DS GUI login

Accessing DS GUI through Tivoli Storage Productivity Center Web-based GUI

The DS GUI code on the DS8000 HMC can also be accessed from the Tivoli Storage Productivity Center. The Tivoli Storage Productivity Center server includes two versions of Tivoli Storage Productivity Center: Stand-alone and Web-based (to support access to Tivoli Storage Productivity Center using a browser).

In earlier versions of Tivoli Storage Productivity Center, the Stand-alone version has an Element Manager function that provides the option to access the DS GUI of a configured DS8000. This option is not available in the GUI-based version.

With Version 5.2 of Tivoli Storage Productivity Center, access to the DS GUI of a configured DS8000 is moved from the Stand-alone GUI to the Web-based GUI version. Only the Web-based GUI version has the menu option to access the DS GUI from Tivoli Storage Productivity Center.

The steps to access the DS GUI while using the Tivoli Storage Productivity Center are detailed next.


Complete the following steps to access the DS GUI through the Tivoli Storage Productivity Center Web-based GUI:

1. Log in to your Tivoli Storage Productivity Center server and start the IBM Tivoli Storage Productivity Center - Tivoli Storage Productivity Center Web-based GUI.

2. Enter your Tivoli Storage Productivity Center user ID and password.

3. In the Tivoli Storage Productivity Center window that is shown in Figure 12-4, the “Dashboard” is displayed. Hover over the Storage Resources icon and on the pop-up menu click Storage Systems.

Figure 12-4 Tivoli Storage Productivity Center Web-based GUI: Dashboard view

Important: We assume that the DS8870 system is already configured in Tivoli Storage Productivity Center.


4. With the Storage Systems displayed, select the specific system that you want to start the GUI on with one left-click on the system name. Next, click Actions and on the pull-down list click Open Storage System GUI, as shown in Figure 12-5.

Figure 12-5 Tivoli Storage Productivity Center Web-based GUI: Open Storage System GUI

There are additional ways to make this same selection:

a. Double left-click the storage system and an Overview of the system is displayed. Click Actions and then select Open Storage System GUI.

b. Right-click the storage system and a menu of the available actions is displayed, including Open Storage System GUI.

5. A new browser window opens with the DS GUI login displayed, as shown in Figure 12-3 on page 296.

Note: If you are performing these steps from the section: “Accessing the DS GUI with a browser that is connected to Tivoli Storage Productivity Center Server” on page 299, you might not have to perform a login to the DS GUI. If you previously logged in with the browser, the browser will use the previous login details to automatically log in.


6. The browser window displays the DS GUI Overview as shown in Figure 12-6.

Figure 12-6 DS GUI Overview window

Accessing the DS GUI with a browser that is connected to Tivoli Storage Productivity Center Server

To access the DS GUI, you can connect to Tivoli Storage Productivity Center Server with a web browser. Use the procedure that is described in “Accessing DS GUI through Tivoli Storage Productivity Center Web-based GUI” on page 296, starting at step 2. (You will automatically be accessing the Tivoli Storage Productivity Center Web-based GUI.)

Accessing the DS GUI with a remote desktop connection to Tivoli Storage Productivity Center Server

You can use a remote desktop connection to connect to Tivoli Storage Productivity Center Server. After you are connected to Tivoli Storage Productivity Center Server, follow the procedure that is described in “Accessing DS GUI through Tivoli Storage Productivity Center Web-based GUI” on page 296 to access the DS GUI. For information about Tivoli Storage Productivity Center, see: “IBM Tivoli Storage Productivity Center 5.2” on page 474.

12.1.2 DS GUI Overview window

After you log on, the DS Storage Manager Overview window that is shown in Figure 12-6 is displayed. The Overview window contains a picture with icons that represent the physical and logical components of the DS8000 configuration. Left-click an icon in the picture to view its description in the lower half of the window. Links to other information and to the Initial Setup Tasks are also provided.

The left side of the window is the navigation pane.

DS GUI window options

Figure 12-7 shows an example of the Manage Volumes window. Several of the options that are available on this page also appear on many other DS Storage Manager windows. We explain these options next.

Figure 12-7 GUI window layout

The DS GUI displays the configuration of your DS8000 in tables. Several functions are available that you can use:

• To download the information from the table, click Download. This function can be useful if you want to document your configuration. The file is in comma-separated value (.csv) format and you can open it with a spreadsheet program. If the table in the DS8000 Manager consists of several pages, the .csv file includes all pages.

• The Print report option opens a new window with the table in HTML format and starts the printer dialog box if you want to print the table.

• The Action drop-down menu provides specific actions that you can perform. Select the object that you want to work with and then the appropriate action (for example, Create or Delete). An alternative way to perform an action on a listed item is to right-click the item and select from the list of actions that appears in the pop-up menu.

• The Choose column values option sets and clears filters so that only specific items are displayed in the table (for example, only fixed block (FB) extent pools). This function can be useful if you have tables with many items.

• To search the table, enter the criteria in the search field. The GUI displays the entries in the table that match the criteria.

DS GUI navigation pane

In the navigation pane, which is on the left side of the window, you can navigate to the various functions of the DS8000 GUI. It has two views to choose from: Icon and Legacy. The default view is set to Icon view, but you can change this default by clicking Navigation Choice in the bottom part of the navigation pane. The two views are shown in Figure 12-8.

Figure 12-8 Navigation pane. Icon view on the left and legacy view on the right

When you hover over one of the icons, a pop-up menu shows the actions that are available for the icon, as shown in Figure 12-9.

Figure 12-9 Example icon view

The navigation pane features the following menu structure:

• Home:
  – Getting Started
  – System Status

• Monitor:
  – Tasks

• Pools:
  – Internal Storage

• Volumes:
  – FB Volumes
  – Volume Groups
  – Count key data (CKD) logical control units (LCUs) and Volumes

• Hosts:
  – Hosts

• Copy Services:
  – FlashCopy
  – Metro Mirror/Global Copy
  – Global Mirror
  – Mirroring Connectivity

• Access:
  – Users
  – Remote Authentication
  – Resource Groups

• Configuration:
  – Encryption Key Servers
  – Encryption Groups

12.2 User management for the DS GUI

For GUI user administration, sign on to the DS GUI with an administrator ID and complete the following steps:

1. In the navigation pane, hover over Access (represented by a padlock) as shown in Figure 12-10.

Figure 12-10 GUI Access menu

2. On the pop-up menu, click Users.

3. The Users panel is displayed as shown in Figure 12-11. It will not contain any user details because a Storage Complex must be selected first.

Figure 12-11 The Users panel without a selected Storage Complex

4. Click the Storage Complex pull-down menu and select a complex that you want to view from the list displayed. The machine that your HMC belongs to should appear in the list. If you need an additional complex for the GUI to manage, it needs to be defined. (For more information, see “Defining a storage complex” on page 307.)

With the complex selected, the panel is updated with the list of users defined in the authentication policy that is active in the complex, as shown in Figure 12-12.

Figure 12-12 Users panel with a storage complex selected

5. With the users listed, it is possible to manage current users or add a new user. Click Action to see the list of user functions as shown in Figure 12-13.

Figure 12-13 User panel actions

The administrator can perform the following tasks from this window:

– Add User (the DS CLI equivalent is mkuser)
– Modify User (the DS CLI equivalent is chuser)
– Lock or Unlock User: the choice toggles (the DS CLI equivalent is chuser)
– Delete User (the DS CLI equivalent is rmuser)
– Password Settings (the DS CLI equivalent is chpass)

6. The Password Settings window is where user password settings can be modified as shown in Figure 12-14.

Figure 12-14 Password Settings window

7. Selecting Add user displays a window in which a user can be added by entering the user ID, the temporary password, and the role, as shown in Figure 12-15. The role determines what type of activities the user can perform. In this window, the ID can also be temporarily deactivated by selecting the No access option.

Figure 12-15 Adding a user to Storage Manager

Important: If you are not using an administrator ID, only your own ID appears in the list, and the only action that you can perform is to change your password.

Take special note of the new role of the Security Administrator (secadmin). This role was created to separate the duties of managing the storage from managing the encryption for DS8870 units that are shipped with Full Disk Encryption storage drives.

If you are logged in to the GUI as a Storage Administrator, you cannot create, modify, or delete users of the Security Administrator role. Notice how the Security Administrator option is disabled in the Add User window in Figure 12-15 on page 305. Similarly, Security Administrators cannot create, modify, or delete Storage Administrators. This feature is new to the microcode for the DS8870.
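
The user administration actions in this section have DS CLI equivalents (mkuser, chuser, rmuser, and chpass), so they can also be scripted. The following lines are a minimal sketch only; the user ID, role, and policy values are illustrative, and the parameters should be verified against the DS CLI reference for your code level. The sketch adds a user with the op_storage role, unlocks the ID, tightens the password policy, and finally removes the user:

   dscli> mkuser -pw tempw0rd -group op_storage itso_oper
   dscli> chuser -unlock itso_oper
   dscli> chpass -expire 90 -fail 5
   dscli> rmuser itso_oper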

12.3 Logical configuration introduction

The primary function of the DS GUI is to allow a client to perform the logical configuration of their DS8870.

When performing the logical configuration, use the following sequence:

1. Define the storage complex (only if adding an additional storage system).
2. Create extent pools (will automatically create the arrays and ranks).
3. Create open system volumes, volume groups, and hosts (as required).
4. Create count key data (CKD) logical control units (LCUs) and volumes (as required).

The next section provides the details of all the configuration requirements.

Tasks summary window

Some logical configuration tasks include dependencies on the successful completion of other tasks. For example, you cannot create ranks on arrays until the array creation is complete. The Tasks window assists you in this process by reporting the progress and current status of the tasks that have been initiated. This reporting can be especially helpful when tasks take a long time to complete.

To view the Tasks window, hover over Monitor in the navigation pane and click Tasks. The example in Figure 12-16 shows the current list of tasks. Left-click the specific task link to get more information about the task. A currently running task can be stopped with the action End Task.

Figure 12-16 Tasks window

More information: For more information about configuration concepts, see 5.2.10, “Virtualization hierarchy summary” on page 128.

12.4 Configuring DS8870 storage

In this section, all of the required DS8870 logical configuration tasks are presented. Some of these tasks can be grouped together, but they are presented individually so that you understand the complete configuration requirements.

For each configuration task (for example, creating an array), the process guides you through the steps in which you enter the necessary information. During this process, you can go back to make modifications or cancel the task. At the end of the process, you receive a verification window in which you can verify the entered information before you submit the task to be performed on the DS8870.

12.4.1 Defining a storage complex

During the DS8870 installation, your IBM service representative customizes the setup of your storage complex based on information that you provide in the customization worksheets. Before you start the logical configuration, check the status of your storage system to verify that it is Normal.

In the navigation pane of the DS GUI, hover over Home and click System Status. The System Status window opens, as shown in Figure 12-17.

Figure 12-17 System Status window

You must have at least one storage complex listed in the table. If you have more than one DS8000 system in your environment, you can define the additional systems here. This definition is required only when you use mirroring-type features.

Although this task might not be required, it is included here in case it needs to be performed.

Complete the following steps to add a storage complex.

1. Select Storage Complex → Add from the Action drop-down menu to add a storage complex, as shown in Figure 12-18.

Figure 12-18 Storage Complex Add function

The Add Storage Complex window opens, as shown in Figure 12-19.

Figure 12-19 Add Storage Complex window

2. Enter the IP address of the HMC that belongs to the storage complex that you want to add. Click OK to continue. A new storage complex is added to the table, as shown in Figure 12-20. The task will fail if there is not an operational network connection to the address specified.

Figure 12-20 New storage complex is added

Having all the DS8000 storage complexes defined together provides flexible control and management. The status information indicates the health of each DS8000. When you click the status link of a storage complex, a status window displays the current status of vital DS8870 components, as shown in Figure 12-21.

Figure 12-21 Status details

In Figure 12-22, an example of systems with various status conditions is shown.

Figure 12-22 Different Storage Complex Status

A Critical status indicates that vital storage complex resources are unavailable. An Attention status might also be triggered by unavailable resources; however, because the DS8870 has redundant components, the storage complex is still operational. One example is when only one storage server inside a storage unit is offline, as shown in Figure 12-23.

Figure 12-23 One storage server is offline

Check the status of your storage complex and proceed with logical configuration tasks (create arrays, ranks, extent pools, or volumes) only when both storage servers inside the storage image are online. This is required for all configuration tasks to complete successfully.

12.4.2 Creating arrays

Complete the following steps in the DS GUI to create an array:

1. In the navigation pane, hover over Pools and click Internal Storage. The Internal Storage window displays, as shown in Figure 12-24.

Figure 12-24 Disk Configuration window

In our example, some of the DS8870 capacity is assigned to open systems, some is assigned to z/OS, and some is unassigned.

Important: You do not have to create arrays first and then ranks. You can proceed directly with the creation of extent pools, as described in 12.4.4, “Creating extent pools” on page 318. This will automatically create the necessary arrays and ranks for the extent pool that is created.

Important: If there is more than one storage complex defined in the GUI, be sure to select the correct storage image before you start creating arrays. From the Storage image drop-down menu, select the storage image that you want to work with.

2. Click the Array Sites tab to check the storage that is available to create the array, as shown in Figure 12-25. Any site in an “Unassigned” state is available for an array.

Figure 12-25 Array sites

3. In our example, some array sites are unassigned and therefore are eligible to be used for array creation. Each array site has eight physical disk drives. To see more details about an array site, select it and click Properties under the Action drop-down menu. The Single Array Site Properties window opens and provides general array site characteristics, as shown in Figure 12-26.

Figure 12-26 Select Array Site Properties view

4. Click the Status tab to view details including the state of each disk drive module (DDM), as shown in Figure 12-27.

Figure 12-27 Single Array Site Properties: Status view

5. All DDMs in this array site are in the Normal state. Click OK to close the Array Site Properties window and return to the Internal Storage main window.

No changes can be made to an array site because array sites are controlled by the hardware and do not require any configuration activity. They are reviewed here so that you understand the state that an array site must be in before it can be used.

6. Now that we know there is an array site available, we can create an array. Click the Arrays tab in the Manage Internal Storage window and click Create Arrays in the Action drop-down menu, as shown in Figure 12-28.

Figure 12-28 Create Arrays menu

The Create New Arrays window opens, as shown in Figure 12-29.

Figure 12-29 Create New Arrays window

You must provide the following information:

a. Define Storage Characteristics

• RAID Type:

RAID 5 (default)

RAID 6

RAID 10

- SSDs are usually RAID 5, but RAID 6 and RAID 10 are also possible with an RPQ.

- Nearline-SAS disks support RAID 6 or RAID 10 (with an RPQ).

Although you can select any RAID Type in the pull-down menu, you will only be able to configure an array if there is at least one unassigned array site that supports the RAID Type that you have selected. The available storage displays in the Select Available Capacity section of the window.

b. Select Available Capacity

• Type of Configuration/Drive class/Storage capacity to configure:

There are two options for Type of Configuration.

1. Automatic: Allows the system to choose the unassigned array site based on the “Drive class” that is selected to create the arrays.

The options for “Drive class” will be based on the different types (Nearline, Enterprise, solid state) of unassigned array sites currently in the storage system.

Once the “Drive class” is selected, the “Storage capacity to configure” option will display an entry for each array that can be created for the selected “Drive class”. You can select as many as you need from the list.

2. Manual: Provides more control over the resources. When you select this option, a table of available array sites is displayed. Select the array sites from the table, as many as you need.

• DA Pair Usage:

This is only displayed with the Automatic configuration option. The Spread Among All Pairs option balances ranks evenly across all available DA pairs. The Spread Among Least Used Pairs option assigns the ranks to the least-used DA pairs. The Sequentially Fill All Pairs option assigns ranks to the first DA pair, then to the second DA pair, and so on.

The bar graph (Automatic or Manual) displays the effect of your choices and indicates the DA pairs that the arrays will be attached to. Spreading arrays evenly over the DA pairs provides better overall performance.

If you want to create arrays with different characteristics (RAID and DDM type) in one task, select Add Another Array as many times as required.

In the example that is shown in Figure 12-29 on page 313, we create one RAID 5 array on SSDs.

Click OK to continue.

7. The Create array verification window is displayed, as shown in Figure 12-30. All array sites that will be used are listed here. At this stage, you can still change your configuration by deleting the array sites from the lists as well as adding new array sites, if required. Click Create All once you decide to continue with the proposed configuration.

Figure 12-30 Create array verification window

Wait for the message in Figure 12-31 to be displayed and then click Close.

Figure 12-31 Creating arrays: Completed message

The Arrays table now reflects the arrays that were configured.
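
The same array can also be created with the DS CLI, which is often quicker when many arrays are needed. A minimal sketch, assuming that array site S1 is unassigned (the site ID and RAID type are illustrative):

   dscli> lsarraysite
   dscli> mkarray -raidtype 5 -arsite S1
   dscli> lsarray

The lsarraysite output shows which array sites are still unassigned, mkarray creates the array, and lsarray confirms the result.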

12.4.3 Creating ranks

Complete the following steps in the DS GUI to create a rank:

1. In the navigation pane, hover over Pools and click Internal Storage. The Manage Internal Storage window opens. Click the Ranks tab to start working with ranks. Select Create Ranks from the Action drop-down menu, as shown in Figure 12-32.

Figure 12-32 Select Create Ranks

2. The Create New Ranks window opens, as shown in Figure 12-33.

Figure 12-33 Create New Ranks window

Important: You do not necessarily need to create arrays first and then ranks. You can proceed directly with the creation of extent pools, as described in 12.4.4, “Creating extent pools” on page 318.

Important: If there is more than one storage complex defined in the GUI, be sure to select the correct storage image before you start creating ranks. From the Storage image drop-down menu, select the storage image that you want to work with.

To create a rank, you must provide the following information:

a. Define Storage Characteristics

• Storage Type: The type of extent for which the rank is to be configured. The storage type can be set to one of the following values:

FB: Fixed block extents = 1 GiB. In fixed block architecture (used by open systems hosts), the data is written in fixed-size blocks.

Count key data (CKD) extents = 3390 Mod 1. In count-key-data architecture (used by System z hosts), the data is written in variable sized records.

• RAID Type:

RAID 5 (default)

RAID 6

RAID 10

- SSDs are usually RAID 5, but 6 and 10 are also possible with RPQ.

- Nearline-SAS disks support RAID 6 or RAID 10 (with RPQ).

Although you can select any RAID Type in the pull-down, you will only be able to configure a rank if there is unused storage (either an unassigned array site or array) that supports the RAID Type that you have selected. The available storage displays in the Select Available Capacity section of the window.

b. Select Available Capacity

Type of Configuration/Drive class/Storage capacity to configure:

There are two options for Type of Configuration.

1. Automatic: This is the default and it allows the system to choose the physical resources to use based on the capacity and DDM type selected. In order for a rank to be created, there must be an unassigned array to assign the rank to.

In order to create a new rank with this task, there must either be a configured array that is unassigned, or an array site that is unassigned for the rank to use. If there are unassigned array sites, the task will create a new array to support the rank that you want to create. The list of available Drive Class options will depend on the current unassigned arrays as well as unassigned array sites if any.

The “Select capacity to configure” options will depend on which Drive Class was selected. Each line selected will create a rank. If unassigned array sites are used, the arrays will automatically be created in order for the rank to be created.

The Automatic option means that you do not have to select which array is used to create the rank when more than one array is available.

• DA Pair Usage:

This is only displayed with the Automatic configuration option. The Spread Among All Pairs option balances ranks evenly across all available DA pairs. The Spread Among Least Used Pairs option assigns the ranks to the least-used DA pairs. The Sequentially Fill All Pairs option assigns ranks to the first DA pair, then to the second DA pair, and so on. This option will have no effect if unassigned arrays are being used to create the rank.

The bar graph (for automatic or manual) displays the effect of your choices and, depending on the quantity of ranks being created, indicates the DA pairs that the arrays will be attached to.

2. Manual: This option can be used if you want more control over the resources. When you select this option, a table of unassigned arrays (Ax) and array sites (Sx) is displayed. You then manually select the specific resource from the table to be used to create the rank. The table will provide all the details so that you can determine which resource to use for the creation of the rank. If an array site is selected, an array will be created automatically for the rank to be created.

• Encryption Group is used to indicate whether encryption is enabled or disabled for ranks. Select 1 from the Encryption Group drop-down menu if the encryption feature is enabled on this machine. Otherwise, select None.

If you want to create ranks with different characteristics (Storage, RAID, and DDM type) with one task, select Add Another Rank as many times as required.

Our example in Figure 12-33 on page 315 shows one FB rank on SSDs with RAID 5 and automatic type.

Click OK to continue.

3. The Create rank verification window is displayed, as shown in Figure 12-34. Each array site that is listed in the table is assigned to the corresponding array that we created in 12.4.2, “Creating arrays” on page 310. At this stage, you can still change your configuration by deleting the ranks from the lists and adding new ranks, if required. Click Create All after you decide to continue with the proposed configuration.

Figure 12-34 Create rank verification window

4. The Creating ranks window opens. Click View Details to check the overall progress. It displays the Task Properties window, as shown in Figure 12-35.

Figure 12-35 Creating ranks: Task Properties view

5. After the task is completed, return to Internal Storage and, under the Ranks tab, check the list for the newly created ranks.

The bar graph in the Disk Configuration Summary section is changed. There are new ranks, but they are not assigned to extent pools.
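
The same rank can also be created with the DS CLI. A minimal sketch, assuming that array A0 exists and is not yet assigned to a rank (the array ID and storage type are illustrative):

   dscli> mkrank -array A0 -stgtype fb
   dscli> lsrank

The lsrank output shows the new rank and whether it is already assigned to an extent pool.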

12.4.4 Creating extent pools

Complete the following steps in the DS GUI to create an extent pool:

1. In the navigation pane, hover over Pools and click Internal Storage. The Internal Storage window opens. The Extent Pools tab is selected by default.

The bar graph in the summary section provides information about unassigned and assigned capacity.

Click Create Extent Pools from the Action drop-down menu, as shown in Figure 12-36.

Figure 12-36 Select Create Extent Pools

Important: If there is more than one storage complex defined in the GUI, be sure to select the correct storage image before you start creating extent pools. From the Storage image drop-down menu, select the storage image that you want to work with.

2. The Create New Extent Pools window opens, as shown in Figure 12-37. Scroll down to see the rest of the window and provide input for all the fields.

Figure 12-37 Create New Extent Pools window

To create an extent pool, you must provide the following information:

a. Define Storage Characteristics

• Storage Type: The type of extent for which the rank is to be configured. The storage type can be set to one of the following values:

FB: Fixed block extents = 1 GiB. In fixed block architecture (used by open systems hosts), the data is written in fixed-size blocks.

Count key data (CKD) extents = 3390 Mod 1. In count-key-data architecture (used by System z hosts), the data is written in variable sized records.

• RAID Type:

RAID 5 (default)

RAID 6

RAID 10

- SSD disks support only RAID 5 or RAID 10 (with RPQ).

- Nearline-SAS disks support RAID 6 or RAID 10 (with RPQ).

Although you can select any RAID Type in the pull-down, you will only be able to configure an extent pool if there is unused storage (either an unused rank, unassigned array, or array site) that supports the RAID Type that you have selected. The available storage displays in the Select Available Capacity section.

b. Select Available Capacity

• Type of Configuration/Drive class/Storage capacity to configure:

There are two options for Type of Configuration.

1. Automatic: This is the default; it allows the system to choose the physical resources to use based on the capacity and DDM type selected. In order for an extent pool to be created, there must be an unused rank to assign to it.

In order to create a new extent pool with this task, there must either be unused ranks, arrays that are unassigned, or array sites that are unassigned in the storage system. If there are unassigned array sites that will be used, the following steps will happen automatically:

a. Create an array, formatted per RAID type and assign to an unassigned array site.

b. Create a rank, formatted per Storage type and assign to the new array.

c. Use the rank for the extent pool being created.

If there are unassigned arrays that will be used, only steps b and c occur. If there are unused ranks that will be used, only step c occurs.

The Drive Class options will depend on the current unused ranks, unassigned arrays, and unassigned array sites if any. Select one of the options.

The “Select capacity to configure” options will depend on which Drive Class was selected.

Automatic means that you do not have the option to select which ranks are used.

• DA Pair Usage:

This is only displayed with the Automatic configuration option. The Spread Among All Pairs option balances ranks evenly across all available DA pairs. The Spread Among Least Used Pairs option assigns the ranks to the least-used DA pairs. The Sequentially Fill All Pairs option assigns ranks to the first DA pair, then to the second DA pair, and so on. This option will have no effect if unused ranks or unassigned arrays are being used for the extent pool.

The bar graph (for Automatic or Manual) displays the effect of your choices and, depending on the quantity of ranks being created, indicates the DA pairs that the arrays will be attached to.

2. Manual: This option can be used if you want more control over the resources. When you select this option, a table of unassigned arrays (Ax) and array sites (Sx) is displayed. You then manually select the specific resource from the table to be used to create the rank. The table will provide all the details so that you can determine which resource to use for the creation of the rank. If an array site is selected, an array will be created automatically for the rank to be created.

• Encryption Group is used to indicate whether encryption is enabled or disabled for ranks. Select 1 from the Encryption Group drop-down menu if the encryption feature is enabled on this machine. Otherwise, select None.

c. Define Extent Pool Characteristics

• Number of extent pools:

Here you choose the number of extent pools to create. There are three available options: Two extent pools (ease of management), Single extent pool, and Extent pool for each rank (physical isolation). The default configuration creates two extent pools per storage type, dividing all ranks equally between the two pools.

• Nickname prefix and suffix:

Provides a unique name for each extent pool. This setup is useful if you have multiple extent pools, each assigned to separate hosts and platforms.

• Server assignment:

The Automatic option allows the system to determine the best server for each extent pool. It is the only choice when you select the Two extent pools option as the number of extent pools. If you prefer to select a specific server, select 0 or 1.

If the extent pool is going to be used for CKD volumes, the server selected will determine the LCUs that will manage the volumes in the extent pool, when they are created. Server 0 manages all even-numbered LCUs and Server 1 manages all odd-numbered LCUs. It is ideal to have LCUs managed by both servers. This will require that at least two extent pools be created for CKD volumes.

The LCUs need to be defined in the input/output configuration data set (IOCDS) that the System z servers use. More details about LCUs and CKD volumes are available in 12.4.9, “Creating LCUs and CKD volumes” on page 337.

If the extent pool is going to be used for FB volumes, the volumes that a host will use should be spread between both servers. This will require that at least two extent pools be created for FB volumes.

• Storage threshold:

Specifies the used-capacity percentage at which the DS8000 generates a storage threshold alert. By using this option, you can make adjustments before a full storage condition occurs.

• Storage reserved:

Specifies the percentage of the total extent pool capacity that is reserved. This percentage is prevented from being allocated to volumes or space-efficient storage.

3. To create all of the required extent pools in one task, select Add Another Pool as many times as required.

Click OK to continue.

4. The Create extent pool verification window opens, as shown in Figure 12-38. Here you can check the names of the extent pools that are going to be created, their capacity, server assignments, RAID protection, and other information. If you want to add capacity to the extent pools or add another extent pool, select the appropriate action from the Action drop-down list. After you are satisfied with the specified values, click Create All to create the extent pools.

Figure 12-38 Create extent pool verification window

5. The Creating extent pools window opens. Click View Details to check the overall progress. The Task Properties window opens, as shown in Figure 12-39.

Figure 12-39 Creating extent pools: Task Properties window

6. After the task is completed, return to the Internal Storage window (under the Extent Pools tab) and check the list of newly created extent pools.

The bar graph in the summary section has changed. Ranks are now assigned to extent pools, and you can create volumes from each extent pool.

7. The options that are available from the Action drop-down menu are shown in Figure 12-40. To check the extent pool properties, select the extent pool and click Properties from the Action drop-down menu.

Figure 12-40 Extent pool action: Properties selection

8. The Single Pool Properties window opens, as shown in Figure 12-41. Basic extent pool information and volume relocation-related information is provided here. If necessary, you can change the Extent Pool Name, Storage Threshold, and Storage Reserved values. Select Apply to commit all of the changes.

Figure 12-41 Single Pool Properties: General tab

9. For more information about the drives, volumes, and ranks that are used in the extent pool, click the appropriate tab. Click OK to return to the Internal Storage window.

10.For more detailed information about the DDMs, select the extent pool from the Manage Internal Storage table, and from the Action drop-down menu click DDM Properties. The DDM Property window opens, as shown in Figure 12-42.

Figure 12-42 Extent Pool: DDM Properties

Use the DDM Properties window to view all of the DDMs that are associated with the selected extent pool and to determine the state of the DDM. You can print the table, download it in .csv file format, and modify the table view by selecting the appropriate icon at the top of the table.

Click OK to return to the Internal Storage window.
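
For comparison, the corresponding DS CLI flow creates the pool and then assigns ranks to it. A minimal sketch, assuming that rank R0 is unassigned (the pool name, rank group, and IDs are illustrative):

   dscli> mkextpool -rankgrp 0 -stgtype fb ITSO_FB_0
   dscli> chrank -extpool P0 R0
   dscli> lsextpool

The rank group (0 or 1) determines the server affinity of the pool, which corresponds to the Server assignment choice that is described earlier.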

12.4.5 Configuring I/O ports

Before you can assign host attachments to I/O ports, you must confirm the operation setting of the I/O ports. There are four or eight FCP/FICON ports on each host adapter (depending on the model). Complete the following steps to independently configure each port:

1. Hover over the Home icon and select System Status. The System Status window opens.

2. Select the storage image for which you want to configure the ports and, from the Action drop-down menu, click Storage Image → Configure I/O ports, as shown in Figure 12-43.

Figure 12-43 System Status window: Configure I/O ports

3. The Configure I/O Ports window opens, as shown in Figure 12-44.

Here, you select the ports that you want to format and then click the desired port format (FcSf, FC-AL, or FICON) in the Action drop-down menu.

Figure 12-44 Select I/O port format

You receive a warning message that the ports might become unusable by the hosts that are currently connected to them.

You can repeat this step to format all ports to their required function. Multiple port selection is supported.
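
The DS CLI equivalent of this task is the setioport command. A minimal sketch, assuming that ports I0001 and I0002 exist (the port IDs are illustrative):

   dscli> lsioport
   dscli> setioport -topology ficon I0001
   dscli> setioport -topology scsi-fcp I0002

The scsi-fcp topology corresponds to the FcSf setting in the GUI, and fc-al corresponds to FC-AL.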

12.4.6 Configuring logical host systems

In this section, we show you how to configure host systems. This process applies only to open systems hosts. A default FICON host definition is automatically created after you define an I/O port to be a FICON port.

Complete the following steps to create a host system:

1. Hover over Hosts and click Hosts. The Host connections summary opens, as shown in Figure 12-45.

Figure 12-45 Host connections summary

Under the Tasks section, there are links to various actions. If you want to modify the I/O port configuration, click Configure I/O ports.

2. If you have more than one storage image, you must select the correct one and then click Create new host in the Tasks section to create a host.

Important: You can use the View host port login status link to query any host port that is logged in to the system. You also can use this window to debug host access and switch configuration issues.

Important: In the View Host Port Login status window, the list of logged in host ports includes all of the host ports that the storage unit detects. It does not take into account changes that the storage unit could not detect. For example, the storage unit cannot detect that a cable was disconnected from the port of the host device or that a fabric zoning change occurred. In these cases, the host might not be able to communicate with the storage device. However, the storage device might not detect this state and still views the host as logged in.

3. The resulting window guides you through the steps required to create a host configuration, beginning with the Define Host Ports window, as shown in Figure 12-46.

Figure 12-46 Define Host Ports window

In the host information window, enter the following information:

a. Host Nickname: Name of the host.

b. Port Type: You must specify whether the host is attached over an FC Switch fabric (P-P) or direct FC arbitrated loop to the DS8000.

c. Host Type: The drop-down menu gives you a list of host types from which to select. In our example, we create a Linux host.

d. Host WWPN: Type the host worldwide port name (WWPN), or select the WWPN from the drop-down menu, and click Add. This step must be repeated for each host port that is being defined to the host before continuing. Additional ports can be added later if necessary.

After the host entry is added into the table, you can manually add a description of each host. After you enter the necessary information, click Next.

4. The Map Host Ports to a Volume Group window opens, as shown in Figure 12-47 on page 327. In this window, you can choose the following options:

– Select Map at a later time to create a host connection without mapping host ports to a volume group.

– Select Map to a new volume group to create a volume group to use in this host connection.

– Select Map to an existing volume group to map to a volume group that is already defined. Choose an existing volume group from the menu. Only volume groups that are compatible with the host type that you selected from the previous window are displayed.

Click Next after you select the appropriate option.

Figure 12-47 Map Host Ports to a Volume Group window

5. The Define I/O Ports window opens, as shown in Figure 12-48.

Figure 12-48 Define I/O Ports window

In the Define I/O Ports window, you can choose to automatically allow all your I/O ports, or manually allow them from the table. Automatic means that the host ports will be able to communicate with the DS8870 through any I/O port. Manual lets you choose which specific I/O ports in the DS8870 that the host ports will be allowed to communicate with.

Important: The option that you select in this window will determine the next window that will be displayed after you click Next.

If you select “Map at a later time” for the Volume Group selection, the window in this step will not display. The next window that displays is the Verification window as explained in step 6 and seen in Figure 12-49 on page 328.

If you select “Map a new volume group”, the next window “Map to a new volume group” will ask for input to define a new volume group, and the volumes to be included if any. (This is not shown.) When completed, click Next on the “Map to a new volume group” window and proceed with this step.

If you select “Map to an existing volume group”, you must select the name of the volume group from the pull-down list, click Next, and proceed with this step.

6. After the I/O ports are defined, click Next. The Verification window opens, as shown in Figure 12-49, in which you can approve your choices before you commit them.

Figure 12-49 Verification window

7. In the Verification window, check the information that you entered. If you want to make modifications, select Back, or cancel the process. After you verify the information, click Finish to create the host system. This action takes you to the Manage Hosts window.

If you need to change a host system definition, select your host and choose the appropriate action from the drop-down menu, as shown in Figure 12-50.

Figure 12-50 Modify host connections
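
A host connection can also be defined with the DS CLI. A minimal sketch, assuming that volume group V0 already exists; the WWPN, host type, and nickname are illustrative:

   dscli> mkhostconnect -wwname 10000000C9A1B2C3 -hosttype LinuxRHEL -volgrp V0 ITSO_linux1
   dscli> lshostconnect -login

The lshostconnect command with the -login parameter lists the host ports that are currently logged in to the storage image, which corresponds to the View host port login status function that is described earlier.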

12.4.7 Creating fixed block volumes

Complete the following steps to create fixed block (FB) volumes:

1. Hover over Volumes and select FB Volumes. The FB Volumes summary window opens, as shown in Figure 12-51.

Figure 12-51 FB Volumes summary window

2. If you have more than one storage image, you must select the appropriate image.

In the Tasks area at the bottom of the window, click Create new volumes. The Create Volumes window opens, as shown in Figure 12-52.

Figure 12-52 Create Volumes window: Select extent pools

3. The table in the Create Volumes window contains all the extent pools that were previously created for the FB storage type. To ensure a balanced configuration, select extent pools in pairs (one from each server). If you select multiple pools, the new volumes are assigned to the pools based on the assignment option that you select on this window.

Click Next to continue. The Define Volume Characteristics window opens, as shown in Figure 12-53.

Figure 12-53 Add Volumes: Define Volume Characteristics

To create a fixed block volume, provide the following information:

– Volume type: Specifies the units for the size parameter. There are three options for FB volumes and two options for IBM iSeries® volumes. FB volumes can be used for any open systems operating system except IBM i. The IBM i operating system requires volumes of the iSeries type.

The option selected will determine the unit value that will be used in the Size field:

– Size: The size of the volume in the units specified.

– Volume quantity: The number of volumes to create.

– Storage allocation method: This setting gives you the option to create a standard volume (which is used in our example) or a space-efficient volume. For more information about space-efficient volumes, see 5.2.6, “Space-efficient volumes” on page 114. The detailed procedures for configuring track space-efficient (TSE) volumes are provided in the Redpaper publication IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368. The detailed procedures for extent space-efficient (ESE) volumes are provided in the latest version of DS8000 Thin Provisioning, REDP-4554.

– Extent allocation method: Defines how volume extents are allocated on the ranks in the extent pool. This field is not applicable for track space efficient (TSE) volumes. The following options are available:

• Rotate extents: The extents of a volume are allocated on all ranks in the extent pool in a round-robin fashion. This function is called Storage Pool Striping. This allocation method can improve performance because the volume is allocated on multiple ranks. It also helps to avoid hotspots by spreading the workload more evenly on the ranks. This method is the default allocation method.

• Rotate volumes: All extents of a volume are allocated on the rank that contains the most free extents. If the volume does not fit on any one rank, it can span multiple ranks in the extent pool.

– Performance group: You set the priority level of your volume’s I/O operations. For more information, see the Redpaper publication DS8000 I/O Priority Manager, REDP-4760.

– Resource Group: If you plan to use Resource Groups (which means that only certain operators manage copy services for these volumes), you can specify a Resource Group for the volumes you are going to create. A Resource Group other than PUBLIC (the default) must be defined first. To define a Resource Group, hover over Access and select Resource Groups. Here you can define a Resource Group name.

Optionally, you can provide a Nickname prefix, a Nickname suffix, and select one or more volume groups (if you want to add this new volume to a previously created volume group). When your selections are complete, click Add Another if you want to create more volumes with different characteristics. Otherwise, click OK to continue. The Create Volumes window opens, as shown in Figure 12-54.

Figure 12-54 Create Volumes window

4. If you need to make any other modifications to the volumes in the table, select the volumes that you want to modify and select the appropriate action from the Action drop-down menu. You can also add additional volumes or delete any in the list. Otherwise, click Next to continue.

Important: It is recommended to use DS CLI for configuring space-efficient volumes because the current DS GUI does not support all of the configuration options.

5. You need to select how volumes will be assigned to an LSS. You can choose Automatic, Manual (Fill), or Manual (Group):

– Automatic means the system assigns the volumes to LSSs for you.

– Manual (Fill) means the system fills the first LSS selected before filling the next one.

– Manual (Group) means that you select one or more LSSs to assign volume addresses to, and the system spreads the volumes across all of the selected LSSs. (If necessary, scroll down to view the information for Server 1.)

In our example, we select the Automatic assignment method, as shown in Figure 12-55. One LSS can manage a maximum of 256 volumes (addresses).

Figure 12-55 Select LSS

6. Click Finish to continue.

7. The Create Volumes Verification window that is shown in Figure 12-56 opens, which lists all of the volumes that are going to be created. If you want to add more volumes or modify the existing volumes, you select the appropriate action from the Action drop-down list. After you are satisfied with the specified values, click Create all to create the volumes.

Figure 12-56 Create Volumes Verification window

8. The Creating Volumes information task window opens. Depending on the number of volumes, the process can take some time to complete. Optionally, click View Details to check the overall progress.

9. After the task is complete, the Manage Volumes window opens. You can select View Details or Close. If you click Close, you return to the main FB Volumes window.

10.The bar graph in the FB Volumes window is updated. From there, you can select other actions, such as Manage existing volumes. The Manage Volumes window is shown in Figure 12-57.

Figure 12-57 FB Volumes: Manage Volumes window

If you need to change a volume, select a volume and click the appropriate action from the Action drop-down menu. Change the Filter setting to help locate the volumes that you are looking for.
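
The same volumes can also be created with the DS CLI. A minimal sketch, assuming that extent pool P0 and volume group V0 exist; the capacity, volume IDs, and names are illustrative:

   dscli> mkfbvol -extpool P0 -cap 100 -name ITSO_#h -volgrp V0 1000-1003
   dscli> lsfbvol

The #h wildcard in the name is replaced with the hexadecimal volume ID, and the 1000-1003 range creates four volumes in LSS 10.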

12.4.8 Creating volume groups

Complete the following steps to create a volume group:

1. Point to Volumes and select Volume Groups. The Volume Groups window opens.

2. To create a volume group, select Create from the Action drop-down menu, as shown in Figure 12-58.

Figure 12-58 Volume Groups window: Select Create

The Define Volume Group Properties window opens, as shown in Figure 12-59.

Figure 12-59 Define Volume Group Properties window

3. In the Define Volume Group Properties window, enter the nickname for the volume group and select the host type from which you want to access the volume group. This selection does not affect the functionality of the volume group; it supports the host type selected.

4. Select the volumes to include in the volume group. If you need to select many volumes, you can specify the LSS so that only these volumes display in the list, and then you can select all.

5. Click Next to open the Verification window, as shown in Figure 12-60.

Figure 12-60 Create New Volume Group Verification window

6. In the Verification window, check the information that you entered during the process. If you want to make modifications, select Back, or you can cancel the process. After you verify the information, click Finish to create the volume group. After the creation completes, a Create Volume Group completion window opens in which you can select View Details or Close.

7. After you select Close, you see the new volume group in the Volume Group window.
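
A volume group can also be created with the DS CLI. A minimal sketch; the volume group type, volume range, and nickname are illustrative:

   dscli> mkvolgrp -type scsimap256 -volume 1000-1003 ITSO_VG_linux
   dscli> showvolgrp -lunmap V0

The showvolgrp command lists the volumes that are members of the group; the -lunmap parameter also shows the LUN ID mapping that is discussed in the next section.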

Creating Volume Group of scsimap256

In Linux 2.4 kernels, Small Computer System Interface (SCSI) devices are discovered by scanning the SCSI bus when the host adapter driver is loaded. If there is a gap in the LUN ID sequence, the LUNs after the gap are not discovered (see Figure 12-61). The list of devices that were discovered and are recognized by the SCSI subsystem is available in the /proc/scsi/scsi file. Use the cat command to display the contents of /proc/scsi/scsi to verify that the correct number of LUNs was recognized by the kernel.
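
For example, the verification on the Linux host is a simple command; each discovered LUN appears as a Host/Channel/Id/Lun entry, and DS8000 LUNs report vendor IBM and model 2107900:

   # cat /proc/scsi/scsi

A missing entry after a gap in the LUN ID sequence indicates the discovery problem that is described above.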

Figure 12-61 Gaps in the LUN ID

If you want to modify the LUN ID of an FB volume that is already in the Volume Group, use Remove Volumes. Use Add Volumes to add the volumes back to the Volume Group, and then modify the LUN ID to the new LUN ID.

You can change the LUN ID field when you create the volume group, or add a volume to an existing volume group. You can edit the column under the LUN ID, as shown in Figure 12-62.

Figure 12-62 Update the LUN ID field

If the LUN ID column is not displayed in the window, you can enable it by right-clicking the menu bar. Select the box to the right of LUN ID. The LUN ID field is then shown (see Figure 12-63). You can edit this column.

Figure 12-63 Enable display of LUN ID column

If you enter a LUN ID that is already used in this volume group, an error message is shown (see Figure 12-64).

Figure 12-64 Error message for duplicated LUN ID

There are only 256 LUN IDs (0 - 255) in a scsimap256 volume group. If you enter a number that is larger than 255, you receive the error message that is shown in Figure 12-65.

Figure 12-65 Error message for number larger than 255

12.4.9 Creating LCUs and CKD volumes

In this section, we show how to create LCUs and CKD volumes. This process is necessary only for IBM System z.

Complete the following steps to create an LCU and CKD volume:

1. Point to Volumes and select CKD LCUs and Volumes. The CKD LCUs and Volumes window opens, as shown in Figure 12-66.

Figure 12-66 CKD LCUs and Volumes window

Important: The LCUs that you create must match the LCU definitions in each server IOCDS configuration. More precisely, each LCU ID number you select during this process must correspond to a CNTLUNIT definition in the IOCDS with the same CUADD number. It is vital that the two configurations match each other.

2. Select a storage image from the Select storage image drop-down menu if you have more than one image. The window is refreshed to show the LCUs in the storage image.

3. To create new LCUs, select Create new LCUs with volumes from the tasks list. The Create LCUs window opens, as shown in Figure 12-67.

Figure 12-67 Create LCUs window

4. Select the LCUs you want to create. You can select them from the list that is displayed on the left by clicking the number, or you can use the map. When you use the map, click the available LCU square. You must enter the following necessary parameters for the selected LCUs:

– Starting SSID: Enter a Subsystem ID (SSID) for the LCU. The SSID is a four-character hexadecimal number. If you create multiple LCUs at once, the SSID number is incremented by one for each LCU. The LCUs that are attached to the same SYSPLEX must have different SSIDs. Use unique SSID numbers across your whole environment.

– LCU type: Select the LCU type that you want to create. Select 3990 Mod 6, unless your operating system does not support Mod 6. The following options are available:

• 3990 Mod 3
• 3990 Mod 3 for TPF
• 3990 Mod 6

The following parameters affect the operation of certain Copy Services functions:

– Concurrent copy session timeout: The time in seconds that any logical device on this LCU in a concurrent copy session stays in a long busy state before a concurrent copy session is suspended.

– z/OS Global Mirror Session timeout: The time in seconds that any logical device in a z/OS Global Mirror session (XRC session) stays in long busy before the XRC session is suspended. The long busy occurs because the data mover has not offloaded data when the logical device (or XRC session) is no longer able to accept more data.

With recent enhancements to z/OS Global Mirror, there is now an option to suspend the z/OS Global Mirror session instead of presenting the long busy status to the applications.

– Consistency group timeout: The time in seconds that remote mirror and copy consistency group volumes on this LCU stay extended long busy after an error that causes a consistency group volume to suspend. While in the extended long busy state, I/O is prevented from updating the volume.

– Consistency group timeout enabled: Check the box to enable the Remote Mirror and Copy consistency group timeout option on the LCU.

– Critical mode enabled: Check the box to enable critical heavy mode. Critical heavy mode controls the behavior of the remote copy and mirror pairs that have a primary logical volume on this LCU.

– Optionally, specify a Resource Group other than PUBLIC if you want different groups of people to manage your copy services.

When all of the selections are made, click Next.

5. In the next window (as shown in Figure 12-68), you must configure your base volumes and, optionally, assign alias volumes. The parallel access volume (PAV) license function must be activated to use alias volumes.

Figure 12-68 Create Volumes window

Define the base volume characteristics in the first third of this window with the following information:

– Base type:
• 3380 Mod 2
• 3380 Mod 3
• 3390 Standard Mod 3
• 3390 Standard Mod 9
• 3390 Mod A (used for Extended Address Volumes - EAV)
• 3390 Custom

– Volume size: This field must be changed if you use the volume type 3390 Custom or 3390 Mod A.

– Size format: This format must be changed only if you want to enter a special number of cylinders. This format can also be used only by 3390 Custom or 3390 Mod A volume types.

– Volume quantity: Enter the number of volumes you want to create.

– Base start address: The starting address of volumes you are about to create. Specify a decimal number in the range of 0 - 255. This number defaults to the value that is specified in the Address Allocation Policy definition.

– Order: Select the address allocation order for the base volumes. The volume addresses are allocated sequentially, starting from the base start address in the selected order. If an address is already allocated, the next free address is used.

– Storage allocation method: This field is displayed only on systems that have the FlashCopy SE function activated. The options are:

• Standard: Allocate standard volumes.

• Track space efficient (TSE): Allocate Space Efficient volumes to be used as FlashCopy SE target volumes.

– Extent allocation method: Defines how volume extents are allocated on the ranks in the extent pool. This field is not applicable for TSE volumes. The following options are available:

• Rotate extents: The extents of a volume are allocated on all ranks in the extent pool in a round-robin fashion. This function is called Storage Pool Striping. This allocation method can improve performance because the volume is allocated on multiple ranks. It also helps to avoid hotspots by spreading the workload more evenly on the ranks. This method is the default allocation method.

• Rotate volumes: All extents of a volume are allocated on the rank that contains the most free extents. If the volume does not fit on any one rank, it can span multiple ranks in the extent pool.

Select Assign the alias volume to these base volumes if you use PAV or Hyper PAV. Provide the following information:

– Alias start address: From the pull-down list, select the starting address to use. The range of addresses that you select cannot overlap addresses used for base volumes for any LCU.

– Order: Select the address allocation order for the alias volumes. The volume addresses are allocated sequentially starting from the alias start address in the selected order.

– Evenly assign alias volumes among bases: When you select this option, you must enter the total number of aliases you want to assign to all the base volumes, so that each base volume has the same number of aliases.

– Assign aliases by using a ratio of aliases to base volume: You can assign alias volumes by using a ratio of alias volumes-to-base volumes. The first value specifies the number of aliases that will be assigned to the quantity of base volumes specified by the second value. The second value is normally 1, and the first number indicates the quantity of aliases that each base volume will get. The total number of base and alias addresses per LCU cannot exceed 256.

– All the addresses (00 - FF) that are used for base or alias volumes must also be configured in the server IOCDS device definitions so that they match the correct type: base or alias. Only the base volumes actually use space in the DS8000; an alias volume is merely an alternate address for a base volume.
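For example (hypothetical values), an LCU that contains 64 base volumes with aliases assigned at a ratio of 3 aliases to 1 base volume receives 192 alias addresses, for a total of 256 addresses in that LCU, which is the maximum.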

In the last section of this window, you can optionally assign nicknames to your volumes by using the following fields:

– Nickname prefix: If you select a nickname suffix of None, you must enter a nickname prefix in this field. Blank fields are not allowed. If you select a nickname suffix of Volume ID or Custom, you can leave this field blank.

– Nickname suffix: You can select None as described previously. If you select Volume ID, you must enter a four-character volume ID for the suffix. If you select Custom, you must enter a four-digit hexadecimal number or a five-digit decimal number for the suffix.

– Start: If you select Hexadecimal sequence, you must enter a number in this field.

Assign all aliases: You can assign all aliases in the LCU to just one base volume if you implemented HyperPAV or Dynamic alias management. With HyperPAV, the alias devices are not permanently assigned to any base volume, even though you initially assign each to a certain base volume. Rather, they are in a common pool and are assigned to base volumes as needed on a per I/O basis. With Dynamic alias management, WLM eventually moves the aliases from the initial base volume to other volumes as needed.

If your host system is using Static alias management, you must assign aliases to all base volumes on this window because the alias assignments that are made here are permanent. To change the assignments later, you must delete and re-create aliases.

Nickname: The nickname is not the System z VOLSER of the volume. The VOLSER is created later when the volume is initialized by the ICKDSF INIT command, although it could be the same as long as the nickname is 6 characters long.


Click OK to proceed. The Create Volumes window opens, as shown in Figure 12-69.

Figure 12-69 Create Volumes window

6. In the Create Volumes window (as shown in Figure 12-69), you can select the created volumes to modify or delete them. You also can create more volumes if necessary. Select Next if you do not need to create more volumes.

7. In the next window (as shown in Figure 12-70), you can change the extent pool assignment to your LCU. Select Finish if you do not want to make any changes.

Figure 12-70 LCU to Extent Pool Assignment window


8. The Create LCUs Verification window opens, as shown in Figure 12-71. You can see a list of all the volumes that are going to be created. If you want to add more volumes or modify the existing volumes, you can do so by selecting the appropriate action from the Action drop-down list. After you are satisfied with the specified values, click Create All to create the volumes.

Figure 12-71 Create LCUs Verification window

9. The Creating Volumes information window opens. Depending on the number of volumes, the process can take some time to complete. Optionally, click View Details to check the overall progress.

10. After the creation is complete, a final window is shown. You can click View details or Close. If you click Close, you return to the main CKD LCUs and Volumes window, where you see that the bar graph is changed.


12.4.10 Additional tasks on LCUs and CKD volumes

When you select Manage existing LCUs and Volumes (as shown in Figure 12-72), you can complete other tasks at the LCU or volume level.

As shown in Figure 12-72, the following options are available:

• Create: For information, see 12.4.9, “Creating LCUs and CKD volumes” on page 337.

• Clone LCU: For information, see 12.4.9, “Creating LCUs and CKD volumes” on page 337. All properties from the selected LCU are cloned here.

• Add Volumes: You can add base volumes to the selected LCU here. For more information, see 12.4.9, “Creating LCUs and CKD volumes” on page 337.

• Add Aliases: You can add alias volumes here without creating more base volumes.

• Properties: You can display additional properties here. You also can change some of the properties, such as the timeout value.

• Delete: You can delete the selected LCU here. This action must be confirmed because it also deletes all of the LCU volumes, which might contain data.

• Migrate: You can migrate volumes from one extent pool to another. For more information about migrating volumes, see IBM System Storage DS8000 Easy Tier, REDP-4667.

Figure 12-72 Manage LCUs and Volumes window


The next window (as shown in Figure 12-73) shows that you can take the following actions at the volume level after you select an LCU:

• Increase capacity: You can increase the size of a 3390-type volume. The capacity of a 3380 volume cannot be increased. After the operation completes, you can use ICKDSF to refresh the volume VTOC to reflect the additional cylinders.

• Add Aliases: You can define more aliases without creating base volumes.

• Properties: Here you can view the volume properties. The only value that you can change is the nickname. You can also see whether the volume is online from the DS8000 side.

• Delete: Here you can delete the selected volume. This action must be confirmed because it also deletes all alias volumes and the data on this volume. The volume must be offline to any host, or you must select the Force option.

• Migrate: You can migrate volumes from one extent pool to another. For more information about migrating volumes, see IBM System Storage DS8000 Easy Tier, REDP-4667.

Figure 12-73 Manage CKD Volumes

The Increase capacity action can be used to dynamically expand volume capacity without the need to bring the volume offline in z/OS. You do not need to create volumes as 3390 Mod A from the start because you can expand the capacity and change the device type of your existing 3390 Mod 3, 3390 Mod 9, and 3390 Custom volumes later. Keep in mind that 3390 Mod A volumes can be used only on z/OS V1.10 or later. After the capacity is increased on the DS8000, run ICKDSF to refresh the VTOC index so that the new volume size is fully recognized.

Important: The capacity of a volume cannot be decreased.

Important: After the volumes are initialized by using the ICKDSF INIT command, you also see the Volume Serial Numbers (VOLSERs) in this window. This action is not done in this example.


12.5 Other DS GUI functions

In this section, we describe other DS GUI functions.

12.5.1 Easy Tier

To enable Easy Tier, go to System Status, highlight the DS8000 storage image, and then select Action → Storage Image → Properties. In the Advanced tab, it is possible to enable Easy Tier, as shown in Figure 12-74.

Figure 12-74 Storage Image Properties: Set up for EasyTier and IO Priority Manager

Easy Tier Auto Mode manages the Easy Tier Automatic Mode behavior. You can select the following options:

• All Pools: Automatically manage all single and multitier pools.
• Tiered Pools: Automatically manage multitier pools only.
• No Pools: No volume is managed.


To retrieve information about the performance of Easy Tier, go to System Status, select the DS8000, and then select Action → Storage Image → Export Easy Tier Summary Report, as shown in Figure 12-75.

Figure 12-75 Easy Tier Summary Report

The browser asks whether you want to view or save the .zip file that contains the report. Save the file to a location of your choice; it can then be analyzed by using the Storage Tier Advisor Tool (STAT) that corresponds to the installed machine code level. For more information about the use of this tool, see the Redpaper publication IBM DS8000 Easy Tier, REDP-4667.

12.5.2 I/O Priority Manager

To enable I/O Priority Manager, go to System Status, select the DS8000, and then select Action → Storage Image → Properties. In the Advanced tab, click Manage (as shown in Figure 12-74 on page 346). The I/O Priority Manager license must be activated to use I/O Priority Management.

By enabling SNMP traps, hosts can be informed when a rank is approaching saturation so that, if management is allowed, the condition can be handled by the host directly. The priority must be selected manually for each volume: select Volumes, select the volume type (FB or CKD), and click Manage existing volumes. Select the volume and click Properties. Select Performance Group by right-clicking and choosing the appropriate performance group for the selected volume (see Figure 12-76 on page 348).


Figure 12-76 shows the I/O Priority Manager Performance Group selection.

Figure 12-76 I/O Priority Manager Performance Group selection

To retrieve information about the performance of I/O Priority Manager, go to System Status. Select the DS8000, and then click Action → Storage Image → I/O Priority Manager, as shown in Figure 12-77.

Figure 12-77 I/O Priority Manager reports


12.5.3 Checking the status of the DS8000

Complete the following steps to display and explore the overall status of your DS8000 system:

1. In the navigation pane in the DS GUI, hover over Home and select System Status. The System Status window opens.

2. Select your storage complex and, from the Action drop-down menu, click Storage Unit System Summary, as shown in Figure 12-78.

Figure 12-78 Select Storage Unit System Summary view

3. The new Storage Complex window provides general DS8000 system information. As shown in Figure 12-79 on page 350, the window is divided into the following sections:

a. System Summary: You can quickly identify the percentage of capacity that is used, and the available and used capacity for open systems and System z. In addition, you can check the system state and obtain more information by clicking the state link.

b. Management Console information.

c. Performance: Provides performance graphs for host MBps, host KIOps, rank MBps, and rank KIOps. This information is updated every 60 seconds.

d. Racks: Represents the physical configuration.

For more information: For more information about I/O Priority Manager and the different options that are available, see 7.6, “I/O Priority Manager” on page 189.


Figure 12-79 System Summary overview

4. In the Rack section, the number of racks that is shown matches the racks that are physically installed in the storage unit. If you hover over the rack, more rack information is displayed, such as the rack number, the number of DDMs, and the number of host adapters, as shown in Figure 12-80.

Figure 12-80 System Summary: rack information

12.5.4 Preview of the new DS8000 GUI

A new GUI is being introduced to monitor a DS8000. The following are some highlights of the new GUI.

Besides the storage management function icons, the main window now displays three bars and an event message in the lower-right corner:

1. The capacity utilization
2. The aggregate I/O throughput MBps and input/output operations per second (IOPS)
3. The Status bar
4. Event status

The icons on the left represent:

• System Monitoring
• Storage Pool Management
• Volume Management
• User Access
• Settings

To access the preview GUI, enter the following URL in a supported browser:

https://<HMC-IP>:8452/preview

The login window appears. Use the same user name and password that are used for the standard DS GUI.

Figure 12-81 shows the initial system display after you log in.

Figure 12-81 DS8000 main display

Currently, the only function that is available is Monitoring, which offers the System and Events views.

An example of the Events option is displayed in Figure 12-82 on page 352.


Figure 12-82 DS8000 Event Monitoring window

The rest of the storage management functions are under the Pools, Volumes, and Hosts icons, followed by the User Access and Settings icons. At the time of this writing, they are not available for selection.


Chapter 13. Configuration with the DS command-line interface

In this chapter, we describe how to configure storage on the IBM DS8870 by using the DS command-line interface (DS CLI). This chapter covers the following topics:

• DS command-line interface overview
• Configuring the I/O ports
• Configuring the DS8000 storage for fixed block volumes
• Configuring DS8000 storage for CKD volumes
• Metrics with DS CLI

For more information about Copy Services configuration, see the following publications:

• IBM DS8000 Version 7 Release 2 Command-Line Interface User's Guide, GC27-4212-02
• IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
• IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

For more information about DS CLI commands that are related to disk encryption, see IBM DS8870 Disk Encryption, REDP-4500.

For more information about DS CLI commands that are related to Lightweight Directory Access Protocol (LDAP) authentication, see the Redpaper publication IBM System Storage DS8000: LDAP Authentication, REDP-4505.

For more information about DS CLI commands that are related to Resource Groups, see IBM System Storage DS8000 Copy Services Scope Management and Resource Groups, REDP-4758.

For more information about DS CLI commands that are related to Performance I/O Priority Manager, see DS8000 I/O Priority Manager, REDP-4760.

For more information about DS CLI commands that are related to Easy Tier, see the following publications:

• IBM DS8000 Easy Tier, REDP-4667
• IBM DS8000 Easy Tier Server, REDP-5013
• IBM DS8000 Easy Tier Application, REDP-5014


13.1 DS command-line interface overview

The data storage command-line interface (DS CLI) provides a full-function command set with which you can check your storage unit configuration and perform specific application functions. For more information about DS CLI use and setup, see IBM DS8000 Version 7 Release 2 Command-Line Interface User's Guide, GC27-4212-02.

The following list highlights a few of the functions that you can perform with the DS CLI:

• Create user IDs that can be used with the graphical user interface (GUI) and the DS CLI.
• Manage user ID passwords.
• Install activation keys for licensed features.
• Manage storage complexes and units.
• Configure and manage Storage Facility Images.
• Create and delete Redundant Array of Independent Disks (RAID) arrays, ranks, and extent pools.
• Create and delete logical volumes.
• Manage host access to volumes.
• Check the current Copy Services configuration that is used by the Storage Unit.
• Create, modify, or delete Copy Services configuration settings.
• Integrate LDAP policy usage and configuration.
• Implement encryption functionality.

13.1.1 Supported operating systems for the DS CLI

The DS CLI can be installed on many operating system platforms, including AIX, HP-UX, Red Hat Linux, SUSE Linux, IBM i, Oracle Solaris, HP OpenVMS, VMware ESX, and Microsoft Windows.

Before you can install the DS CLI, make sure that you have Java version 1.4.2 or later installed. Many hosts might already have a suitable level of Java installed. The installation program checks for this requirement during the installation process and does not install the DS CLI if you do not have the suitable version of Java.
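To verify the Java level that is installed on a host before you install the DS CLI, you can, for example, run the following command from a shell or command prompt:

java -version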

Single installation: In almost all cases, you can use a single installation of the latest version of the DS CLI for all of your system needs. However, it is not possible to test every version of DS CLI with every licensed machine code (LMC) level, so an occasional problem might occur despite every effort to maintain that level of compatibility. If you suspect a version incompatibility problem, install the DS CLI version that corresponds to the LMC level that is installed on your system. You can have more than one version of DS CLI installed on your system, each in its own directory.

Important: For the most recent information about currently supported operating systems, specific pre-installation concerns, and installation file locations, see the IBM System Storage DS8000 Information Center at this website:

http://publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp

Refer to the section “Command-line interface”.


The installation process can be performed through a shell, such as the bash or Korn shell, or the Windows command prompt, or through a GUI. If installed by using a shell, it can be done silently by using a profile file. The installation process also installs software that allows the DS CLI to be uninstalled should it no longer be required.

13.1.2 User accounts

DS CLI communicates with the DS8000 system through the Hardware Management Console (HMC). The primary or secondary HMC console can be used. DS CLI access is authenticated by using ESSNI (DSNI) on the HMC. The same user IDs are used for DS CLI and DS GUI access. For more information about user accounts, see 9.5, “HMC user management” on page 252. The default user ID is admin and the password is admin. The system forces you to change the password at the first login. In the event you forget the admin password, a reset can be performed that will reset the admin password to the default value.

13.1.3 User management by using the DS CLI

Apart from the administrator user, you might want to define other users, possibly with different rights.

The following commands are used to manage user IDs by using the DS CLI:

• mkuser

A user account that can be used with DS CLI and the DS GUI is created by using this command. In Example 13-1, we create a user called JohnDoe, which is in the op_storage group. The temporary password of the user is passw0rd. The user must use the chpass command when they log in for the first time.

Example 13-1 Using the mkuser command to create a user

dscli> mkuser -pw passw0rd -group op_storage JohnDoe
CMUC00133I mkuser: User JohnDoe successfully created.

• rmuser

An existing user ID is removed by using this command. In Example 13-2, we remove a user called JaneSmith.

Example 13-2 Removing a user

dscli> rmuser JaneSmith
CMUC00135W rmuser: Are you sure you want to delete user JaneSmith? [y/n]:y
CMUC00136I rmuser: User JaneSmith successfully deleted.

• chuser

By using this command, you can change the password or group (or both) of an existing user ID. It also can be used to unlock a user ID that was locked by exceeding the allowable login retry count. The administrator could also use this command to lock a user ID. In Example 13-3, we unlock the user, change the password, and change the group membership for a user called JohnDoe. The user must use the chpass command the next time they log in.

Example 13-3 Changing a user with chuser

dscli> chuser -unlock -pw time2change -group op_storage JohnDoe
CMUC00134I chuser: User JohnDoe successfully modified.


• lsuser

By using this command, a list of all user IDs can be generated. In Example 13-4, we can see three users and the administrator account.

Example 13-4 Using the lsuser command to list users

dscli> lsuser
Name      Group       State
===============================================
JohnDoe   op_storage  active
secadmin  admin       active
admin     admin       active

• showuser

The account details of a user ID can be displayed by using this command. In Example 13-5, we list the details of the user JohnDoe.

Example 13-5 Using the showuser command to list user information

dscli> showuser JohnDoe
Name          JohnDoe
Group         op_storage
State         active
FailedLogin   0
DaysToExpire  365
Scope         PUBLIC

• managepwfile

An encrypted password file that is placed onto the local machine is created or added by using this command. This file can be referred to in a DS CLI profile. You can run scripts without specifying a DS CLI user password in clear text. If you are manually starting DS CLI, you also can refer to a password file with the -pwfile parameter. By default, the file is in the following directories:

Windows C:\Users\<User>\dscli\security.dat

Non Windows $HOME/dscli/security.dat

In Example 13-6, we manage our password file by adding the user ID JohnDoe. The password is now saved in an encrypted file that is called security.dat.

Example 13-6 Using the managepwfile command

dscli> managepwfile -action add -name JohnDoe -pw passw0rd
CMUC00206I managepwfile: Record 10.0.0.1/JohnDoe successfully added to password file C:\Users\Administrator\dscli\security.dat.

• chpass

By using this command, you can change two password policies: password expiration (days) and failed logins allowed. In Example 13-7 on page 357, we change the expiration to 365 days and allow five failed login attempts.

Note: On Windows versions before Windows 7, the default directory for the security.dat file is:

C:\Documents and Settings\<User>\DSCLI\


Example 13-7 Changing rules by using the chpass command

dscli> chpass -expire 365 -fail 5
CMUC00195I chpass: Security properties successfully set.

• showpass

The properties for passwords (Password Expiration days and Failed Logins Allowed) are listed by using this command. In Example 13-8, we can see that passwords are set to expire in 365 days and that five failed login attempts are allowed before a user ID is locked.

Example 13-8 Using the showpass command

dscli> showpass
Password Expiration    365 days
Failed Logins Allowed  5
Password Age           0 days
Minimum Length         6
Password History       4

13.1.4 DS CLI profile

To access a DS8000 system with the DS CLI, you must provide certain information by using the dscli command. At a minimum, the IP address or host name of the DS8000 HMC, a user name, and a password are required. You can also provide other information, such as the output format for list commands, the number of rows per page in the command-line output, and whether a banner is included with the command-line output.

If you create one or more profiles to contain your preferred settings, you do not have to specify this information each time you use DS CLI. When you start DS CLI, you must specify only a profile name by using the dscli command. You can override the values of the profile by specifying a different parameter value by using the dscli command.

When you install the command-line interface software, a default profile is installed in the profile directory with the software. The file name is dscli.profile; for example, c:\Program Files\IBM\dscli\profile\dscli.profile for the pre-Windows 7 platform, c:\Program Files (x86)\IBM\dscli\profile\dscli.profile for Windows 7 (and later), and /opt/ibm/dscli/profile/dscli.profile for UNIX and Linux platforms.

You have the following options for using profile files:

• You can modify the system default profile: dscli.profile.

• You can create a personal default profile by copying the system default profile as <user_home>/dscli/profile/dscli.profile. The default home directory <user_home> is designated in the following directories:

  – Windows system: %USERPROFILE%, usually C:\Users\Administrator
  – UNIX/Linux system: $HOME

• You can create specific profiles for different Storage Units and operations. Save the profile in the user profile directory. For example:

  – %USERPROFILE%\IBM\DSCLI\profile\operation_name1
  – %USERPROFILE%\IBM\DSCLI\profile\operation_name2


These profile files can be specified by using the DS CLI command parameter -cfg <profile_name>. If the -cfg file is not specified, the default profile of the user is used. If a profile of a user does not exist, the system default profile is used.

Profile change illustration

Complete the following steps to edit the profile:

(This sequence assumes your %userprofile% is C:\Users\Administrator)

1. Use Windows Explorer to copy the profile folder from C:\Program Files (x86)\IBM\dscli to C:\Users\Administrator\dscli

2. From the Windows desktop, double-click the DS CLI icon.

3. In the command window that opens, enter the following command:

cd C:\Users\Administrator\dscli\

4. In the profile directory, enter the notepad dscli.profile command, as shown in Example 13-9.

Example 13-9 Command prompt operation

C:\Users\Administrator\dscli>cd profile
C:\Users\Administrator\dscli\profile>notepad dscli.profile

5. Notepad opens and displays the DS CLI profile. There are four lines that you can consider adding. Examples of these lines are shown in bold in Example 13-10.

Example 13-10 DS CLI profile example

# DS CLI Profile
#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
#hmc1:127.0.0.1
#hmc2:127.0.0.1

# Default target Storage Image ID
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storage_image_ID" command options, respectively.
#devid: IBM.2107-AZ12341
#remotedevid:IBM.2107-AZ12341

devid: IBM.2107-75ABCD1
hmc1: 10.0.0.250
username: admin
password: passw0rd

Default profile file: The default profile file that you created when you installed the DS CLI might be replaced every time that you install a new version of the DS CLI. It is a good practice to open the default profile and then save it as a new file. You can then create multiple profiles and reference the relevant profile file by using the -cfg parameter. Example to use a different profile when starting dscli:

dscli -cfg newprofile.profile (or whatever name you gave to the new profile)

Two default profiles: If there are two default profiles called dscli.profile, one in the default system’s directory and one in your personal directory, your personal profile is loaded.

Default newline delimiter: The default newline delimiter is a UNIX delimiter, which can render text in notepad as one long line. Use a text editor that correctly interprets UNIX line endings.

Adding the serial number by using the devid parameter, and the HMC IP address by using the hmc1 parameter, is suggested. Not only does this addition help you to avoid mistakes when you are using more profiles, but you do not need to specify this parameter for certain dscli commands that require it. Additionally, if you specify dscli profile for copy services usage, the use of the remotedevid parameter is suggested for the same reasons. To determine the ID of a storage system, use the lssi CLI command.

Although adding the user name and password parameters simplifies the DS CLI startup, it is not suggested that you add them because they are an undocumented feature that might not be supported in the future. Also, the password is saved in clear text in the profile file.

Instead, it is better to create an encrypted password file with the managepwfile CLI command. A password file that is generated by using the managepwfile command is stored in user_home_directory/dscli/profile/security/security.dat.

The following customization parameters also affect dscli output:

– banner: Date and time with the dscli version is printed for each command.
– header: Column names are printed.
– format: The output format (specified as default, xml, delim, or stanza).
– paging: For interactive mode, this parameter breaks output after a certain number of rows (24 by default).
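For illustration, and assuming the same key:value syntax that the profile uses for its other entries, the output-related entries might look as follows (the values shown are only examples):

banner: off
header: on
format: default
paging: off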

6. After you save your changes, use Windows Explorer to copy the updated profile from C:\Users\Administrator\dscli\profile to C:\Program Files (x86)\IBM\dscli\profile.

13.1.5 Configuring DS CLI to use a second HMC

The second HMC can be specified on the command line or in the profile file that is used by the DS CLI. To specify the second HMC in a command, use the -hmc2 parameter, as shown in Example 13-11.

Example 13-11 Using the -hmc2 parameter

C:\Program Files (x86)\IBM\dscli>dscli -hmc1 10.0.0.1 -hmc2 10.0.0.5
Enter your username: JohnDoe
Enter your password: xxxxx
IBM.2107-75ZA571
dscli>

Important: Use care if you are adding multiple devid and HMC entries. Only one entry should be uncommented (or more literally, unhashed) at any one time. If you have multiple hmc1 or devid entries, the DS CLI uses the entry that is closest to the bottom of the profile.


Alternatively, you can modify the following lines in the dscli.profile (or any profile) file:

# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:10.0.0.1
hmc2:10.0.0.5

After these changes are made and the profile is saved, the DS CLI automatically communicates through HMC2 if HMC1 becomes unreachable. By using this change, you can perform configuration and Copy Services commands with full redundancy.

13.1.6 Command structure

Here we describe the components and structure of a command-line interface command.

A command-line interface command consists of one to four types of components that are arranged in the following order:

1. The command name: Specifies the task that the command-line interface is to perform.

2. Flags: Modifies the command. They provide more information that directs the command-line interface to perform the command task in a specific way.

3. Flags parameter: Provides information that is required to implement the command modification that is specified by a flag.

4. Command parameters: Provides basic information that is necessary to perform the command task. When a command parameter is required, it is always the last component of the command, and it is not preceded by a flag.
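For example, in the chuser command that is used in Example 13-3, the components break down as follows:

chuser -unlock -pw time2change -group op_storage JohnDoe

Here, chuser is the command name; -unlock is a flag that takes no parameter; -pw and -group are flags with the flag parameters time2change and op_storage; and JohnDoe is the command parameter (the user ID).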

13.1.7 Using the DS CLI application

To issue commands to the DS8000, you must first log in to the DS8000 through the DS CLI by using one of the following command modes of execution:

• Single-shot command mode
• Interactive command mode
• Script command mode

Single-shot command mode

Use the DS CLI single-shot command mode if you want to issue an occasional command from the OS shell prompt where you need special handling, such as redirecting the DS CLI output to a file. You also use this mode if you are embedding the command into an OS shell script.

You must supply the login information and the command that you want to process at the same time. Complete the following steps to use the single-shot mode:

1. At the OS shell prompt, enter the following command:

dscli -hmc1 <hostname or ip address> -user <adm user> -passwd <pwd> <command>

or

dscli -cfg <dscli profile> -pwfile <security file> <command>

Two HMCs: If you have two HMCs and you specify only one of them in a DS CLI command (or profile), any changes that you make to users are still replicated onto the other HMC.


2. Wait for the command to process and display the results.

Example 13-12 shows the use of the single-shot command mode.

Example 13-12 Single-shot command mode

C:\Program Files (x86)\IBM\dscli>dscli -hmc1 10.10.10.1 -user admin -passwd <pwd> lsuser
Name       Group             State
===============================================
AlphaAdmin admin             locked
AlphaOper  op_copy_services  active
BetaOper   op_copy_services  active
admin      admin             active
[ exit status of dscli = 0 ]

Interactive command mode

Use the DS CLI interactive command mode when you want to issue a few infrequent commands without having to log on to the DS8000 for each command.

The interactive command mode provides a history function that makes repeating or checking prior command usage easy to do.

Complete the following steps to use the interactive command mode:

1. Log on to the DS CLI application at the directory where it is installed.

2. Provide the information that is requested by the information prompts. The information prompts might not appear if you provided this information in your profile file. The command prompt switches to a dscli command prompt.

3. Use the DS CLI commands and parameters. You are not required to begin each command with dscli because this prefix is provided by the dscli command prompt.

4. Use the quit or exit command to end interactive mode.

Important: It is not recommended to embed the user name and password into the profile. The -pwfile command should be used.

Important: When you are typing the command, you can use the host name or the IP address of the HMC. It is important to understand that when a command is executed in single shot mode, the user must be authenticated. The authentication process can take a considerable amount of time.

Interactive mode: In interactive mode for long outputs, the message Press Enter To Continue... appears. The number of rows can be specified in the profile file. Optionally, you can turn off the paging feature in the profile file by using the paging:off parameter.


Example 13-13 shows the use of interactive command mode using profile ds8870.profile.

Example 13-13 Interactive command mode

C:\Program Files (x86)\IBM\dscli>dscli -cfg ds8870.profile
Date/Time: 2013-12-05T14:05:50+0100 IBM DSCLI Version: 7.7.30.98 DS: IBM.2107-75ZA571

dscli> lsarraysite -l
Date/Time: 2013-12-05T14:05:56+0100 IBM DSCLI Version: 7.7.30.98 DS: IBM.2107-75ZA571
arsite DA Pair dkcap (10^9B) diskrpm State    Array diskclass encrypt
=======================================================================
S1     0       900.0         10000   Assigned A17   ENT       supported
S2     0       900.0         10000   Assigned A18   ENT       supported
S3     0       900.0         10000   Assigned A19   ENT       supported
S4     0       900.0         10000   Assigned A20   ENT       supported
S5     0       900.0         10000   Assigned A21   ENT       supported
S6     0       900.0         10000   Assigned A22   ENT       supported
S7     1       300.0         15000   Assigned A5    ENT       supported
S8     1       300.0         15000   Assigned A6    ENT       supported
S9     1       300.0         15000   Assigned A7    ENT       supported
S10    1       300.0         15000   Assigned A8    ENT       supported
S11    1       300.0         15000   Assigned A9    ENT       supported
S12    1       300.0         15000   Assigned A10   ENT       supported
S13    2       3000.0        7200    Assigned A2    NL        supported
S14    2       3000.0        7200    Assigned A3    NL        supported
S15    2       3000.0        7200    Assigned A4    NL        supported
S16    2       400.0         65000   Assigned A0    SSD       supported
S17    2       400.0         65000   Assigned A1    SSD       supported
S18    3       300.0         15000   Assigned A15   ENT       supported
S19    3       300.0         15000   Assigned A13   ENT       supported
S20    3       300.0         15000   Assigned A12   ENT       supported
S21    3       300.0         15000   Assigned A14   ENT       supported
S22    3       300.0         15000   Assigned A11   ENT       supported
S23    3       300.0         15000   Assigned A16   ENT       supported
S24    10      400.0         65000   Assigned A23   Flash     supported
S25    10      400.0         65000   Assigned A24   Flash     supported
S26    10      400.0         65000   Assigned A25   Flash     supported
S27    10      400.0         65000   Assigned A26   Flash     supported
dscli> lssi
Date/Time: 2013-12-05T14:07:22+0100 IBM DSCLI Version: 7.7.30.98 DS: -
Name         ID               Storage Unit     Model WWNN             State  ESSNet
====================================================================================
DS8870_ATS02 IBM.2107-75ZA571 IBM.2107-75ZA570 961   5005076303FFD5AA Online Enabled
dscli>

Script command mode

Use the DS CLI script command mode if you want to use a sequence of DS CLI commands. If you want to run a script that contains only DS CLI commands, you can start DS CLI in script mode. The script that DS CLI executes can contain only DS CLI commands.


Example 13-14 shows the contents of a DS CLI script file. The file contains only DS CLI commands, although comments can be placed in the file by using a hash symbol (#). Empty lines are also allowed. One advantage of using this method is that scripts that are written in this format can be used by the DS CLI on any operating system on which the DS CLI can be installed.

Example 13-14 Example of a DS CLI script file

# Sample ds cli script file
# Comments can appear if hashed
lsarraysite -l
lsarray -l
lsrank -l

For script command mode, you can turn off the banner and header for easier output parsing. Also, you can specify an output format that might be easier to parse by your script.

In Example 13-15, we start the DS CLI by using the -script parameter and specifying a profile and the name of the script that contains the commands from Example 13-14.

Example 13-15 Executing DS CLI file

C:\Program Files (x86)\IBM\dscli>dscli -cfg ds8870.profile -script c:\ds8000.script
Date/Time: 2013-12-05T14:22:43+0100 IBM DSCLI Version: 7.7.30.98 DS: IBM.2107-75ZA571
arsite DA Pair dkcap (10^9B) diskrpm State    Array diskclass encrypt
=======================================================================
S13    2       3000.0        7200    Assigned A2    NL        supported
S14    2       3000.0        7200    Assigned A3    NL        supported
S15    2       3000.0        7200    Assigned A4    NL        supported
S16    2       400.0         65000   Assigned A0    SSD       supported
S17    2       400.0         65000   Assigned A1    SSD       supported
S24    10      400.0         65000   Assigned A23   Flash     supported
S25    10      400.0         65000   Assigned A24   Flash     supported
S26    10      400.0         65000   Assigned A25   Flash     supported
S27    10      400.0         65000   Assigned A26   Flash     supported
Date/Time: 2013-12-05T14:22:45+0100 IBM DSCLI Version: 7.7.30.98 DS: IBM.2107-75ZA571
Array State    Data   RAIDtype    arsite Rank DA Pair DDMcap (10^9B) diskclass encrypt
========================================================================================
A0    Assigned Normal 5 (6+P+S)   S16    R0   2       400.0          SSD       supported
A1    Assigned Normal 5 (6+P+S)   S17    R34  2       400.0          SSD       supported
A2    Assigned Normal 6 (5+P+Q+S) S13    R30  2       3000.0         NL        supported
A3    Assigned Normal 6 (5+P+Q+S) S14    R3   2       3000.0         NL        supported
A4    Assigned Normal 6 (5+P+Q+S) S15    R4   2       3000.0         NL        supported
A23   Assigned Normal 5 (6+P+S)   S24    R5   10      400.0          Flash     supported
A24   Assigned Normal 5 (6+P+S)   S25    R29  10      400.0          Flash     supported
A25   Assigned Normal 5 (6+P)     S26    R27  10      400.0          Flash     supported
A26   Assigned Normal 5 (6+P)     S27    R14  10      400.0          Flash     supported
Date/Time: 2013-12-05T14:22:46+0100 IBM DSCLI Version: 7.7.30.98 DS: IBM.2107-75ZA571
ID  Group State         datastate Array RAIDtype extpoolID extpoolnam  stgtype exts  usedexts encryptgrp
=======================================================================================================
R0  1     Normal        Normal    A0    5        P5        ET_2Tier_P5 fb      2121  1911     -
R3  1     Normal        Normal    A3    6        P5        ET_2Tier_P5 fb      12890 4782     -
R4  0     Normal        Normal    A4    6        P6        VAAIslow    fb      12890 1825     -
R5  1     Normal        Normal    A23   5        P11       fb_flash_1  fb      2122  142      -
R14 0     Normal        Normal    A26   5        P2        fb_flash_0  fb      2122  652      -
R27 0     Normal        Normal    A25   5        P4        BruceP4     fb      2122  2085     -
R29 0     Depopulating  Normal    A24   5        P2        fb_flash_0  fb      2122  626      -
R30 0     Normal        Normal    A2    6        P12       sotest      fb      12890 8601     -
R34 -     Unassigned    Normal    A1    5        -         -           ckd     2376  -        -


13.1.8 Return codes

When the DS CLI exits, an exit status code is provided. This result is effectively a return code. If DS CLI commands are issued as separate commands (rather than by using script mode), a return code is presented for every command. If a DS CLI command fails (for example, because of a syntax error or the use of an incorrect password), a failure reason and a return code are shown. Standard techniques to collect and analyze return codes can be used, as shown in the example that follows.

The return codes that are used by the DS CLI are listed in Table 13-1.
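The following minimal sketch shows one way to act on the return code from a calling script. It assumes a UNIX shell; the profile name, password file path, and volume ID are only illustrative:

#!/bin/sh
# Run a single-shot DS CLI command and check its exit status
dscli -cfg ds8870.profile -pwfile $HOME/dscli/security.dat lsfbvol 1000
rc=$?
if [ $rc -ne 0 ]; then
    echo "DS CLI command failed with return code $rc" >&2
    exit $rc
fi
echo "DS CLI command completed successfully"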

Table 13-1   DS CLI exit codes

Return code  Category              Description
0            Success               The command was successfully processed.
2            Syntax error          There was a syntax error in the command.
3            Connection error      There was a connectivity or protocol error.
4            Server error          An error occurred during a function call to the application server.
5            Authentication error  The password or user ID details were incorrect.
6            Application error     An error occurred because of a MetaProvider client application-specific process.
63           Configuration error   The CLI.cfg file was not found or is inaccessible.
64           Configuration error   The javaInstall variable was not provided in the CLI.cfg file.
65           Configuration error   The javaClasspath variable was not provided in the CLI.cfg file.
66           Configuration error   The format of the configuration file was incorrect.

Important: The DS CLI script can contain only DS CLI commands. The use of shell commands results in process failure. You can add comments in the scripts that are prefixed by the hash symbol (#). The hash symbol must be the first non-blank character on the line.

Only one authentication process is needed to execute all of the script commands.

13.1.9 User assistance

The DS CLI is designed to include several forms of user assistance. The main form of user assistance is through the IBM DS8000 Information Center, which is available at this website:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp

Look under the Command-line interface tab. User assistance can also be found when using the DS CLI program through the help command. The following examples of usage are included:

• help lists all the available DS CLI commands.
• help -s lists all the DS CLI commands with brief descriptions of each.
• help -l lists all the DS CLI commands with their syntax information.

To obtain information about a specific DS CLI command, enter the command name as a parameter of the help command. The following examples of usage are included:

• help <command name> gives a detailed description of the specified command.
• help -s <command name> gives a brief description of the specified command.
• help -l <command name> gives syntax information about the specified command.

Example 13-16 shows the output of the help command.

Example 13-16 Displaying a list of all commands in DS CLI by using the help command

dscli> helpapplydbcheck lshostconnect mkipsec sendssapplykey lshosttype mkipseccert setauthpolchaccess lshostvol mkkeygrp setcontactinfochauthpol lsioport mkkeymgr setdbcheckchckdvol lsipsec mklcu setdialhomechextpool lsipseccert mkpe setenvchfbvol lskey mkpprc setflashrevertiblechhostconnect lskeygrp mkpprcpath setioportchipsec lskeymgr mkrank setipsecchkeymgr lslcu mkreckey setnetworkportchlcu lslss mkremoteflash setoutputchlss lsnetworkport mkresgrp setplexchpass lspe mksession setremoteflashrevertiblechrank lsperfgrp mksestg setrmpwchresgrp lsperfgrprpt mkuser setsimchsession lsperfrescrpt mkvolgrp setsmtpchsestg lsportprof offloadauditlog setsnmpchsi lspprc offloaddbcheck setvpnchsp lspprcpath offloadfile showaccesschsu lsproblem offloadss showarraychuser lsrank pausegmir showarraysitechvolgrp lsremoteflash pausepprc showauthpolclearvol lsresgrp quit showckdvolcloseproblem lsserver resumegmir showcontactinfocommitflash lssession resumepprc showenvcommitremoteflash lssestg resyncflash showextpoolcpauthpol lssi resyncremoteflash showfbvoldiagsi lsss reverseflash showgmirdscli lsstgencl revertflash showgmircgecho lssu revertremoteflash showgmiroosexit lsuser rmarray showhostconnectfailbackpprc lsvolgrp rmauthpol showioportfailoverpprc lsvolinit rmckdvol showkeygrpfreezepprc lsvpn rmextpool showkeymgrhelp manageaccess rmfbvol showlcuhelpmsg manageckdvol rmflash showlssinitckdvol managedbcheck rmgmir shownetworkportinitfbvol managefbvol rmhostconnect showpasslsaccess managehostconnect rmipsec showplexlsaddressgrp managekeygrp rmipseccert showranklsarray managekeymgr rmkeygrp showresgrplsarraysite managepwfile rmkeymgr showsestglsauthpol managereckey rmlcu showsilsavailpprcport manageresgrp rmpprc showsplsckdvol mkaliasvol rmpprcpath showsulsda mkarray rmrank showuserlsdbcheck mkauthpol rmreckey showvolgrplsddm mkckdvol rmremoteflash testauthpollsextpool mkesconpprcpath rmresgrp testcallhomelsfbvol mkextpool rmsession unfreezeflashlsflash mkfbvol rmsestg unfreezepprclsframe mkflash rmuser ver


lsgmir mkgmir rmvolgrp wholshba mkhostconnect sendpe whoami

Man pages

A man page is available for every DS CLI command. Man pages are most commonly seen in UNIX-based operating systems and give information about command capabilities. This information can be displayed by issuing the relevant command followed by the -h, -help, or -? flags.

13.2 Configuring the I/O ports

Set the I/O ports to the required topology. In Example 13-17, we list the I/O ports by using the lsioport command. Note that I0000 - I0003 are on one card, whereas I0100 - I0103 are on another card.

Example 13-17 Listing the I/O ports

dscli> lsioport -dev IBM.2107-7503461
ID    WWPN             State  Type             topo     portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW SCSI-FCP 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON    0
I0101 500507630308408F Online Fibre Channel-LW SCSI-FCP 0
I0102 500507630308808F Online Fibre Channel-LW FICON    0
I0103 500507630308C08F Online Fibre Channel-LW FICON    0

The following possible topologies for each I/O port are available:

• SCSI-FCP: Fibre Channel-switched fabric (also called switched point-to-point). This port type is also used for mirroring.

• FC-AL: Fibre Channel-arbitrated loop (for direct attachment without a SAN switch).

• FICON: FICON (for System z hosts only).

In Example 13-18, we set two I/O ports to the FICON topology and then check the results.

Example 13-18 Changing topology by using setioport

dscli> setioport -topology ficon I0001
CMUC00011I setioport: I/O Port I0001 successfully configured.
dscli> setioport -topology ficon I0101
CMUC00011I setioport: I/O Port I0101 successfully configured.
dscli> lsioport
ID    WWPN             State  Type             topo     portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW FICON    0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON    0
I0101 500507630308408F Online Fibre Channel-LW FICON    0
I0102 500507630308808F Online Fibre Channel-LW FICON    0
I0103 500507630308C08F Online Fibre Channel-LW FICON    0

To monitor the status for each I/O port, see 13.5, “Metrics with DS CLI” on page 389.

13.3 Configuring the DS8000 storage for fixed block volumes

In this section, we review examples of a typical DS8000 storage configuration when it is attached to open systems hosts. We perform the DS8000 storage configuration by completing the following steps:

1. Create arrays.
2. Create ranks.
3. Create extent pools.
4. Optionally, create repositories for track space-efficient volumes (not included).
5. Create volumes.
6. Create volume groups.
7. Create host connections.

13.3.1 Creating arrays

In this step, we create the arrays. Before the arrays are created, it is a best practice to list the array sites. Use the lsarraysite command to list the array sites, as shown in Example 13-19. Array sites are groups of eight disks that are predefined in the DS8000.

Example 13-19 Listing array sites

dscli> lsarraysite
arsite DA Pair dkcap (10^9B) State      Array
=============================================
S1     0       146.0         Unassigned -
S2     0       146.0         Unassigned -
S3     0       146.0         Unassigned -
S4     0       146.0         Unassigned -

In Example 13-19, you can see that there are four array sites and that we can therefore create four arrays.

We can now issue the mkarray command to create arrays, as shown in Example 13-20. In this case, we used one array site (in the first array, S1) to create a single RAID 5 array. If we want to create a RAID 10 array, we change the -raidtype parameter to 10. If we want to create a RAID 6 array, we change the -raidtype parameter to 6 (instead of 5).

Example 13-20 Creating arrays with mkarray

dscli> mkarray -raidtype 5 -arsite S1
CMUC00004I mkarray: Array A0 successfully created.
dscli> mkarray -raidtype 5 -arsite S2
CMUC00004I mkarray: Array A1 successfully created.

Important: An array for a DS8000 can contain only one array site, and a DS8000 array site contains eight disk drive modules (DDMs).


We can now see which arrays were created by using the lsarray command, as shown in Example 13-21.

Example 13-21 Listing the arrays with lsarray

dscli> lsarray
Array State      Data   RAIDtype  arsite Rank DA Pair DDMcap (10^9B)
=====================================================================
A0    Unassigned Normal 5 (6+P+S) S1     -    0       146.0
A1    Unassigned Normal 5 (6+P+S) S2     -    0       146.0

We can see in this example the type of RAID array and the number of disks that are allocated to the array (in this example 6+P+S, which means the usable space of the array is six times the DDM size), the capacity of the DDMs that are used, and which array sites were used to create the arrays.
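For example, with the 146 GB (10^9 bytes) DDMs that are shown here, a 6+P+S array yields a usable capacity of roughly 6 × 146 GB = 876 GB, before any formatting or metadata overhead is considered.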

13.3.2 Creating ranks

After we create all of the required arrays, we then create the ranks by using the mkrank command. The format of the command is mkrank -array Ax -stgtype xxx, where xxx is fixed block (FB) or count key data (CKD), depending on whether you are configuring for open systems or System z hosts.

After all of the ranks are created, the lsrank command is run. This command displays all of the ranks that were created, to which server the rank is attached (to none, in our example up to now), the RAID type, and the format of the rank, whether it is FB or CKD.

Example 13-22 shows the mkrank commands and the result of a successful lsrank -l command.

Example 13-22 Creating and listing ranks with mkrank and lsrank

dscli> mkrank -array A0 -stgtype fb
CMUC00007I mkrank: Rank R0 successfully created.
dscli> mkrank -array A1 -stgtype fb
CMUC00007I mkrank: Rank R1 successfully created.
dscli> lsrank -l
ID Group State      datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
=======================================================================================
R0 -     Unassigned Normal    A0    5        -         -          fb      740  -
R1 -     Unassigned Normal    A1    5        -         -          fb      740  -

13.3.3 Creating extent pools

The next step is to create extent pools. Remember the following points when you are creating the extent pools:

• Each extent pool includes an associated rank group that is specified by the -rankgrp parameter, which defines the extent pool’s server affinity (0 or 1, for server0 or server1).

• The extent pool type is FB or CKD and is specified by the -stgtype parameter.

• The number of extent pools can range from one to as many as there are existing ranks. However, to associate ranks with both servers, you need at least two extent pools.

For easier management, we create empty extent pools that are related to the type of storage that is in the pool. For example, create an extent pool for high capacity disk, create another for high performance, and, if needed, extent pools for the CKD environment.


When an extent pool is created, the system automatically assigns it an extent pool ID, which is a decimal number that starts from 0, preceded by the letter P. The ID that was assigned to an extent pool is shown in the CMUC00000I message, which is displayed in response to a successful mkextpool command. Extent pools that are associated with rank group 0 receive an even ID number. Extent pools that are associated with rank group 1 receive an odd ID number. The extent pool ID is used when referring to the extent pool in subsequent CLI commands. Therefore, it is good practice to make note of the ID.

Example 13-23 shows one example of extent pools that you could define on your machine. This setup requires a system with at least six ranks.

Example 13-23 An extent pool layout plan

FB Extent Pool high capacity 300gb disks assigned to server 0 (FB_LOW_0)
FB Extent Pool high capacity 300gb disks assigned to server 1 (FB_LOW_1)
FB Extent Pool high performance 146gb disks assigned to server 0 (FB_High_0)
FB Extent Pool high performance 146gb disks assigned to server 1 (FB_High_1)
CKD Extent Pool High performance 146gb disks assigned to server 0 (CKD_High_0)
CKD Extent Pool High performance 146gb disks assigned to server 1 (CKD_High_1)

The mkextpool command forces you to name the extent pools. In Example 13-24, we first create empty extent pools by using the mkextpool command. We then list the extent pools to get their IDs. Then, we attach a rank to an empty extent pool by using the chrank command. Finally, we list the extent pools again by using lsextpool and note the change in the capacity of the extent pool.

Example 13-24 Creating extent pool by using mkextpool, lsextpool, and chrank

dscli> mkextpool -rankgrp 0 -stgtype fb FB_high_0
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> mkextpool -rankgrp 1 -stgtype fb FB_high_1
CMUC00000I mkextpool: Extent Pool P1 successfully created.
dscli> lsextpool
Name      ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb      0       below  0                 0          0         0        0
FB_high_1 P1 fb      1       below  0                 0          0         0        0
dscli> chrank -extpool P0 R0
CMUC00008I chrank: Rank R0 successfully modified.
dscli> chrank -extpool P1 R1
CMUC00008I chrank: Rank R1 successfully modified.
dscli> lsextpool
Name      ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb      0       below  740               0          740       0        0
FB_high_1 P1 fb      1       below  740               0          740       0        0

After a rank is assigned to an extent pool, we should be able to see this change when we display the ranks. In Example 13-25, we can see that rank R0 is assigned to extpool P0.

Example 13-25 Displaying the ranks after a rank is assigned to an extent pool

dscli> lsrank -l
ID Group State  datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
===================================================================================
R0 0     Normal Normal    A0    5        P0        FB_high_0  fb      740  0
R1 1     Normal Normal    A1    5        P1        FB_high_1  fb      740  0


Creating a repository for space-efficient volumes

Two types of space-efficient volumes can be created in the DS8870: track space-efficient (TSE) and extent space-efficient (ESE). TSE volumes require that a repository be created for them in the extent pool; for ESE volumes, the repository is optional, but recommended. Each volume type requires its own repository in an extent pool. It is preferable not to include both types of repositories in the same extent pool, but it is possible.

For more details about using dscli commands to create a TSE repository, see IBM System Storage DS8000 Series: IBM Flashcopy SE, REDP-4368.

For more details about using dscli commands to create an ESE repository, see IBM DS8000 Thin Provisioning, REDP-4554.

13.3.4 Creating FB volumes

We are now able to create volumes and volume groups. When we create the volumes or groups, we should try to distribute them evenly across the two rank groups in the storage unit.

Although an FB-type volume can be created as a standard, TSE, or ESE volume, this section details only the creation of the standard type.

Creating standard volumes

We use the following command format to create a volume:

mkfbvol -extpool pX -cap xx -name high_fb_0#h XXXX-XXXX

The last parameter is the volume_ID, which can be a range or a single entry. The four-digit entry is based on LL and VV. LL (00–FE) equals the logical subsystem (LSS) that the volume belongs to, and VV (00–FF) equals the volume number on the LSS. This allows the DS8870 to support 255 LSSs and each LSS to support a maximum of 256 volumes.
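For example, volume ID 1000 addresses volume 00 in LSS 10, and volume ID 1103 addresses volume 03 in LSS 11. With up to 255 LSSs of 256 volumes each, a maximum of 65,280 volume addresses is available.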

In Example 13-26, we created eight volumes, each with a capacity of 10 GiB. The first four volumes are assigned to rank group 0, and assigned to LSS 10 with volume numbers of 00 – 03. The second four are assigned to rank group 1, assigned to LSS 11 with volume numbers of 00 – 03.

Example 13-26 Creating fixed block volumes by using mkfbvol

dscli> lsextpool
Name      ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb      0       below  740               0          740       0        0
FB_high_1 P1 fb      1       below  740               0          740       0        0
dscli> mkfbvol -extpool p0 -cap 10 -name high_fb_0_#h 1000-1003
CMUC00025I mkfbvol: FB volume 1000 successfully created.
CMUC00025I mkfbvol: FB volume 1001 successfully created.
CMUC00025I mkfbvol: FB volume 1002 successfully created.
CMUC00025I mkfbvol: FB volume 1003 successfully created.
dscli> mkfbvol -extpool p1 -cap 10 -name high_fb_1_#h 1100-1103
CMUC00025I mkfbvol: FB volume 1100 successfully created.
CMUC00025I mkfbvol: FB volume 1101 successfully created.
CMUC00025I mkfbvol: FB volume 1102 successfully created.
CMUC00025I mkfbvol: FB volume 1103 successfully created.

Looking closely at the mkfbvol command that is used in Example 13-26, we see that volumes 1000 - 1003 are in extpool P0. That extent pool is attached to rank group 0, which means server 0. Rank group 0 can contain only even-numbered LSSs, which means that volumes in that extent pool must belong to an even-numbered LSS. The first two digits of the volume serial number are the LSS number; so, in this case, volumes 1000 - 1003 are in LSS 10.

For volumes 1100 - 1103 in Example 13-26 on page 370, the first two digits of the volume serial number are 11 (an odd number), which signifies that they belong to rank group 1. The -cap parameter determines the size. However, because the -type parameter was not used, the default size is binary. So, these volumes are 10 GiB (binary), which equates to 10,737,418,240 bytes. If we used the -type ess parameter, the volumes would be decimally sized and would be a minimum of 10,000,000,000 bytes in size.
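As a hypothetical illustration of decimal sizing (the volume ID 1004 is not part of the configuration that is built in this chapter), a decimally sized volume would be created by adding -type ess to the same mkfbvol syntax that is used in Example 13-26:

dscli> mkfbvol -extpool p0 -cap 10 -type ess -name high_fb_0_#h 1004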

In Example 13-26 on page 370, we named the volumes by using naming scheme high_fb_0_#h, where #h means that you are using the hexadecimal volume number as part of the volume name. This naming convention can be seen in Example 13-27, where we list the volumes that we created by using the lsfbvol command. We then list the extent pools to see how much space is left after the volume is created.

Example 13-27 Checking the machine after volumes are created by using lsextpool and lsfbvol

dscli> lsfbvol
Name           ID   accstate datastate configstate deviceMTM datatype extpool cap (2^30B)
=========================================================================================
high_fb_0_1000 1000 Online   Normal    Normal      2107-900  FB 512   P0      10.0
high_fb_0_1001 1001 Online   Normal    Normal      2107-900  FB 512   P0      10.0
high_fb_0_1002 1002 Online   Normal    Normal      2107-900  FB 512   P0      10.0
high_fb_0_1003 1003 Online   Normal    Normal      2107-900  FB 512   P0      10.0
high_fb_1_1100 1100 Online   Normal    Normal      2107-900  FB 512   P1      10.0
high_fb_1_1101 1101 Online   Normal    Normal      2107-900  FB 512   P1      10.0
high_fb_1_1102 1102 Online   Normal    Normal      2107-900  FB 512   P1      10.0
high_fb_1_1103 1103 Online   Normal    Normal      2107-900  FB 512   P1      10.0
dscli> lsextpool
Name      ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb      0       below  700               5          700       0        4
FB_high_1 P1 fb      1       below  700               5          700       0        4

Important:

� For the DS8000, the LSSs can be ID 00 to ID FE. The LSSs are in address groups. Address group 0 is LSS 00 to 0F, address group 1 is LSS 10 to 1F, and so on, except group F, which is F0 - FE. When you create an FB volume in an address group, that entire address group can be used only for FB volumes. Be aware of this fact when you are planning your volume layout in a mixed FB and CKD DS8000. The LSS is automatically created when the first volume is assigned to it.

� You can configure a volume to belong to a certain Performance I/O Priority Manager by using the -perfgrp <perf_group_ID> flag in the mkfbvol command. For more information, see IBM System Storage DS8000 I/O Priority Manager, REDP-4760.

Resource group:

Starting with DS8000 with Licensed Machine Code Release 6.1, you can configure a volume to belong to a certain resource group by using the -resgrp <RG_ID> flag in the mkfbvol command. For more information, see IBM System Storage DS8000: Copy Services Scope Management and Resource Groups, REDP-4758.
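As a hypothetical sketch that combines the two flags mentioned in the notes above (the performance group PG1, resource group RG1, and volume ID 1005 are illustrative assumptions only; valid IDs depend on your configuration), a volume could be created as follows:

dscli> mkfbvol -extpool p0 -cap 10 -perfgrp PG1 -resgrp RG1 -name high_fb_0_#h 1005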


T10 Data Integrity Field volumes

SCSI T10 DIF (Data Integrity Field) is an emerging standard for end-to-end error checking from the application to the disk drives. T10 DIF requires volumes to be formatted in 520-byte sectors with cyclic redundancy check (CRC) bytes added to the data. Currently, T10 DIF is supported for Linux on System z. If you want to use this technique, you must create volumes that are formatted for T10 DIF usage. This configuration can be done by adding the -t10dif parameter to the mkfbvol command. For more information, see “T10 data integrity field support” on page 112.
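As a hedged sketch (the pool, name, and volume ID 1006 are illustrative assumptions), a T10 DIF-capable volume would be created by adding the -t10dif parameter that is described above:

dscli> mkfbvol -extpool p0 -cap 10 -t10dif -name high_fb_0_#h 1006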

Storage Pool Striping

When a volume is created, you have a choice of how the volume is allocated in an extent pool with several ranks. The extents of a volume can be kept together in one rank (if there is enough free space on that rank). The next rank is used when the next volume is created. This allocation method is called rotate volumes.

You can also specify that you want the extents of the volume that you are creating to be evenly distributed across all ranks within the extent pool. This allocation method is called rotate extents. The Storage Pool Striping spreads the I/O of a LUN to multiple ranks, which improves performance and greatly reduces hot spots.

The extent allocation method is specified with the -eam rotateexts or -eam rotatevols option of the mkfbvol command, as shown in Example 13-28.

Example 13-28 Creating a volume with Storage Pool Striping

dscli> mkfbvol -extpool p7 -cap 15 -name ITSO-XPSTR -eam rotateexts 1720
CMUC00025I mkfbvol: FB volume 1720 successfully created.

The showfbvol command with the -rank option (see Example 13-29) shows that the volume we created is distributed across three ranks. It also shows how many extents on each rank were allocated for this volume.

Example 13-29 Getting information about a striped volume

dscli> showfbvol -rank 1720
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P7
exts 15
captype DS
cap (2^30B) 15.0
cap (10^9B) -
cap (blocks) 31457280
volgrp -
ranks 3
dbexts 0
sam Standard
repcapalloc -
eam rotateexts
reqcap (blocks) 31457280
realextents 15
virtualextents 0
migrating 0
perfgrp PG0
migratingfrom -
resgrp RG0
tierassignstatus -
tierassignerror -
tierassignorder -
tierassigntarget -
%tierassigned 0
==============Rank extents==============
rank extents
============
R6  5
R11 5
R23 5

Default allocation policy: For DS8870, the default allocation policy is rotate extents.

Space-efficient volumes

As discussed in “Creating a repository for space-efficient volumes” on page 370, both TSE and ESE space-efficient volumes are supported. For more information about space-efficient volumes, see 5.2.6, “Space-efficient volumes” on page 114. The detailed procedures for creating TSE volumes are provided in the Redpaper publication IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368. For details about creating ESE volumes, refer to IBM DS8000 Thin Provisioning, REDP-4554.

Dynamic Volume Expansion

A volume can be expanded without removing the data within the volume. You can specify a new capacity by using the chfbvol command, as shown in Example 13-30.

Example 13-30 Expanding a striped volume

dscli> chfbvol -cap 40 1720
CMUC00332W chfbvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00026I chfbvol: FB volume 1720 successfully modified.

The largest LUN size is now 16 TiB. Copy Services are not supported for LUN sizes larger than 2 TiB.

Because the original volume included the rotateexts attribute, the additional extents are also striped, as shown in Example 13-31.

Example 13-31 Checking the status of an expanded volume

dscli> showfbvol -rank 1720
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P7
exts 40
captype DS
cap (2^30B) 40.0
cap (10^9B) -
cap (blocks) 83886080
volgrp -
ranks 3
dbexts 0
sam Standard
repcapalloc -
eam rotateexts
reqcap (blocks) 83886080
realextents 40
virtualextents 0
migrating 0
perfgrp PG0
migratingfrom -
resgrp RG0
tierassignstatus -
tierassignerror -
tierassignorder -
tierassigntarget -
%tierassigned 0
==============Rank extents==============
rank extents
============
R6  13
R11 14
R23 13

New capacity: The new capacity must be larger than the previous capacity. You cannot shrink the volume.

Deleting volumes

FB volumes can be deleted by using the rmfbvol command.

On a DS8870 and older models with Licensed Machine Code (LMC) level 6.5.1.xx or higher, the command includes options to prevent the accidental deletion of volumes that are in use. An FB volume is considered to be in use if it is participating in a Copy Services relationship or if the volume received any I/O operation in the previous five minutes.

Volume deletion is controlled by the -safe and -force parameters (they cannot be specified at the same time) in the following manner:

� If none of the parameters are specified, the system performs checks to see whether the specified volumes are in use. Volumes that are not in use are deleted and the volumes that are in use are not deleted.

� If the -safe parameter is specified and if any of the specified volumes are assigned to a user-defined volume group, the command fails without deleting any volumes.

� The -force parameter deletes the specified volumes without checking to see whether they are in use.

Important: Before you can expand a volume, you must delete all Copy Services relationships for that volume.


In Example 13-32, we create volumes 2100 and 2101. We then assign 2100 to a volume group. We then try to delete both volumes with the -safe option, but the attempt fails without deleting either of the volumes. We can delete volume 2101 with the -safe option because it is not assigned to a volume group. Volume 2100 is not in use, so we can delete it by not specifying either parameter.

Example 13-32 Deleting an FB volume

dscli> mkfbvol -extpool p1 -cap 12 -eam rotateexts 2100-2101
CMUC00025I mkfbvol: FB volume 2100 successfully created.
CMUC00025I mkfbvol: FB volume 2101 successfully created.
dscli> chvolgrp -action add -volume 2100 v0
CMUC00031I chvolgrp: Volume group V0 successfully modified.
dscli> rmfbvol -quiet -safe 2100-2101
CMUC00253E rmfbvol: Volume IBM.2107-75NA901/2100 is assigned to a user-defined volume group. No volumes were deleted.
dscli> rmfbvol -quiet -safe 2101
CMUC00028I rmfbvol: FB volume 2101 successfully deleted.
dscli> rmfbvol 2100
CMUC00027W rmfbvol: Are you sure you want to delete FB volume 2100? [y/n]: y
CMUC00028I rmfbvol: FB volume 2100 successfully deleted.

13.3.5 Creating volume groups

Fixed block volumes are assigned to open systems hosts by using volume groups, which are not to be confused with the volume groups that are used in AIX. A fixed block volume can be a member of multiple volume groups. Volumes can be added to or removed from volume groups as required. Each volume group must be SCSI MAP256 or SCSI MASK, depending on the SCSI LUN address discovery method that is used by the operating system to which the volume group is attached.

Determining whether an open systems host is SCSI MAP256 or SCSI MASK

First, we determine the type of SCSI host with which we are working. Then, we use the lshosttype command with the -type parameter of scsimask and then scsimap256.

In Example 13-33, we can see the results of each command.

Example 13-33 Listing host types with the lshostype command

dscli> lshosttype -type scsimask
HostType         Profile                                   AddrDiscovery LBS
===========================================================================
Hp               HP - HP/UX                                reportLUN     512
SVC              San Volume Controller                     reportLUN     512
SanFsAIX         IBM pSeries - AIX/SanFS                   reportLUN     512
pSeries          IBM pSeries - AIX                         reportLUN     512
pSeriesPowerswap IBM pSeries - AIX with Powerswap support  reportLUN     512
zLinux           IBM zSeries - zLinux                      reportLUN     512
dscli> lshosttype -type scsimap256
HostType     Profile                  AddrDiscovery LBS
=====================================================
AMDLinuxRHEL AMD - Linux RHEL         LUNPolling    512
AMDLinuxSuse AMD - Linux Suse         LUNPolling    512
AppleOSX     Apple - OSX              LUNPolling    512
Fujitsu      Fujitsu - Solaris        LUNPolling    512
HpTru64      HP - Tru64               LUNPolling    512
HpVms        HP - Open VMS            LUNPolling    512
LinuxDT      Intel - Linux Desktop    LUNPolling    512
LinuxRF      Intel - Linux Red Flag   LUNPolling    512
LinuxRHEL    Intel - Linux RHEL       LUNPolling    512
LinuxSuse    Intel - Linux Suse       LUNPolling    512
Novell       Novell                   LUNPolling    512
SGI          SGI - IRIX               LUNPolling    512
SanFsLinux   - Linux/SanFS            LUNPolling    512
Sun          SUN - Solaris            LUNPolling    512
VMWare       VMWare                   LUNPolling    512
Win2000      Intel - Windows 2000     LUNPolling    512
Win2003      Intel - Windows 2003     LUNPolling    512
Win2008      Intel - Windows 2008     LUNPolling    512
Win2012      Intel - Windows 2012     LUNPolling    512
iLinux       IBM iSeries - iLinux     LUNPolling    512
nSeries      IBM N series Gateway     LUNPolling    512
pLinux       IBM pSeries - pLinux     LUNPolling    512

Creating a volume group

After we determine the host type, we can create a volume group. In Example 13-34, the example host type we chose is AIX. In Example 13-33 on page 375, we can see the address discovery method for AIX is scsimask.

Example 13-34 Creating a volume group with mkvolgrp and displaying it

dscli> mkvolgrp -type scsimask -volume 1000-1002,1100-1102 AIX_VG_01
CMUC00030I mkvolgrp: Volume group V11 successfully created.
dscli> lsvolgrp
Name                ID  Type
=======================================
ALL CKD             V10 FICON/ESCON All
AIX_VG_01           V11 SCSI Mask
ALL Fixed Block-512 V20 SCSI All
ALL Fixed Block-520 V30 OS400 All
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102

In this example, we added volumes 1000 - 1002 and 1100 - 1102 to the new volume group. We added these volumes to spread the workload evenly across the two rank groups. We then listed all available volume groups by using the lsvolgrp command. Finally, we listed the contents of volume group V11 because we created this volume group.

Adding or deleting volumes in a volume group

We might also want to add or remove volumes to this volume group later. To add or remove volumes, we use the chvolgrp command with the -action parameter. In Example 13-35, we add volume 1003 to volume group V11. We display the results and then remove the volume.

Example 13-35 Changing a volume group with chvolgrp

dscli> chvolgrp -action add -volume 1003 V11
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1003 1100 1101 1102
dscli> chvolgrp -action remove -volume 1003 V11
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102

All operations with volumes and volume groups that were previously described can also be used with space-efficient volumes.

13.3.6 Creating host connections

The final step in the logical configuration process is to create host connections for your attached hosts and to assign volume groups to those connections. Each host HBA can be defined only once. Each host connection (hostconnect) can have only one volume group assigned to it. A volume can be assigned to multiple volume groups.

In Example 13-36, we create a single host connection that represents one HBA in our example AIX host. We use the -hosttype parameter by using the host type that we have in Example 13-33 on page 375. We allocated it to volume group V11. If the SAN zoning is correct, the host should be able to see the LUNs in volume group V11.

Example 13-36 Creating host connections by using mkhostconnect and lshostconnect

dscli> mkhostconnect -wwname 100000C912345678 -hosttype pSeries -volgrp V11 AIX_Server_01
CMUC00012I mkhostconnect: Host connection 0000 successfully created.
dscli> lshostconnect
Name          ID   WWPN             HostType Profile           portgrp volgrpID ESSIOport
=========================================================================================
AIX_Server_01 0000 100000C912345678 pSeries  IBM pSeries - AIX 0       V11      all

You can also use -profile instead of -hosttype. However, this method is not a best practice. The use of the -hosttype parameter actually invokes both parameters (-profile and -hosttype). In contrast, the use of -profile leaves the -hosttype column unpopulated.

The option in the mkhostconnect command to restrict access only to certain I/O ports also is available. This method is done with the -ioport parameter. Restricting access in this way is usually unnecessary. If you want to restrict access for certain hosts to certain I/O ports on the DS8000, perform zoning on your SAN switch.

Managing hosts with multiple HBAs

If you have a host that features multiple HBAs, you must consider the following points:

� For the GUI to consider multiple host connects to be used by the same server, the host connects must have the same name. In Example 13-37 on page 378, host connects 0010 and 0011 appear in the GUI as a single server with two HBAs. However, host connects 000E and 000F appear as two separate hosts even though they are used by the same server. If you do not plan to use the GUI to manage host connections, this consideration is not important. The use of more verbose hostconnect naming might make management easier.

� If you want to use a single command to change the assigned volume group of several hostconnects at the same time, you must assign these hostconnects to a unique port group and then use the managehostconnect command. This command changes the assigned volume group for all hostconnects that are assigned to a particular port group.

When hosts are created, you can specify the -portgrp parameter. By using a unique port group number for each attached server, you can detect servers with multiple HBAs.

In Example 13-37, we have six host connections. By using the port group number, we see that there are three separate hosts, each with two HBAs. Port group 0 is used for all hosts that do not have a port group number set.

Example 13-37 Using the portgrp number to separate attached hosts

dscli> lshostconnect
Name            ID   WWPN             HostType  Profile            portgrp volgrpID
===========================================================================================
bench_tic17_fc0 0008 210000E08B1234B1 LinuxSuse Intel - Linux Suse 8       V1       all
bench_tic17_fc1 0009 210000E08B12A3A2 LinuxSuse Intel - Linux Suse 8       V1       all
p630_fcs0       000E 10000000C9318C7A pSeries   IBM pSeries - AIX  9       V2       all
p630_fcs1       000F 10000000C9359D36 pSeries   IBM pSeries - AIX  9       V2       all
p615_7          0010 10000000C93E007C pSeries   IBM pSeries - AIX  10      V3       all
p615_7          0011 10000000C93E0059 pSeries   IBM pSeries - AIX  10      V3       all

Changing host connections

If we want to change a host connection, we can use the chhostconnect command. This command can be used to change nearly all parameters of the host connection, except for the worldwide port name (WWPN). If you must change the WWPN, you must create a new host connection. To change the assigned volume group, use the chhostconnect command to change one hostconnect at a time, or use the managehostconnect command to simultaneously reassign all of the hostconnects in one port group.
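The following sketch shows the general shape of both approaches. The -volgrp flag, the volume group V12, the host connection ID 0000, and the port group number 10 are assumptions for illustration only and are not tied to the examples in this chapter; check the DS CLI help for the exact syntax on your code level:

dscli> chhostconnect -volgrp V12 0000
dscli> managehostconnect -volgrp V12 10

The first command changes the volume group of a single hostconnect; the second reassigns every hostconnect in port group 10 at the same time.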

13.3.7 Mapping open systems host disks to storage unit volumes

When you assign volumes to an open system host and you install the DS CLI on this host, you can run the lshostvol DS CLI command on this host. This command maps assigned LUNs to open systems host volume names.

In this section, we give examples for several operating systems. In each example, we assign several logical volumes to an open systems host. We install DS CLI on this host. We log on to this host and start DS CLI. It does not matter which HMC we connect to with the DS CLI. We then issue the lshostvol command.

Important: The lshostvol command communicates only with the operating system of the host on which the DS CLI is installed. You cannot run this command on one host to see the attached disks of another host.


AIX: Mapping disks when Multipath I/O is used

In Example 13-38, we have an AIX server that uses Multipath I/O (MPIO). We have two volumes assigned to this host, 1800 and 1801. Because MPIO is used, we do not see the number of paths.

In fact, from this display, it is not possible to tell if MPIO is even installed. You must run the pcmpath query device command to confirm the path count.

Example 13-38 lshostvol on an AIX host by using MPIO

dscli> lshostvol
Disk Name Volume Id             Vpath Name
==========================================
hdisk3    IBM.2107-1300819/1800 ---
hdisk4    IBM.2107-1300819/1801 ---

AIX: Mapping disks when Subsystem Device Driver is used

In Example 13-39, we have an AIX server that uses Subsystem Device Driver (SDD). We have two volumes assigned to this host, 1000 and 1100. Each volume has four paths.

Example 13-39 lshostvol on an AIX host by using SDD

dscli> lshostvol
Disk Name                   Volume Id             Vpath Name
============================================================
hdisk1,hdisk3,hdisk5,hdisk7 IBM.2107-1300247/1000 vpath0
hdisk2,hdisk4,hdisk6,hdisk8 IBM.2107-1300247/1100 vpath1

Hewlett-Packard UNIX: Mapping disks when SDD is not used

In Example 13-40, we have a Hewlett-Packard UNIX (HP-UX) host that does not have SDD. We have two volumes assigned to this host, 1105 and 1106.

Example 13-40 lshostvol on an HP-UX host that does not use SDD

dscli> lshostvol
Disk Name Volume Id             Vpath Name
==========================================
c38t0d5   IBM.2107-7503461/1105 ---
c38t0d6   IBM.2107-7503461/1106

HP-UX or Solaris: Mapping disks when SDD is used

In Example 13-41 on page 380, we have a Solaris host that has SDD installed. Two volumes are assigned to the host, 4205 and 4206, and are using two paths. The Solaris command iostat -En also can produce similar information. The output of lshostvol on an HP-UX host looks the same, with each vpath made up of disks with controller, target, and disk (c-t-d) numbers. However, the addresses that are used in the example for the Solaris host do not work in an HP-UX system.

Open HyperSwap: If you use Open HyperSwap on a host, the lshostvol command might fail to show any devices.

HP-UX: Current releases of HP-UX support addresses only up to 3FFF.


Example 13-41 shows a Solaris host that has SDD installed.

Example 13-41 lshostvol on a Solaris host that has SDD

dscli> lshostvol
Disk Name         Volume Id             Vpath Name
==================================================
c2t1d0s0,c3t1d0s0 IBM.2107-7520781/4205 vpath2
c2t1d1s0,c3t1d1s0 IBM.2107-7520781/4206 vpath1

Solaris: Mapping disks when SDD is not used

In Example 13-42, we have a Solaris host that does not have SDD installed. Instead, it uses an alternative multipathing product. We have two volumes that are assigned to this host, 4200 and 4201. Each volume has two paths. The Solaris command iostat -En also can produce similar information.

Example 13-42 lshostvol on a Solaris host that does not have SDD

dscli> lshostvol
Disk Name Volume Id             Vpath Name
==========================================
c6t1d0    IBM-2107.7520781/4200 ---
c6t1d1    IBM-2107.7520781/4201 ---
c7t2d0    IBM-2107.7520781/4200 ---
c7t2d1    IBM-2107.7520781/4201 ---

Windows: Mapping disks when SDD is not used or SDDDSM is used

In Example 13-43, we run lshostvol on a Windows host that does not use SDD or uses SDDDSM. The disks are listed by Windows Disk number. If you want to know which disk is associated with which drive letter, you must look at the Windows Disk manager.

Example 13-43 lshostvol on a Windows host that does not use SDD or uses SDDDSM

dscli> lshostvol
Disk Name Volume Id             Vpath Name
==========================================
Disk2     IBM.2107-7520781/4702 ---
Disk3     IBM.2107-75ABTV1/4702 ---
Disk4     IBM.2107-7520781/1710 ---
Disk5     IBM.2107-75ABTV1/1004 ---
Disk6     IBM.2107-75ABTV1/1009 ---
Disk7     IBM.2107-75ABTV1/100A ---
Disk8     IBM.2107-7503461/4702 ---

Windows: Mapping disks when SDD is used

In Example 13-44 on page 381, we run lshostvol on a Windows host that uses SDD. The disks are listed by Windows Disk number. If you want to know which disk is associated with which drive letter, you must look at the Windows Disk manager.


Example 13-44 lshostvol on a Windows host that uses SDD

dscli> lshostvol
Disk Name   Volume Id             Vpath Name
============================================
Disk2,Disk2 IBM.2107-7503461/4703 Disk2
Disk3,Disk3 IBM.2107-7520781/4703 Disk3
Disk4,Disk4 IBM.2107-75ABTV1/4703 Disk4

13.4 Configuring DS8000 storage for CKD volumes

This list contains the steps to configure CKD storage in the DS8870:

1. Create arrays.
2. Create CKD ranks.
3. Create CKD extent pools.
4. Create LCUs.
5. Create CKD volumes.

You do not need to create volume groups or host connects for CKD volumes. If there are I/O ports in Fibre Channel connection (FICON) mode, access to CKD volumes by FICON hosts is granted automatically.

13.4.1 Create arrays

Array creation for CKD is the same as for FB. For more information, see 13.3.1, “Creating arrays” on page 367.

13.4.2 Ranks and extent pool creation

When ranks and extent pools are created, you must specify -stgtype ckd, as shown in Example 13-45. Then, you can create the extent pool, as also shown in Example 13-45.

Example 13-45 Rank and extent pool creation for CKD

dscli> mkrank -array A0 -stgtype ckd
CMUC00007I mkrank: Rank R0 successfully created.
dscli> lsrank
ID Group State      datastate Array RAIDtype extpoolID stgtype
==============================================================
R0 -     Unassigned Normal    A0    6        -         ckd
dscli> mkextpool -rankgrp 0 -stgtype ckd CKD_High_0
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> chrank -extpool P2 R0
CMUC00008I chrank: Rank R0 successfully modified.
dscli> lsextpool
Name       ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvol
===========================================================================================
CKD_High_0 2  ckd     0       below  252               0          287       0        0


13.4.3 Logical control unit creation

When volumes for a count key data (CKD) environment are created, you must create LCUs before the volumes are created. In Example 13-46, you can see what happens if you try to create a CKD volume without creating a logical control unit (LCU) first.

Example 13-46 Trying to create CKD volumes without an LCU

dscli> mkckdvol -extpool p2 -cap 262668 -name ITSO_EAV1_#h C200
CMUN02282E mkckdvol: C200: Unable to create CKD logical volume: CKD volumes require a CKD logical subsystem.

We must use the mklcu command first. The command uses the following format:

mklcu -qty XX -id XX -ss XXXX

To display the LCUs that we created, we use the lslcu command.

In Example 13-47, we create two LCUs by using the mklcu command, and then list the created LCUs by using the lslcu command. By default, the LCUs that were created are 3990-6.

Example 13-47 Creating a logical control unit with mklcu

dscli> mklcu -qty 2 -id BC -ss BC00
CMUC00017I mklcu: LCU BC successfully created.
CMUC00017I mklcu: LCU BD successfully created.
dscli> lslcu
ID Group addrgrp confgvols subsys conbasetype
=============================================
BC 0     C       0         0xBC00 3990-6
BD 1     C       0         0xBC01 3990-6

Because we created two LCUs (by using the parameter -qty 2), the first LCU, which is ID BC (an even number), is in address group 0, which equates to rank group 0. The second LCU, which is ID BD (an odd number), is in address group 1, which equates to rank group 1. By placing the LCUs into both address groups, we maximize performance by spreading workload across both servers in the DS8000.

13.4.4 Creating CKD volumes

Now that an LCU is created, we can create CKD volumes by using the mkckdvol command. The mkckdvol command uses the following format:

mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name ITSO_EAV1_#h BC06

Important: For the DS8000, the CKD LCUs can be ID 00 to ID FE. The LCUs fit into one of 16 address groups. Address group 0 is LCUs 00 to 0F, address group 1 is LCUs 10 to 1F, and so on, except group F is F0 - FE. If you create a CKD LCU in an address group, that address group cannot be used for FB volumes. Likewise, if there were, for example, FB volumes in LSS 40 to 4F (address group 4), that address group cannot be used for CKD. Be aware of this limitation when you are planning the volume layout in a mixed FB/CKD DS8000. Each LCU can manage a maximum of 256 volumes, including alias volumes for the Parallel Access Volume (PAV) feature.


The biggest difference with CKD volumes is that the capacity is expressed in cylinders or as mod1 (Model 1) extents (1113 cylinders). To avoid wasting space, use volume capacities that are a multiple of 1113 cylinders. The support for extended address volumes (EAVs) was enhanced: the DS8870 now supports EAV volumes up to 1,182,006 cylinders. The EAV device type is called 3390 Model A. You need z/OS V1.12 or later to use such volumes.
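As a short worked example, the 262,668-cylinder 3390-A volume that is used later in this chapter is an exact multiple of the mod1 extent size: 262,668 / 1113 = 236 extents, which matches the exts value that showckdvol reports for that volume in Example 13-53 on page 386.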

The last parameter in the command is the volume_ID. This value determines the LCU that the volume will belong to as well as the unit address for the volume. Both of these values must be matched to a control unit and device definition in the input/output configuration data set (IOCDS) that a System z server uses to access the volume.

The volume_ID has a format of LLVV, with LL (00–FE) being equal to the LCU that the volume belongs to, and VV (00–FF) being equal to the unit address of the volume. Within an LCU, each unit address VV can be assigned to only one volume.

In Example 13-48, we create a single 3390-A volume with a capacity of 262,668 cylinders, and assigned it to LCU BC with a unit address of 06.

Example 13-48 Creating CKD volumes by using mkckdvol

dscli> mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name ITSO_EAV1_#h BC06
CMUC00021I mkckdvol: CKD Volume BC06 successfully created.
dscli> lsckdvol
Name           ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
================================================================================================
ITSO_BC00      BC00 Online   Normal    Normal      3390-9    CKD Base -        P2      10017
ITSO_BC01      BC01 Online   Normal    Normal      3390-9    CKD Base -        P2      10017
ITSO_BC02      BC02 Online   Normal    Normal      3390-9    CKD Base -        P2      10017
ITSO_BC03      BC03 Online   Normal    Normal      3390-9    CKD Base -        P2      10017
ITSO_BC04      BC04 Online   Normal    Normal      3390-9    CKD Base -        P2      10017
ITSO_BC05      BC05 Online   Normal    Normal      3390-9    CKD Base -        P2      10017
ITSO_EAV1_BC06 BC06 Online   Normal    Normal      3390-A    CKD Base -        P2      262668
ITSO_BD00      BD00 Online   Normal    Normal      3390-9    CKD Base -        P3      10017
ITSO_BD01      BD01 Online   Normal    Normal      3390-9    CKD Base -        P3      10017
ITSO_BD02      BD02 Online   Normal    Normal      3390-9    CKD Base -        P3      10017
ITSO_BD03      BD03 Online   Normal    Normal      3390-9    CKD Base -        P3      10017
ITSO_BD04      BD04 Online   Normal    Normal      3390-9    CKD Base -        P3      10017
ITSO_BD05      BD05 Online   Normal    Normal      3390-9    CKD Base -        P3      10017

Remember, we can create only CKD volumes in LCUs that we already created.

You also must be aware that volumes in even-numbered LCUs must be created from an extent pool that belongs to rank group 0. Volumes in odd-numbered LCUs must be created from an extent pool in rank group 1.
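As a hedged sketch that is consistent with the lsckdvol output in Example 13-48 (the exact options that were used to create those volumes are an assumption), the 3390-9 volumes in the odd-numbered LCU BD would be created from an extent pool in rank group 1, for example P3:

dscli> mkckdvol -extpool p3 -cap 10017 -name ITSO_#h BD00-BD05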

Important: For 3390-A volumes, the size can be specified as 1 - 65,520 cylinders in increments of 1, and from 65,667 (the next multiple of 1113) to 1,182,006 cylinders in increments of 1113.

Important: With the DS8870 and older DS8000 models with Release 6.1 microcode, you can configure a volume to belong to a certain Resource Group by using the -resgrp <RG_ID> flag in the mkckdvol command. For more information, see the Redpaper publication IBM DS8000: Copy Services Scope Management and Resource Groups, REDP-4758.


Storage pool striping

When a volume is created, you have a choice about how the volume is allocated in an extent pool with several ranks. The extents of a volume can be kept together in one rank (if there is enough free space on that rank). The next rank is used when the next volume is created. This allocation method is called rotate volumes.

You can also specify that you want the extents of the volume to be evenly distributed across all ranks within the extent pool. This allocation method is called rotate extents.

The extent allocation method is specified with the -eam rotateexts or -eam rotatevols option of the mkckdvol command (see Example 13-49).

Example 13-49 Creating a CKD volume with extent pool striping

dscli> mkckdvol -extpool p4 -cap 10017 -name ITSO-CKD-STRP -eam rotateexts 0080
CMUC00021I mkckdvol: CKD Volume 0080 successfully created.

The showckdvol command with the -rank option (see Example 13-50) shows that the volume we created is distributed across two ranks. It also displays how many extents on each rank were allocated for this volume.

Example 13-50 Getting information about a striped CKD volume

dscli> showckdvol -rank 0080
Name ITSO-CKD-STRP
ID 0080
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -
datatype 3390
voltype CKD Base
orgbvols -
addrgrp 0
extpool P4
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 2
sam Standard
repcapalloc -
eam rotateexts
reqcap (cyl) 10017
==============Rank extents==============
rank extents
============
R4  4
R30 5

Rotate extents: For the DS8870, the default allocation policy is rotate extents.


Track space-efficient volumes

When your DS8000 includes the IBM FlashCopy SE feature, you can create track space-efficient (TSE) volumes to be used as FlashCopy target volumes. A repository must exist in the extent pool where you plan to allocate TSE volumes.

For more information about space-efficient volumes, see 5.2.6, “Space-efficient volumes” on page 114. The detailed procedures for configuring TSE volumes are provided in IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368.

Dynamic Volume Expansion

A volume can be expanded without removing the data within the volume. You can specify a new capacity by using the chckdvol command, as shown in Example 13-51. The new capacity must be larger than the previous one; you cannot shrink the volume.

Example 13-51 Expanding a striped CKD volume

dscli> chckdvol -cap 30051 0080
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00022I chckdvol: CKD Volume 0080 successfully modified.

Because the original volume had the rotateexts attribute, the additional extents also are striped, as shown in Example 13-52.

Example 13-52 Checking the status of an expanded CKD volume

dscli> showckdvol -rank 0080
Name ITSO-CKD-STRP
ID 0080
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -
datatype 3390
voltype CKD Base
orgbvols -
addrgrp 0
extpool P4
exts 27
cap (cyl) 30051
cap (10^9B) 25.5
cap (2^30B) 23.8
ranks 2
sam Standard
repcapalloc -
eam rotateexts
reqcap (cyl) 30051
==============Rank extents==============
rank extents
============
R4  13
R30 14


It is possible to expand a 3390 Model 9 volume to a 3390 Model A. You can make this expansion by specifying a new capacity for an existing Model 9 volume. When you increase the size of a 3390-9 volume beyond 65,520 cylinders, its device type automatically changes to 3390-A. However, keep in mind that a 3390 Model A can be used only in z/OS V1.10 or V1.12 (depending on the size of the volume) and later, as shown in Example 13-53.

Example 13-53 Expanding a 3390 to a 3390-A

*** Command to show CKD volume definition before expansion:

dscli> showckdvol BC07
Name ITSO_EAV2_BC07
ID BC07
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -
datatype 3390
voltype CKD Base
orgbvols -
addrgrp B
extpool P2
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 1
sam Standard
repcapalloc -
eam rotatevols
reqcap (cyl) 10017

*** Command to expand CKD volume from 3390-9 to 3390-A:

dscli> chckdvol -cap 262668 BC07
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00022I chckdvol: CKD Volume BC07 successfully modified.

*** Command to show CKD volume definition after expansion:

dscli> showckdvol BC07
Name ITSO_EAV2_BC07
ID BC07
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-A
volser -
datatype 3390-A
voltype CKD Base
orgbvols -
addrgrp B
extpool P2
exts 236
cap (cyl) 262668
cap (10^9B) 223.3
cap (2^30B) 207.9
ranks 1
sam Standard
repcapalloc -
eam rotatevols
reqcap (cyl) 262668

Important: Before you can expand a volume, you first must delete all Copy Services relationships for that volume. You also cannot specify -cap and -datatype for the chckdvol command.

You cannot reduce the size of a volume. If you try to reduce the size, an error message is displayed, as shown in Example 13-54.

Example 13-54 Reducing a volume size

dscli> chckdvol -cap 10017 BC07
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUN02541E chckdvol: BC07: The expand logical volume task was not initiated because the logical volume capacity that you have requested is less than the current logical volume capacity.

Deleting volumes

CKD volumes can be deleted by using the rmckdvol command. FB volumes can be deleted by using the rmfbvol command.

For the DS8870 and older models with Licensed Machine Code (LMC) level 6.5.1.xx or above, the command includes a capability to prevent the accidental deletion of volumes that are in use. A CKD volume is considered to be in use if it is participating in a Copy Services relationship, or if the IBM System z path mask indicates that the volume is in a grouped state or online to any host system.

If the -force parameter is not specified with the command, volumes that are in use are not deleted. If multiple volumes are specified and some are in use and some are not, the ones not in use are deleted. If the -force parameter is specified on the command, the volumes are deleted without checking to see whether they are in use.


In Example 13-55, we try to delete two volumes, 0900 and 0901. Volume 0900 is online to a host, whereas 0901 is not online to any host and not in a Copy Services relationship. The rmckdvol 0900-0901 command deletes only volume 0901, which is offline. To delete volume 0900, we use the -force parameter.

Example 13-55 Deleting CKD volumes

dscli> lsckdvol 0900-0901
Name   ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
========================================================================================
ITSO_J 0900 Online   Normal    Normal      3390-9    CKD Base -        P1      10017
ITSO_J 0901 Online   Normal    Normal      3390-9    CKD Base -        P1      10017

dscli> rmckdvol -quiet 0900-0901
CMUN02948E rmckdvol: 0900: The Delete logical volume task cannot be initiated because the Allow Host Pre-check Control Switch is set to true and the volume that you have specified is online to a host.
CMUC00024I rmckdvol: CKD volume 0901 successfully deleted.

dscli> lsckdvol 0900-0901
Name   ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
========================================================================================
ITSO_J 0900 Online   Normal    Normal      3390-9    CKD Base -        P1      10017

dscli> rmckdvol -force 0900
CMUC00023W rmckdvol: Are you sure you want to delete CKD volume 0900? [y/n]: y
CMUC00024I rmckdvol: CKD volume 0900 successfully deleted.

dscli> lsckdvol 0900-0901
CMUC00234I lsckdvol: No CKD Volume found.

13.4.5 Resource Groups

The Resource Group (RG) feature is designed for multi-tenancy environments. The resources are volumes, LCUs, and LSSs, and are used for access control for Copy Services functions only.

For more information about RGs, see IBM System Storage DS8000 Copy Services Scope Management and Resource Groups, REDP-4758, which is available at this website:

http://www.redbooks.ibm.com/abstracts/redp4758.html?Open

13.4.6 Performance I/O Priority Manager

By using Performance I/O Priority Manager, you can control quality of service (QoS). There are 16 performance group policies for z/OS, PG16-PG31.

For more information about I/O Priority Manager, see DS8000 I/O Priority Manager, REDP-4760, which is available at this website:

http://www.redbooks.ibm.com/abstracts/redp4760.html?Open


13.4.7 Easy Tier

IBM System Storage DS8000 Easy Tier is designed to automate data placement throughout the storage system disk pools. It enables the system, automatically and without disruption to applications, to relocate data (at the extent level) across up to three drive tiers. The process is fully automated. Easy Tier also automatically rebalances extents among ranks within the same tier, removing workload skew between ranks, even within homogeneous and single-tier extent pools.

With Licensed Machine Code (LMC) 7.7.10.xx, Easy Tier has three new enhancements. Easy Tier Server offers cooperative caching with AIX POWER server hosts. Easy Tier Application allows for more granular control over Easy Tier operations within the DS8000. Easy Tier Heat Map Transfer allows for the transfer of Easy Tier heat maps from primary to auxiliary storage sites.

For more information about Easy Tier, see the following publications:

� IBM DS8000 Easy Tier, REDP-4667

� IBM System Storage DS8000 Easy Tier Server, REDP-5013

� IBM System Storage DS8000 Easy Tier Application, REDP-5014

� IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015

13.5 Metrics with DS CLI

This section describes some DS CLI command examples that analyze the performance metrics from different levels in a storage unit. The suggested IBM tool for performance monitoring is the IBM Tivoli Storage Productivity Center.

Example 13-56 and Example 13-57 on page 390 show an example of the showfbvol and showckdvol commands. These commands display detailed properties for an individual volume and include a -metrics parameter that returns the performance counter values for a specific volume ID.

Example 13-56 Metrics for a specific fixed block volume

dscli> showfbvol -metrics f000
ID F000
normrdrqts 2814071
normrdhits 2629266
normwritereq 2698231
normwritehits 2698231
seqreadreqs 1231604
seqreadhits 1230113
seqwritereq 1611765
seqwritehits 1611765
cachfwrreqs 0
cachfwrhits 0
cachefwreqs 0
cachfwhits 0
inbcachload 0
bypasscach 0
DASDtrans 440816
seqDASDtrans 564977
cachetrans 2042523
NVSspadel 110897
normwriteops 0
seqwriteops 0
reccachemis 79186
qwriteprots 0
CKDirtrkac 0
CKDirtrkhits 0
cachspdelay 0
timelowifact 0
phread 1005781
phwrite 868125
phbyteread 470310
phbytewrite 729096
recmoreads 232661
sfiletrkreads 0
contamwrts 0
PPRCtrks 5480215
NVSspallo 4201098
timephread 1319861
timephwrite 1133527
byteread 478521
bytewrit 633745
timeread 158019
timewrite 851671
zHPFRead -
zHPFWrite -
zHPFPrefetchReq 0
zHPFPrefetchHit 0
GMCollisionsSidefileCount 0
GMCollisionsSendSyncCount 0

Important: The help command shows specific information about each of the metrics.

Performance metrics: All performance metrics are an accumulation since the most recent counter wrap or counter reset. The performance counters are reset on the following occurrences:

� When the storage unit is turned on.
� When a server fails and the failover and fallback sequence is performed.

Example 13-57 Metrics for a specific CKD volume

dscli> showckdvol -metrics 7b3d
ID 7B3D
normrdrqts 9
normrdhits 9
normwritereq 0
normwritehits 0
seqreadreqs 0
seqreadhits 0
seqwritereq 0
seqwritehits 0
cachfwrreqs 0
cachfwrhits 0
cachefwreqs 0
cachfwhits 0
inbcachload 0
bypasscach 0
DASDtrans 201
seqDASDtrans 0
cachetrans 1
NVSspadel 0
normwriteops 0
seqwriteops 0
reccachemis 0
qwriteprots 0
CKDirtrkac 9
CKDirtrkhits 9
cachspdelay 0
timelowifact 0
phread 201
phwrite 1
phbyteread 49
phbytewrite 0
recmoreads 0
sfiletrkreads 0
contamwrts 0
PPRCtrks 0
NVSspallo 0
timephread 90
timephwrite 0
byteread 0
bytewrit 0
timeread 0
timewrite 0
zHPFRead 0
zHPFWrite 0
zHPFPrefetchReq 0
zHPFPrefetchHit 0
GMCollisionsSidefileCount 0
GMCollisionsSendSyncCount 0

Example 13-58 shows an example of the showrank command. This command generates two types of reports. One report displays the detailed properties of a specified rank and the other displays the performance metrics of a specified rank by using the -metrics parameter.

Example 13-58 Metrics for a specific rank

dscli> showrank -metrics R14
ID R14
byteread 87595588
bytewrit 50216632
Reads 208933399
Writes 126759118
timeread 204849532
timewrite 408989116
dataencrypted no


Example 13-59 shows an example of the showioport command. This command displays the properties of a specified I/O port and the performance metrics by using the -metrics parameter. Monitoring the I/O ports is one of the most important tasks of the system administrator. The I/O ports are the point where the host HBAs, the SAN, and the DS8000 exchange information. If one of these components has problems because of hardware or configuration issues, all of the other components also are affected.

Example 13-59 Metrics for a specific I/O port

dscli> showioport -metrics I0000
ID I0000
Date 05/30/2013 13:09:09 CEST
byteread (FICON/ESCON) 0
bytewrit (FICON/ESCON) 0
Reads (FICON/ESCON) 0
Writes (FICON/ESCON) 0
timeread (FICON/ESCON) 0
timewrite (FICON/ESCON) 0
bytewrit (PPRC) 824620
byteread (PPRC) 146795
Writes (PPRC) 1649240
Reads (PPRC) 293591
timewrite (PPRC) 9663528
timeread (PPRC) 5532
byteread (SCSI) 41438385
bytewrit (SCSI) 8958381
Reads (SCSI) 59601604
Writes (SCSI) 73994653
timeread (SCSI) 754392
timewrite (SCSI) 686436
LinkFailErr (FC) 14
LossSyncErr (FC) 219
LossSigErr (FC) 2
PrimSeqErr (FC) 0
InvTxWordErr (FC) 192
CRCErr (FC) 0
LRSent (FC) 0
LRRec (FC) 0
IllegalFrame (FC) 0
OutOrdData (FC) 0
OutOrdACK (FC) 0
DupFrame (FC) 0
InvRelOffset (FC) 0
SeqTimeout (FC) 0
BitErrRate (FC) 0
RcvBufZero (FC) 1
SndBufZero (FC) 600
RetQFullBusy (FC) 0
ExchOverrun (FC) 0
ExchCntHigh (FC) 0
ExchRemAbort (FC) 3
SFPRxPower (FC) 0
SFPTxPower (FC) 0
CurrentSpeed (FC) 8 Gb/s
%UtilizeCPU (FC) 4 Average


For the DS8870, several metrics counters were added to the showioport command. Of particular interest are the %UtilizeCPU metric, which reports the CPU utilization of the HBA, and the CurrentSpeed metric, which reports the speed at which the port is actually running.

Example 13-59 on page 392 shows the many important metrics returned by the command. It provides the performance counters of the port and the FC link error counters. The FC link error counters are used to determine the health of the overall communication.

The following groups of errors point to specific problem areas:

� Any non-zero figure in the counters LinkFailErr, LossSyncErr, LossSigErr, and PrimSeqErr indicates that the SAN probably has HBAs attached to it that are unstable. These HBAs log in and log out to the SAN and create name server congestion and performance degradation.

� If the InvTxWordErr counter increases by more than 100 per day, the port is receiving light from a source that is not an SFP. The cable that is connected to the port is not covered at the end or the I/O port is not covered by a cap.

� The CRCErr counter shows the errors that arise between the last sending SFP in the SAN and the receiving port of the DS8000. These errors do not appear in any other place in the data center. You must replace the cable that is connected to the port or the SFP in the SAN.

� The link reset counters LRSent and LRRec also suggest that there are hardware defects in the SAN; these errors must be investigated.

� The counters IllegalFrame, OutOrdData, OutOrdACK, DupFrame, InvRelOffset, SeqTimeout, and BitErrRate point to congestions in the SAN and can be influenced only by configuration changes in the SAN.

13.6 Private network security commands

Several DS CLI commands are available to manage network security on the DS8000. With the introduction of support for NIST 800-131a compliance, new commands were introduced to enable compliance support. Network security includes both Internet Protocol Security (IPSec) to protect the data transmission and Transport Layer Security (TLS) to protect application access.

The following IPSec commands are available:

� setipsec

The setipsec command allows you to manage IPSec controls.

– Enable and disable the IPSec server on the HMCs, either on primary, secondary, or all.

� mkipsec

The mkipsec command creates an IPSec connection by importing an IPSec connection configuration file that contains a connection definition to the HMC.

� rmipsec

The rmipsec command deletes an IPSec connection from the IPSec server.

Note: The server will not start without defined connections. The server will start automatically when the first connection is created, and stop when the last is deleted.


� chipsec

The chipsec command modifies an existing IPSec connection.

Enable and disable existing IPSec connections.

� lsipsec

The lsipsec command displays a list of defined IPSec connection configurations.

� mkipseccert

The mkipseccert command imports an IPSec certificate to the DS8000.

� rmipseccert

The rmipseccert command deletes an IPSec certificate from the HMC.

� lsipseccert

The lsipseccert command displays a list of IPSec certificates.

The following commands are available to manage TLS security settings:

� manageaccess

The manageaccess command is used to manage the security protocol access settings of a Hardware Management Console (HMC) for all communications to and from the DS8870. It can be used to start and stop outbound VPN connections in place of the setvpn command. It can also control port 1750 access to the Network Interface (NI) server for pre Gen-2 certificate access.

It is primarily used to manage server access in the HMC. This includes the Common Information Model (CIM) agent (SMI-S), DS GUI, web user interface (WUI), and NI servers.

Each of these accesses can be set to an access level of either Legacy or 800131a. When an access is set to the 800131a level, it is NIST 800-131a compliant.

� showaccess

This command displays the current setting for each access that is managed with the manageaccess command. It will also display the remote service access settings that are provided with the lsaccess command.

The following security commands are available to manage remote service access settings:

� chaccess

The chaccess command allows you to change the following settings of an HMC:

– Enable and disable the command-line shell access to the HMC via the Internet or a VPN connection.

– Enable and disable the WUI access on the HMC via the Internet or a VPN connection.

– Enable and disable the modem dial-in and VPN initiation to the HMC.

Important:

� This command affects service access only and does not change access to the machine via the DS CLI or DS Storage Manager.

� Only users with administrator authority can access this command.


� lsaccess

The lsaccess command displays the access settings of an HMC. If you add the -l parameter, it will also display the VPN status. If enabled, it means that there is an active VPN connection. (The VPN status is similar to the output of the lsvpn command, which is still available.) A VPN connection is only used for remote support purposes.

For more information, see Chapter 5 of the IBM DS8000 Version 7 Release 2 Command-Line Interface User's Guide, GC27-4212-02.

13.7 Copy Services commands

Many more DS CLI commands are available. Many of them deal with the management of Copy Services functions: FlashCopy, Metro Mirror, and Global Mirror.

These commands are not described in this chapter. For more information about these commands, see the following publications:

� IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788� IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

Important: For more information about security issues and overall security management to implement NIST 800-131a compliance, see the IBM Redpaper publication DS8870 and NIST SP 800-131a Compliance, REDP-5069.


Part 4 Maintenance and upgrades

In this part of the book, we provide useful information about maintenance and upgrades.

The following topics are included:

� Licensed machine code
� Monitoring with Simple Network Management Protocol
� Remote support
� DS8870 Capacity upgrades and Capacity on Demand


Chapter 14. Licensed machine code

In this chapter, we describe considerations that are related to the planning and installation of new licensed machine code (LMC) bundles on the IBM DS8870. The overall process for the DS8870 is the same as for previous models. However, there are several enhancements regarding power system firmware updates that are described.

This chapter covers the following topics:

� How new microcode is released
� Bundle installation
� Concurrent and non-concurrent updates
� Code updates
� Host adapter firmware updates
� Loading the code bundle
� Post-installation activities
� Summary


14.1 How new microcode is released

The various components of the DS8870 system use firmware that can be updated as new releases become available. These components include device adapters (DAs), host adapters (HAs), the power subsystem (direct current uninterruptible power supply (DC-UPS) and Rack Power Control (RPC)), and Fibre Channel interface cards (FCICs). In addition, the microcode and internal operating system that run on the HMCs and each central electronics complex (CEC) can be updated. As IBM continues to develop the DS8870, new functional features also are released through new LMC levels.

When IBM releases new microcode for the DS8870, it is released in the form of a bundle. The term bundle is used because a new code release can include updates for various DS8870 components. These updates are tested together, and then the various code packages are bundled together into one unified release. In general, when we refer to what code level is used on a DS8870, we use the term bundle. Components within the bundle each include their own revision levels.

For more information about a DS8870 cross-reference table of code bundles, see this website:

http://www.ibm.com/support/docview.wss?uid=ssg1S1004204

The Cross-Reference Table shows the levels of code for Release 87.20, which is current at the time of this writing. It should be updated as new bundles are released. It is important to always match your data storage command-line interface (DS CLI) version to the bundle installed on your DS8870.

For the DS8870, the following naming convention of bundles, PR.MM.FFF.EEEE, is used:

� P: Product (8 = DS8870)
� R: Release Major (X)
� MM: Release Minor (xx)
� FFF: Fix Level (xxx)
� EEEE: EFIX Level (0 is base, and 1.n is the interim fix build above base level)

The naming convention is shown in Example 14-1.

Example 14-1 BUNDLE level information

For BUNDLE 87.20.209.0 :
Product       DS8870
Release Major 7
Release Minor 20
Fix Level     209
EFIX level    0

A release major/minor naming like 7.20, which is shown in Example 14-1, stands for the R7.2 release.

If DS CLI is used, you can obtain the CLI and LMC code level information by using the ver command. The ver command uses the following parameters and displays the versions of the command-line interface, Storage Manager, and licensed machine code:

� -s (Optional): The -s parameter displays the version of the command-line interface program. You cannot use the -s and -l parameters together.


- -l (Optional): The -l parameter displays the versions of the command-line interface, Storage Manager, and licensed machine code. You cannot use the -l and -s parameters together. See Example 14-2.

- -cli (Optional): Displays the version of the command-line interface program. Version numbers are in the format version.release.modification.fixlevel.

- -stgmgr (Optional): Displays the version of the Storage Manager.

  This ID is not the version of the graphical user interface (Storage Manager GUI). It is related to the Hardware Management Console (HMC) code bundle information.

- -lmc (Optional): Displays the version of the licensed machine code (LMC).

Example 14-2 DS CLI version command

dscli> ver -l
DSCLI                     7.7.20.555
StorageManager            7.7.7.0.20130628.1
================Version=================
Storage Image      LMC
===========================
IBM.2107-75ZA571   7.7.20.555

The LMC level also can be retrieved from the DS Storage Manager GUI under System Status, in the Storage Image Properties. See Figure 14-1.

Figure 14-1 LMC level under DS Storage Manager


14.2 Bundle installation

The bundle package contains new levels of code for the following components:

- HMC Code Levels
  – HMC OS/Managed System Base
  – DS Storage Manager
  – Common Information Model (CIM) Agent Version

- Managed System Code Levels
- PTF Code Levels
- Storage Facility Image Code Levels
- Host Adapter Code Levels
- Device Adapter Code Level
- IO Enclosure Code Level
- Power Code Levels
- Fibre Channel Interface Card Code Levels
- Storage Enclosure Power Supply Unit Code Levels
- Disk drive module (DDM) Firmware Code Level

It is likely that a new bundle includes updates for the following components:

- Linux OS for the HMC
- AIX OS for the CECs
- Microcode for HMC and CECs
- Microcode or Firmware for HAs

A new bundle might include updates for the following components:

- Firmware for the power subsystem (DC-UPS and RPC)
- Firmware for storage DDMs
- Firmware for Fibre Channel interface cards
- Firmware for device adapters
- Firmware for the hypervisor on the CEC

Code Distribution and Activation (CDA) preload is the current method that is used to perform Concurrent Code Load distribution. By using CDA preload, the IBM service representative performs every non-impacting Concurrent Code Load step for a code load by inserting the physical media into the primary HMC or by running a network acquire of the wanted code level. IBM service representatives can also download the bundle to their notebook and then load it on the HMC by using a service tool. After the CDA preload is started, the following steps are performed automatically:

1. Downloads the release bundle.

2. Prepares the HMC with any code update-specific fixes.

3. Distributes the code updates to the LPAR and installs them to an alternative Base Operating System (BOS) repository.

4. Performs scheduled precheck scans for up to 11 days, until the distributed code is activated by the user.

Important: LMC is always provided and installed by an IBM service representative. Installing a new LMC is not a client-serviceable task. Usually, there is a Prerequisites section or an Attention Must Read section in the Microcode Update Instructions. If there are any prerequisites or other considerations to take into account, your IBM service representative informs you during the planning phase.


Any time after the preload is completed, when the user logs in to the primary HMC, they are guided automatically to correct any serviceable events that might be open, update the HMC, and activate the previously distributed code on the storage facility. The overall process is also known as Concurrent Code Load (CCL).

The installation process involves the following stages:

1. Update code in the primary HMC (HMC1).

2. If a dual HMC configuration is used, the code is retrieved from the primary HMC (HMC1) and applied to the secondary HMC (HMC2).

3. Perform updates to the CEC operating system (currently AIX V7.1) and to the internal LMC; these updates are applied to each CEC individually. The updates cause each CEC to fail over its logical subsystems to the alternative CEC. This process also updates the firmware that is running in each device adapter that is owned by that CEC.

4. Perform updates to the host adapters. For DS8870 host adapters, the impact of these updates on each adapter is less than 2.5 seconds and should not affect connectivity. If an update takes longer, the multipathing software on the host or the control-unit initiated reconfiguration (CUIR) directs I/O to another host adapter. If a host is attached with only a single path, connectivity is lost. For more information about host attachments, see 4.4.2, “Host connections” on page 74.

5. At times, new DC-UPS and RPC firmware is released. New firmware can be loaded into each RPC card and DC-UPS directly from the HMC. The DS8870 includes the following enhancements about the power subsystem microcode update for DC-UPS and RPC cards (for more information, see 4.7, “RAS on the power subsystem” on page 91):

– During DC-UPS firmware update, the current power state is maintained, so the DC-UPS remains operational during a microcode update.

– During RPC firmware update, the RPC card is available most of the time; it is unavailable only for a short period.

6. At times, new firmware for the Hypervisor, service processor, system planar, and I/O enclosure planars is released. This firmware can be loaded into each device directly from the HMC. Activation of this firmware might require a shutdown and reboot of each CEC individually. This process causes each CEC to fail over its logical subsystems to the alternative CEC. Certain updates do not require this step, or it might occur without processor reboots. For more information, see 4.3, “CEC failover and failback” on page 69.

7. It is important to check for the latest DDM firmware because more updates typically come with new bundle releases. The DDM firmware update is a concurrent process in the DS8870 series family.

Although this installation process might seem complex, it does not require a great deal of user intervention. The IBM service representative normally starts the CDA process and then monitors its progress by using the HMC. From bundle 87.0.x.x, power subsystem firmware update activation (RPC cards and DC-UPSs) is included in the same general task that is started at CDA. In previous bundles, it was necessary to start a power update from an option in the HMC when the other elements were already updated. This option remains available when only a power subsystem update is required.

Important: An upgrade of the DS8870 microcode might require that you upgrade the DS CLI on workstations. Check with your IBM service representative regarding the description and contents of the release bundle.


14.3 Concurrent and non-concurrent updates

The DS8870 allows for concurrent microcode updates. Code updates can be installed with all attached hosts running, with no interruption to your business applications. It is also possible to install microcode update bundles non-concurrently, with all attached hosts shut down. However, this task should not be necessary. This method is usually only employed at DS8870 installation time.

14.4 Code updates

The microcode that runs on the HMC normally is updated as part of a new code bundle. The HMC can hold up to six versions of code. Each CEC can hold three versions of code (the previous version, the active version, and the next version). Most organizations should plan for two code updates per year.

Before you update the CEC operating system and microcode, a pre-verification test is run to ensure that no conditions exist that must be corrected. The HMC code update installs the latest version of the pre-verification test so that the newest test can be run. If problems are detected, there are one or two days before the scheduled code installation window to correct them. This procedure is shown in the following example:

- Thursday

  1. Copy or download the new code bundle to the HMCs.
  2. Update the HMCs to the new code bundle.
  3. Run the updated pre-verification test.
  4. Resolve any issues that were raised by the pre-verification test.

- Saturday

Update the SFI.

The actual time that is required for the concurrent code load varies based on the bundle that you are currently running and the bundle to which you are updating. Always consult with your IBM service representative regarding proposed code load schedules. Code bundle recommendations are listed on the following site:

http://www.ibm.com/support/docview.wss?uid=ssg1S1004456

You can also contact your service representative for the most current information.

Additionally, check multipathing driver and SAN switch firmware levels at regular intervals to ensure that they are current.

Best practice: Many clients with multiple DS8000 systems follow the updating schedule that is detailed in this chapter, wherein the HMC is updated a day or two before the rest of the bundle is applied. Remember that if there is a large gap between the present and destination bundle levels, some DS CLI commands (especially Copy Services-related commands) might not run until the SFI is updated to the same level as the HMC. Your IBM service representative can assist you in this situation.

14.5 Host adapter firmware updates

One of the final steps in the concurrent code load process is updating the host adapters. Normally, every code bundle contains new host adapter code. For DS8870 Fibre Channel cards, regardless of whether they are used for open systems (FC) attachment or System z (FICON) attachment, the update process is concurrent to the attached hosts. The Fibre Channel cards use a technique that is known as adapter fast-load. This technique allows the cards to switch to the new firmware in less than 2 seconds. This fast update means that single path hosts, hosts that boot from SAN, and hosts that do not have multipathing software do not need to be shut down during the update. They can keep operating during the host adapter update because the update is so fast. Also, no SDD path management should be necessary.

Interactive HA mode also can be enabled, which means that before the HA cards are updated, a notification is issued and a confirmation is required.

Remote Mirror and Copy path considerations
For Remote Mirror and Copy paths that use Fibre Channel ports, there are no special considerations. The ability to perform a fast-load means that no interruption occurs to the Remote Mirror operations.

Control-unit initiated reconfiguration
Control-unit initiated reconfiguration (CUIR) prevents loss of access to volumes in System z environments because of incorrect path handling. This function automates channel path management in System z environments in support of selected DS8870 service actions. CUIR is available for the DS8870 when it is operated in z/OS and z/VM environments. The CUIR function automates channel path vary on and vary off actions to minimize manual operator intervention during selected DS8870 service actions.

CUIR allows the DS8870 to request that all attached system images set all paths that are required for a particular service action to the offline state. System images with the appropriate level of software support respond to these requests by varying off the affected paths, and notifying the DS8870 subsystem that the paths are offline, or that it cannot take the paths offline. CUIR reduces manual operator intervention and the possibility of human error during maintenance actions. CUIR also reduces the time that is required for the maintenance window. This feature is useful in environments in which there are many systems that are attached to a DS8870.

14.6 Loading the code bundle

The DS8870 code bundle installation is performed by the IBM Service Representative. Contact your IBM Service Representative to review and arrange the required services.

14.7 Post-installation activities

After a new code bundle is installed, you might need to perform the following tasks:

1. Upgrade the DS CLI on external workstations. For most new code bundle releases, there is a corresponding new release of the DS CLI, with the LMC version and the DS CLI version usually being identical. Ensure that you upgrade to the new version of the DS CLI to take advantage of any improvements.

The current version of the DS CLI can be downloaded from Fix Central:

- http://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Enterprise+Storage+Servers&product=ibm/Storage_Disk/DS8870

- http://www.ibm.com/support/entry/portal/downloads/hardware/system_storage/disk_systems/enterprise_storage_servers/ds8870


When needed, you can use the following FTP site to retrieve previous versions of DS CLI:

ftp://ftp.software.ibm.com/storage/ds8000/updates/DS8K_Customer_Download_Files

2. Verify the connectivity from each DS CLI workstation to the DS8870.

3. Verify the DS Storage Manager connectivity from the Tivoli Storage Productivity Center to the DS8870.

4. Verify the connectivity from any stand-alone Tivoli Storage Productivity Center Element Manager to the DS8870.

5. Verify the connectivity from the DS8870 to all Key Lifecycle Manager Servers in use.

14.8 Summary

IBM might release changes to the DS8870 Licensed Machine Code. These changes might include code fixes and feature updates relevant to the DS8870.

These updates and the information about them are documented in the DS8870 Code Cross-Reference website. You can find this information for a specific bundle under the Bundle Release Note information section on the website.


Chapter 15. Monitoring with Simple Network Management Protocol

This chapter provides information about the Simple Network Management Protocol (SNMP) implementation and messages for the IBM System Storage DS8000 series.

This chapter covers the following topics:

- SNMP implementation on DS8000
- SNMP notifications
- SNMP configuration with the HMC
- SNMP configuration with the DS CLI


15.1 SNMP implementation on the DS8000

Simple Network Management Protocol (SNMP), as used by the DS8000, is designed so that the DS8000 sends traps only when there is an event to report. The traps can be sent to a defined IP address.

SNMP alert traps provide information about problems that the storage unit detects. You or the service provider must perform corrective action for the trap-related problems.

The DS8000 does not include an installed SNMP agent that can respond to SNMP polling. The default Community Name parameter is set to public.

The management server that is configured to receive the SNMP traps receives all of the generic trap 6 and specific trap 3 messages, which are sent in parallel with the call home to IBM.

Before SNMP is configured for the DS8000, you must provide the destination address for the SNMP traps and the port on which the trap daemon listens.

15.1.1 Management Information Base file

The DS8000 series provides a Management Information Base (MIB) file that describes the SNMP trap objects. The file should be loaded by the software that is used for enterprise and SNMP monitoring.

The file is located in the snmp subdirectory of the data storage command-line interface (DS CLI) installation CD, or on the DS CLI installation CD image that is available from this FTP site:

ftp://ftp.software.ibm.com/storage/ds8000/updates/DS8K_Customer_Download_Files/CLI

15.1.2 Predefined SNMP trap requests

An SNMP agent can send SNMP trap requests to SNMP managers to inform them about the change of values or status on the IP host where the agent is running. There are seven predefined types of SNMP trap requests, as shown in Table 15-1.

Table 15-1 SNMP trap request types

Trap type               Value   Description
coldStart               0       Restart after a crash.
warmStart               1       Planned restart.
linkDown                2       Communication link is down.
linkUp                  3       Communication link is up.
authenticationFailure   4       Invalid SNMP community string was used.
egpNeighborLoss         5       EGP neighbor is down.
enterpriseSpecific      6       Vendor-specific event happened.

A trap message contains pairs of an object identifier (OID) and a value, as shown in Table 15-1, to notify the cause of the trap message. You can also use type 6, the enterpriseSpecific trap type, when you must send messages that do not fit other predefined trap types; for example, the DS8000 uses this type for the notifications that are described in this chapter.

Standard port: The standard port for SNMP traps is port 162.
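As a simple illustration of the notification flow, the following minimal Python sketch (not IBM software, and not a substitute for a MIB-aware SNMP manager) listens for raw SNMP trap datagrams on the standard trap port; real monitoring software decodes the trap PDU by using the loaded DS8000 MIB file:

import socket

# Binding port 162 usually requires administrative privileges.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))

while True:
    data, (sender_ip, sender_port) = sock.recvfrom(65535)
    # Each datagram is one BER-encoded SNMPv1 trap PDU sent by the HMC.
    print("Received %d-byte SNMP trap from %s" % (len(data), sender_ip))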

15.2 SNMP notifications

The HMC of the DS8000 sends an SNMPv1 trap in the following cases:

- A serviceable event was reported to IBM by using call home.
- An event occurred in the Copy Services configuration or processing.

A serviceable event is posted as a generic trap 6, specific trap 3 message. Specific trap 3 is the only trap that is sent for serviceable events. For reporting Copy Services events, generic trap 6 and specific traps 100, 101, 102, 200, 202, 210, 211, 212, 213, 214, 215, 216, or 217 are sent.

15.2.1 Serviceable event that uses specific trap 3

In Example 15-1, we see the contents of generic trap 6, specific trap 3. The trap holds the following information:

- Serial number of the DS8000
- Event number that is associated with the manageable events from the HMC
- Reporting Storage Facility Image (SFI)
- System reference code (SRC)
- Location code of the part that is logging the event

The SNMP trap is sent in parallel with a call home for service to IBM.

Example 15-1 SNMP special trap 3 of a DS8000

Manufacturer=IBM
ReportingMTMS=2107-922*7503460
ProbNm=345
LparName=null
FailingEnclosureMTMS=2107-922*7503460
SRC=10001510
EventText=2107 (DS 8000) Problem
Fru1Loc=U1300.001.1300885
Fru2Loc=U1300.001.1300885
U1300.001.1300885-P1

For open events in the event log, a trap is sent every eight hours until the event is closed.
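Because the trap text consists of simple name=value lines, automation that receives these notifications can split them into fields. The following short helper is a hypothetical sketch (parse_trap3 is not an IBM-provided function):

def parse_trap3(trap_text):
    # Split the body of a specific trap 3 message (see Example 15-1) into a dictionary.
    fields = {}
    for line in trap_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return fields

Automation can then act on individual fields, for example, open a ticket that references the ProbNm and SRC values.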

15.2.2 Copy Services event traps

For state changes in a remote Copy Services environment, 13 traps are implemented. The traps 1xx are sent for a state change of a physical link connection. The 2xx traps are sent for state changes in the logical Copy Services setup. For all of these events, no call home is generated and IBM is not notified.

Note: Consistency group traps (200 and 201) must be prioritized above all other traps and must be surfaced in less than 2 seconds from the real-time incident.


This chapter describes only the messages and the circumstances when traps are sent by the DS8000. For more information about these functions and terms, see IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787 and IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788.

Physical connection events
Within the trap 1xx range, a state change of the physical links is reported. The trap is sent if the physical remote copy link is interrupted. The Link trap is sent from the primary system. The PLink and SLink columns are only used by the 2105 ESS disk unit.

If one or several links (but not all links) are interrupted, a trap 100 (as shown in Example 15-2) is posted, which indicates that the redundancy is degraded. The RC column in the trap represents the return code for the interruption of the link; return codes are listed in Table 15-2 on page 411.

Example 15-2 Trap 100: Remote Mirror and Copy links degraded

PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-922 75-20781 12
SEC:  IBM 2107-9A2 75-ABTV1 24
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0143 XXXXXX 0010 XXXXXX 15
2:    FIBRE 0213 XXXXXX 0140 XXXXXX OK

If all of the links are interrupted, a trap 101 (as shown in Example 15-3) is posted. This event indicates that no communication between the primary and the secondary system is possible.

Example 15-3 Trap 101: Remote Mirror and Copy links are inoperable

PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-922 75-20781 10
SEC:  IBM 2107-9A2 75-ABTV1 20
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0143 XXXXXX 0010 XXXXXX 17
2:    FIBRE 0213 XXXXXX 0140 XXXXXX 17

Trap 102 (as shown in Example 15-4) is sent after one or more of the interrupted links are available again and the DS8000 can communicate over them.

Example 15-4 Trap 102: Remote Mirror and Copy links are operational

PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-9A2 75-ABTV1 21
SEC:  IBM 2107-000 75-20781 11
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0010 XXXXXX 0143 XXXXXX OK
2:    FIBRE 0140 XXXXXX 0213 XXXXXX OK

The Remote Mirror and Copy return codes are listed in Table 15-2 on page 411.


Table 15-2 Remote Mirror and Copy return codes

Return code Description

02 Initialization failed. IBM ESCON links reject threshold exceeded when attempting to send ELP or RID frames.

03 Timeout. No reason available.

04 There are no resources available in the primary storage unit for establishing logical paths because the maximum numbers of logical paths were established.

05 There are no resources available in the secondary storage unit for establishing logical paths because the maximum numbers of logical paths were established.

06 There is a secondary storage unit sequence number, or logical subsystem number, mismatch.

07 There is a secondary LSS subsystem identifier (SSID) mismatch, or failure of the I/O that collects the secondary information for validation.

08 The ESCON link is offline. This condition is caused by the lack of light detection that is coming from a host, peer, or switch.

09 The establish failed. It is tried again until the command succeeds or a remove paths command is run for the path.

The attempt-to-establish state persists until the establish path operation succeeds or the remove remote mirror and copy paths command is run for the path.

0A The primary storage unit port or link cannot be converted to channel mode if a logical path is already established on the port or link. The establish paths operation is not tried again within the storage unit.

10 Configuration error. The source of the error is caused by one of the following conditions:
   - The specification of the SA ID does not match the installed ESCON cards in the primary controller.
   - For ESCON paths, the secondary storage unit destination address is zero and an ESCON Director (switch) was found in the path.
   - For ESCON paths, the secondary storage unit destination address is not zero and an ESCON Director does not exist in the path. The path is a direct connection.

14 The Fibre Channel path link is down.

15 The maximum number of Fibre Channel path retry operations was exceeded.

16 The Fibre Channel path secondary adapter is not Remote Mirror and Copy capable. This incapability could be caused by one of the following conditions:
   - The secondary adapter is not configured properly or does not have the current firmware installed.
   - The secondary adapter is already a target of 32 logical subsystems (LSSs).

17 The secondary adapter Fibre Channel path is not available.

18 The maximum number of Fibre Channel path primary login attempts was exceeded.

19 The maximum number of Fibre Channel path secondary login attempts was exceeded.

1A The primary Fibre Channel adapter is not configured properly or does not have the correct firmware level installed.

1B The Fibre Channel path was established but degraded because of a high failure rate.

1C The Fibre Channel path was removed because of a high failure rate.


Remote Mirror and Copy events
If you configured consistency groups and a volume within this consistency group is suspended because of a write error to the secondary device, trap 200 is sent, as shown in Example 15-5. One trap per logical subsystem (LSS), which is configured with the consistency group option, is sent. This trap can be handled by automation software, such as Tivoli Storage Productivity Center, to freeze this consistency group. The SR column in the trap represents the suspension reason code, which explains the cause of the error that suspended the Remote Mirror and Copy group. Suspension reason codes are listed in Table 15-3 on page 415.

Example 15-5 Trap 200: LSS Pair Consistency Group Remote Mirror and Copy pair error

LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI:  IBM 2107-922 75-03461 56 84 08
SEC:  IBM 2107-9A2 75-ABTV1 54 84

Trap 202, as shown in Example 15-6, is sent if a Remote Copy pair goes into a suspend state. The trap contains the serial number (SerialNm) of the primary and secondary machine, the LSS (LS), and the logical device (LD). To avoid SNMP trap flooding, the number of SNMP traps for the LSS is throttled. The complete suspended pair information is represented in the summary. The last row of the trap represents the suspend state for all pairs in the reporting LSS. The suspended pair information contains a hexadecimal string of 64 characters. By converting this hex string into binary code, each bit represents a single device. If the bit is 1, the device is suspended; otherwise, the device is still in full duplex mode.

Example 15-6 Trap 202: Primary Remote Mirror and Copy devices on the LSS were suspended because of an error

Primary PPRC Devices on LSS Suspended Due to Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI:  IBM 2107-922 75-20781 11 00 03
SEC:  IBM 2107-9A2 75-ABTV1 21 00
Start: 2005/11/14 09:48:05 CST
PRI Dev Flags (1 bit/Dev, 1=Suspended):
C000000000000000000000000000000000000000000000000000000000000000
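The device bitmask in the last line of the trap can be decoded with a short script. The following sketch is a hypothetical illustration (suspended_devices is not an IBM-provided function); it assumes that the leftmost bit of the string corresponds to device 0x00, following the 1-bit-per-device description above:

def suspended_devices(dev_flags_hex):
    # Convert the 64-character hex string into one bit per device; bit = 1 means suspended.
    bits = bin(int(dev_flags_hex, 16))[2:].zfill(len(dev_flags_hex) * 4)
    return [dev for dev, bit in enumerate(bits) if bit == "1"]

flags = "C" + "0" * 63   # value from Example 15-6
print(["%02X" % dev for dev in suspended_devices(flags)])   # ['00', '01']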

Trap 210, as shown in Example 15-7, is sent when a consistency group in a Global Mirror environment was successfully formed.

Example 15-7 Trap 210: Global Mirror initial consistency group successfully formed

Asynchronous PPRC Initial Consistency Group Successfully Formed
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

As shown in Example 15-8, trap 211 is sent if the Global Mirror setup is in a severe error state in which no attempts are made to form a consistency group.

Example 15-8 Trap 211: Global Mirror Session is in a fatal state

Asynchronous PPRC Session is in a Fatal State
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002


Trap 212, as shown in Example 15-9, is sent when a consistency group cannot be created in a Global Mirror relationship for one of the following reasons:

- Volumes were taken out of a copy session.
- The Remote Copy link bandwidth might not be sufficient.
- The FC link between the primary and secondary system is not available.

Example 15-9 Trap 212: Global Mirror Consistency Group failure - Retry is attempted

Asynchronous PPRC Consistency Group Failure - Retry will be attempted
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 213, as shown in Example 15-10, is sent when a consistency group in a Global Mirror environment can be formed after a previous consistency group formation failure.

Example 15-10 Trap 213: Global Mirror Consistency Group successful recovery

Asynchronous PPRC Consistency Group Successful Recovery
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 214, as shown in Example 15-11, is sent if a Global Mirror Session is terminated by using the DS CLI command rmgmir or the corresponding GUI function.

Example 15-11 Trap 214: Global Mirror Master terminated

Asynchronous PPRC Master Terminated
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

As shown in Example 15-12, trap 215 is sent if, in the Global Mirror environment, the master detects a failure to complete the FlashCopy commit. The trap is sent after a number of commit retries failed.

Example 15-12 Trap 215: Global Mirror FlashCopy at remote site unsuccessful

Asynchronous PPRC FlashCopy at Remote Site Unsuccessful
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 216, as shown in Example 15-13 on page 414, is sent if a Global Mirror master cannot terminate the Global Copy relationship at one of its subordinates. This error might occur if the master is terminated by using the rmgmir command but the master cannot terminate the copy relationship on the subordinate.

You might need to run a rmgmir command against the subordinate to prevent any interference with other Global Mirror sessions.


Example 15-13 Trap 216: Global Mirror subordinate termination unsuccessful

Asynchronous PPRC Slave Termination Unsuccessful
UNIT: Mnf Type-Mod SerialNm
Master: IBM 2107-922 75-20781
Slave:  IBM 2107-921 75-03641
Session ID: 4002

Trap 217, as shown in Example 15-14, is sent if a Global Mirror environment was suspended by the DS CLI command pausegmir or the corresponding GUI function.

Example 15-14 Trap 217: Global Mirror paused

Asynchronous PPRC Paused
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

As shown in Example 15-15, trap 218 is sent if a Global Mirror exceeded the allowed threshold for failed consistency group formation attempts.

Example 15-15 Trap 218: Global Mirror number of consistency group failures exceed threshold

Global Mirror number of consistency group failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 219, as shown in Example 15-16, is sent if a Global Mirror successfully formed a consistency group after one or more formation attempts previously failed.

Example 15-16 Trap 219: Global Mirror first successful consistency group after prior failures

Global Mirror first successful consistency group after prior failures
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 220, as shown in Example 15-17, is sent if a Global Mirror exceeded the allowed threshold of failed FlashCopy commit attempts.

Example 15-17 Trap 220: Global Mirror number of FlashCopy commit failures exceed threshold

Global Mirror number of FlashCopy commit failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002


Table 15-3 shows the Copy Services suspension reason codes.

Table 15-3 Copy Services suspension reason codes

Suspension reason code   Description

03   The host system sent a command to the primary volume of a Remote Mirror and Copy volume pair to suspend copy operations. The host system might specify an immediate suspension or a suspension after the copy completed and the volume pair reached a full duplex state.

04   The host system sent a command to suspend the copy operations on the secondary volume. During the suspension, the primary volume of the volume pair can still accept updates but updates are not copied to the secondary volume. The out-of-sync tracks that are created between the volume pair are recorded in the change recording feature of the primary volume.

05   Copy operations between the Remote Mirror and Copy volume pair were suspended by a primary storage unit secondary device status command. This system resource code can be returned only by the secondary volume.

06   Copy operations between the Remote Mirror and Copy volume pair were suspended because of internal conditions in the storage unit. This system resource code can be returned by the control unit of the primary volume or the secondary volume.

07   Copy operations between the remote mirror and copy volume pair were suspended when the secondary storage unit notified the primary storage unit of a state change transition to simplex state. The specified volume pair between the storage units is no longer in a copy relationship.

08   Copy operations were suspended because the secondary volume became suspended because of internal conditions or errors. This system resource code can be returned only by the primary storage unit.

09   The Remote Mirror and Copy volume pair was suspended when the primary or secondary storage unit was rebooted or when the power was restored. The paths to the secondary storage unit might not be disabled if the primary storage unit was turned off. If the secondary storage unit was turned off, the paths between the storage units are restored automatically, if possible. After the paths are restored, issue the mkpprc command to resynchronize the specified volume pairs. Depending on the state of the volume pairs, you might have to issue the rmpprc command to delete the volume pairs and reissue a mkpprc command to reestablish the volume pairs.

0A   The Remote Mirror and Copy pair was suspended because the host issued a command to freeze the Remote Mirror and Copy group. This system resource code can be returned only if a primary volume was queried.

15.2.3 I/O Priority Manager SNMP

When the I/O Priority Manager Control switch is set to Monitor or Managed, an SNMP trap alert message also can be enabled. The DS8000 microcode monitors for rank saturation. If a rank is being overdriven to the point of saturation (high usage), an SNMP trap alert message #224 is posted to the SNMP server, as shown in Example 15-18 on page 416.


The following SNMP rules are followed:

- Up to 8 SNMP traps per SFI server are sent in a 24-hour period (maximum: 16 per 24 hours per SFI).
- A rank enters the saturation state if it is in saturation for five consecutive 1-minute samples.
- A rank exits the saturation state if it is not in saturation for three of five consecutive 1-minute samples.
- SNMP message #224 is reported when a rank enters saturation, or every 8 hours if it remains in saturation. The message identifies the rank and the SFI. See Example 15-18; an illustrative state-machine sketch follows the example.

Example 15-18 SNMP trap alert message #224

Rank Saturated
UNIT: Mnf Type-Mod SerialNm
IBM 2107-951 75-ACV21
Rank ID: R21
Saturation Status: 0
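The saturation-state rules that are listed above can be illustrated with a small state machine. The following sketch is purely illustrative (it is not the DS8000 microcode); it applies the enter and exit rules to a stream of 1-minute saturation samples:

from collections import deque

def track_saturation(samples):
    # Yield the rank saturation state (True or False) after each 1-minute sample.
    saturated = False
    consecutive = 0           # consecutive saturated samples
    window = deque(maxlen=5)  # the last five samples
    for sample in samples:
        window.append(bool(sample))
        consecutive = consecutive + 1 if sample else 0
        if not saturated and consecutive >= 5:
            saturated = True       # enter: five consecutive saturated samples
        elif saturated and list(window).count(False) >= 3:
            saturated = False      # exit: not saturated in three of the last five samples
        yield saturated

# Six saturated samples, then mostly idle samples:
print(list(track_saturation([1, 1, 1, 1, 1, 1, 0, 0, 1, 0])))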

15.2.4 Thin provisioning SNMP

The DS8000 can trigger two specific SNMP trap alerts that are related to the thin provisioning feature. The trap is sent out when certain extent pool capacity thresholds are reached, which causes a change in the extent status attribute. A trap is sent under the following conditions:

- Extent status is not zero (available space is already below the threshold) when the first extent space-efficient (ESE) volume is configured
- Extent status changes state if ESE volumes are configured in the extent pool

Example 15-19 shows an illustration of generated event trap 221.

Example 15-19 Trap 221: Space Efficient repository or over-provisioned volume reached a warning watermark

Space Efficient Repository or Over-provisioned Volume has reached a warning watermark
Unit: Mnf Type-Mod SerialNm
IBM 2107-922 75-03460
Session ID: 4002

Example 15-20 shows an illustration of generated event trap 223.

Example 15-20 SNMP trap alert message #223

Extent Pool Capacity Threshold Reached
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-03460
Extent Pool ID: P1
Limit: 95%
Threshold: 95%
Status: 0

Important: To receive traps from I/O Priority Manager, IOPM should be set to manage SNMP by using the following command:

chsi -iopmmode managesnmp <Storage_Image>


15.3 SNMP configuration

The SNMP for the DS8000 is designed to send traps as notifications. The DS8000 does not include an installed SNMP agent that can respond to SNMP polling. Also, the SNMP community name for Copy Services-related traps is fixed and set to public.

15.3.1 SNMP preparation

During the planning for the installation (see 9.3.4, “Monitoring DS8870 with the HMC” on page 249), the IP addresses of the management system are provided for the IBM service personnel. This information must be applied by IBM service personnel during the installation. Also, IBM service personnel can configure the HMC to send a notification for every serviceable event or for only those events that call home to IBM.

The network management server that is configured on the HMC receives all the generic trap 6 specific trap 3 messages, which are sent in parallel with any events that call home to IBM.

SNMP alerts can contain a combination of a generic and a specific alert trap. The Traps list outlines the explanations for each of the possible combinations of generic and specific alert traps. The format of the SNMP traps, the list, and the errors that are reported by SNMP are available in the “Generic and specific alert traps” chapter of the Troubleshooting section of the DS8000 Information Center, at the following site:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/topic/com.ibm.storage.ssic.help.doc/f2c_genspecalerttraps_2o5tk8.html

SNMP alert traps provide information about problems that the storage unit detects. You or the service provider must perform corrective action for the related problems.

15.3.2 SNMP configuration from the HMC

Customers can configure SNMP alerting by logging in to the DS8000 HMC Service Management interface (https://HMC_ip_address), remotely or locally through a web browser, and launching the web application by using the following client credentials:

- User ID: customer
- Password: cust0mer

Complete the following steps to configure SNMP at the HMC:

1. Log in to the Service Management section on the HMC, as shown in Figure 15-1.

Figure 15-1 HMC Service Management


2. Select Manage Serviceable Event Notification (as shown in Figure 15-2) and enter the TCP/IP information of the SNMP server in the Trap Configuration folder.

Figure 15-2 HMC Management Serviceable Event Notification

3. To verify the successful setup of your environment, create a test event on your DS8000 HMC. Select Storage Facility Management → Services Utilities → Test Problem Notification (PMH, SNMP, Email), as shown in Figure 15-3.

Figure 15-3 HMC test SNMP trap


The test generates the Service Reference Code BEB20010 and the SNMP server receives the SNMP trap notification, as shown in Figure 15-4.

Figure 15-4 HMC SNMP trap test

15.3.3 SNMP configuration with the DS CLI

Perform the configuration process for receiving the Copy Services-related traps by using the DS CLI. Example 15-21 shows how SNMP is enabled by using the chsp command.

Example 15-21 Configuring the SNMP by using dscli

dscli> chsp -snmp on -snmpaddr 10.10.10.1,10.10.10.2
CMUC00040I chsp: Storage complex IbmStoragePlex successfully modified.

dscli> showsp
Name            IbmStoragePlex
desc            -
acct            -
SNMP            Enabled
SNMPadd         10.10.10.1,10.10.10.2
emailnotify     Disabled
emailaddr       -
emailrelay      Disabled
emailrelayaddr  -
emailrelayhost  -
numkssupported  4


SNMP preparation for the management software
To enable the trap-receiving software to display correctly decoded messages in a human-readable format, load the DS8000-specific MIB file.

The MIB file delivered with the latest DS8000 DS CLI CD is compatible with all previous levels of DS8000 microcode. Therefore, ensure that you have loaded the latest MIB file available.


Chapter 16. Remote support

This chapter describes the outbound (call home and support data offload) and inbound (code download and remote support) communications for the IBM System Storage DS8000 family.

The DS8870 maintains the same functions as the previous generation. Special emphasis is placed on the Assist On-site (AOS) section, which describes an additional method for remote access to IBM products.

This chapter covers the following topics:

- Introduction to remote support
- IBM policies for remote support
- VPN advantages
- Remote connection types
- DS8870 support tasks
- Remote connection scenarios
- Assist On-site
- Audit logging


16.1 Introduction to remote support

Remote support is a complex topic that requires close scrutiny and education. IBM is committed to servicing the DS8870, whether it is warranty work, planned code upgrades, or management of a component failure, in a secure and professional manner. Dispatching service personnel to come to your site and perform maintenance on the system is still a part of that commitment. But as much as possible, IBM wants to minimize downtime and maximize efficiency by performing many support tasks remotely.

This plan of providing support remotely must be balanced with the client’s expectations for security. Maintaining the highest levels of security in a data connection is a primary goal for IBM. This goal can be achieved only by careful planning with a client and a thorough review of all available options.

16.1.1 Suggested reading

The following resources can be used to better understand IBM remote support offerings:

- Chapter 8, “DS8870 physical planning and installation” on page 217, contains more information about physical planning.

- A Comprehensive Guide to Virtual Private Networks, Volume I: IBM Firewall, Server and Client Solutions, SG24-5201, can be downloaded at this website:

http://www.redbooks.ibm.com/abstracts/sg245201.html?Open

- The Security Planning website is available at the following URL:

http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/security/ipsec_planning.htm

- VPNs Illustrated: Tunnels, VPNs, and IPSec, by Jon C. Snader:

http://www.amazon.com/VPNs-Illustrated-Tunnels-IPsec/dp/032124544X

- VPN Implementation, S1002693, which is available at this site:

http://www.ibm.com/support/docview.wss?&rs=1114&uid=ssg1S1002693

16.1.2 Organization of this chapter

A list of the relevant terminology for remote support is presented first. The remainder of this chapter is organized as follows:

- Connections

We review the types of connections that can be made from the HMC to the world outside of the DS8870.

- Tasks

We review the various support tasks that must be run on those connections.

- Scenarios

We present a scenario that describes how each task is performed over the types of remote connections.

Important: The client has the flexibility to quickly enable or disable remote support connectivity by issuing the chaccess or lsaccess commands by using the data storage command-line interface (DS CLI).


16.1.3 Terminology and definitions

Listed here are brief explanations of some of the terms that are used when remote support is described. Having an understanding of these terms contributes to your discussions on remote support and security concerns. A generic definition is presented here and then more specific information about how IBM implements the idea is presented later in this chapter.

IP network
There are many protocols that are running on local area networks (LANs) around the world. Most companies use the Transmission Control Protocol/Internet Protocol (TCP/IP) standard for their connectivity between workstations and servers. IP is also the networking protocol of the global Internet. Web browsing and email are two of the most common applications that run on top of an IP network. IP is the protocol that is used by the DS8870 Hardware Management Console (HMC) to communicate with external systems, such as the Tivoli Storage Productivity Center or DS CLI workstations. There are two varieties of IP: IPv4 and IPv6. For more information about these networks, see Chapter 9, “DS8870 HMC planning and setup” on page 241.

Secure Shell
Secure Shell (SSH) is a protocol that establishes a secure communications channel between two computer systems. The term SSH also is used to describe a secure American Standard Code for Information Interchange (ASCII) terminal session between two computers. SSH can be enabled on a system when regular Telnet and File Transfer Protocol (FTP) are disabled, which makes it possible to communicate only with the computer in a secure manner.

File Transfer Protocol
FTP is a method of moving binary and text files from one computer system to another over an IP connection. FTP is not inherently secure as it has no provisions for encryption and features only simple user and password authentication. FTP is considered appropriate for data that is already public, or if the entirety of the connection is within the physical boundaries of a private network.

Secure Sockets Layer
Secure Sockets Layer (SSL) refers to methods of securing otherwise unsecure protocols such as HTTP (websites), FTP (files), or SMTP (email). Carrying HTTP over SSL often is referred to as HTTPS. An SSL connection over the Internet is considered reasonably secure.

Virtual Private Network
A Virtual Private Network (VPN) is a private tunnel through a public network. Most commonly, it refers to the use of specialized software and hardware to create a secure connection over the Internet. The two systems, although physically separate, behave as though they are on the same private network. A VPN allows a remote worker or an entire remote office to remain part of a company’s internal network. VPNs provide security by encrypting traffic, authenticating sessions and users, and verifying data integrity.

Assist On-site
Tivoli Assist On-site (AOS) is an IBM remote support option that allows SSL connectivity to a system that is at the client site and used to troubleshoot storage devices. With AOS Version 3.3, AOS offers port forwarding as a solution that grants customers attended and unattended sessions. IBM support can use this methodology with VPN for data analysis. For more information, see the Introduction section of Assist On-site, SG24-4889.


IPSec
Internet Protocol Security (IPSec) is a suite of protocols that is used to provide a secure transaction between two systems that use the Transmission Control Protocol/Internet Protocol (TCP/IP) network protocol. IPSec focuses on authentication and encryption, two of the main ingredients of a secure connection. Most VPNs that are used on the Internet use IPSec mechanisms to establish the connection.

Firewall
A firewall is a device that controls whether data is allowed to travel onto a network segment. Firewalls are deployed at the boundaries of networks. They are managed by policies that declare what traffic can pass based on the sender’s address, the destination address, and the type of traffic. Firewalls are an essential part of network security and their configuration must be considered when remote support activities are planned.

Bandwidth
Bandwidth refers to the characteristics of a connection and how they relate to moving data. Bandwidth is affected by the physical connection, the logical protocols that are used, physical distance, and the type of data that is being moved. In general, higher bandwidth means faster movement of larger data sets.

16.2 IBM policies for remote support

The following guidelines are at the core of IBM remote support strategies for the DS8870:

- When the DS8870 must transmit service data to IBM, only logs and process dumps are gathered for troubleshooting. The I/O from host adapters and the contents of NVS cache memory are never transmitted.

- When a VPN session with the DS8870 is needed, the HMC always initiates such connections and only to predefined IBM servers or ports. There is never any active process that is listening for incoming sessions on the HMC.

- IBM maintains multiple-level internal authorizations for any privileged access to the DS8870 components. Only approved IBM service personnel can gain access to the tools that provide the security codes for HMC command-line access.

- Although the HMC is based on a Linux operating system, IBM disabled or removed all unnecessary services, processes, and IDs, including standard Internet services, such as telnet, FTP, r commands, and remote procedure call (RCP) programs.

16.3 VPN rationale and advantages

Security is a critical issue for companies worldwide. Having a secure infrastructure requires systems to work together to mitigate the risk of malicious activity from external and internal sources. Any connection from your network to the public Internet raises the following security concerns:

- Infection by viruses
- Intrusion by hackers
- The accessibility of your data from the remote support site
- Authorization of the remote users to access your machine when a remote connection is opened


The IBM VPN connections, along with the built-in security features of the DS8870, allow IBM support to assist you in resolving the most complex problems without the risk inherent to non-secure connections.

Remote-access support can help to greatly reduce service costs and shorten repair times, which in turn lessens the impact of any failures on your business. The use of IBM security access provides a number of advantages that are designed to help you save time and money and efficiently solve problems.

The following benefits can be realized:

- Faster problem solving: You can contact technical experts in your support region to help resolve problems on your DS8870 without having to wait for data such as logs, dumps, and traces. As a result, problems can be solved faster.

- Connection with a worldwide network of experts: IBM Technical support engineers can call on other worldwide subject experts to assist with problem determination. These engineers can then simultaneously view the DS8870 Hardware Management Console.

- Closer monitoring and enhanced collaboration.

- Not a business-to-business connection: It is an HMC server-to-IBM VPN server connection, which could also be used as a call home.

- Save time and money: Many problems can be analyzed in advance. When an IBM service representative arrives at your site, they already have an action plan, if required.

16.4 Remote connection types

The DS8870 HMC can be connected to the client’s network by using a standard Ethernet (100/1000 Mb) cable. The HMC can also be connected to a phone line through a modem port. These two physical connections offer the following connection possibilities for sending and receiving data between the DS8870 and IBM:

- Asynchronous modem connection
- IP network connection
- IP network connection with VPN
- Assist On-site

Rather than leaving the modem and Ethernet disconnected, clients provide these connections and then apply policies on when they are to be used and what type of data they can carry. Those policies are enforced by the settings on the HMC and the configuration of client network devices, such as routers and firewalls. The next four sections describe the capabilities of each type of connection.

16.4.1 Asynchronous modem

A modem creates a low-speed asynchronous connection by using a telephone line that is plugged into the HMC modem port. This type of connection favors transferring small amounts of data. It is relatively secure because the data is not traveling across the Internet. However, this type of connection is not terribly useful because of bandwidth limitations. In some countries, average connection speed is high (28 - 56 Kbps), but in others, it can be lower.

VoIP: Connectivity issues are seen on Voice over IP (VoIP) phone infrastructures that do not support the Modem Over IP (MoIP) standard ITU V150.


The DS8870 HMC modem can be configured to call IBM and send small status messages. Authorized support personnel can call the HMC and get privileged access to the command line of the operating system. Typical PE Package transmission is not normally performed over a modem line because it can take too long, depending on the quality of the connection. Code downloads over a modem line are not possible.

The client controls whether the modem answers an incoming call. These options are changed from the WebUI on the HMC by selecting Service Management → Manage Inbound Connectivity, as shown in Figure 16-1.

Figure 16-1 Service Management in WebUI

The HMC provides the following settings to govern the usage of the modem port:

- Unattended Session

This setting allows the HMC to answer modem calls without operator intervention. If this setting is disabled, someone must go to the HMC and allow for the next expected call. IBM Support must contact the client every time they must dial in to the HMC.

– Duration: Continuous

This option indicates that the HMC can always answer all calls.

– Duration: Automatic

This option indicates that the HMC answers all calls for a specified number of days after any Serviceable Event (problem) is created.

– Duration: Temporary

This option sets a starting and ending date, during which the HMC answers all calls.


These options are shown in Figure 16-2. A modem connection is shown in Figure 16-3 on page 435.

Figure 16-2 Modem settings

16.4.2 IP network

Network connections are considered high speed when compared to a modem. Enough data can flow through a network connection to make it possible to run a graphical user interface (GUI).

HMCs that are connected to a client IP network, and eventually to the Internet, can send status updates and offloaded problem data to IBM by using SSL sessions. It typically takes less than one hour to move the information.

Though favorable for speed and bandwidth, network connections introduce security concerns. Therefore, the following concerns must be considered:

- Verify the authenticity of data, that is, is it really from the sender it claims to be?
- Verify the integrity of data, that is, has it been altered during transmission?
- Verify the security of data, that is, can it be captured and decoded by unwanted systems?

The SSL protocol is one answer to these questions. It provides transport layer security with authenticity, integrity, and confidentiality, for a secure connection between the client network and IBM. SSL provides the following features:

- Client and server authentication to ensure that the appropriate machines are exchanging data

- Data signing to prevent unauthorized modification of data while in transit

- Data encryption to prevent the exposure of sensitive information while data is in transit

- Traffic through an SSL proxy is supported by the user ID and password that is provided by the client.

A basic network connection is shown in Figure 16-5 on page 437.



16.4.3 IP network with traditional VPN

Adding a VPN tunnel, which is IPSec-based and not proxy capable, to an IP network greatly increases the security of the connection between the two endpoints. Data can be verified for authenticity and integrity. Data can be encrypted so that even if it is captured en route, it cannot be replayed or deciphered. If required, Network Address Translation is supported and can be configured on request.

With the safety of running within a VPN, IBM can use its service interface (WebUI) to perform the following tasks:

- Check the status of components and services on the DS8870 in real time
- Queue up diagnostic data offloads
- Start, monitor, pause, and restart repair service actions

Performing the following steps results in the HMC creating a VPN tunnel back to the IBM network, which service personnel can then use. There is no VPN service that sits idle, waiting for a connection to be made by IBM. Only the HMC is allowed to initiate the VPN tunnel, and it can be made only to predefined IBM addresses. The following steps are used to create a VPN tunnel from the DS8870 HMC to IBM:

1. IBM support calls the HMC by using the modem. After the first level of authentications, the HMC is asked to launch a VPN session.

2. The HMC hangs up the modem call and initiates a VPN connection back to a predefined address or port within IBM Support.

3. IBM Support verifies that they can see and use the VPN connection from an IBM internal IP address.

4. IBM Support launches the WebUI or other high-bandwidth tools to work on the DS8870.

5. In addition to dialing via modem, a remote access VPN can be established via web user interface (WUI) from the HMC, by using Prepare under Manage Inbound Connections, or by DS CLI command.

An illustration of a traditional VPN connection is shown in Figure 16-6 on page 438.

16.4.4 Assist On-site

Assist On-site (AOS) provides a method of remote access. It does not support data offload or call home. For more information, see 16.7, “Assist On-site” on page 439.

16.5 DS8870 support tasks

DS8870 support tasks require the HMC to contact the outside world. Some tasks can be performed by using the modem or the network connection. Some tasks can be done only over a network. The combination of tasks and connection types is described in 16.6, “Remote connection scenarios” on page 434. The following support tasks require the DS8870 to connect to outside resources:

� Call home and heartbeat
� Data offload
� Code download
� Remote support


16.5.1 Call home and heartbeat: outbound

Here we describe the call home and heartbeat capabilities.

Call home

Call home is the capability of the HMC to contact IBM Service to report a service event. It is referred to as call home for service. The HMC provides machine reported product data (MRPD) information to IBM by way of the call home facility. The MRPD information includes installed hardware, configurations, and features. The call home also includes information about the nature of a problem so that an active investigation can be started. Call home is a one-way communication, with data that moves from the DS8870 HMC to the IBM data store.

Heartbeat

The DS8870 also uses the call home facility to send proactive heartbeat information to IBM. A heartbeat is a small message with basic product information so that IBM knows that the unit is operational. By sending heartbeats, IBM and the client ensure that the HMC is always able to initiate a full call home to IBM in the case of an error. If the heartbeat information does not reach IBM, a service call to the client is made to investigate the status of the DS8870. Heartbeats represent a one-way communication, with data that moves from the DS8870 HMC to the IBM data store.

The call home facility can be configured to use the following data transfer methods:

� HMC modem
� The Internet through an SSL connection
� The Internet through an IPSec tunnel from the HMC to IBM

Call home information and heartbeat information is stored in the IBM internal data store so the support representatives can access the records.

16.5.2 Data offload: outbound

For many DS8870 problem events, such as a hardware component failure, a large amount of diagnostic data is generated. This data can include text and binary log files, firmware dumps, memory dumps, inventory lists, and timelines. These logs are grouped into collections by the component that generated them or the software service that owns them. The entire bundle is collected together in a PEPackage. A DS8870 PEPackage can be large, often exceeding 100 MB. In certain cases, more than one PEPackage might be needed to properly diagnose a problem. In certain cases, the IBM Support center might need an additional dump that is internally created by DS8870 or manually created through the intervention of an operator.

The HMC is a focal point, gathering and storing all the data packages. So, the HMC must be accessible if a service action requires the information. The data packages must be offloaded from the HMC and sent in to IBM for analysis. The offload can be done in the following ways:

� Modem offload
� Standard FTP offload
� SSL offload
� VPN offload

On Demand Data Dump: The On Demand Data (ODD) Dump provides a mechanism that allows the collection of debug data for error scenarios. With ODD Dump, IBM can collect data after an initial error occurs with no impact to host I/O. ODD cannot be generated via DS CLI.


Modem offload

The HMC can be configured to support automatic data offload by using the internal modem and a regular phone line. Offloading a PEPackage over a modem connection is slow, often taking 15 - 20 hours. It also ties up the modem during that time, and IBM Support cannot dial in to the HMC to perform command-line tasks. Only smaller files, such as LPAR statesave dumps (about 10 MB), are practical over this bandwidth. If this connectivity option is the only option that is available, be aware that the overall process of remote support is delayed while data is in transit.

Standard FTP offload

The HMC can be configured to support automatic data offload by using File Transfer Protocol (FTP) over a network connection. This traffic can be examined at the client’s firewall before it is moved across the Internet. FTP offload allows IBM Service personnel to dial in to the HMC by using the modem line while support data is transmitted to IBM over the network.

When a direct FTP session across the Internet is not available or wanted, a client can configure the FTP offload to use a client-provided FTP proxy server. The client then becomes responsible for configuring the proxy to forward the data to IBM.

The client is required to manage its firewalls so that FTP traffic from the HMC (or from an FTP proxy) can pass onto the Internet.

SSL offload

For environments that do not allow FTP traffic out to the Internet, the DS8870 also supports offload of data by using SSL security. In this configuration, the HMC uses the client-provided network connection to connect to the IBM data store, the same as in a standard FTP offload. But with SSL, all the data is encrypted so that it is rendered unusable if intercepted.

For SSL setup, the client firewall between the HMC and the Internet must allow outbound connections on port 443 to the following IP addresses, which depend on geography:

� North and South America

129.42.160.48 IBM Authentication Primary
207.25.252.200 IBM Authentication Secondary
129.42.160.49 IBM Data Primary
207.25.252.204 IBM Data Secondary

� All other regions

129.42.160.48 IBM Authentication Primary
207.25.252.200 IBM Authentication Secondary
129.42.160.50 IBM Data Primary
207.25.252.205 IBM Data Secondary

Important: FTP offload of data is supported as an outbound service only. There is no active FTP server that is running on the HMC that can receive connection requests.

IP values: The IP values that are provided here could change with new code releases. If the values fail, consult the Information Center documentation in your DS8870 HMC by searching for isolating call home/remote services failure under the “Test the Internet (SSL) connection section”. For more information, contact the IBM DS8870 support center.


VPN offload

A remote service VPN session can be initiated at the HMC for data offload over a modem or an Internet VPN connection. At least one of these methods of connectivity must be configured through the Outbound Connectivity panel. The VPN session is always initiated outbound from the HMC, not inbound.

When a firewall is in place to shield the client network from the open Internet, the firewall must be configured to allow the HMC to connect to the IBM servers. The HMC establishes connection to the following TCP/IP addresses:

� IBM Boulder VPN Server: 207.25.252.196
� IBM Rochester VPN Server: 129.42.160.16

You must also enable the following ports and protocols:

� ESP
� UDP Port 500
� UDP Port 4500

Example 16-1 shows the output of defined permissions that is based on a Cisco PIX model 525 firewall.

Example 16-1 Cisco Firewall configuration

access-list DMZ_to_Outside permit esp host 207.25.252.196 host <IP addr for HMC>
access-list DMZ_to_Outside permit esp host 129.42.160.16 host <IP addr for HMC>
access-list DMZ_to_Outside permit udp host 207.25.252.196 host <IP addr for HMC> eq 500
access-list DMZ_to_Outside permit udp host 129.42.160.16 host <IP addr for HMC> eq 500
access-list DMZ_to_Outside permit udp host 207.25.252.196 host <IP addr for HMC> eq 4500
access-list DMZ_to_Outside permit udp host 129.42.160.16 host <IP addr for HMC> eq 4500

Only the HMC client network must be defined for access to the IBM VPN Servers. The IPSec tunneling technology that is used by the VPN software, with the TCP/IP port forwarder on the HMC, provides the ability for IBM Service to access the DS8870 servers themselves through the secure tunnel.


Comparison of DS8870 connectivity options

Table 16-1 shows the benefits and drawbacks of the types of connection. The terms remote access and remote service are used interchangeably throughout this document.

Service activities include problem reporting, debug data offload, and remote access. Enabling multiple options is allowed and must be used for optimal availability.

Table 16-1 Remote support connectivity comparison

FTP
  Pros: Fast debug data transfer to IBM; allows proxying.
  Cons: Does not support problem reporting or remote access.
  Comments: To support all service activities, VPN Internet or modem must also be enabled as an adjunct.

Internet (SSL)
  Pros: Fast debug data transfer to IBM; supports problem reporting; for various reasons, such as proxying, SSL is easier to implement than VPN.
  Cons: Does not support remote access.
  Comments: To support remote access, VPN Internet or modem must also be enabled as an adjunct.

VPN Internet
  Pros: Fast debug data transfer; supports all service activities.
  Cons: Can be difficult to implement in some environments; does not allow you to inspect packets.
  Comments: Generally the best option. Might be the only option enabled. However, use the modem as a backup and for initiating remote access sessions.

AOS
  Pros: SSL security; secure attended and unattended sessions; economical connectivity solution; easy installation.
  Cons: Installation and configuration with IBM support is required.
  Comments: AOS is a preferred solution because it adds more secure connectivity options.

Modem
  Pros: Supports all service activities; allows IBM service to remotely initiate an outbound VPN session.
  Cons: Extremely slow debug data transfer to IBM.
  Comments: Might be the only option enabled.


16.5.3 Code download: inbound

DS8870 microcode updates are published as bundles that can be downloaded from IBM. As described in 14.2, “Bundle installation” on page 402, the following possibilities are used for acquiring code on the HMC:

� Load the new code bundle by using CDs or DVDs.
� Download the new code bundle directly from IBM by using FTP.
� Download the new code bundle directly from IBM by using FTP over SSL (FTPS).

Loading code bundles from CDs or DVDs is the only option for DS8870 installations that do not include any outside connectivity. If the HMC is connected to the client network, IBM support downloads the bundles from IBM by using FTP or FTPS.

FTP

If allowed, the support representative opens an FTP session from the HMC to the IBM code repository and downloads the code bundle (or bundles) to the HMC. The client firewall must be configured to allow the FTP traffic to pass.

FTPS

If FTP is not allowed, an FTPS session can be used instead. FTPS is a more secure file transfer protocol that runs within an SSL session. If this option is used, the client firewall must be configured to allow the SSL traffic to pass.

After the code bundle is acquired from IBM, the FTP or FTPS session is closed and the code load can take place without needing to communicate outside of the DS8870.

16.5.4 Remote support: inbound and two-way

The term remote support describes the most interactive level of assistance from IBM. After a problem comes to the attention of the IBM Support Center and it is determined that the issue is more complex than a straightforward parts replacement, the problem likely is escalated to higher levels of responsibility within IBM Support. This escalation could happen at the same time that a support representative is dispatched to the client site.

IBM might need to trigger a data offload, perhaps more than one, and at the same time be able to interact with the DS8870 to dig deeper into the problem and develop an action plan to restore the system to normal operation. This type of interaction with the HMC requires the most bandwidth.

If the only available connectivity is by modem, IBM Support must wait until any data offload is complete and then attempt the diagnostic testing and repair from a command-line environment on the HMC. This process is slower and more limited in scope than if a network connection can be used.

Another benefit of VPN is that IBM Support can offload data and troubleshoot in parallel with VPN over Ethernet. However, this task cannot be done with VPN over a modem. Upon establishing a secure session with the storage device, IBM Support can use ASCII end-to-end connection tools to diagnose and repair the problem.

Important: Package bundles are not available for users to download. Only IBM Support Representatives have the authority to use FTP or FTPS in the HMC to acquire a release bundle from the network. IBM Service Representatives also can download the bundle to their notebook and then load it on the HMC.


16.6 Remote connection scenarios

Now that the four connection options were reviewed (see 16.4, “Remote connection types” on page 425) and the tasks were reviewed (see 16.5, “DS8870 support tasks” on page 428), we can examine how each task is performed, considering the type of access available to the DS8870.

16.6.1 No connections

If the modem or Ethernet is not physically connected and configured, the following tasks are performed:

� Call home and heartbeat: The HMC does not send heartbeats to IBM. The HMC does not call home if a problem is detected. IBM Support must be notified at the time of installation to add an exception for this DS8870 in the heartbeats database, indicating that it is not expected to contact IBM.

� Data offload: If required and allowed by the client, diagnostic data can be copied onto an SDHC rewritable media, transported to an IBM facility, and uploaded to the IBM data store.

� Code download: Code must be loaded onto the HMC by using CDs that are carried in by the IBM Service Representative who can also download the bundle to HMC via the Ethernet network.

� Remote support: IBM cannot provide any remote support for this DS8870. All diagnostic and repair tasks must take place with an operator who is physically at the console.

16.6.2 Modem only

If the modem is the only connectivity option, then the following tasks are performed:

� Call home and heartbeat: The HMC uses the modem to call IBM and send the call home data and the heartbeat data. These calls are short in duration.

� Data offload: After data offload is triggered, the HMC uses the modem to call IBM and send the data package. Depending on the package size and line quality, this call could take up to 20 hours to complete. Having modem and FTP is a great combination because data can be offloaded quickly while the modem calls home or remote support is engaged.

� Code download: Code must be loaded onto the HMC by using CDs or DVDs that are carried in by the Service Representative. There is no method of download if only a modem connection is available.

� Remote support: If the modem line is available (and is not being used to offload data or send call home data), IBM Support can dial in to the HMC and run commands in a command-line environment. IBM Support cannot use a GUI or any high-bandwidth tools.


A modem-only connection is shown in Figure 16-3.

Figure 16-3 Remote support with modem only


16.6.3 VPN only

If the VPN is the only connectivity option, the following tasks are performed:

� Call home and heartbeat: The HMC uses the VPN network to call IBM and send the call home data and the heartbeat data.

� Data offload: After data offload is triggered, the HMC uses the VPN network to call IBM and send the data package. The package is sent to IBM server quickly.

� Remote support: An IBM Support Center representative calls you and asks you to open a VPN connection before the remote connection is started. After the VPN is opened, the IBM Support center can connect to the HMC and run commands in a command-line environment. IBM Support can use the Service Web Interface.

A VPN-only connection is shown in Figure 16-4.

Figure 16-4 Remote support with VPN only


16.6.4 Modem and network with no VPN

If the modem and network access are provided without VPN, the following tasks are performed:

� Call home and heartbeat: The HMC uses the network connection to send call home data and heartbeat data to IBM across the Internet.

� Data offload: The HMC uses the network connection to send offloaded data to IBM across the Internet. Standard FTP or SSL sockets can be used.

� Remote support: Although there is a network connection, it is not configured to allow VPN traffic, so remote support must be done by using the modem. If the modem line is not busy, IBM Support can dial in to the HMC and run commands in a command-line environment. IBM Support cannot use a GUI or any high-bandwidth tools.

A modem and network connection that does not use VPN tunnels is shown in Figure 16-5.

Figure 16-5 Remote support with modem and network (no VPN)


16.6.5 Modem and traditional VPN

If the modem and a VPN-enabled network connection are provided, the following tasks are performed:

� Call home and heartbeat: The HMC uses the network connection to send call home data and heartbeat data to IBM across the Internet, outside of a VPN tunnel.

� Data offload: The HMC uses the network connection to send offloaded data to IBM across the Internet, outside of a VPN tunnel. Standard FTP or SSL sockets can be used.

� Remote support: Upon request, the HMC establishes a VPN tunnel across the Internet to IBM. IBM Support can use tools to interact with the HMC.

A modem and network connection plus traditional VPN is shown in Figure 16-6.

Figure 16-6 Remote support with modem and traditional VPN


16.7 Assist On-site

IBM Tivoli Assist On-site (AOS) is a software product that is provided by IBM at no cost and is designed to help clients. AOS offers a new method of remote support assistance for IBM products. It can be used with a wide range of IBM hardware systems, including the DS8870.

AOS is a secured tunneling application server over SSL. It is controlled by the client at their facilities, and it allows IBM support to access systems for diagnosis and troubleshooting. A client can have a server (it could also be a workstation or a virtual server) as a single focal point in their IT network infrastructure to manage and monitor all remote support requests for all of the different IBM products that support AOS. This gives the advantage of concentrating all remote support assistance in one point, regardless of the specific remote maintenance tool that the IBM system or device requires. This simple concept allows for easy management and maintenance of the AOS equipment at the client site.

The client controls who (support individuals or support teams) can remotely support their equipment. The client can enable or disable different security options to decide what actions IBM remote support can do. The IBM remote support team can be granted full access to IBM systems or be allowed to see only the AOS window without the ability to interact. Customers can decide whether IBM remote support sessions are attended or unattended.

AOS can be used by DS8870 as a remote support method, which adds SSL security and allows the client to have more control over their environment. Some users are reluctant to implement VPN, even though it is a well-proven and consolidated secure option. To meet their security policies when they are using AOS, the client can decide to place the AOS client workstation in the DMZ or elsewhere rather than on the HMC.

AOS can be an alternative to remote support via modem, but AOS allows only inbound connectivity. Therefore, you still need to implement call home and data offload.

This section is not intended to be a comprehensive guide to AOS. We explain the fundamentals of AOS, specifically as they apply to DS8870 remote support.

For more information about AOS, prerequisites, and installation, see the IBM Redpaper publication Introduction to Assist On-site for DS8000, REDP4889.

Important: AOS cannot be used for call home or data offload. For more information about options to outbound connectivity (call home and data offload), see 16.5.1, “Call home and heartbeat: outbound” on page 429, and 16.5.2, “Data offload: outbound” on page 429.


16.8 Further remote support enhancements

The client can customize what can be done to the DS8870 by using DS CLI commands. The following capabilities are available:

� Customer control of remote access

By using the HMC, the client can control whether remote access sessions can be established from a GUI or command line by enabling or disabling specific users. The client is provided with a backup command to disable or enable HMC users. The ability of these users to affect storage facility operations and maintenance is limited. With this addition of user control capability, customers who disable HMC users are expected to change the default password for the customer user ID and manage the changed password.

� DS CLI command that invokes the HMC command

There are a series of DS CLI commands that serve as the main interface for the client to control remote access. The DS CLI commands invoke functions to allow the client to disable or enable remote access for any connection method and remote WUI access. There are two commands that are available to use for this purpose: chaccess, which enables and disables individual remote access methods, and lsaccess, which displays the current access settings for all remote connections.

The chaccess command modifies access settings on an HMC for the command-line shell, the WUI, and modem access. The command uses the following syntax:

chaccess [ -commandline enable | disable ] [-wui enable | disable] [-modem enable | disable] hmc1 | hmc2

The description of the command parameters is listed in Table 16-2.

Table 16-2 Chaccess parameters description

Parameter: -commandline enable | disable (optional)
Description: Command-line access via Internet/VPN. Default: n/a.
Details: Optional. Enables or disables the command-line shell access to the HMC via Internet or VPN connection. This control is for service access only and has no effect on access to the machine via the DS command-line interface. At least one of -commandline, -wui, or -modem must be specified.

Parameter: -wui enable | disable (optional)
Description: WUI access via Internet/VPN. Default: n/a.
Details: Optional. Enables or disables the Hardware Management Console's WUI access on the HMC via Internet or VPN connection. This control is for service access only and has no effect on access to the machine by using the DS Storage Manager. At least one of -commandline, -wui, or -modem must be specified.

Parameter: -modem enable | disable (optional)
Description: Modem dial-in and VPN initiation. Default: n/a.
Details: Optional. Enables or disables the modem dial-in and VPN initiation to the HMC. At least one of -commandline, -wui, or -modem must be specified.

Parameter: hmc1 | hmc2 (required)
Description: The primary or secondary HMC. Default: n/a.
Details: Required. Specifies the primary (hmc1) or secondary (hmc2) HMC for which access should be modified.

Important: The hmc1 specifies the primary and hmc2 specifies the secondary HMC, regardless of how -hmc1 and -hmc2 were specified during dscli startup. A DS CLI connection might succeed, although a user inadvertently specifies a primary HMC by using -hmc2 and the secondary backup HMC by using -hmc1 at DS CLI start.


The lsaccess command displays access settings of primary and backup HMCs. The command uses the following syntax:

lsaccess [ hmc1 | hmc2 ]

The description of the command parameters is listed in Table 16-3.

Table 16-3 lsaccess parameters description

Parameter: hmc1 | hmc2 (optional)
Description: The primary or secondary HMC. Default: List access for all HMCs.
Details: Optional. Specifies the primary (hmc1) or secondary (hmc2) HMC for which settings should be displayed. If neither hmc1 nor hmc2 is specified, settings for both are listed.

Important: The hmc1 specifies the primary and hmc2 specifies the secondary HMC, regardless of how -hmc1 and -hmc2 were specified during DS CLI start. A DS CLI connection might succeed, although a user inadvertently specifies a primary HMC by using -hmc2 and the secondary backup HMC by using -hmc1 at DS CLI start.

The output lists each remote connection type and whether it is enabled. See Example 16-2 for an illustration of the lsaccess command output.

Example 16-2 lsaccess command output

dscli> lsaccess
hmc  commandline wui     modem
==============================
hmc1 enabled     enabled enabled
hmc2 enabled     enabled enabled

Use Cases

The user can toggle the following independent controls:

� Enable/Disable WUI Access via Internet/VPN
� Enable/Disable Command Line Access via Internet/VPN
� Enable/Disable Modem Dial in and VPN Initiation



The following use cases are available:

� Format: Option 1/Option 2/Option 3, where the positions correspond to WUI access, command-line access, and modem access
� D = Disabled
� E = Enabled

The client can specify the following access options (a hedged DS CLI sketch follows the list):

� D/D/D: No access is allowed.
� E/E/E: Allow all access methods.
� D/D/E: Only allow modem dial-in.
� D/E/D: Only allow command-line access via network.
� E/D/D: Only allow WUI access via network.
� E/E/D: Allow WUI and command-line access via network.
� D/E/E: Allow command-line access via network and modem dial-in.
� E/D/E: Allow WUI access via network and modem dial-in.
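For illustration only, the following sequence (our sketch, not a listing from the DS CLI reference) shows how the D/E/D combination might be set on the primary HMC and then verified. It uses only the chaccess and lsaccess syntax that is described earlier in this section; adapt the enable/disable values and the HMC designation to your own security policy:

dscli> chaccess -wui disable -commandline enable -modem disable hmc1
dscli> lsaccess hmc1

After the chaccess command completes, the lsaccess output for hmc1 should show commandline enabled and wui and modem disabled, in the format that is shown in Example 16-2.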

Customer notification of remote login

The HMC code records all remote access, including modem, VPN, and network, in a log file. There is a DS CLI function that allows a client to offload this file for audit purposes. The DS CLI function combines the log file that contains all service login information with an ESSNI audit log file that contains all client configuration user logins via DS CLI and DS GUI. Therefore, it provides the client with a complete audit trail of remote access to an HMC.

This on-demand audit log mechanism is sufficient for client security requirements regarding HMC remote access notification.

In addition to the audit log, email notifications and SNMP traps also can be configured at the HMC to send notification when a remote support connection is made.

16.9 Audit logging

The DS8870 offers an audit logging security function that is designed to track and log changes that are made by administrators that use the DS Storage Manager GUI or DS CLI. This function also documents remote support access activity to the DS8870. The audit logs can be downloaded by DS CLI or Storage Manager.

The DS CLI offloadauditlog command that provides clients with the ability to offload the audit logs to the DS CLI workstation in a directory of their choice is shown in Example 16-3.

Example 16-3 DS CLI command to download audit logs

dscli> offloadauditlog -logaddr smc1 c:\75ZA570_audit.txt
Date/Time: October 2, 2012 15:30:40 CEST IBM DSCLI Version: 7.7.0.580 DS: -
CMUC00244W offloadauditlog: The specified file currently exists. Are you sure you want to replace the file? [y/n]: y
CMUC00243I offloadauditlog: Audit log was successfully offloaded from smc1 to c:\75ZA570_audit.txt.

The downloaded audit log is a text file that provides information about when a remote access session started and ended, and what remote authority level was applied. A portion of the downloaded file is shown in Example 16-4 on page 443.


Example 16-4 Audit log entries that are related to a remote support event by using a modem

U,2012/10/02 09:10:57:000 MST,,1,IBM.2107-75ZA570,N,8000,Phone_started,Phone_connection_started,,,
U,2012/10/02 09:11:16:000 MST,,1,IBM.2107-75ZA570,N,8036,Authority_to_root,Challenge Key = 'Fy31@C37'; Authority_upgrade_to_root,,,
U,2012/10/02 12:09:49:000 MST,customer,1,IBM.2107-75ZA570,N,8020,WUI_session_started,,,,
U,2012/10/02 13:35:30:000 MST,customer,1,IBM.2107-75ZA570,N,8022,WUI_session_logoff,WUI_session_ended_loggedoff,,,
U,2012/10/02 14:49:18:000 MST,,1,IBM.2107-75ZA570,N,8002,Phone_ended,Phone_connection_ended,,,

The Challenge Key that is presented to the IBM support representative is not a password on the HMC. It is a token that is shown to the IBM support representative who is dialing in to the DS8870. The representative must use the Challenge Key in an IBM internal tool to generate a Response Key that is given to the HMC. The Response Key acts as a one-time authorization to the features of the HMC. The Challenge and Response Keys change when a remote connection is made.

The Challenge-Response process must be repeated if the representative needs to escalate privileges to access the HMC command-line environment. There is no direct user login and no root login through the modem on a DS8870 HMC.

Entries are added to the audit file only after the operation completes. All information about the request and its completion status is known. A single entry is used to log request and response information. It is possible, though unlikely, that an operation does not complete because of an operation timeout. In this case, no entry is made in the log. The audit log includes the following types of entries:

� Log users that connect or disconnect to the storage manager.

� Log user password and user access violations.

� Log commands that create, remove, or modify logical configuration, including command parameters and user ID.

� Log commands that modify Storage Facility and Storage Facility settings, including command parameters and user ID.

� Log Copy Services commands, including command parameters and user ID (TPC-R commands are not supported).

Audit logs feature the following characteristics:

� Logs should be maintained for 30 days. It is the user's responsibility to periodically extract the log and save it away.

� Logs are automatically trimmed (FIFO) by the subsystem so they do not use more than 50 megabytes of disk storage.

For more information about how auditing is used to record who-did-what-and-when in the audited system and for a guide to log management, see this document:

http://csrc.nist.gov/publications/nistpubs/800-92/SP800-92.pdf


Chapter 17. DS8870 Capacity upgrades and Capacity on Demand

This chapter describes aspects of implementing capacity upgrades and Capacity on Demand (CoD) with the IBM DS8870.

This chapter covers the following topics:

� Installing capacity upgrades
� Using Capacity on Demand


17.1 Installing capacity upgrades

Storage capacity can be ordered and added to the DS8870 through disk drive sets. A disk drive set includes 16 disk drive modules (DDM) of the same capacity and spindle speed (rpm). All drives that are offered in the DS8870 are Full Disk Encryption (FDE) capable. Table 17-1 lists which DS8870 disk drive modules are available.

Table 17-1 DS8870 Disk drive types

Standard drives:
� SFF 146 GB 15-K rpm FDE SAS
� SFF 300 GB 15-K rpm FDE SAS
� SFF 600 GB 10-K rpm FDE SAS
� SFF 1.2 TB 10-K rpm FDE SAS
� LFF 4 TB 7.2-K rpm FDE Nearline SAS
� SFF 400 GB FDE SSD

The disk drives are installed in storage enclosures (SEs). A storage enclosure controller card, Fibre Channel interface card (FCIC), interconnects the drives to the device adapters. Each storage enclosure contains a redundant pair of controller cards. Each of the controller cards also includes redundant trunking, one connection to each of the device adapters in the DA Pair. Figure 3-15 on page 51 illustrates the available DS8870 Storage Enclosures.

Storage enclosures are always installed in pairs. A storage enclosure pair can be populated with one, two, or three disk drive sets (16, 32, or 48 DDMs). The LFF storage enclosure is an exception because one 4 TB drive set consists of eight disks. All DDMs in a storage enclosure pair must be of the same type (capacity and speed). Most commonly, each storage enclosure is shipped full with 24 DDMs, meaning that each pair has 48 DDMs. If a storage enclosure pair is populated with only 16 or 32 DDMs, disk drive filler modules that are called baffles are installed in the vacant DDM slots. Baffles maintain the correct cooling airflow throughout the enclosure.

Each storage enclosure attaches to two device adapters (DA). The DA cards are the RAID controllers that connect the central electronics complexes (CECs) to the DDMs. The DS8870 DA cards are always installed as a redundant pair, so they are referred to as DA pairs.

Physical installation and testing of the device adapters, storage enclosure pairs, and DDMs is done by your IBM service representative. After the capacity is added successfully, the new capacity is shown as unconfigured array sites.



The SEs for DS8870 Enterprise Class installation order and DA-Pair Relationships are shown in Figure 17-1.

Figure 17-1 SE installation order and DA-Pair Relationship Enterprise Model

Storage enclosures for DS8870 Business Class are shown in Figure 17-2.

Figure 17-2 SE installation order and DA Pair-Relationship Business Class Model


After a capacity upgrade, you might need to obtain new license keys and apply them to the storage image before you start configuring the new capacity. For more information, see Chapter 10, “IBM System Storage DS8000 features and licensed functions” on page 259. You cannot create ranks for the new capacity if this action causes your machine to exceed its license key limits. Applying increased feature activation codes is a concurrent action, but a license reduction or deactivation is often a disruptive action.

17.1.1 Installation order of upgrades

Individual machine configurations vary, so it is not possible to give an exact order in which every storage upgrade is installed. An upgrade installation order also is difficult to provide because it is possible to order a machine with multiple under-populated storage enclosures (SEs) across the DA pairs. The configuration is done in a way to allow future upgrades to be performed with the fewest physical changes. However, DA cards must be installed before storage enclosures can be added to the DA Pair and storage enclosures must be installed before DDMs can be added to the storage enclosures. All storage upgrades are concurrent.

Generally, when capacity is added to a DS8870, storage hardware is populated in the following order:

1. DDMs are added to under-populated enclosures. Whenever you add 16 DDMs to a machine, eight DDMs are installed into the upper storage enclosure and eight into the lower storage enclosure of the pair. If you add 3 full drive sets, 24 are installed in the upper storage enclosure and 24 are installed in the lower storage enclosure of the pair.

2. After the first storage enclosure pair on a DA pair is fully populated with DDMs (48 DDMs total), the next two storage enclosures to be populated are connected to a new DA pair for an Enterprise Class configuration or are added to the same DA pair for a Business Class configuration. The DA cards are installed into the I/O enclosures that are at the bottom of the base frame and the first expansion frame. For more information, see “Frames: DS8870” on page 36.

3. Each DA Pair can manage a maximum of four storage enclosure pairs (192 DDMs). Storage enclosure installation order always is done from the bottom to the top of the frame. For more information, see Figure 17-3 on page 452.

17.1.2 Checking how much total capacity is installed

The following data storage command-line interface (DS CLI) commands can be used to check how many DAs, SEs, and DDMs are installed in your DS8870:

� lsda
� lsstgencl
� lsddm
� lsarraysite

Solid-state flash drives: Special restrictions in terms of placement and intermixing apply when solid-state flash drives are added. Refer to Chapter 8, section 8.5.3, “DS8000 solid-state drive considerations” on page 239 to check the placement and restrictions using solid-state flash drives.

Intermix DDM installation: If there is an intermix DDM installation, consult with your IBM Representative to verify the optimal configuration to get the maximum capacity from the new storage.


When the -l parameter is added to these commands, more information is shown. In the next section, we show examples of using these commands.

For these examples, the target DS8870 has four device adapter pairs (a total of eight DAs) and five storage enclosure pairs (a total of 10 SEs). Four SE pairs are fully populated and one SE pair features 16 SSDs (one drive set) and 32 baffles in which drives are not populated. These five storage enclosure pairs include one SE pair that contains 24 LFF 3-TB Nearline drives. There are 184 DDMs and 23 array sites because each array site consists of eight DDMs. In the examples, all array sites are in use, meaning that an array was created on each array site.

Example 17-1 shows a listing of the device adapters.

Example 17-1 List the device adapters

dscli> lsda -l IBM.2107-75ZA571
Date/Time: October 8, 2012 11:13:52 CEST IBM DSCLI Version: 7.7.x.xxx DS: IBM.2107-75ZA571
ID                          State  loc                      FC Server DA pair interfs
========================================================================================================
IBM.1400-1B1-38490/R0-P1-C3 Online U1400.1B1.RJ38490-P1-C3  -  0      0       0x0030,0x0031,0x0032,0x0033
IBM.1400-1B1-38490/R0-P1-C6 Online U1400.1B1.RJ38490-P1-C6  -  1      1       0x0060,0x0061,0x0062,0x0063
IBM.1400-1B2-38491/R0-P1-C3 Online U1400.1B2.RJ38491-P1-C3  -  0      1       0x0130,0x0131,0x0132,0x0133
IBM.1400-1B2-38491/R0-P1-C6 Online U1400.1B2.RJ38491-P1-C6  -  1      0       0x0160,0x0161,0x0162,0x0163
IBM.1400-1B3-38477/R0-P1-C3 Online U1400.1B3.RJ38477-P1-C3  -  0      2       0x0230,0x0231,0x0232,0x0233
IBM.1400-1B3-38477/R0-P1-C6 Online U1400.1B3.RJ38477-P1-C6  -  1      3       0x0260,0x0261,0x0262,0x0263
IBM.1400-1B4-38489/R0-P1-C3 Online U1400.1B4.RJ38489-P1-C3  -  0      3       0x0330,0x0331,0x0332,0x0333
IBM.1400-1B4-38489/R0-P1-C6 Online U1400.1B4.RJ38489-P1-C6  -  1      2       0x0360,0x0361,0x0362,0x0363

Example 17-2 shows a listing of the storage enclosures.

Example 17-2 List the storage enclosures

dscli> lsstgencl IBM.2107-75ZA571
Date/Time: October 8, 2012 11:17:02 CEST IBM DSCLI Version: 7.7.0.xxx DS: IBM.2107-75ZA571
ID                        Interfaces                  interadd stordev cap (GB) RPM
=====================================================================================
IBM.2107-D02-0752R/R1-S04 0x0062,0x0132,0x0063,0x0133 0x0      24      300.0    15000
IBM.2107-D02-0774H/R1-S08 0x0032,0x0162,0x0033,0x0163 0x0      24      900.0    10000
IBM.2107-D02-07764/R1-S03 0x0060,0x0130,0x0061,0x0131 0x0      24      300.0    15000
IBM.2107-D02-077B5/R1-S06 0x0262,0x0332,0x0263,0x0333 0x0      24      300.0    15000
IBM.2107-D02-077H8/R1-S05 0x0260,0x0330,0x0261,0x0331 0x0      24      300.0    15000
IBM.2107-D02-077PN/R1-S07 0x0030,0x0160,0x0031,0x0161 0x0      24      900.0    10000
IBM.2107-D02-0792T/R1-S10 0x0232,0x0362,0x0233,0x0363 0x0      8       400.0    65000
IBM.2107-D02-0797F/R1-S09 0x0230,0x0360,0x0231,0x0361 0x0      8       400.0    65000
IBM.2107-D02-07E8K/R1-S01 0x0230,0x0360,0x0231,0x0361 0x1      12      3000.0   7200
IBM.2107-D02-07E91/R1-S02 0x0232,0x0362,0x0233,0x0363 0x1      12      3000.0   7200

Example 17-3 shows a listing of the storage drives. Because there are 184 DDMs in the example machine, only a partial list is shown here.

Example 17-3 List the DDMs (abbreviated)

dscli> lsddm IBM.2107-75ZA571
Date/Time: October 8, 2012 11:19:15 CEST IBM DSCLI Version: 7.7.x.xxx DS: IBM.2107-75ZA571
ID                          DA Pair dkcap (10^9B) dkuse          arsite State
================================================================================================
IBM.2107-D02-0792T/R1-P1-D1 2       400.0         array member   S17    Normal
IBM.2107-D02-0792T/R1-P1-D2 2       400.0         array member   S16    Normal
IBM.2107-D02-0792T/R1-P1-D3 2       400.0         array member   S17    Normal
IBM.2107-D02-0792T/R1-P1-D4 2       400.0         array member   S16    Normal


IBM.2107-D02-0792T/R1-P1-D5 2       400.0         array member   S16    Normal
IBM.2107-D02-0792T/R1-P1-D6 2       400.0         array member   S16    Normal
IBM.2107-D02-0792T/R1-P1-D7 2       400.0         spare required S17    Normal
IBM.2107-D02-0792T/R1-P1-D8 2       400.0         array member   S17    Normal
IBM.2107-D02-0797F/R1-P1-D1 2       400.0         array member   S16    Normal
IBM.2107-D02-0797F/R1-P1-D2 2       400.0         array member   S17    Normal
IBM.2107-D02-0797F/R1-P1-D3 2       400.0         array member   S17    Normal
IBM.2107-D02-0797F/R1-P1-D4 2       400.0         array member   S17    Normal

In Example 17-4, a listing of the array sites is shown.

Example 17-4 List the array sites

dscli> lsarraysite -dev IBM.2107-75ZA571
Date/Time: October 8, 2012 11:25:27 CEST IBM DSCLI Version: 7.7.x.xxx DS: IBM.2107-75ZA571
arsite DA Pair dkcap (10^9B) State    Array
===========================================
S1     0       900.0         Assigned A17
S2     0       900.0         Assigned A18
S3     0       900.0         Assigned A19
S4     0       900.0         Assigned A20
S5     0       900.0         Assigned A21
S6     0       900.0         Assigned A22
S7     1       300.0         Assigned A5
S8     1       300.0         Assigned A6
S9     1       300.0         Assigned A7
S10    1       300.0         Assigned A8
S11    1       300.0         Assigned A9
S12    1       300.0         Assigned A10
S13    2       3000.0        Assigned A2
S14    2       3000.0        Assigned A3
S15    2       3000.0        Assigned A4
S16    2       400.0         Assigned A0
S17    2       400.0         Assigned A1
S18    3       300.0         Assigned A11
S19    3       300.0         Assigned A12
S20    3       300.0         Assigned A13
S21    3       300.0         Assigned A14
S22    3       300.0         Assigned A15
S23    3       300.0         Assigned A16


17.2 Using Capacity on Demand

IBM offers Capacity on Demand (CoD) solutions that are designed to meet the changing storage needs of rapidly growing businesses. CoD on the DS8870 is described in this section.

There are various rules about CoD, which are described in the IBM DS8870 Introduction and Planning Guide, GC27-4209. This section describes aspects of implementing a DS8870 that include CoD disk packs.

17.2.1 What is Capacity on Demand?

The Standby CoD offering is designed to provide you with the ability to tap into more storage. This feature is attractive if you have rapid or unpredictable growth, or if you want extra storage to be there when you need it.

In many environments, it is not unusual to have rapid growth in the amount of disk space that is required for your business. This growth can create a problem if there is an unexpected and urgent need for disk space and no time to create a purchase order or wait for the new storage to be delivered.

With this offering, up to six Standby CoD disk drive sets (96 disk drives) can be factory-installed or field-installed into your system. To activate the disk drives, you must exchange the CoD feature for the corresponding standard storage feature before you can logically configure the new storage. Upon activation of any portion of a Standby CoD storage, you may also need to increase the size of other advanced features to match the new size of the storage installed. Then, you can order replacement CoD disk drive sets.

This feature can help improve your cost of ownership because the extent of IBM authorization for licensed functions can grow at the same time you need your disk capacity to grow. This can also reduce the time required to increase the storage capacity of the DS8870.

Contact your IBM representative to obtain more information about Standby CoD offering terms and conditions.

17.2.2 Determining whether a DS8870 includes CoD disks

A common question is how to determine whether a DS8870 has CoD disks installed. You must check for the following important indicators:

� Is the CoD indicator present in the Disk Storage Feature Activation (DSFA) website?

� What is the Operating Environment License (OEL) limit that is displayed by the lskey DS CLI command?

Verifying CoD on the DSFA website

The data storage feature activation (DSFA) website provides feature activation codes and license keys to technically activate functions that were acquired for your IBM storage products.

Important: Solid-state flash drives are not available as Standby CoD drives.


To check for the CoD indicator on the DSFA website, you need to perform the following tasks:

Using the GUI

Complete the following steps to use the GUI:

1. Connect to the following URL via a web browser:

http://<hmc_ip_address>:8451/DS8000/Login

2. Select System Status under the Home icon.

3. In the Status column, right-click a status indicator and select Storage Image → Add Activation Key.

4. The storage system signature is displayed, as shown in Figure 17-3.

Figure 17-3 Machine signature and activation codes


The machine signature is a unique value that can be accessed only from the machine. You also must record the Machine Type that is displayed and the Machine Serial Number (which ends with 0).

Using DS CLI

Complete the following steps to use the DS CLI:

1. Connect with the DS CLI and run the showsi -fullid command, as shown in Example 17-5.

Example 17-5 Machine Signature by using DS CLI

dscli> showsi -fullid IBM.2107-75ZA571
Date/Time: 27 de Maio de 2013 6h35min59s BRT IBM DSCLI Version: 7.7.10.271 DS: IBM.2107-75ZA571 DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD5AA
Signature        3f98-2654-6487-4002   <============ Machine Signature
State            Online
ESSNet           Enabled
Volume Group     IBM.2107-75ZA571/V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.7 GB
Processor Memory 253.7 GB
MTS              IBM.2421-75ZA570      <============ Machine Type (2421) and S/N (75ZA570)
numegsupported   1
ETAutoMode       all
ETMonitor        all
IOPMmode         Managed
ETCCMode         Enabled
ETHMTMode        Enabled

2. Now log on to the DSFA website at this URL:

http://www.ibm.com/storage/dsfa

3. Select IBM System Storage DS8000 Series from the DSFA start page. The next window requires you to choose the Machine Type and then enter the serial number and signature, as shown in Figure 17-4 on page 454.


Figure 17-4 DSFA machine specifics

In the View authorization details window, the feature code 0901 Standby CoD indicator is shown for DS8870 installations with Capacity on Demand, which is shown in Figure 17-5. If you see 0900 Non-Standby CoD, the CoD feature was not ordered for your storage system.

Figure 17-5 Verifying CoD by using DSFA: DS8700 system in this example


Verifying CoD on the DS8870

Normally, new features or feature limits are activated by using the DS CLI applykey command. However, CoD does not include a discrete key. Instead, the CoD feature is installed as part of the Operating Environment License (OEL) key. An OEL key that activates CoD changes the feature limit from the limit that you purchased to the largest possible number.

In Example 17-6, you can see how the OEL key is changed. The machine in this example is licensed for 170 TB of OEL, but actually has 172 TB of disk installed because it has 2 TB of CoD disks. However, if you attempt to create ranks by using the final 2 TB of storage, the command fails because it exceeds the OEL limit. After a new OEL key with CoD is installed, the OEL limit increases to a large number (9.9 million TB). As a result, rank creation succeeds for the last 2 TB of storage.

Example 17-6 Applying an OEL key that contains CoD

dscli> lskey IBM.2107-75ZA571
Date/Time: October 8, 2012 14:27:40 CEST IBM DSCLI Version: 7.7.x.xxx DS: IBM.2107-75ZA571
Activation Key              Authorization Level (TB) Scope
==========================================================================
Operating environment (OEL) 170,4                    All

dscli> applykey -key xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx IBM.2107-75ZA571
Date/Time: October 8, 2012 14:32:03 CEST IBM DSCLI Version: 7.7.x.xxx DS: IBM.2107-75ZA571
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image IBM.2107-75ZA571

dscli> lskey IBM.2107-75AZA571
Date/Time: October 8, 2012 14:33:23 CEST IBM DSCLI Version: 7.7.x.xxx DS: IBM.2107-75ZA571
Activation Key              Authorization Level (TB) Scope
==========================================================================
Operating environment (OEL) 9999999                  All


Complete the following steps to add the Activation Keys by using web GUI:

1. Connect to the following URL via a web browser:

http://<hmc_ip_address>:8451/DS8000/Login

2. Select System Status under the Home icon.

3. In the Status column, right-click a status indicator and select Storage Image → Add Activation Key or Storage Image → Import Key File, depending on what you downloaded, as shown in Figure 17-6.

Figure 17-6 Add Activation Key selection

17.2.3 Using the CoD storage

In this section, we review the tasks that are required to use Standby CoD storage.

CoD array sites

If CoD storage is installed, there is a maximum of 96 CoD disk drives. Because 16 disks make up a drive set (except for the 4 TB disks, which have eight disks per drive set), a better use of terminology is to say that a machine can include up to six drive sets of CoD disk. Because eight drives are used to create an array site, a maximum of 12 array sites of CoD can exist in a machine. For example, if a machine has 384 disk drives installed (of which 96 disk drives are CoD), there are 48 array sites, of which 12 are CoD. From the machine itself, there is no way to tell how many of the array sites in a machine are CoD array sites as opposed to array sites that you can start using immediately. During the machine order process, these CoD considerations must be clearly understood and documented.


Which array sites are the CoD array sites

Given a sample DS8870 with 48 array sites, of which eight represent CoD disks, the client should configure only 40 of the 48 array sites. If all the disk drives are the same size, it is simply a matter of selecting 40 array sites to configure. It is possible to order CoD drive sets of different sizes. In this case, you must understand how many of each size were ordered and ensure that the correct number of array sites of each size are left unused until they are needed for growth.

Using the CoD array sites

You must do a feature exchange to use the CoD storage. Exchange the Standby Capacity on Demand feature for the corresponding standard storage feature. Activating the Standby CoD feature to allow configuration is permanent, not temporary. The feature will provide a new OEL key, which will allow you to activate the CoD storage. If other advanced features are installed, the size of those features might also need to be increased to allow configuration of the new storage. Use the standard DS CLI commands (or DS GUI) to configure the storage, starting with the mkarray command, then the mkrank command, and so on, as outlined in the sketch that follows. After the ranks are members of an Extent Pool, the volumes can be created. For more information, see Chapter 12, “Configuration by using the DS Storage Manager GUI” on page 293, and Chapter 13, “Configuration with the DS command-line interface” on page 353.
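As a minimal sketch (the array site ID, array ID, and RAID type in these commands are illustrative assumptions, not values taken from the examples in this chapter), the first configuration steps on activated CoD capacity might look like the following sequence. See Chapter 13 and the DS CLI reference for your code level for the complete syntax, including the commands that assign ranks to extent pools and create volumes:

1. List the array sites and identify the formerly CoD array sites that are now available for configuration:
   dscli> lsarraysite
2. Create an array on one of those array sites (S41 and RAID 6 are example values):
   dscli> mkarray -raidtype 6 -arsite S41
3. Create a fixed block rank on the new array (A40 is an example array ID):
   dscli> mkrank -array A40 -stgtype fb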

After the CoD array sites are in use

You also can order replacement Standby CoD disk drive sets. If new CoD disks are ordered and installed, a new OEL key also is issued and should be applied immediately. If other CoD disks are not needed, or the DS8870 reached maximum capacity, an OEL key is issued to reflect that CoD is no longer enabled on the storage system.

Important: IBM requires that a Standby CoD disk drive set must be activated within 12 months from the date of installation. All such activation is permanent.


Chapter 18. DS8800 to DS8870 model conversion

In this chapter, we describe model conversion from DS8800 to DS8870.

This chapter covers the following topics:

� Introducing model conversion
� Mechanical conversion overview
� Model conversion phases


18.1 Introducing model conversion

It is now possible to model convert an existing DS8800 (Model 951, 95E) into a DS8870 (Model 961, 96E). This conversion utilizes existing storage enclosures, disk drives, host adapters, and device adapters. All other hardware is physically replaced. This conversion process can be performed only by an IBM service representative.

The model conversion consists of four phases: Planning, verification of prerequisites, physical model conversion, and post conversion operations. The IBM service representative will not begin mechanical conversion until all prerequisites are complete.

18.2 Model conversion overview

The following lists show the considerations for model conversion. There are specific hardware and configuration considerations to be addressed. Additionally, Business Class machines have more requirements.

18.2.1 Configuration considerations

These items are specific to configuration of the DS8870 when model conversion has been completed:

� Model conversion does not change the machine type.
� The existing DS8800 warranty is applied to the new DS8870, without extension.
� Each DS8800 converted to a DS8870 retains the following information:

– Frame serial numbers.

– Worldwide node name (WWNN).

– Worldwide port names (WWPNs).

� All applicable licensed functions from the DS8800 remain unchanged, but the feature activation codes must be downloaded from the data storage feature activation (DSFA) codes website and reapplied to the DS8870 after conversion and before logical configuration.

� Logical configuration must be performed as a new machine. Existing configurations cannot be copied from the DS8800 to the converted DS8870.

18.2.2 Hardware considerations

The following must be taken into account in preparation for model conversion:

� IBM ships new frames preconfigured with existing serial numbers, WWNN, and WWPNs.

� If the existing DS8800 has a secondary Hardware Management Console (HMC) feature associated, a DS8870 compatible secondary HMC will be shipped. Configurations where two DS8000 series systems share their internal HMCs (2x2) are not supported for model conversion.

Important: This process is non-concurrent. It requires several days of prerequisite work, your planning, and onsite IBM support. The mechanical conversion itself might take several 8-hour shifts. Ensure that you allow for the time needed to migrate data off the DS8800 before conversion and back onto the newly converted DS8870 when it is complete.


� The storage enclosures, disk drives, host adapters, and device adapters are transferred to the new DS8870 frames.

� The existing DS8800 frames, excluding adapters, storage enclosures, and disk drives, are returned to IBM.

� Existing DS8800 disk drives that are not full disk encryption drives (FDE) can be used in the new model converted DS8870. However, an intermix of non-FDE and FDE disk drives is not supported in the DS8000 system.

� A converted DS8870 with non-FDE disk drives might not be able to support future feature codes.

� Any additional upgrades must be performed after model conversion is complete.

18.3 Model conversion phases

The process is divided into distinct phases, allowing each phase to be planned and performed individually. The four phases are planning, prerequisites, mechanical conversion, and post conversion. The following sections describe each phase.

18.3.1 Planning

It is important to plan the model conversion in a similar manner to a new installation. The physical infrastructure requirements differ between the DS8800 and the DS8870, so the existing power infrastructure and any existing Earthquake Resistance Kit cannot be reused. Having the new infrastructure in place is therefore a prerequisite for model conversion.

Because the metadata size has changed, the configuration of the DS8800 cannot be copied to the new DS8870 directly. The DS8870 must be configured as though it were a new machine.

Model conversion is not a concurrent operation. You need to plan for the DS8800 being unavailable until the conversion to the DS8870 is complete. This can include migration of data to another storage system. Sufficient capacity and infrastructure to perform this migration must be included in your planning.

IBM will mechanically replace all of the frames within the DS8800 system. During this period, additional floor space to perform the mechanical conversion will need to be provided. The IBM service representative will physically relocate the storage enclosures, disk drives, and adapters from the DS8800 frames to the DS8870 frames.

If you are making changes to the HMC configuration, a new configuration worksheet must be provided to the IBM service representative (SSR); otherwise, the SSR copies the current configuration. For more information, see 9.3.1, “HMC planning tasks” on page 247.

18.3.2 Prerequisites

The model conversion process requires that all data is migrated off the system and that the logical configuration and encryption groups are removed. All infrastructure also needs to be in place before the conversion process starts. This process might take a considerable amount of time, which varies depending on the system configuration. All of the prerequisites are client responsibilities and are not included as part of the model conversion service. It is important to plan for several days of the DS8800 being unavailable during model conversion to the DS8870. All of the prerequisites must be completed before the IBM service representative performs the model conversion process.


Data migration

It is important to plan for additional capacity within your environment to hold all of the data that will be migrated from the DS8800, because the DS8800 will be unavailable during the model conversion process. The amount of time it takes to complete data migration varies with configuration and environment.

Logical configuration

You must remove all logical configuration before the IBM service representative begins the model conversion process. Removal of logical configuration is not the responsibility of the IBM service representative. The removal process requires that all ranks and arrays be removed. This process also formats all disk drives, and this format must be completed before mechanical conversion begins. You can use the data storage command-line interface (DS CLI) or the DS Storage Manager graphical user interface (GUI) to perform this operation.
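
As an illustration, a DS CLI teardown might proceed as in the following sketch. The volume, rank, extent pool, and array IDs are placeholders, the commands prompt for confirmation, and the exact sequence and number of objects depend on how the DS8800 is configured.

dscli> rmfbvol 1000-10FF
dscli> rmckdvol 0000-00FF
dscli> rmrank R0
dscli> rmarray A0
dscli> rmextpool P0

Volumes are removed first, followed by the ranks, arrays, and extent pools that contained them; the disk drive formatting that is part of this removal process must complete before the mechanical conversion can start.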

Encryption group

If the existing DS8800 has disk encryption activated, the encryption group must also be removed before mechanical conversion begins. For instructions, see “Removing data, configuration and encryption” in Appendix D, “DS8800 to DS8870 Model Conversion”, in the IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Secure Data Overwrite

The DS8800 frames are required to be returned to IBM, including the processor central electronics complexes (CECs) and HMCs. If you require these to be sanitized, you can request that the IBM service representative perform a Secure Data Overwrite (SDO). This additional service must be performed after you have completed migration of data, removal of logical configuration, and, if necessary, removal of the encryption group. For more information about the SDO process, see 4.8.4, “Secure data overwrite” on page 98.

SDO by IBM service representatives became an option on DS8800 LIC 7.6.20.221. LIC levels prior to this require Systems and Technology Group (STG) Lab Services to perform this function as a service offering. You can choose to upgrade microcode instead.

Data encryption and Key Lifecycle Manager servers

Encryption is optional for conversion. However, if the existing DS8800 has FDE disk drives and you intend to activate encryption by using IBM Security Key Lifecycle Manager servers, ensure that the infrastructure is in place and that you have applied the encryption feature activation code before performing any logical configuration.

Power

The DS8870 uses a different power system from the DS8800, including different power cords and ratings. You must ensure that this infrastructure is in place before the mechanical conversion is performed. For more information, see 8.2.5, “Power requirements and operating environment” on page 224.

Tip: DS8800 supports only three-pass overwrite.


Fiber and IP networking and telecommunications

If you are relocating the converted DS8870 during the process, or for other reasons cannot use your existing network, Fibre Channel host connection, and telephone infrastructure, replacements must be provided as needed. If you want the IBM service representative to route these cables, this activity is billed separately from the DS8870 model conversion. For more information about this infrastructure, see Chapter 8, “DS8870 physical planning and installation” on page 217.

HMC configuration

The DS8800 must either be in a single HMC configuration or have an additional external HMC. If your DS8800 is in a configuration where two storage facilities share two internal HMCs (2x2), the storage system must be reconfigured to a single HMC configuration. This reconfiguration adds a significant amount of time, which must be accounted for in planning.

Earthquake resistance kit

The DS8870 uses a different set of hold-down hardware than the DS8800. You cannot reuse the existing hardware, or holes, from the DS8800. For more information, see 4.8.3, “Earthquake resistance” on page 97.

18.3.3 Mechanical conversion

When all prerequisites have been completed, the IBM service representative will perform the mechanical conversion.

High-level overview of mechanical conversion

The mechanical conversion process will be performed only by the IBM service representative. The following list describes the process at a high level:

1. Verify removal of logical configuration and encryption groups.
2. Remove storage enclosures, disk drives, and adapters from the DS8800.
3. Physically install storage enclosures, disk drives, and adapters in the DS8870.
4. Perform the modified machine installation process.
5. Perform miscellaneous equipment specification (MES) activities for all storage enclosures, including the disk drives.

The mechanical conversion process typically takes three full days to complete, depending on the physical configuration of the DS8800 to be converted. It is important to allow for this time in your planning.

18.3.4 Post conversion operations

When the IBM service representative has completed mechanical conversion, you are informed that post conversion can begin. All post conversion activities are the responsibility of the client.

Download and apply appropriate feature activation codes

Because the serial number has not changed, all licensed functions remain identical. However, you must download all appropriate feature activation codes and apply them. To download activation codes, proceed to the DSFA website:

https://www.ibm.com/storage/dsfa/home.wss

For further information about licensed functions, see 10.2, “Activating licensed functions” on page 266.
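
If you prefer the DS CLI over the GUI for this step, activation codes downloaded from the DSFA website can be applied and verified with a sequence like the following sketch; the key file name is a placeholder, and the storage image ID must be that of your converted DS8870.

dscli> applykey -file c:\keys\ds8870_activation.xml IBM.2107-75ZA571
dscli> lskey IBM.2107-75ZA571

The lskey command lists the licensed functions that are active after the keys are applied, which is a useful check before you start the logical configuration.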


Encryption

If you intend to activate encryption, you must reconfigure your encryption infrastructure before performing any logical configuration. If you create logical configuration before the encryption configuration is complete, you cannot enable encryption afterward. Complete the following steps:

1. Assign the TKLM servers.
2. Create the recovery keys.
3. Create the encryption group.

For more information about disk encryption, see IBM DS8870 Disk Encryption, REDP-4500.

Create logical configuration

When encryption is activated (if required), the logical configuration can be created. Remember that you cannot copy your existing configuration from the DS8800 to the DS8870. If the DS8800 was close to fully provisioned, the same configuration might not fit on the DS8870. You need to plan the configuration as though it were a new machine. For more information about configuration, see Chapter 11, “Configuration flow” on page 287.

Data migration

When all configuration is complete, the DS8870 is available for data to be migrated onto it.

Note: For more information about model conversion, see Appendix D “DS8800 to DS8870 Model Conversion” in the IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.


Appendix A. Tools and service offerings

This appendix provides information about the tools that are available to help you when planning, managing, migrating, and analyzing activities with your DS8870. In this appendix, we also reference the sites where you can find information about the service offerings that are available from IBM to help you in several of the activities that are related to the DS8870 implementation.


Planning and administration tools

This section describes some available tools to help plan for and administer DS8000 implementations.

Capacity Magic

Because of the additional flexibility and configuration options that storage systems provide, it becomes a challenge to calculate the raw and net storage capacity of disk systems, such as the DS8870. You must invest considerable time, and you need an in-depth technical understanding of how spare and parity disks are assigned. You also must consider the simultaneous use of disks with different capacities and configurations that deploy RAID 5, RAID 6, and RAID 10.

Capacity Magic can do the physical (raw) to effective (net) capacity calculations automatically, considering all applicable rules and the provided hardware configuration (number and type of disk drive sets). The following IBM storage systems are supported:

� IBM DS8000 series, including DS8870 Enterprise Class and Business Class configurations, and all DS8000 models before DS8870

� IBM Storwize family
– IBM Storwize V7000
– IBM Storwize V7000 Unified
– IBM Flex System™ V7000
– IBM Storwize V5000
– IBM Storwize V3700
– IBM Storwize V3500

� IBM DS6000™
� IBM N series models

Capacity Magic is designed as an easy-to-use tool with a single main interface. It offers a graphical user interface (GUI) with which you can enter the disk drive configuration of a DS8870 and other IBM disk systems, the number and type of disk drive sets, and the Redundant Array of Independent Disks (RAID) type. With this input, Capacity Magic calculates the raw and net storage capacities. The tool also includes functionality with which you can display the number of extents that are produced per rank, as shown in Figure A-1 on page 467.


Figure A-1 IBM Capacity Magic configuration window

Figure A-1 shows the configuration window that Capacity Magic provides for you to specify the wanted number and type of disk drive sets.


Figure A-2 shows the resulting output report that Capacity Magic produces. This report is also helpful in planning and preparing the configuration of the storage in the DS8870 because it includes extent count information. The net extent count and capacity slightly differ between the various DS8000 models.

Figure A-2 IBM Capacity Magic output report

Disk Magic

Disk Magic is a Windows-based disk system performance modeling tool. It supports disk systems from multiple vendors and offers the most detailed support for IBM subsystems. The tool models IBM disk controllers in System z, IBM i, and Open environments.

The first release was issued as an OS/2 application in 1994. Since that release, Disk Magic evolved from supporting Storage Control Units, such as the IBM 3880 and 3990, to supporting modern, integrated, advanced-function disk systems. Today, the following IBM storage systems are supported:

� IBM XIV
� DS8000
� DS6000
� DS5000
� IBM DS4000®

Important: IBM Capacity Magic for Windows is a product of IntelliMagic, which is licensed exclusively to IBM and IBM Business Partners. The product models disk storage system effective capacity as a function of physical disk capacity that is to be installed. Contact your IBM representative or IBM Business Partner to discuss a Capacity Magic study.


� Enterprise Storage Server (ESS)
� SAN Volume Controller
� Storwize family:

– Storwize V7000
– Storwize V7000U
– Storwize V5000
– Storwize V3700
– Storwize V3500

� SAN-attached N series
� Scale Out Network Attached Storage

A critical design objective for Disk Magic is to minimize the amount of input that you must enter while offering a rich and meaningful modeling capability. The following list provides several examples of what Disk Magic can model, but it is by no means complete:

� Move the current I/O load to a different disk system.
� Merge the I/O load of multiple disk systems into a single load.
� Introduce storage virtualization into an existing disk configuration.
� Increase the current I/O load.
� Consolidate storage.
� Increase the disk system’s cache size.
� Change to larger-capacity disk modules.
� Use fewer or more logical unit numbers (LUNs).
� Activate asynchronous or synchronous Peer-to-Peer Remote Copy.

Modeling results are presented through tabular reports and Disk Magic dialogs. Also, graphical output is offered by an integrated interface to Microsoft Excel. Figure A-3 shows how Disk Magic requires I/O workload data and disk system configuration details as input to build a calibrated model that can be used to explore possible changes.

Figure A-3 IBM Disk Magic overview


Figure A-4 shows the IBM Disk Magic primary window. The TreeView displays the structure of a project with the entities that are part of a model. These entities can be host systems (IBM zSeries, TPF, open systems, or IBM iSeries) and disk subsystems. In this case, two AIX servers, one zSeries server, one iSeries server, and one IBM DS8800 storage system were selected in the general project wizard.

Figure A-4 IBM Disk Magic particular general project

Storage Tier Advisor Tool

In addition to the Easy Tier capabilities, IBM offers the IBM DS8870 Storage Tier Advisor Tool, which provides a graphical representation of the performance data that is collected by Easy Tier over recent days, as well as the recommended capacity configuration of the different tiers. The Storage Tier Advisor Tool (STAT) can help you determine which volumes are likely candidates for Easy Tier management by analyzing the performance of their current application workloads.

The Storage Tier Advisor Tool displays a System Summary report for the total of the extent pools and more detailed reports that contain the heat distribution in each volume. The STAT report also contains the recommended configuration for each tier and the related potential performance improvement. The tool produces an Easy Tier Summary Report after statistics are gathered over at least a 24-hour period. The Storage Tier Advisor Tool can be downloaded from the following website:

http://www.ibm.com/support/docview.wss?uid=ssg1S4001057

Important: IBM Disk Magic for Windows is a product of IntelliMagic, which is licensed to IBM and IBM Business Partners to model disk storage system performance. Contact your IBM Representative or IBM Business Partner to discuss a Disk Magic study.


Figure A-5 shows how Storage Tier Advisor Tool requires I/O workload data as input to build a performance summary report.

Figure A-5 Storage Tier Advisor Tool Overview

How to use the Storage Tier Advisor Tool

Complete the following steps to use the STAT:

1. To offload the Storage Tier Advisor summary report, select System Status, as shown in Figure A-6.

Figure A-6 Selecting System Status


2. Select Export Easy Tier Summary Report, as shown in Figure A-7.

Figure A-7 Selecting Export Easy Tier Summary Report

Alternatively, it is possible to get the same information by using the DS CLI, as shown in Example A-1.

Example: A-1 Using the DS CLI to offload the Storage Tier Advisor summary report

dscli> offloadfile -etdata c:\temp
Date/Time: 21 September 2013 16:49:19 CEST IBM DSCLI Version: 7.7.20.582 DS: IBM.2107-75ZA571
CMUC00428I offloadfile: The etdata file has been offloaded to c:\temp\SF75ZA570ESS01_heat.data.
CMUC00428I offloadfile: The etdata file has been offloaded to c:\temp\SF75ZA570ESS11_heat.data.

3. After you gather the information, it is necessary to run STAT with that information as input. Extract all of the files from the downloaded compressed file. There should be two files, as shown in Example A-2.

Example: A-2 Extracting all the files from the downloaded compressed file.

C:\temp>dir *.data
 Volume in drive C is PRK_1160607
 Volume Serial Number is 6806-ABBD

 Directory of C:\temp

21/09/2013  16:49         2,276,456 SF75ZA570ESS01_heat.data
21/09/2013  16:49         1,157,288 SF75ZA570ESS11_heat.data
               2 File(s)      3,433,744 bytes
               0 Dir(s)  11,297,632,256 bytes free


4. Run STAT, as shown in Example A-3.

Example: A-3 Running STAT

C:\Program Files\IBM\STAT>stat -o c:\ds8k\output c:\temp\SF75ZA570ESS01_heat.data c:\temp\SF75ZA570ESS11_heat.data

CMUA00019I The STAT.exe command has completed.

5. In the output directory, an index.html file and a folder called Data_files are created. The index.html file can be opened with a web browser, as shown in Figure A-8. The System Summary page is displayed first by default. You can open the Systemwide Recommendation page by clicking the link on the left of the window.

Figure A-8 STAT output file

In the Data_files folder, you will notice three .csv files. Figure A-9 on page 474 shows the files that are created as a result of running the STAT command.

Important: As designed, this STAT tool requires write permissions to the directory where it is installed. The tool attempts to write the output file to this directory. If you do not have write permission, it fails with the following error: CMUA00007E.


Figure A-9 Files created as a result of running the STAT command

After the files are created, you can validate Easy Tier’s behavior and verify whether you have enough SSD capacity to handle the workload. Most importantly, verify that the correct data is placed on the correct tier of disks.

The skew_curve.csv file can also be used as input to Disk Magic simulations to accurately set the skew level for a particular workload.

IBM Tivoli Storage Productivity Center 5.2

IBM Tivoli Storage Productivity Center is a storage infrastructure management software solution that is designed to help you improve time-to-value. It also helps reduce the complexity of managing your storage environment by simplifying, centralizing, automating, and optimizing storage tasks that are associated with storage systems, storage networks, replication services, and capacity management.

This integrated solution helps to improve the storage total cost of ownership (TCO) and return on investment (ROI). It does so by combining the management of storage assets, capacity, performance, and operations that are traditionally offered by separate system resources manager (SRM), device, or storage area network (SAN) management applications into a single console.

IBM Tivoli Storage Productivity Center features provide the following capabilities:

� Provide comprehensive visibility and help centralize the management of your heterogeneous storage infrastructure from a next-generation, web-based user interface that uses role-based administration and single sign-on.

� Easily create and integrate IBM Cognos® based custom reports on capacity and performance.

� Deliver common services for simple configuration and consistent operations across hosts, fabrics, and storage systems.

� Manage performance and connectivity from the host file system to the physical disk, including in-depth performance monitoring and analysis of SAN fabric.

� Monitor, manage, and control (zone) SAN fabric components.

� Monitor and track the performance of SAN-attached Storage Management Initiative Specification (SMI-S) compliant storage devices.

Notes: For more information about using Storage Tier Advisor Tool (STAT), see the IBM Redpaper publication: IBM DS8870 Easy Tier, REDP-4667, available at the following site:

http://www.redbooks.ibm.com/abstracts/redp4667.html


� Manage advanced replication services (Global Mirror, Metro Mirror, and IBM FlashCopy).

� Easily set thresholds to monitor capacity throughput to detect bottlenecks on storage subsystems and SAN switches.

IBM Tivoli Storage Productivity Center can help you manage capacity, performance, and operations of storage systems and networks. It helps perform device configuration and manage multiple devices, and can tune and proactively manage the performance of storage devices on the SAN while managing, monitoring, and controlling your SAN fabric.

More information about integration and interoperability, including supported devices and databases, server hardware requirements, and supported operating system platforms, can be found on the IBM Tivoli Storage Productivity Center website:

http://www.ibm.com/software/products/tivostorprodcent

Additional technical information, such as installation, troubleshooting, downloads, and planning information, also can be found at the following website:

http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/index.jsp?topic=%2Fcom.ibm.tpc_V5111.doc%2FTPC_ic-homepage.html

You can add multiple devices into Tivoli Storage Productivity Center by using the new web-based GUI that is available in Version 5.2 of Tivoli Storage Productivity Center.

After logging into the Tivoli Storage Productivity Center server, from the Home Dashboard, execute the following steps to correctly connect to an IBM DS8870 from the Tivoli Storage Productivity Center console:

1. Click the Storage Systems image to add a new storage system. Figure A-10 on page 476 shows the Tivoli Storage Productivity Center Home Dashboard.

Note: Tivoli Storage Productivity Center is optional software that provides monitoring and management capabilities. Depending on the requirements of the project, it is highly recommended that you add this software to the solution. For more information, contact your IBM sales representative.


Figure A-10 Tivoli Storage Productivity Center 5.2 Home Dashboard

2. Click Add Storage System as highlighted in Figure A-11.

Figure A-11 Add Storage System interface


3. Select DS8000 as the storage type and complete the fields that are available in the window that is shown in Figure A-12.

Figure A-12 Provide HMC address, user name, and password

� HMC address: Enter the IP address or host name for the Hardware Management Console (HMC) that manages the DS8000 system.

� HMC2 address (optional): Enter the IP address or host name of a second HMC that manages the DS8000 system.

� User name: Enter the user name for logging on to the IBM System Storage DS8000 Storage Manager (also known as the DS8000 element manager or GUI). The default user name is admin.

IBM Tivoli Storage Productivity Center discovers the DS8000 servers and collects initial configuration data from them. The discovery process gathers only raw information and is completed after a few minutes.

As Figure A-13 on page 478 shows, after adding the DS8870, click Actions to display the available options for monitoring and managing the system.

After adding the DS8870 to the Tivoli Storage Productivity Center server, you can immediately start a data collection. Click Actions → Data Collection and start a probe for each storage system to collect statistics and detailed information about the monitored storage resources in your environment, such as pools, volumes, and disk controllers. This process can take up to an hour, depending on the size of the DS8000 storage system.


Figure A-13 Tivoli Storage Productivity Center Storage System Actions options

Performance monitoring with Tivoli Storage Productivity Center

To examine the performance of your DS8870, Tivoli Storage Productivity Center offers either predefined reports or custom Cognos reports that give detailed information about the performance and the properties of the monitored resources. With Cognos, you can drag key metrics into a report to generate a performance chart for a specific storage system, or part thereof.

Figure A-14 shows an example of a predefined report, over a chosen period of seven days, containing I/O rates and response times. Similar predefined reports exist for read and write I/Os, data rates, or cache-hit percentages. Many additional metrics are available for custom-defined reports.

Figure A-14 Tivoli Storage Productivity Center predefined report example


Tivoli Storage Productivity Center GUI

Tivoli Storage Productivity Center 5.2 offers a new, improved user interface that provides different functions for working with monitored resources. Compared to the stand-alone GUI, the web-based GUI offers better and simplified navigation with the following major features:

� At-a-glance assessment of the storage environment
� Monitoring and troubleshooting capabilities
� Rapid problem determination
� Review, acknowledge, and delete alerts
� Review and acknowledge health status
� View internal and external resource relationships
� Access to Cognos reporting

For more information about storage monitoring, management, and Cognos reporting, see Tivoli Storage Productivity Center V5.2 Technical Guide, SG24-8053.

IBM Tivoli Storage FlashCopy Manager

IBM Tivoli Storage FlashCopy Manager provides the tools and information that are needed to create and manage volume-level snapshots on snapshot-oriented storage systems. The applications that contain data on those volumes remain online. Optionally, backups can be sent to Tivoli Storage Manager storage.

This product includes the following key benefits:

� Performs near-instant application-aware snapshot backups, with minimal performance impact for IBM DB2, Oracle, SAP, Microsoft SQL Server, and Exchange.

� Improves application availability and service levels through high-performance, near-instant restore capabilities that reduce downtime.

� Integrates with IBM System Storage DS8000, IBM Storwize family, IBM System Storage SAN Volume Controller, IBM XIV Storage System, IBM N series, and NetApp on AIX, Solaris, Linux, and Microsoft Windows.

� Creates application-aware snapshots at remote sites by using Metro or Global Mirror on SAN Volume Controller, the Storwize family, or XIV.

� Satisfies advanced data protection and data reduction needs with optional integration with IBM Tivoli Storage Manager.

� Supports the Windows, AIX, Solaris, and Linux operating systems.

For more information about IBM Tivoli Storage FlashCopy Manager, see the following sites:

� http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/index.jsp?topic=%2Fcom.ibm.itsm.fcm.doc%2Fr_pdf_fcm.html

� http://www.ibm.com/software/tivoli/products/storage-flashcopy-mgr


IBM Service offerings

Next, we describe the various service offerings.

IBM Global Technology Services: Service offerings

IBM can assist you in deploying IBM DS8870 storage systems, IBM Tivoli Storage Productivity Center, and IBM SAN Volume Controller solutions. IBM Global Technology Services® features the correct knowledge and expertise to reduce your system and data migration workload, and the time, money, and resources that are needed to achieve a system-managed environment.

For more information about available services, contact your IBM representative or IBM Business Partner, or visit the following websites:

� http://www.ibm.com/services/
� http://www.ibm.com/services/us/en/it-services/storage-and-data-services.html

For more information about the IBM Business Continuity and Recovery Services that are available, contact your IBM representative, or see this website:

http://www.ibm.com/services/us/en/it-services/business-continuity/index.html

For more information about educational offerings in your country, see this website and select a country to continue:

http://www.ibm.com/services/learning/index.html

IBM STG Lab Services: Service offerings

In addition to the IBM Global Technology Services, the Storage Services team from the IBM STG Lab is set up to assist customers with one-off, client-tailored solutions and services that help in the daily work with IBM hardware and software components.

The following sample offerings are included:

� IBM Storage Architecture Service
� IBM Certified Secure Data Overwrite (SDO) Service
� DS8000 Encryption Implementation Service
� Healthcheck Services
� Implementation, Configuration, and Migration Services
� Proof of Concept
� Skills transfer
� Storage Efficiency Analysis
� Storage Efficiency Workshop
� Storage Efficiency Study
� Technical Project Management

For more information about these service offerings, see this website:

http://www.ibm.com/systems/services/labservices/platforms/labservices_storage.html


Appendix B. Resiliency improvements

This appendix discusses resiliency improvements supported on the IBM DS8870 with DS8000 Licensed Machine Code (LMC) 7.7.10.xx.xx or later.

The resiliency improvements include the following features:

� Small Computer System Interface (SCSI) reserves detection and removal
� Querying count key data (CKD) path groups
� IBM z/OS Soft Fence


B.1 SCSI reserves detection and removal

Assume that two hosts share the same disk or logical unit number (LUN). There is the potential issue that both hosts write data to the LUN at the very same time, thus compromising data integrity. Avoiding such situations requires a mechanism that ensures exclusive access by only one host at a given time. Such a mechanism is called a SCSI reservation.

In both Global Mirror and Metro Mirror environments, clients are using FlashCopy to allow disaster recovery testing to occur while their Disaster Recovery (DR) service continues to run. To ensure that their DR procedures are the same as their test procedures, they will also use the FlashCopy devices in a real disaster situation (as supported by both IBM Geographically Dispersed Parallel Sysplex (GDPS) and Tivoli Storage Productivity Center for Replication).

In an open systems environment, it is possible to have situations where SCSI reserves are left outstanding on devices when the server has been shut down. This situation could be because of user error, a desire to shut down quickly, or various other reasons. If this is done on FlashCopy target devices that are used for testing, a subsequent FlashCopy will fail (because the particular LUN is still reserved). The client then has no easy way to either detect that this is the case or to correct it the next time that the FlashCopy is issued (which will fail).

In the past, IBM provided examples on how an existing reservation could be lifted by using the operating system. For instance, refer to the following web page:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp?topic=%2Fcom.ibm.storage.ssic.help.doc%2Ff2c_t64rmvprstntrsrvs_1dbcjf.html

Up to now, there was no easy method to reset a SCSI reservation in a FlashCopy environment, especially when the server holding the reservation is not running. There was also no method to detect that a SCSI reservation exists, apart from attempting and failing to issue a FlashCopy or similar command to the device.

The impact of this situation is that a client DR test is significantly delayed, or worse, in a real disaster, that the recovery might also be delayed until IBM can provide assistance to identify and resolve the issue.

Clients want to be able to perform the following functions:

� Detect whether a SCSI reservation exists for devices in an environment and identify the server that holds the reservation.

� Reset the reservation when performing a FlashCopy after it has been identified that it is not a valid reservation for a running server.

Such requirements are now addressed with the DS8000 Licensed Machine Code (LMC) 7.7.10.xx.xx.

B.1.1 SCSI reservation detection and removal

To detect existing SCSI reservations, IBM has introduced a new parameter called reserve for the showfbvol CLI command. Example B-1 on page 483 shows the output upon entering showfbvol -? in the CLI.


Example: B-1 Output generated by the CLI upon entering showfbvol -? (only relevant parts)

Specifying the -reserve parameter

If you specify the -reserve parameter and there are no SCSI reservations for this volume, the following message is displayed.

CMUC00234I lsfbvol: No SCSI reservations found.

If you specify the -reserve parameter and there are SCSI reservations for this volume, the SCSI reservation attributes and a SCSI reserve port table are appended to the resulting output.

dscli> showfbvol -reserve 0200

The resulting output

ID            0200
accstate      Online
datastate     Normal
configstate   Normal
...
migrating     0
perfgrp       PG0
migratingfrom -
resgrp        RG0
========SCSI Reserve Status========
PortID WWPN             ReserveType
===================================
I0040  500507630310003D Persistent
I0041  500507630310403D Persistent
-      50050763080805BB Persistent
-      50050763080845BB Persistent

Report field definitions ( -reserve parameter is specified)

PortID The I/O port ID. If the host is online, then the I/O port ID is displayed and is formatted as a leading uppercase letter "I" followed by four hexadecimal characters (for example, I0040). If the host is not online, the field contains a '-' (dash).

WWPN The World Wide Port Name displayed as sixteen hexadecimal characters.

ReserveType The SCSI reservation type for all connections. Valid reservation types are "Traditional", "Persistent", or "PPRC".

dscli>


Example B-1 on page 483 shows the following valid reservation types:

� Traditional: Non-Persistent SCSI Reservation

� Persistent: Persistent SCSI Reservation

� PPRC: Peer-to-Peer Remote Copy (PPRC) Secondary Reservation (DS8000 specific; cannot be overwritten by a host)

For an explanation of the differences between non-persistent and persistent SCSI reservations, see B.1.2, “Excursion: SCSI reservations” on page 484.

To remove existing SCSI reservations, IBM has introduced a new parameter called resetreserve that can be used with the mkflash, resyncflash, reverseflash, remoteflash, and resyncremoteflash CLI commands. As an example, refer to the relevant output upon entering mkflash -? in the CLI, as shown in Example B-2.

Example: B-2 Relevant output upon entering mkflash -?

-resetreserve
    (Optional - DS8000 only) Forcibly clears any SCSI reservation on the
    target volume and allows establishing of a FlashCopy relationship.
    The reservation is not restored after the relationship is established.
    * When this option is not specified and the target volume is reserved,
      the command fails.
    * This option is ignored if the target is a CKD volume; this option is
      applicable only for fixed block volumes.
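
As a concrete illustration, a FlashCopy onto a reserved target such as volume 0200 from Example B-1 could be forced with a command like the following sketch; the source volume ID is a placeholder.

dscli> mkflash -resetreserve 0100:0200

Without -resetreserve, the same mkflash command would fail because the target volume still holds a SCSI reservation.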

B.1.2 Excursion: SCSI reservations

Historically, in Small Computer System Interface (SCSI) there were two reservation mechanisms defined:

� SCSI-2 reservation (becoming obsolete)
� SCSI-3 reservation, also referred to as Persistent Reservation (PR), which is “state-of-the-art”

Although the term SCSI-3 is “officially” no longer used in any current SCSI standard, in this publication we still refer to SCSI-2 and SCSI-3 to emphasize which mechanism we mean in a specific context.

There is an organization called Technical Committee T10. This committee defines what is commonly known as the SCSI standards. It publishes various documents, among them: SCSI Primary Commands (SPC) and the SCSI Block Commands (SBC), which describe the reservation concepts in detail. For more information, see the following website:

http://www.t10.org/drafts.htm#TOC

A high-level overview is now provided.

SCSI-2 reservation

There are two commands associated with SCSI-2 reservations:

� The RESERVE(6) command, the command code of which is x’16
� The RESERVE(10) command, the command code of which is x’56


The number in parentheses describes the length in bytes of the Command Descriptor Block (CDB). Its purpose is to send the command code together with additional data (that is, bit settings, a page code defining specific actions, and so on) to the target device.

Associated with the two reservation commands previously mentioned are two release commands, which are, speaking in formal terms, the complement to reservation:

� The RELEASE(6) command, the command code of which is x’17
� The RELEASE(10) command, the command code of which is x’57

When a host, or rather one of its initiators, successfully sends a RESERVE command against a LUN, this LUN is reserved by that host. The host then has exclusive access to that LUN; another (second) host cannot access it as long as the reservation is not removed. Such an approach is therefore appropriate to serialize access.

Unfortunately, however, an SCSI-2 reservation is non-persistent. This means that not only the corresponding release command can lift the reservation, but also a reset, a bus device reset, or a power-on. This was one of the reasons why the SCSI-3 specifications introduced the SCSI-3 reservation, or persistent reservation (PR). From now on, we use the terms SCSI-3 reservation, persistent reservation, and PR interchangeably.

SCSI-3 reservation/persistent reservation

For persistent reservation, there are also two commands: PERSISTENT RESERVE OUT and PERSISTENT RESERVE IN. Both of them can also be seen as being complementary to one another, but in a different way compared to “SCSI-2 reservation” on page 484:

1. The PERSISTENT RESERVE OUT (PR OUT) command, the command code of which is x’5F, incorporates a CDB with a length of 10 bytes. Part of the CDB is a so-called SERVICE ACTION field (byte 1, bits 4 to 0 in the CDB) that defines the particular PR OUT action. Among others, the actions can be:

a. RESERVE (code: x’01) to create a PR (not an SCSI-2 reservation).

b. RELEASE (code: x’02) to remove a PR.

To this extent, we can consider these two SERVICE ACTIONs in the PR OUT command as corresponding to the commands described in “SCSI-2 reservation” on page 484, but here in a PR context.

2. The PERSISTENT RESERVE IN (PR IN) command, the command code of which is x’5E, incorporates a CDB with a length of 10 bytes. Its purpose is to acquire information about (potentially) existing PR OUTs.

PR OUT reservations survive a power outage, a power-on, a bus device reset, and a reset. Beyond that, PR allows not only exclusive but also shared access. See also “Understanding Persistent Reserve” in the GPFS Problem Determination Guide, GA76-0415-07.
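
Outside of the DS CLI, a quick way to inspect persistent reservations from an open systems host is the sg_persist utility from the Linux sg3_utils package. The following sketch assumes the host has access to the LUN; the device name is an example only.

# List the reservation keys that initiators have registered on the LUN
sg_persist --in --read-keys /dev/sdc

# Show the current reservation holder and the reservation type
sg_persist --in --read-reservation /dev/sdc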

The SPC also contains an annex that describes how PR OUT can replace RESERVE/RELEASE, as described in “SCSI-2 reservation” on page 484.


B.2 Querying CKD path groups

Up until the availability of DS8000 Licensed Machine Code (LMC) 7.7.10.xx.xx, there was no way to display count key data (CKD) path groups. This inability presented the following issues:

� It could have led to a situation in which PPRC and FlashCopy pairs using online targets could not be established because of existing path groups, but information about the path groups could not be obtained without initiating internal diagnostic procedures.

� It could cause accidental loss of data when initializing a volume with the ICKDSF z/OS utility. It could happen that a user applies ICKDSF to initialize a volume that is online to another system. Until now, ICKDSF had no means to know whether a volume was being used by other systems.

� Protect volume integrity: Assume that a volume is restored (using DFDSS full-volume restore) on system_1 while the same volume is also online to system_2. In this case, system_2 might not be aware of any changes that were made to the volume, such as the volume serial number (Volser), the volume table of contents (VTOC) location, VTOC index (VTOCIX) location, or changes to the VSAM volume data set (VVDS).

The DEVSERV QDASD z/OS command now offers the Query Host Access (QHA) option. The QHA capability can be used by:

� DFSMS System Data Mover Command Execution:

– Checks if target volumes are online before issuing commands to establish the FlashCopy pair

– Supports environments that do not have unit control blocks (UCBs) defined for the target volumes

� GDPS Panel Option:

Allows the user to request information for the particular device

� GDPS Monitoring will perform the following functions:

– Periodically issues the Query Host Access command

– Issue alerts if the volumes are accessed in a way that would cause a subsequent operation to fail

� GDPS Command Execution:

Validates that volume states are consistent with the expected state. For example: PPRC volumes are accessed by the GDPS configuration only. The FlashCopy target volumes are not accessed by any system

Although the data storage graphical user interface (DS GUI) does not yet support displaying CKD path groups, in the data storage command-line interface (DS CLI), the showckdvol command was updated by adding the pathgrp parameter, as shown in Example B-3.

Example: B-3 Entering the showckdvol DS CLI command supplying the -pathgrp parameter

dscli> showckdvol -pathgrp efff

The resulting output

Name          efff
ID            EFFF
accstate      Online
datastate     Normal
configstate   Normal
...


migrating     0
perfgrp       PG31
migratingfrom -
resgrp        RG62
============Path Group status============
GroupID                State     Reserve  Mode
====================================================
800002AC6E2094C96F0481 Grouped   Enabled  Single
800002AC6E2094C96F0481 Grouped   Disabled Multi-path
800002AC6E2094C96F0481 Ungrouped Disabled Single

Report field definitions ( -pathgrp parameter is specified)

GroupID The path group ID. An eleven-byte value that is displayed as 22 hexadecimal characters.

State The grouped state of this path group. Valid state values are "Grouped" or "Ungrouped".

Reserve The reserved state of this path group. Valid state values are "Enabled" or "Disabled".

Mode The path mode for this path group. Valid mode values are "Single" or "Multi-path".

dscli>

In Example B-3 on page 486, you can see that there is a GroupID field, with values such as 800002AC6E2094C96F0481. In Example B-6 on page 488, we show how the components of this string can be interpreted further.

There are also enhancements to ICKDSF as explained below. These enhancements are available with ICKDSF Release 17 with APAR PM76231. For more information, see the following website:

http://www-01.ibm.com/support/docview.wss?uid=isg1PM76231

The INIT and REFORMAT ICKDSF commands have a new VERIFYOFFLINE parameter that specifies that the operation should fail if the volume is being accessed by any systems other than the one performing the INIT/REFORMAT operation.

Refer to Example B-4.

Example: B-4 Applying ICKDSF command REFORMAT by using the VERIFYOFFLINE parameter

REFORMAT UNIT(9042) NVFY VOLID(TS9042) VERIFYOFFLINE
ICK00700I DEVICE INFORMATION FOR 9042 IS CURRENTLY AS FOLLOWS:
PHYSICAL DEVICE = 3390
STORAGE CONTROLLER = 2107
STORAGE CONTROL DESCRIPTOR = E8
DEVICE DESCRIPTOR = 0E
ADDITIONAL DEVICE INFORMATION = 4A00003C
TRKS/CYL = 15, # PRIMARY CYLS = 65520
ICK04000I DEVICE IS IN SIMPLEX STATE
ICK00091I 9042 NED=002107.900.IBM.75.0000000ZA161


ICK31306I VERIFICATION FAILED: DEVICE FOUND TO BE GROUPED
ICK30003I FUNCTION TERMINATED. CONDITION CODE IS 12

So far, we can conclude that there is another system accessing the volume (UNIT 9042). We can now use the ICKDSF ANALYZE command to determine which other z/OS systems have the volume online.

With the ICKDSF ANALYZE command, the user can request information for:

� Only the systems that have the input device grouped (online)
� Only the systems that do not have the device grouped (offline)
� All of the systems (whether they have the device grouped or not). See Example B-5.

Beyond that, the user is able to:

� Specify a device in an alternate subchannel set as the input device.
� Specify the logical subsystem (LSS) and Channel Connection Address (CCA) of the device to be queried. See Example B-6.

Example: B-5 Querying the host access for the device specified by the Unit parameter

ANALYZE UNIT(788D) NODRIVE NOSCAN HOSTACCESS(ALL)

Example: B-6 Specifying the LSS and CCA

ANALYZE UNIT(788D) NODRIVE NOSCAN HOSTACCESS(ALL) DEVADDR(X'01',X'07')

HOST ACCESS INFORMATION
LSS=01 CCA=07
+---------------------------+----+--------+------+--------+---------+
|       PATH GROUP ID       |    |        |      |        | MAXIMUM |
|------+------+----+--------+    |        |      | DEVICE |NUMBER OF|
|      |      |CPU |CPU TIME|PATH|SYSPLEX |      |RESERVED|CYLINDERS|
|  ID  |SERIAL|TYPE|  STAMP |MODE|  NAME  |ONLINE|  TIME  |SUPPORTED|
+------+------+----+--------+----+--------+------+--------+---------+
|800002|B947  |2827|CA78BC17|S   |N/A     |NO    | ------ |120936   |
+------+------+----+--------+----+--------+------+--------+---------+
|880005|B947  |2827|CAAD6FBA|M   |N/A     |YES   | ------ |FFF0     |
+------+------+----+--------+----+--------+------+--------+---------+
|800009|B947  |2827|CAC684B9|S   |PLEXM1  |NO    | ------ |120936   |
+------+------+----+--------+----+--------+------+--------+---------+
|800001|B947  |2827|CAC65DFD|S   |LOCAL   |NO    | ------ |120936   |
+------+------+----+--------+----+--------+------+--------+---------+
|800007|B947  |2827|CA877725|M   |N/A     |YES   | ------ |4020C    |
+------+------+----+--------+----+--------+------+--------+---------+
PATH MODE : S = SINGLE PATH  M = MULTI PATH
SYSPLEX NAME : N/A = NOT AVAILABLE


B.3 z/OS Soft Fence

To meet the highest standards of availability in a System z environment, we advise clients to take advantage of the Geographically Dispersed Parallel Sysplex (GDPS) HyperSwap capability for DS8000 volumes in Metro Mirror pairs.

But even with such a setup, there are several exposures to system images after failing back to former PPRC primary volumes. Such exposures are:

� Reading outdated data from the former PPRC primaries.
� Updating the former PPRC primaries. These updates would be lost when the PPRC pairs are re-established in the reverse direction.

B.3.1 Basic information about Soft Fence

DS8000 Licensed Machine Code (LMC) 7.7.10.xx.xx addresses this situation by introducing a concept called Soft Fence (SF), which is a volume-level property. This property is such that when a volume is in the soft fenced state, the disk system prevents all reads and writes to the volume from any host system. Therefore, the Soft Fence function enables a host to put a volume into a soft fenced state and also take it out of the soft fenced state.

In detail, the Soft Fence function offers the following functions:

� GDPS is able to verify if all the primary and secondary disk systems in the GDPS PPRC configuration have the new feature installed.

� If the feature is installed, the attribute SF enabled is set to allow the use of the new function by the GDPS operations to follow.

� GDPS issues a Soft Fence against all former PPRC primary volumes during planned and unplanned HyperSwap and site switch processing, with the exception of HyperSwap RESYNCH processing.

� SF is performed after the I/O is resumed, so it will not affect the HyperSwap user impact time.

� SF persists across DS8870 restart.

� SF is applicable to CKD and fixed-block architecture (FBA) volumes.

� SF is applicable to Multiplatform Resiliency for System z (xDR) managed volumes for both zLinux and zVM environments. For more information about Multiplatform Resiliency for System z, see the IBM Redbooks publication “GDPS Family An Introduction to Concepts and Facilities”, at the following website:

http://www.redbooks.ibm.com/redbooks.nsf/searchsite?SearchView&query=SG24-6374

Following are examples showing how to identify existing SFs:

� Example B-7 on page 490: Displaying SF using Device Support Facilities (ICKDSF)

� Example B-8 on page 490: Displaying SF looking towards the output of the z/OS DEVSERV command

� Example B-9 on page 490: Displaying SF looking towards the output of the z/OS VARY DEVICE command

� Example B-10 on page 490: Query FENCES z/VM command

� Example B-11 on page 490: showlcu DS CLI command


Example: B-7 Displaying SF using Device Support Facilities (ICKDSF)

PPRCOPY QUERY UNIT(3109)
ICK00700I DEVICE INFORMATION FOR 3109 IS CURRENTLY AS FOLLOWS:
PHYSICAL DEVICE = 3390
STORAGE CONTROLLER = 2107
STORAGE CONTROL DESCRIPTOR = E8
DEVICE DESCRIPTOR = 0A
ADDITIONAL DEVICE INFORMATION = 4A00243D
TRKS/CYL = 15, # PRIMARY CYLS = 1113
ICK04035I DEVICE IS IN A SOFT FENCED STATE
ICK04030I DEVICE IS A PEER TO PEER REMOTE COPY VOLUME
ICK00091I 3109 NED=001750.500.IBM.13.000000000016
QUERY REMOTE COPY - VOLUME
                                       (PRIMARY)    (SECONDARY)
                                       SSID CCA     SSID CCA
DEVICE LEVEL STATE          PATH STATUS  SER #  LSS   SER #  LSS  AUTORESYNC
------ ----- -------------- -----------  -----------  -----------  ----------
 3109  N/A   SIMPLEX        N/A          1603    09   ....    ..   N/A
                                         00016   03   .......  ..
ICK02206I PPRCOPY QUERY FUNCTION COMPLETED SUCCESSFULLY
ICK00001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 0

Example: B-8 Displaying SF looking towards the output of the z/OS DEVSERV command

UNIT DTYPE M CNT VOLSER CHPID=PATH STATUS
      RTYPE   SSID CFW TC  DFW PIN DC-STATE CCA DDC CYL CU-TYPE
00800,33909 ,A,001,SYSRES,1E=+ 0E=+ 42=+ 7E=< 2E=+ 4E=+ 5E=+ 83=+
      2107    4024 Y   YY. YY. N   SIMPLEX  3C  3C  100 2107
** FENCED DEVICE C8FFFF00 C0FFFF00 C0FFFF00 C0FFFF00 00000000 00000000

Example: B-9 Displaying SF looking towards the output of the z/OS VARY DEVICE command

IEA434I DEVICE ONLINE IS NOT ALLOWED, SOFT FENCED
IOS000I devn,chp,SOF,cmd,stat,[sens],VOLUME SOFT FENCED

Example: B-10 Query FENCES z/VM command

>>--Query--FENCES---.-rdev------.---------><
                    '-rdev-rdev-'

The DS CLI was updated so that both commands showlcu and showlss accept the parameter sfstate.

Example B-11 shows the relevant output upon entering showlcu in the DS CLI. For the showlss command, the relevant output looks similar.

Example: B-11 DS CLI showlcu command

If you specify the -sfstate parameter, the output includes the Soft Fence state table.

dscli> showlcu -sfstate ef
Date/Time: May 22, 2012 8:43:04 AM MST IBM DSCLI Version: 6.6.31.6 DS:IBM.2107-1300861


ID             EF
Group          1
addrgrp        E
confgvols      4
subsys         0x1111
conbasetype    3990-6
pprcconsistgrp Disabled
xtndlbztimout  120 secs
ccsesstimout   300 secs
xrcsesstimout  300 secs
crithvmode     Disabled
resgrp         RG0
============Soft Fence State============
Name ID   sfstate
========================
-    EFFC Disabled
-    EFFD Disabled
-    EFFE Disabled
ffff EFFF Disabled
dscli>

Report field definitions

Name The user-assigned nickname for this volume object.

ID The unique identifier that is assigned to this volume object. A volume ID is four hexadecimal characters (0x0000 - 0xFEFF).

sfstate The Soft Fence state. Can have one of the following values.

Enabled The host has set this volume to the Soft Fence state.

Disabled The host has not set this volume to the Soft Fence state.

N/A The host cannot set this volume to the Soft Fence state. For example, an alias volume.

B.3.2 How to reset a Soft Fence status

There are two ways to reset the SF status on volumes that were previously the PPRC primary volumes: automatically or manually.

Removing SF automatically

This happens during the resync (in other words, failback) processing.

Removing SF manually

To manually remove SF, use the following commands:

� GDPS/PPRC HYPERSW UNFENCE command, which was newly introduced.


� CLEARFENCE parameter within the ICKDSF CONTROL command, as shown in Example B-12.

Example: B-12 ICKDSF CONTROL command with the CLEARFENCE parameter

CONTROL CLEARFENCE DDNAME(ddname) | UNITADDRESS(UUUU) SUBCHSET(subchset-identifier) SCOPE(’DEV’|’LSS’) SERIAL(sssss)

� UNFENCE z/VM command that was newly introduced and is illustrated in Example B-13.

Example: B-13 Applying the UNFENCE z/VM command

>>--UNFENCE---.-rdev------.--------><
              '-rdev-rdev-'

� DS CLI commands manageckdvol and managefbvol, which now recognize the sfdisable parameter. In Example B-14, we illustrate the manageckdvol command. The output for managefbvol looks similar.

Example: B-14 DS CLI manageckdvol command specifically regarding the sfdisable parameter

>>-manageckdvol--+---------------------------+------------------>
                 '- -dev-- storage_image_ID-'

>-- -action--+- migstart-----+--+-------------------------+----->
             +- migcancel----+  '- -eam--+- rotatevols-+-'
             +- migpause-----+           '- rotateexts-'
             +- migresume----+
             +- sfdisable----+
             +- tierassign---+
             '- tierunassign-'

>--+-----------------------------+--+------------------+-------->
   '- -extpool-- extent_pool_ID-'   '- -tier--+- ssd-+-'
                                              +- ent-+
                                              '- nl--'

>--+- Volume_ID--+----------+-+--------------------------------><
   |             '- . . . -' |
   '- " - " -----------------'

Parameters:

-action migstart|migcancel|migpause|migresume (Required)
   Specifies that one of the following actions is to be performed:

sfdisable Sends a Soft Fence reset command to each specified volume. Cannot be used with any other parameter.
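The following invocation is a minimal usage sketch only; the command itself does not appear in this book, and the storage image ID and volume IDs are simply reused from Example B-11. It sends the Soft Fence reset to each listed CKD volume and, as noted above, combines sfdisable with no other action parameter:

dscli> manageckdvol -dev IBM.2107-1300861 -action sfdisable EFFC EFFD EFFE EFFF

After the command completes, showlcu -sfstate ef can be used again to verify that the sfstate of these volumes is reported as Disabled.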


Related publications

The publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.

IBM Redbooks publications

For more information about ordering the following publications, see "How to get IBM Redbooks publications" on page 494. Some of the documents that are referenced here might be available in softcopy only:

- IBM System Storage DS8000 Host Attachment and Interoperability, SG24-8887
- DS8800 Performance Monitoring and Tuning, SG24-8013
- IBM System Storage DS8000 Copy Services for IBM System z, SG24-6787
- IBM System Storage DS8000 Copy Services for Open Systems, SG24-6788
- IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368
- Multiple Subchannel Sets: An Implementation View, REDP-4387
- IBM DS8870 Disk Encryption, REDP-4500
- DS8000 Thin Provisioning, REDP-4554
- IBM System Storage DS8000: Remote Pair FlashCopy (Preserve Mirror), REDP-4504
- IBM System Storage DS8000: LDAP Authentication, REDP-4505
- DS8000: Introducing Solid State Drives, REDP-4522
- IBM System Storage DS8000 Copy Services Scope Management and Resource Groups, REDP-4758
- DS8000 I/O Priority Manager, REDP-4760
- IBM DS8000 Easy Tier, REDP-4667
- IBM System Storage DS8000 Easy Tier Server, REDP-5013
- IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015
- IBM System Storage DS8000: Easy Tier Application, REDP-5014
- IBM System Storage DS8000: z/OS Distributed Data Backup, REDP-4701
- IBM DS8870 and VMware Synergy, REDP-4915
- IBM System Storage DS8870 Space Reclamation with Veritas Storage Foundation, REDP-5022
- Tivoli Storage Productivity Center V5.1 Technical Guide, SG24-8053
- Data Migration to IBM Disk Storage Systems, SG24-7432


Other publications

The following publications also are relevant as further information sources. Some of the documents that are referenced here might be available in softcopy only:

- IBM DS8870 Introduction and Planning Guide, GC27-4209
- IBM DS8000 Host Systems Attachment Guide, GC27-4210
- IBM DS8000 Open Application Programming Interface Installation and Reference, GC27-4211
- IBM DS8000 Series Command-Line Interface User's Guide, GC27-4212
- IBM System Storage Multipath Subsystem Device Driver User's Guide, GC52-1309
- "Outperforming LRU with an adaptive replacement cache algorithm," by N. Megiddo and D. S. Modha, in IEEE Computer, volume 37, number 4, pages 58–65, 2004
- "SARC: Sequential Prefetching in Adaptive Replacement Cache," by Binny Gill, et al., Proceedings of the USENIX 2005 Annual Technical Conference, pages 293–308
- "AMP: Adaptive Multi-stream Prefetching in a Shared Cache," by Binny Gill, et al., in USENIX File and Storage Technologies (FAST), February 13–16, 2007, San Jose, CA
- "WOW: Wise Ordering for Writes – Combining Spatial and Temporal Locality in Non-Volatile Caches," by B. S. Gill and D. S. Modha, fourth USENIX Conference on File and Storage Technologies (FAST), 2005, pages 129–142
- VPNs Illustrated: Tunnels, VPNs, and IPSec, by Jon C. Snader, Addison-Wesley Professional (November 5, 2005), ISBN-10: 032124544X

Online resources

The following websites also are relevant as further information sources:

- IBM data storage feature activation (DSFA) website
  http://www.ibm.com/storage/dsfa
- Documentation for the DS8000: The Information Center
  http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
- System Storage Interoperation Center (SSIC)
  http://www.ibm.com/systems/support/storage/config/ssic
- Security planning website
  http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/security/ipsec_planning.htm
- VPN Implementation, S1002693
  http://www.ibm.com/support/docview.wss?&rs=1114&uid=ssg1S1002693

How to get IBM Redbooks publications

You can search for, view, or download IBM Redbooks publications, Redpapers, Hints and Tips, draft publications, and Additional materials, and order hardcopy IBM Redbooks publications or CD-ROMs at this website:

http://www.ibm.com/redbooks


Help from IBM

- IBM Support and downloads

http://www.ibm.com/support

- IBM Global Services

http://www.ibm.com/services


SG24-8085-02 ISBN 0738439169

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


IBM DS8870 Architecture and Implementation

Dual IBM POWER7+ based controllers

All-flash drive configuration

Enhanced Business Class configuration option

This IBM Redbooks publication describes the concepts, architecture, and implementation of the IBM DS8870. The book provides reference information to assist readers who need to plan for, install, and configure the DS8870.

The IBM DS8870 is the most advanced model in the IBM DS8000 series and is equipped with IBM POWER7+ based controllers. Various configuration options are available that scale from dual 2-core systems up to dual 16-core systems with up to 1 TB of cache. The DS8870 also features enhanced 8 Gbps device adapters and host adapters. Connectivity options, with up to 128 Fibre Channel/IBM FICON ports for host connections, make the DS8870 suitable for multiple server environments in open systems and IBM System z environments.

The DS8870 supports advanced disaster recovery solutions, business continuity solutions, and thin provisioning. All disk drives in the DS8870 storage system have the Full Disk Encryption (FDE) feature. The DS8870 also can be integrated in a Lightweight Directory Access Protocol (LDAP) infrastructure. The DS8870 features high-density storage enclosures and can be equipped with flash drives. An all-flash drive configuration is also available.

The DS8870 can automatically optimize the use of each storage tier, particularly flash drives, through the IBM Easy Tier feature, which is available at no extra charge. Easy Tier is covered in separate publications: IBM DS8000 Easy Tier Concepts and Usage, REDP-4667; IBM System Storage DS8000 Easy Tier Server, REDP-5013; IBM System Storage DS8000 Easy Tier Application, REDP-5014; and IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015.

Back cover