ibm.com/redbooks

Front cover

HiperSockets Implementation Guide

Bill White
Roy Costa
Michael Gamble
Franck Injey
Giada Rauti
Karan Singh

Discussing architecture, functions, and operating systems support

Planning and implementation

Setting up examples for z/OS, z/VM and Linux on System z

International Technical Support Organization

HiperSockets Implementation Guide

March 2007

SG24-6816-01

Copyright International Business Machines Corporation 2002, 2006. Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

Second Edition (March 2007)

This edition applies to HiperSockets on IBM System z, for use with z/OS V1R8, z/VM V5R2, and Linux on System z.

Comments may be addressed to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400

    When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

    Take Note! Before using this information and the product it supports, be sure to read the general information in Notices on page vii.

Contents

Notices  vii
Trademarks  viii

Preface  ix
The team that wrote this IBM Redbooks publication  ix
Become a published author  x
Comments welcome  x

Chapter 1. Overview  1
1.1 Overview  2
1.1.1 HiperSockets benefits  2
1.1.2 Installation planning  3
1.2 Server integration with HiperSockets  4
1.3 HiperSockets mode of operation  5
1.3.1 HiperSockets usage example  7
1.4 HiperSockets functions  9
1.4.1 Broadcast support  9
1.4.2 Multicast support  9
1.4.3 IP Version 6 support  9
1.4.4 Hardware assists  9
1.4.5 VLAN support  10
1.4.6 HiperSockets Network Concentrator on Linux  10
1.4.7 DYNAMICXCF and Sysplex subplexing  12
1.4.8 HiperSockets Accelerator on z/OS  14
1.5 Operating system support summary  16
1.6 Test configuration  17

Chapter 2. Hardware definitions  21
2.1 System configuration considerations  22
2.2 HCD definitions  23
2.2.1 Channel Path definitions  23
2.2.2 Control unit definitions  30
2.2.3 I/O device definitions  33
2.3 References  37

Chapter 3. z/OS support  39
3.1 Overview  40
3.1.1 z/OS implementation tasks  40
3.2 Hardware definitions  40
3.3 VTAM and TCP/IP started task JCL procedures  42
3.3.1 Locating the TCP/IP profile dataset from the TCP/IP JCL procedure  43
3.3.2 Locating the VTAM start options dataset from the VTAM JCL procedure  43
3.4 HiperSockets implementation  44
3.4.1 HiperSockets implementation environment  44
3.4.2 Implementation steps  44
3.4.3 VTAM customization for HiperSockets  45
3.4.4 TCP/IP profile customization for HiperSockets  45
3.4.5 Verification of the HiperSockets configuration  46
3.5 DYNAMICXCF HiperSockets implementation  49
3.5.1 DYNAMICXCF implementation environment  51
3.5.2 Implementation steps  52
3.5.3 VTAM configuration for DYNAMICXCF  52
3.5.4 TCP/IP configuration for DYNAMICXCF  53
3.5.5 Verification of the DYNAMICXCF configuration  54
3.6 VLAN HiperSockets implementation  58
3.6.1 VLAN HiperSockets environment  58
3.6.2 Implementation steps  59
3.6.3 VTAM customization for VLAN HiperSockets  59
3.6.4 TCP/IP profile customization for VLAN HiperSockets  59
3.6.5 Verify VLAN implementation  60
3.7 TCP/IP Sysplex subplex over HiperSockets  62
3.7.1 Subplex implementation environment  64
3.7.2 Implementation steps  66
3.7.3 VTAM configuration setup for Sysplex subplex  66
3.7.4 TCP/IP configuration setup for Sysplex subplex  67
3.7.5 Verification of the IP subplex over HiperSockets  68
3.8 HiperSockets Accelerator  73
3.8.1 HiperSockets Accelerator implementation  75
3.8.2 HiperSockets Accelerator implementation steps  75
3.8.3 VTAM configuration  76
3.8.4 TCP/IP configuration  76
3.8.5 HiperSockets Accelerator verification  77
3.9 References  80

Chapter 4. z/VM support  81
4.1 Overview  82
4.2 z/VM HiperSockets support  82
4.2.1 Implementation steps  82
4.2.2 z/VM definitions for guest systems  84
4.3 HiperSockets network definitions  86
4.3.1 TCP/IP definitions for z/VM host system  86
4.3.2 z/VM guest system network definitions  89
4.4 VLAN  89
4.4.1 VLAN definitions  89
4.5 Commands  90
4.6 References  91

Chapter 5. Linux support  93
5.1 Overview  94
5.1.1 Software requirements  95
5.1.2 Linux configuration example  95
5.2 Setup for Linux  95
5.2.1 z/VM definitions when running Linux guest systems  96
5.2.2 Linux I/O definitions - initial install of Linux system  96
5.2.3 Linux I/O definitions - adding to an existing Linux system  97
5.2.4 Permanent Linux definitions  99
5.3 VLAN  102
5.4 HiperSockets Network Concentrator  106
5.5 Commands  109
5.6 References  109

Related publications  111
IBM Redbooks publications  111
Other resources  111
Referenced Web sites  112
How to get IBM Redbooks publications  112
IBM Redbooks publications collections  112

Index  113

Notices

    This information was developed for products and services offered in the U.S.A.

    IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

    IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

    The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

    This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

    Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

    IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

    Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

    This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

    COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

developerWorks, HiperSockets, IBM, MVS, Redbooks, Redbooks (logo), System z, System z9, Tivoli, VSE/ESA, VTAM, z/OS, z/VM, z9

The following terms are trademarks of other companies:

SAP and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

    This IBM Redbook discusses the System z HiperSockets function. It offers a broad description of the architecture, functions, and operating systems support.

    This IBM Redbooks publication will help you plan and implement System z HiperSockets. It provides information about the definitions needed to configure HiperSockets for the supported operating systems.

    This IBM Redbooks publication is intended for system programmers, network planners, and system engineers who will plan and install HiperSockets. A solid background in network and TCP/IP is assumed.

The team that wrote this IBM Redbooks publication

This IBM Redbooks publication was produced by a team of specialists from around the world working at the International Technical Support Organization, Poughkeepsie Center.

    Bill White is a Project Leader and Senior Networking Specialist at the International Technical Support Organization, Poughkeepsie Center.

    Roy Costa is an Advisory Systems Programmer at the International Technical Support Organization, Poughkeepsie Center. He has over 20 years of experience in z/VM systems programming. Roy has worked with Linux on System z for more than five years and has provided technical advice and support to numerous IBM Redbooks publications for the past 10 years.

Michael Gamble is a Systems Management specialist with over 40 years of experience in programming, real-time environments, and system support. He has been involved with VM since 1979 and Linux on System z since 2000. He has written many utilities and tools for use within the VM and Linux environments to ease and automate support work. He currently works in the Integrated Technology Delivery Linux team to support over 250 SLES servers under several z/VM systems in the USA and Canada.

    Franck Injey is an I/T Architect at the International Technical Support Organization, Poughkeepsie Center.

Giada Rauti is an Advisory I/T Specialist working at the IT Services Tivoli Lab in Rome, Italy. She holds a Laurea degree in Physics from La Sapienza University in Rome. She has been working at IBM for twenty years and has 17 years of experience in LAN and WAN networking. Her areas of expertise include SNA/APPN and TCP/IP in z/OS and z/VM environments.

    Karan Singh is a systems programmer for IBM Global Services with 10 years of experience in z/OS systems operation.

Thanks to the following people for their contributions to this project:

Alexandra Winter
IBM Systems and Technology Group, Development, Boeblingen

Bob Haimowitz
International Technical Support Organization, Raleigh Center

Thanks to the authors of the first edition:

Rama Ayyar
Global Technology Services, West Pennant Hills, NSW, Australia

Velibor Uskokovic
Global Technology Services, Toronto, Ontario, Canada

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbooks publication dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and clients.

    Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!

We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

Use the online Contact us review IBM Redbooks publication form found at:

ibm.com/redbooks

Send your comments in an Internet note to:

[email protected]

Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400

Chapter 1. Overview

This chapter provides a high-level overview of System z HiperSockets, as well as an introduction to the HiperSockets configuration that we used while writing this IBM Redbooks publication.

The topics covered in this chapter are:

Overview
Server integration with HiperSockets
HiperSockets mode of operation
HiperSockets functions
Operating system support summary
Test configuration


1.1 Overview

HiperSockets is a technology that provides high-speed Transmission Control Protocol/Internet Protocol (TCP/IP) connectivity between servers within a System z. This technology eliminates the need for any physical cabling or external networking connection among these virtual servers. It works like an internal Local Area Network (LAN). HiperSockets is very useful if you have a very large data flow among these virtual servers.

HiperSockets uses internal Queued Direct Input/Output (iQDIO) at memory speeds to pass traffic among these virtual servers.

HiperSockets is a Licensed Internal Code (LIC) function that emulates the Logical Link Control (LLC) layer of an OSA-Express QDIO interface. The following operating systems support HiperSockets: z/OS, z/VM, Linux on System z, and VSE/ESA.

1.1.1 HiperSockets benefits

The following is a list of HiperSockets benefits:

Cost savings

You can use HiperSockets to communicate among consolidated servers in a single processor. Therefore, you can eliminate all the hardware boxes running these separate servers. With HiperSockets, there are zero external components or cables to pay for, to replace, to maintain, or to wear out. The more servers you consolidate, the greater your potential savings on external servers and their associated networking components.

Simplicity

HiperSockets is part of z/Architecture technology, including QDIO and advanced adapter interrupt handling. The data transfer itself is handled much like a cross address space memory move, using the memory bus. HiperSockets is application transparent and appears as a typical TCP/IP device. Its configuration is simple, making installation easy. It is supported by existing, known management and diagnostic tools.

Availability

With HiperSockets, there are no network hubs, routers, adapters, or wires to break or maintain. The reduced number of external network components greatly improves availability.

High performance

Consolidated servers that have to access corporate data residing on the System z can do so at memory speeds with latency close to zero, bypassing all the network overhead and delays. Also, you can customize HiperSockets to accommodate varying traffic sizes: you can define a maximum frame size according to the traffic characteristics for each HiperSockets. In contrast, LANs such as Ethernet and Token Ring have a maximum frame size predefined by their architecture.

Priority

Priority queuing is a capability supported by the QDIO architecture and available in the z/OS environment only. It sorts outgoing IP message traffic according to the service policy that you have set up for the specific priority assigned in the IP header. It is used by the HiperSockets Accelerator function.

Security

Because there is no server-to-server traffic outside the System z, HiperSockets has no external components, and therefore it provides a very secure connection. For security purposes, you can connect servers to different HiperSockets. All security features, such as firewall filtering, are available for HiperSockets interfaces in the same way as they are with other TCP/IP network interfaces. In a Sysplex environment, subplexing allows you to define security zones. Thus, only members within the same security zone may communicate with each other.

VLAN support

A virtual LAN allows you to divide a physical network administratively into separate logical networks. These logical networks operate as though they are physically independent of each other. This allows for traffic flow over HiperSockets and between HiperSockets and OSA-Express features. Inside each single HiperSockets LAN, you can define multiple VLAN connections (up to a maximum of four).

Sysplex connection improvement

HiperSockets can also improve TCP/IP communications within a sysplex environment when the DYNAMICXCF facility is used.

1.1.2 Installation planning

The following are the steps needed to implement HiperSockets:

1. Apply the OS maintenance level that provides HiperSockets support.
2. Define the HiperSockets CHPIDs and I/O devices to your configuration.
3. Update the TCP/IP configuration with the parameters that support HiperSockets.

HiperSockets is a LIC function, which may require EC-level maintenance. Check with your local service representative to ensure that your System z has the required EC level installed. There is no extra charge for HiperSockets.

HiperSockets connectivity

HiperSockets supports:

Up to sixteen independent HiperSockets.

Up to 12288 I/O devices across all 16 HiperSockets.

VLAN support, with a maximum of four VLANs for each defined HiperSockets.

Spanned channel support, which allows sharing of HiperSockets across multiple Logical Channel SubSystems (LCSS).

Up to 4096 TCP/IP stack connections across all HiperSockets. For z/OS, z/VM, Linux, and VSE/ESA, the maximum number of TCP/IP stacks or HiperSockets communication queues that can concurrently connect on a single z9 EC, z9 BC, z990, or z890 server is 4096.

Up to 16000 IP addresses across all 16 HiperSockets, which means that a total of 16000 IP addresses can be kept in the 16 possible IP address lookup tables. These IP addresses include the HiperSockets interfaces, as well as Virtual IP Addresses (VIPA) and dynamic Virtual IP Addresses (DVIPA) that are defined to the TCP/IP stack.

z/OS allows the operation of multiple TCP/IP stacks within a single image. The read control and write control I/O devices are required only once per image, and are controlled by VTAM. Each TCP/IP stack within the same z/OS image requires one I/O device for data exchange. If you run one TCP/IP stack per logical partition, z/OS requires three I/O devices (as do z/VM and Linux). Each additional TCP/IP stack in a z/OS logical partition requires only one additional I/O device for data exchange. The I/O device addresses can be shared between z/OS systems running in different logical partitions. Therefore, the number of I/O devices is not a limitation for z/OS.
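As an orientation sketch only (Chapter 3, z/OS support, walks through the actual definitions used in this book), a z/OS TCP/IP profile for a manually defined HiperSockets interface on CHPID F4 could look like the following. The device name follows the IUTIQDxx convention (where xx is the CHPID); the link name and IP address are hypothetical:

   ; HiperSockets device on CHPID F4 (IUTIQDxx naming convention)
   DEVICE IUTIQDF4 MPCIPA
   ; iQDIO link over that device
   LINK   HIPERLF4 IPAQIDIO IUTIQDF4
   ; This stack's IP address on the internal LAN
   HOME   192.0.1.4 HIPERLF4
   START  IUTIQDF4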

1.2 Server integration with HiperSockets

Many data center environments today run multi-tiered server applications, with a variety of middle-tier servers surrounding the System z data and transaction server. Interconnecting this multitude of servers requires the cost and complexity of many networking connections and components. The performance and availability of the inter-server communication depends on the performance and stability of the set of connections. The more servers involved, the greater the number of network connections and the complexity to install, administer, and maintain.

    Figure 1-1 shows two configurations.

The configuration on the left shows a server farm surrounding a System z server, with its corporate data and transaction servers. This configuration is very complex, involving the backup of the servers and network connections. It is also very expensive and has a high administrative cost.

The configuration on the right consolidates the mid-tier workload onto multiple Linux virtual servers running on a System z server, where HiperSockets provides a very reliable, high-speed network over which these servers can communicate. In addition, these consolidated servers also have direct high-speed access to the database and transaction servers running under z/OS on the same System z server. The external network connection for all servers is concentrated over a few high-speed OSA-Express interfaces.

Figure 1-1 Server consolidation

1.3 HiperSockets mode of operation

HiperSockets implementation is based on the OSA-Express Queued Direct Input/Output (QDIO) protocol; hence, HiperSockets is called internal QDIO (iQDIO). The LIC emulates the link control layer of an OSA-Express QDIO interface. Typically, before you can transport a packet on an external LAN, you have to build a LAN frame and insert the MAC address of the destination host or router on that LAN into the frame. HiperSockets does not use LAN frames, destination hosts, or routers. TCP/IP stacks are addressed by inbound data queue addresses instead of MAC addresses.

The System z LIC maintains a lookup table of IP addresses for each HiperSockets. This table represents an internal LAN. When a TCP/IP stack starts a HiperSockets device, the device is registered in the IP address lookup table with its IP address and its input and output data queue pointers. If a TCP/IP device is stopped, the entry for this device is deleted from the IP address lookup table.

    HiperSockets copy data synchronously from the output queue of the sending TCP/IP device to the input queue of the receiving TCP/IP device by using the memory bus to copy the data through an I/O instruction.

The I/O processing that the controlling operating system performs is identical to that for OSA-Express in QDIO mode. The data transfer time is similar to a cross-address space memory move, with latency close to zero. To get the total elapsed time of a data move, you have to add the operating system I/O processing time to the LIC data move time.

HiperSockets operations are executed on the CP where the I/O request is initiated. HiperSockets starts read or write operations. The completion of a data move is indicated by the sending side to the receiving side with a Signal Adapter (SIGA) instruction. Optionally, the receiving side can use dispatcher polling instead of handling SIGA interrupts. The I/O processing is performed with reduced demand on the System Assist Processor (SAP). This new implementation is also called thin interrupt.

The data transfer itself is handled much like a cross-address space memory transfer using the memory bus, not the server I/O bus. HiperSockets does not contend with other system I/O activity and does not use CP cache resources. See Figure 1-2.

    Figure 1-2 HiperSockets basic operation

The HiperSockets operational flow consists of five steps:

1. Each TCP/IP stack (image) registers its IP addresses into the HiperSockets server-wide Common Address Lookup table. There is one lookup table for each HiperSockets internal LAN. The scope of each LAN is the logical partitions that are defined to share the HiperSockets IQD CHPID.

2. The addresses of the TCP/IP stack's receive buffers are appended to the HiperSockets queues.

3. When data is being transferred, the send operation of HiperSockets performs a table lookup for the addresses of the sending and receiving TCP/IP stacks and their associated send and receive buffers.

4. The sending processor copies the data from its send buffers into the target processor's receive buffers (System z9 server memory).

5. The sending processor optionally delivers an interrupt to the target TCP/IP stack. This optional interrupt uses the thin interrupt support function of the System z server, which means the receiving host looks ahead, detecting and processing inbound data. This technique reduces the frequency of real I/O or external interrupts.

Note: You must define the source and destination interfaces to the same HiperSockets.

HiperSockets TCP/IP devices are configured similar to OSA-Express QDIO devices. Each HiperSockets requires the definition of a channel path identifier (CHPID), similar to any other I/O interface. HiperSockets is not allocated a CHPID until it is defined, and it does not take an I/O cage slot. Customers who have used all the available CHPIDs on the server cannot enable HiperSockets; therefore, you must include HiperSockets in the customer's overall channel I/O planning. The CHPID type for HiperSockets is IQD, and the CHPID number must be in the range from hex 00 to hex FF. No other I/O interface can use a CHPID number defined for a HiperSockets, even though HiperSockets does not occupy any physical I/O connection position.

We recommend assigning the CHPID addresses starting at the high end of the CHPID addressing range (x'FF', x'FE', x'FD', x'FC', and so on) to minimize possible addressing conflicts with real channels. This is similar to the approach used when defining other internal channels.

    Real LANs have a maximum frame size limit defined by their protocol. The maximum frame size for Ethernet is 1492 bytes, and for Gigabit Ethernet there is the jumbo frame option for a maximum frame size of 9 kilobytes (KB). The maximum frame size for a HiperSockets is assigned when the HiperSockets CHPID is defined. You can select frame sizes of 16 KB, 24 KB, 40 KB, and 64 KB. The default maximum frame size is 16 KB. The selection depends on the data characteristics transported over a HiperSockets, which is also a trade-off between performance and storage allocation. The MTU size used by the TCP/IP stack for the HiperSockets interface is also determined by the maximum frame size. See Table 1-1.

Table 1-1 Maximum frame size and MTU size

Maximum frame size    Maximum Transmission Unit size
16 KB                 8 KB
24 KB                 16 KB
40 KB                 32 KB
64 KB                 56 KB

    The maximum frame size is defined in the hardware configuration, which is displayed in IOCP as CHPARM.
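To make this concrete, here is a minimal IOCP sketch of a shared IQD CHPID with its control unit and I/O devices. The CHPID, control unit, and device numbers are hypothetical, and we are assuming the commonly documented CHPARM encodings for IQD CHPIDs (00, 40, 80, and C0 selecting 16 KB, 24 KB, 40 KB, and 64 KB maximum frame sizes, respectively); verify against your IOCP level:

   CHPID PATH=(CSS(0),F4),SHARED,TYPE=IQD,CHPARM=40
   CNTLUNIT CUNUMBR=E800,PATH=((CSS(0),F4)),UNIT=IQD
   IODEVICE ADDRESS=(E800,016),CUNUMBR=(E800),UNIT=IQD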

    An IP address is registered with its HiperSockets interface by the TCP/IP stack at the time at which the TCP/IP device is started. IP addresses are removed from an IP address lookup table when a HiperSockets device is stopped. Under operating system control, you can reassign IP addresses to other HiperSockets interfaces on the same HiperSockets LAN. This allows flexible backup of TCP/IP stacks.
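On z/OS, for example, this registration and removal follows the starting and stopping of the device, which an operator can drive and verify with commands such as the following (the stack and device names are hypothetical):

   V TCPIP,TCPIPA,START,IUTIQDF4
   V TCPIP,TCPIPA,STOP,IUTIQDF4
   D TCPIP,TCPIPA,NETSTAT,DEV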

1.3.1 HiperSockets usage example

Each HiperSockets is identified by a Channel Path Identifier (CHPID) number. As for all other input/output operations, operating systems address a HiperSockets interface via device numbers specified during the CHPID definition process.


Note: Reassignment is only possible within the same HiperSockets LAN. A HiperSockets is one network or subnetwork. Reassignment is only possible for the same operating system type. For example, an IP address originally assigned to a Linux TCP/IP stack can only be reassigned to another Linux TCP/IP stack, a z/OS dynamic VIPA can only be reassigned to another z/OS TCP/IP stack, and a z/VM TCP/IP VIPA can only be reassigned to another z/VM TCP/IP stack. The LIC performs the reassignment in force mode. It is up to the operating system's TCP/IP stack to control this change.

There are many possibilities for applying HiperSockets technology. Figure 1-3 shows the use of three possible HiperSockets in a System z.

    Figure 1-3 HiperSockets usage example

The three HiperSockets illustrated in Figure 1-3 are used as follows:

HiperSockets with CHPID FD

Connected to this HiperSockets are all servers in the System z, which are:

The multiple Linux servers running under z/VM in LPAR-1
The z/VM TCP/IP stack running in LPAR-1
All z/OS servers in sysplex A (logical partitions 5 to 7) for non-sysplex traffic
All z/OS servers in sysplex B (logical partitions 8 to 10) for non-sysplex traffic

HiperSockets with CHPID FE

This is the connection used by sysplex A (logical partitions 5 to 7) to transport TCP/IP user-data traffic among the three sysplex logical partitions. If the following prerequisites are met, HiperSockets is automatically used within a single sysplex environment:

XCF dynamics are defined to the TCP/IP stacks.
HiperSockets is available to the TCP/IP stacks.

HiperSockets with CHPID FF

This is the connection used by sysplex B (logical partitions 8 to 10) to transport TCP/IP data traffic among the three sysplex logical partitions.

    Note: SNA/APPN traffic is supported over HiperSockets in conjunction with Enterprise Extender.


1.4 HiperSockets functions

The functions supported by HiperSockets are discussed in the following sections:

1.4.1, Broadcast support on page 9
1.4.2, Multicast support on page 9
1.4.3, IP Version 6 support on page 9
1.4.4, Hardware assists on page 9
1.4.5, VLAN support on page 10
1.4.6, HiperSockets Network Concentrator on Linux on page 10
1.4.7, DYNAMICXCF and Sysplex subplexing on page 12
1.4.8, HiperSockets Accelerator on z/OS on page 14

1.4.1 Broadcast support

Broadcasts are now supported across HiperSockets on Internet Protocol Version 4 (IPv4). Applications that use the broadcast function can propagate broadcast frames to all TCP/IP applications that are using HiperSockets. This support is applicable to the Linux, z/OS, and z/VM environments.

1.4.2 Multicast support

Multicast is now supported across HiperSockets on Internet Protocol Version 4 (IPv4). Applications that use the multicast function can propagate multicast frames to all TCP/IP applications that are using HiperSockets. This support is applicable to the Linux, z/OS, and z/VM environments.

1.4.3 IP Version 6 support

HiperSockets supports Internet Protocol Version 6 (IPv6). IPv6 is the protocol designed by the Internet Engineering Task Force (IETF) to replace Internet Protocol Version 4 (IPv4) to help satisfy the demand for additional IP addresses.

    The support of IPv6 on HiperSockets (CHPID type IQD) is exclusive to System z9, and is supported by z/OS and z/VM. IPv6 support is currently available on the OSA-Express2 and OSA-Express features in the z/OS, z/VM, and Linux on System z9 environments.

HiperSockets support of IPv6 (CHPID type IQD) on System z9 requires, at a minimum, the following:

z/OS V1.7
z/VM V5.2 with the PTF for APAR VM63952. Support of guests is expected to be transparent to z/VM if the device is directly connected to the guest (pass-through).
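As a sketch of what enabling IPv6 looks like on z/OS, HiperSockets IPv6 interfaces are defined with the INTERFACE statement rather than DEVICE and LINK; the interface name and address below are hypothetical, and the exact parameters should be checked against the IP Configuration Reference:

   INTERFACE HSV6F4 DEFINE IPAQIDIO6 CHPID F4
      IPADDR FD00:192:0:1::4
   START HSV6F4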

1.4.4 Hardware assists

A complementary virtualization technology is available for the z9 EC, z9 BC, z990, and z890, which includes:

QDIO Enhanced Buffer-State Management (QEBSM), two new hardware instructions designed to help eliminate the overhead of hypervisor interception.

Host Page-Management Assist (HPMA), an interface to the z/VM main storage management function designed to allow the hardware to assign, lock, and unlock page frames without z/VM hypervisor assistance.

    These hardware assists allow a cooperating guest operating system to initiate QDIO operations directly to the applicable channel, without interception by z/VM, thereby helping to provide additional performance improvements. The z990 and z890 servers require MCL updates. Support is integrated in the z9 EC and z9 BC LIC.

1.4.5 VLAN support

Virtual Local Area Networks (VLANs), IEEE standard 802.1Q, are offered for HiperSockets in a Linux on System z environment. VLANs can reduce overhead by allowing networks to be organized by traffic patterns rather than physical location. This enhancement permits traffic flow on a VLAN connection both over HiperSockets and between HiperSockets and an OSA-Express GbE, 1000BASE-T Ethernet, or Fast Ethernet feature.

    VLANs facilitate easy administration of logical groups of servers that can communicate as though they were on the same LAN. They also facilitate easier administration of moves, adds, and changes in members of these groups. VLANs are also designed to provide a degree of low-level security to provide a greater degree of isolation.

Where multiple TCP/IP stacks exist on a server, sharing one or more VLANs can provide a greater degree of flexibility. For example, you can group servers in the same VLAN by application type when they exchange a high volume of data, or use VLANs for security reasons to separate different lines of business. See Figure 1-4.

    Figure 1-4 HiperSockets VLAN
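As a quick sketch of how a Linux guest of this era could join VLAN 11 of Figure 1-4 over a HiperSockets interface, using the 802.1q tooling of 2.4/2.6 kernels (the interface name and address are hypothetical):

   vconfig add hsi0 11
   ifconfig hsi0.11 10.10.11.1 netmask 255.255.255.0 up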

1.4.6 HiperSockets Network Concentrator on Linux

Traffic between HiperSockets and OSA-Express can be transparently bridged using the HiperSockets Network Concentrator, without requiring intervening network routing overhead, thus increasing performance and simplifying the network configuration. This is achieved by configuring a connector Linux system that has both HiperSockets and OSA-Express connections defined. The HiperSockets Network Concentrator registers itself with HiperSockets as a special network entity to receive data packets destined for an IP address on the external LAN via an OSA-Express port. The HiperSockets Network Concentrator also registers IP addresses to the OSA-Express on behalf of the TCP/IP stacks using HiperSockets, hence providing inbound and outbound connectivity.

HiperSockets Network Concentrator support is performed using the next-hop IP address in the Queued Direct Input/Output (QDIO) header, instead of a Media Access Control (MAC) address. Therefore, VLANs in a switched Ethernet fabric are not supported. TCP/IP stacks that use HiperSockets only to communicate among each other, with no external network connection, see no difference, and their networking characteristics are unchanged.

The HiperSockets Network Concentrator, shown in Figure 1-5 on page 12, is a mechanism to connect systems with HiperSockets interfaces to the external network using the same subnet. In other words, the connected systems appear as though they are directly connected to the physical network. A Linux system acts as a forwarder for traffic between the OSA interface and the internal HiperSockets-connected systems (z/VM, z/OS, VSE, and Linux on z). Refer to Linux on System z, Device Drivers, Features, and Commands, SC33-8281, for detailed information.

HiperSockets Network Concentrator can be a useful solution if you run Linux on System z (in a native logical partition or as a guest under z/VM), have heavy traffic among servers inside the System z, and also require high-speed communication with the external network. It provides a bridging function, not a routing function, and it does not consume additional subnets.

In addition, HiperSockets Network Concentrator allows you to migrate systems from the LAN into a System z environment without changing IP addresses and network routing. Thus, HiperSockets Network Concentrator helps simplify network configuration and administration.

    We recommend that you always have backup connections.

Note: IP fragmentation does not work for multicast bridging. The MTU of the HiperSockets link and the OSA must be the same size. Multicast packets that do not fit in the link MTU are discarded.

Figure 1-5 represents an example of a HiperSockets Network Concentrator (HSNC) on Linux using OSA-Express.

    Figure 1-5 HiperSockets Network Concentrator on Linux

To exploit HiperSockets Network Concentrator unicast and multicast support, a Linux distribution including the qeth driver (dated 2003-10-31 or later) from the June 2003 stream is required. This applies to kernel 2.4 and later.

    See the developerWorks Web site at:http://www-128.ibm.com/developerworks/linux/linux390/index.html
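In outline, and as documented in the device drivers book cited above, the connector role is assigned through qeth sysfs attributes on the connector Linux system; the device numbers below are hypothetical, and the concentrator itself is started with the start_hsnc.sh script shipped with the driver tools:

   echo primary_connector > /sys/devices/qeth/0.0.e804/route4    # HiperSockets side
   echo multicast_router > /sys/devices/qeth/0.0.2200/route4     # OSA-Express side
   start_hsnc.sh &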

1.4.7 DYNAMICXCF and Sysplex subplexing

HiperSockets can also improve TCP/IP communications within a sysplex environment when the DYNAMICXCF facility is used. When a DYNAMICXCF HiperSockets device and link are activated, a subnetwork route is created across the HiperSockets link. The subnetwork is created by using the DYNAMICXCF IP address and mask. This allows any logical partition within the same server to be reached, even ones that are not within the sysplex. A logical partition that is outside of the sysplex environment must define at least one IP address for the HiperSockets endpoint that is within the subnetwork defined by the DYNAMICXCF IP address and mask.
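For illustration only (the scenario definitions used in this book appear in Chapter 3), a DYNAMICXCF definition in the TCP/IP profile supplies the IP address, subnet mask, and cost metric from which that subnetwork route is built; the values here are hypothetical:

   IPCONFIG
      DYNAMICXCF 192.0.5.4 255.255.255.0 1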

    z/OS Communications Server now allows you to subdivide a sysplex network into multiple subplex scopes from a sysplex networking function perspective. For example, some VTAM and TCP/IP instances in a sysplex may belong to one subplex, while other VTAM or TCP/IP instances in the same sysplex belong to different subplexes.


With subplexing, you are able to build security zones. Thus, only members within the same security zone may communicate with each other. Subplex members are VTAM nodes and TCP/IP stacks that are grouped in security zones to isolate communication.

A subplex is a subset of a Sysplex that consists of selected members. These members are connected to each other and communicate through dynamic cross-system coupling facility (XCF) groups, using the following methods:

XCF links (for cross-system IP and VTAM connections)
IUTSAMEH (for IP connections within a logical partition)
HiperSockets (for IP connections across logical partitions in the same server)

Subplexes do not communicate with members outside the subset of the Sysplex. For example, in Figure 1-6, TCP/IP stacks with connectivity to the internal network can be isolated from TCP/IP stacks connected to an external network by using subplexing.

TCP/IP stacks are defined as members of a subplex group with a defined group ID. For example, in Figure 1-6, TCP/IP stacks within Subplex 1 are able to communicate only with stacks within the same subplex group. They are not able to communicate with stacks in Subplex 2.

    In an environment where a single logical partition has access to internal and external networks through two TCP/IP stacks, those stacks are assigned to two different subplex group IDs. Even though IUTSAMEH is the communication method, it is controlled automatically through the association of subplex group IDs, thus creating two separate security zones within the logical partition.
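In outline (Chapter 3 shows the definitions used in this book's scenario), the subplex group membership is set with the XCFGRPID start option in VTAM and with the matching GLOBALCONFIG parameters in the TCP/IP profile; the group and VLAN IDs below are hypothetical:

   VTAM start options (ATCSTRxx):
   XCFGRPID=21

   TCP/IP profile:
   GLOBALCONFIG XCFGRPID 21 IQDVLANID 11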

Figure 1-6 Subplexing multiple security zones


1.4.8 HiperSockets Accelerator on z/OS

HiperSockets Accelerator is supported by z/OS. It allows a z/OS TCP/IP router stack to efficiently route IP packets from an OSA-Express (QDIO) interface to a HiperSockets (iQDIO) interface and vice versa. The routing is done by the z/OS Communications Server device drivers at the lowest possible software data link control level. IP packets do not have to be processed by the higher-level TCP/IP stack routing function, which reduces the path length and improves performance.

If a TCP/IP router stack is required, the selection must be based on the following considerations:

Performance.
Availability: a backup is required for the OSA-Express network connection, and also for the TCP/IP router stack.
System and administrative overhead: how many additional logical partitions, operating systems, and TCP/IP stacks are required? How many different operating systems are on the path to the application?

Figure 1-7 represents an example of a HiperSockets Accelerator routing stack with four OSA-Express interfaces in a single System z that has multiple logical partitions. These logical partitions could be running z/OS, z/VM, or Linux on System z, including z/VM with numerous guest systems.

    Figure 1-7 HiperSockets Accelerator on z/OS: routing stack implementation


Figure 1-8 illustrates how HiperSockets Accelerator works. The solid line connecting TCP/IP A and TCP/IP X represents the normal path through the TCP/IP stack's routing function, while the dotted line represents the accelerated path through the VTAM device driver.

    Figure 1-8 HiperSockets Accelerator flow

You can activate the HiperSockets Accelerator by configuring the IQDIO Routing option in the TCP/IP profile using the IPCONFIG statement. The TCP/IP stack automatically detects IP packets being routed across a HiperSockets Accelerator-eligible route. Eligible routes are from OSA-Express (QDIO) to HiperSockets (iQDIO), and from HiperSockets (iQDIO) to OSA-Express (QDIO). Figure 1-8 shows what happens when TCP/IP A sends something to TCP/IP X. The process is as follows:

1. For the first packet, the TCP/IP routing stack in TCP/IP H creates IQDIO routing entries for the source TCP/IP A, the destination TCP/IP X, and the gateway for the external network. These entries are added to the IQDIO routing table (Chapter 3, z/OS support on page 39 shows an example). The destination stack TCP/IP X must be reachable through HiperSockets.

2. Starting with the second packet, all subsequent packets for the same destination take the optimized device driver path and do not traverse the routing function of the TCP/IP routing stack. No change is required on the target stacks. There is a timer built into the HiperSockets Accelerator function: if a specific IQDIORouting entry is not used for 90 seconds, it is deleted from the table.

Therefore, only the first packet that a TCP/IP host sends to a new destination creates an entry in the IQDIORouting table and involves the routing function of the TCP/IP routing stack. The IP packets that follow are routed through the VTAM device driver.

    Restriction: HiperSockets Accelerator cannot be enabled if IPSECURITY (or FIREWALL prior to z/OS V1.8) or NODATAGRAMFWD are specified in the IPCONFIG statement.
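As a minimal sketch (the full profile used in this book appears in Chapter 3), the activation in the TCP/IP profile looks like the following; QDIOPRIORITY selects the priority queue used for accelerated traffic, and PATHMTUDISCOVERY is shown here because of the fragmentation consideration discussed below:

   IPCONFIG DATAGRAMFWD PATHMTUDISCOVERY
   IPCONFIG IQDIOROUTING QDIOPRIORITY 1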


If any IP packets have to be fragmented in order to be routed between QDIO and iQDIO (or vice versa), they are not accelerated, and the normal path through the TCP/IP stack routing function is taken. You can prevent IP fragmentation conflicts by using path MTU discovery (PATHMTUDISCOVERY in IPCONFIG), or by coding the appropriate MTU size in the static route statement (if static routes are used). For more details on defining MTU discovery and MTU sizes, refer to z/OS Communications Server, IP Configuration Reference, SC31-8776.

The HiperSockets Accelerator is very useful when you have a large amount of traffic inside HiperSockets that requires high availability, load balancing, and high performance (a z/OS image is required). Figure 1-7 on page 14 shows a single TCP/IP stack that has multiple direct physical connections to the OSA LANs, acting as the HiperSockets router. However, you can add more TCP/IP stacks to provide a redundant path in case one of the z/OS HiperSockets Accelerator images suffers an outage. This single stack connects, through HiperSockets, to all the remaining TCP/IP stacks in other images within the System z that require connectivity to the OSA LANs. Remember that HiperSockets Accelerator works at the Data Link Control layer when it is not providing additional functions such as fragmentation.

    1.5 Operating system support summaryAll HiperSockets functions supported on System z are summarized in Table 1-2 with information about the minimum release and maintenance levels required.

Table 1-2 Summary of HiperSockets supported functions

- Shared, spanned CHPID.
  z/OS: V1.5

- VLAN: Allows networks to be organized by traffic patterns rather than physical location. This enhancement permits traffic flow on a VLAN connection both over HiperSockets and between HiperSockets and an OSA-E GbE, 1000BASE-T Ethernet, or Fast Ethernet feature.
  z/OS: V1.5; z/VM: 5.1 + PTFs or 5.2 + PTFs; Linux on System z: 2.4 kernel

- Network Concentrator: Traffic between HiperSockets and OSA-Express can be transparently bridged without requiring network routing overhead.
  z/OS: n/a; z/VM: n/a; Linux on System z: 2.4 kernel

- Broadcast in IPv4.
  z/OS: V1.5; z/VM: 5.1; Linux on System z: 2.4 kernel

- Multicast in IPv4.
  z/OS: V1.5; z/VM: 5.1; Linux on System z: 2.4 kernel

- IPv6.
  z/OS: V1.7; z/VM: 5.2 + PTFs; Linux on System z: No

- DYNAMIC XCF: Connects all images within the same sysplex through a dynamic XCF connection, created by the DYNAMICXCF definition in the TCP/IP profile.
  z/OS: V1.5; z/VM: n/a; Linux on System z: n/a

- Sysplex Subplexing: Supports multiple security zones in a sysplex.
  z/OS: V1.8; z/VM: n/a; Linux on System z: n/a

- HiperSockets Accelerator: Allows a z/OS TCP/IP router stack to efficiently route IP packets from an OSA-Express (QDIO) interface to a HiperSockets (iQDIO) interface and vice versa.
  z/OS: V1.5; z/VM: n/a; Linux on System z: n/a

- QDIO Enhanced Buffer State Management (QEBSM) and Host Page-Management Assist (HPMA): interface to the z/VM main storage management.
  z/OS: n/a; z/VM: 5.2 + PTFs; Linux on System z: 2.6.16 kernel

1.6 Test configuration

Figure 1-9 shows the HiperSockets base configuration used throughout the examples in this IBM Redbooks publication. Based on this configuration, we implement the new functions and describe the definitions for each system in the corresponding section. Additional setup for specific scenarios, such as DYNAMICXCF and HiperSockets Accelerator, is documented in the relevant sections.

    Figure 1-9 HiperSockets base configuration scenario

    We used four logical partitions to set up and verify HiperSockets support with z/OS, z/VM, and Linux. Because HiperSockets are shared among logical partitions, you must define the CHPIDs as shared in the hardware definitions.

For HiperSockets F4, we used the IP network address 192.0.1.0/24. Our configuration does not include Virtual IP Addresses (VIPA), which are also supported.



For a HiperSockets connection, three I/O devices are required: one device for read control (even numbered), one device for write control (odd numbered), and one device for data exchange.

The logical partitions are configured as follows:

1. In logical partition A12, two Linux systems, one Red Hat (LNXRH2) and one SUSE (LNXSU2), run as guests under z/VM, along with the z/VM system (VMLINUX7). The Linux systems use DEDICATE statements and control their HiperSockets connections directly (a directory-entry sketch follows this list). Each of the two systems has one interface to HiperSockets through CHPID F4. For LNXRH2, we allocated real addresses E804-E806, which map virtual unit addresses 7000-7002 to the three I/O devices. For LNXSU2, we used the next available real addresses (E808-E80A) in the same way.

2. Logical partition A23 runs a z/OS image (SC30) that is part of a sysplex. This image connects to HiperSockets F4 through devices E800-E802.

3. Logical partition A24 runs a z/OS image (SC31) that is part of a sysplex. This image connects to HiperSockets F4 through devices E800-E802.

4. Logical partition A25 runs a z/OS image (SC32) that is part of a sysplex. This image connects to HiperSockets F4 through devices E800-E802.
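As referenced in item 1, here is a hedged sketch of the z/VM user directory statements that dedicate the real HiperSockets devices to the LNXRH2 guest (an excerpt only; the rest of the directory entry is omitted):

   * LNXRH2 directory entry (excerpt): virtual devices 7000-7002
   * are dedicated to the real HiperSockets devices E804-E806
   DEDICATE 7000 E804
   DEDICATE 7001 E805
   DEDICATE 7002 E806

LNXSU2 would use the same virtual unit addresses 7000-7002, mapped to its own real devices E808-E80A.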

Table 1-3 shows the details of the test HiperSockets configuration.

Table 1-3 Details of the test HiperSockets configuration

  LP name  Environment        System name  CHPID  Device address  IP address
  A12      z/VM               VMLINUX7     F4     E800-E802       192.0.1.1
  A12      Linux under z/VM   LNXRH2       F4     E804-E806       192.0.1.2
  A12      Linux under z/VM   LNXSU2       F4     E808-E80A       192.0.1.3
  A23      z/OS sysplex       SC30         F4     E800-E802       192.0.1.4
  A24      z/OS sysplex       SC31         F4     E800-E802       192.0.1.5
  A25      z/OS sysplex       SC32         F4     E800-E802       192.0.1.6

In order to operate a HiperSockets connection, all required devices must first be online to the operating system. When a TCP/IP stack starts a HiperSockets device, the stack's HiperSockets interface information is registered in an IP address lookup table. One IP address lookup table is maintained per HiperSockets.
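For example (a hedged sketch using the device numbers of our configuration), the HiperSockets devices on z/OS SC30 can be brought online and checked with standard MVS operator commands:

   V (E800-E802),ONLINE
   D U,,,E800,3

The VARY command brings the three devices online; the DISPLAY UNIT command shows the status of the three devices starting at E800.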

    As the HiperSockets devices for our configuration are started, two internal tables are created.

    Table 1-4 on page 19 is the lookup table for CHPID F4. This table represents all TCP/IP stacks connected to HiperSockets F4. The I/O queue pointer is a real storage address that is set when the TCP/IP stack brings up the connection and is directly associated with the data exchange device address.

    LP Name Environment System name

    CHPID Device address IP address

    A12 z/VM VMLINUX7 F4 E800-E802 192.0.1.1

    A12 Linux under z/VM LNXRH2 F4 E804-E806 192.0.1.2

    A12 Linux under z/VM LNXSU2 F4 E808-E80A 192.0.1.3

    A23 z/OS sysplex SC30 F4 E800-E802 192.0.1.4

    A24 z/OS sysplex SC31 F4 E800-E802 192.0.1.5

    A25 z/OS sysplex SC32 F4 E800-E802 192.0.1.6

Note: Each logical partition can use the same unit addresses. When multiple TCP/IP stacks are present in a single logical partition, the unit addresses must be unique to each TCP/IP stack, as illustrated by logical partition A12 in Table 1-3.
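To make this mapping concrete, here is a hedged sketch of the TCP/IP profile statements a z/OS stack such as SC30 could use to define its HiperSockets interface (the link name LIQDF4 is our own choice; the device name for a HiperSockets MPCIPA device follows the IUTIQDxx convention, where xx is the CHPID):

   ; HiperSockets interface on CHPID F4 (devices E800-E802)
   DEVICE IUTIQDF4 MPCIPA
   LINK   LIQDF4   IPAQIDIO IUTIQDF4
   HOME   192.0.1.4 LIQDF4
   START  IUTIQDF4

Chapter 3, z/OS support, covers the definitions we actually used for each z/OS image.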

We now look at a data transfer operation. For example, Linux LNXRH2 wants to send a packet to z/OS SC30 over HiperSockets. According to our configuration in Figure 1-3 on page 8, both servers are connected to HiperSockets F4.

The steps executed for the send operation are:

1. Linux LNXRH2 performs a send operation (SIGA instruction), passing the destination IP address.

2. The HiperSockets function searches the IP address lookup table F4 (see Table 1-4 on page 19) for the routing destination IP address 192.0.1.4, which is the IP address of z/OS SC30. The IP address lookup table F4 represents the HiperSockets to which the sending TCP/IP stack is connected. Direct routing across different HiperSockets is not supported.

3. The HiperSockets function finds the entry for IP address 192.0.1.4 in the IP address lookup table F4.

4. The hardware copies the data from the Linux LNXRH2 send queue to the z/OS (SC30) receive queue.

5. Optionally, the hardware initiates a Program Controlled Interrupt (PCI) to inform the destination (SC30) that data has arrived. In this case, optionally means that the hardware can either deliver an interrupt to the receiving side, or the operating system on the receiving side can work with dispatcher polling. This option is negotiated between the hardware and the operating system at the time the HiperSockets interface is started.

Table 1-4 shows the IP address lookup table for HiperSockets CHPID F4.

Table 1-4 IP address lookup table for HiperSockets CHPID F4

  IP address   Logical partition name   Device addresses   Input/Output queue pointer
  192.0.1.1    A12                      E800-E801          E802
  192.0.1.2    A12                      E804-E805          E806
  192.0.1.3    A12                      E808-E809          E80A
  192.0.1.4    A23                      E800-E801          E802
  192.0.1.5    A24                      E800-E801          E802
  192.0.1.6    A25                      E800-E801          E802

    Note: The routing destination IP address is the next-hop IP address in the link header. In our example, the next-hop IP address and the destination IP address in the IP packet header are identical, because the next-hop is the target host. In a case where the next-hop is a router stack, the next-hop IP address in the link header and the destination IP address in the IP packet header are not identical.

    Note: If an entry is not found, the packet is discarded. This is considered an error condition.
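As a quick, hedged verification of this flow in our configuration (addresses from Table 1-3), a single ping from LNXRH2 to SC30 exercises exactly the lookup and copy steps described above:

   # On LNXRH2 (192.0.1.2): send one ICMP echo request to SC30 over HiperSockets F4
   ping -c 1 192.0.1.4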



Chapter 2. Hardware definitions

In this chapter, we describe how to update your system hardware configuration with CHPID and I/O device definitions to support HiperSockets. We discuss various configuration considerations, and then go through the step-by-step Hardware Configuration Definition (HCD) procedure we used to configure our environment.

This chapter contains the following:
- System configuration considerations
- HCD definitions


2.1 System configuration considerations

HiperSockets is defined as a channel connection with channel path type IQD. Even though there is no physical attachment associated with a HiperSockets CHPID, the CHPID number cannot be used for other I/O connections.

These are the hardware configuration rules for z9 EC, z9 BC, z990, and z890:

- HiperSockets requires the definition of a CHPID as type=IQD. This CHPID is treated like any other CHPID, and is counted as one of the available channels within the z9 EC, z9 BC, z990, and z890 servers.

- With the introduction of the new channel subsystem, transparent sharing of HiperSockets is possible with the extension to the Multiple Image Facility (MIF). HiperSockets channels can be configured to multiple Logical Channel Subsystems (LCSS). They are transparently shared by any or all of the configured logical partitions without regard to the LCSS to which the partition is configured. A HiperSockets channel can be defined as spanned in HCD and has no PCHID association.

- Up to 64 control units can be defined on each IQD CHPID. If more than one control unit is defined for an IQD CHPID, a logical address is required for each control unit. Control unit logical addresses can range from X'00' to X'3F'.

- Up to 256 I/O devices can be connected to an IQD control unit. Each TCP/IP connection to a HiperSockets requires three devices: one control read, one control write, and one data exchange device. See 1.3, HiperSockets mode of operation on page 5 for more details.

- The total number of all HiperSockets I/O devices may not exceed 12288.

- When you define an IQD CHPID, you have the option to specify the maximum frame size to be used by the HiperSockets. This is done through the CHPARM parameter. Valid CHPARM parameter values and their resulting maximum frame sizes are shown in Table 2-1. The selected maximum frame size for a HiperSockets is used by the TCP/IP stacks to define the Maximum Transmission Unit (MTU) size for the interface.

Table 2-1 IQD CHPID maximum frame size and MTU size

  CHPARM=value   Maximum frame size   Maximum Transmission Unit size
  00 (default)   16 KB                8 KB
  40             24 KB                16 KB
  80             40 KB                32 KB
  C0             64 KB                56 KB

Note: With ICP IOCP for z9 EC, z9 BC, z990, and z890, the optional CHPARM keyword replaces the optional keyword OS. Although the OS keyword is currently accepted, IOCP will disallow it in the future.

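To show how these rules translate into IOCP statements, here is a hedged sketch (the control unit and device numbers are illustrative, and column-72 continuation formatting is omitted for readability):

   * IQD CHPID with CHPARM=40 (24 KB frames, 16 KB MTU), shared in CSS 2
   CHPID PATH=(CSS(2),F4),SHARED,TYPE=IQD,CHPARM=40
   * One IQD control unit on the CHPID
   CNTLUNIT CUNUMBR=E800,PATH=((CSS(2),F4)),UNIT=IQD
   * 16 IQD devices; each TCP/IP connection uses three of them
   IODEVICE ADDRESS=(E800,16),CUNUMBR=(E800),UNIT=IQD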

2.2 HCD definitions

As with all channel-attached devices, an IQD CHPID must be defined with a channel path, a control unit, and I/O devices in your system configuration.

This section shows the steps needed to define an IQD CHPID, using the z/OS Hardware Configuration Definition (HCD) tool. We have included examples of the following definitions:
- Spanned channel path
- Control unit
- Devices

2.2.1 Channel path definitions

The process of defining a HiperSockets channel, control unit, and device is similar to defining any other set of channel, control unit, and device on z/OS using the Hardware Configuration Definition (HCD) ISPF application. The differences are:
- During channel definition, a screen is displayed to set the maximum frame size for the HiperSockets channel.
- A HiperSockets channel has no associated PCHID.
- A minimum of three IQD devices must be defined for an IQD control unit.

Follow your installation's procedure to access the HCD main screen and enter the appropriate IODF file to begin the definition process.

1. Starting from the HCD main menu screen, select 1, as displayed in Figure 2-1.

    Figure 2-1 HCD main menu screen

  z/OS V1.7 HCD
  Command ===> ________________________________________________________________

                              Hardware Configuration

  Select one of the following.

  1  1. Define, modify, or view configuration data
     2. Activate or process configuration data
     3. Print or compare configuration data
     4. Create or view graphical configuration report
     5. Migrate configuration data
     6. Maintain I/O definition files
     7. Query supported hardware and installed UIMs
     8. Getting started with this dialog
     9. What's new in this release

  For options 1 to 5, specify the name of the IODF to be used.

  I/O definition file . . . 'SYS6.IODF65.WORK' +

  F1=Help  F2=Split  F3=Exit  F4=Prompt  F9=Swap  F12=Cancel  F22=Command

2. On the next screen, entitled Define, Modify, or View Configuration Data, select Option 3 - Processors, as shown in Figure 2-2.

Figure 2-2 Define, Modify, or View Configuration Data panel

  z/OS V1.7 HCD
  +------------ Define, Modify, or View Configuration Data ------------+
  |                                                                    |
  |  Select type of objects to define, modify, or view data.           |
  |                                                                    |
  |  3_ 1. Operating system configurations                             |
  |        consoles                                                    |
  |        system-defined generics                                     |
  |        EDTs                                                        |
  |          esoterics                                                 |
  |        user-modified generics                                      |
  |     2. Switches                                                    |
  |        ports                                                       |
  |        switch configurations                                       |
  |          port matrix                                               |
  |     3. Processors                                                  |
  |        channel subsystems                                          |
  |        partitions                                                  |
  |        channel paths                                               |
  |     4. Control units                                               |
  |     5. I/O devices                                                 |
  |                                                                    |
  |  F1=Help  F2=Split  F3=Exit  F9=Swap  F12=Cancel                   |
  +--------------------------------------------------------------------+

3. Figure 2-3 shows the Processor List defined in the IODF data set. Select the processor to update and press Enter.

Figure 2-3 Processor List

   Goto  Filter  Backup  Query  Help
  ------------------------------------------------------------------------------
                              Processor List            Row 1 of 7 More:    >
  Command ===> _______________________________________________ Scroll ===> CSR

  Select one or more processors, then press Enter. To add, use F11.

  / Proc. ID  Type + Model + Mode+ Serial-# +  Description
  _ ISGSYN    2064   1C7     LPAR  __________  ________________________________
  _ ISGS11    2064   1C7     LPAR  __________  ________________________________
  _ P000STP1  2084   C24     LPAR  01534A2084  ________________________________
  _ P000STP2  2094   S08     LPAR  0BAD4E2094  ________________________________
  s SCZP101   2094   S18     LPAR  02991E2094  ________________________________
  _ SCZP801   2064   1C7     LPAR  010ECB2064  ________________________________
  _ SCZP901   2084   C24     LPAR  026A3A2084  ________________________________

4. On a System z processor, this displays the Channel Subsystem List. Select the channel subsystem where the HiperSockets channel will be defined, as shown in Figure 2-4 on page 25.

Figure 2-4 Channel Subsystem List

   Goto  Backup  Query  Help
  ------------------------------------------------------------------------------
                          Channel Subsystem List           Row 1 of 3
  Command ===> _______________________________________________ Scroll ===> CSR

  Select one or more channel subsystems, then press Enter. To add, use F11.

  Processor ID . . . : SCZP101

      CSS      Devices in SS0       Devices in SS1
  /   ID   Maximum +  Actual    Maximum +  Actual   Description
  _   0    65280      14022     0          0        ________________________________
  _   1    65280      14096     0          0        ________________________________
  s   2    65280      14041     65535      1        ________________________________
  ******************************* Bottom of data ********************************

This displays the Channel Path List screen (see Figure 2-5).

Figure 2-5 Channel Path List

   Goto  Filter  Backup  Query  Help
  ------------------------------------------------------------------------------
                            Channel Path List           Row 1 of 138 More:    >
  Command ===> _______________________________________________ Scroll ===> CSR

  Select one or more channel paths, then press Enter. To add use F11.

  Processor ID . . . . : SCZP101
  Configuration mode . : LPAR
  Channel Subsystem ID : 2

                                     DynEntry Entry +
  / CHPID Type+ Mode+  Switch +  Sw Port  Con Mngd  Description
  _ 00    OSD   SPAN   __        __ __        No    1000BaseT
  _ 01    OSD   SPAN   __        __ __        No    1000BaseT

5. Press F11 to add a channel path.

6. Enter all the required information, as shown in Figure 2-6. In our scenario, we set:
- Channel path ID to F4.
- Channel path type to IQD (required for a HiperSockets channel).
- Operation mode to SPAN, because the IQD CHPID is shared among logical partitions across channel subsystems.
- All other parameters to default. These are not relevant to IQD CHPIDs.

Figure 2-6 Add a channel path

                               Add Channel Path

  Specify or revise the following values.

  Processor ID . . . . : SCZP101
  Configuration mode . : LPAR
  Channel Subsystem ID : 2

  Channel path ID . . . . F4  +       PCHID . . . ___
  Number of CHPIDs . . . . 1
  Channel path type . . . IQD +
  Operation mode . . . . . SPAN +
  Managed . . . . . . . . No  (Yes or No)   I/O Cluster ________ +
  Description . . . . . . ________________________________

  Specify the following values only if connected to a switch:
  Dynamic entry switch ID __ +  (00 - FF)
  Entry switch ID . . . . __ +
  Entry port . . . . . . . __ +

7. When the definitions are completed, press Enter. If the definition process has been started from a production IODF, a screen appears allowing you to create a work IODF. Enter the appropriate information to create a work IODF. The next screen that appears is Specify Maximum Frame Size, as shown in Figure 2-7 on page 27; select a maximum frame size.

8. Press F4 for a list of the four possible options. We chose the default size of 16 KB. Because this is the default, no OS value will appear in the IOCP; maximum frame sizes other than 16 KB do appear as OS values in the IOCP. Press Enter.

Important: The Maximum Frame Size is directly related to the Maximum Transmission Unit used by TCP/IP. See Table 2-1 on page 22 for the corresponding values.

Figure 2-7 Define the maximum frame size

  +----------- Specify Maximum Frame Size -----------+
  |                                                   |
  |  Specify or revise the value below.               |
  |                                                   |
  |  Maximum frame size                               |
  |  in KB . . . . . . . 16+                          |
  |                                                   |
  |  F1=Help   F2=Split   F3=Exit   F4=Prompt         |
  |  F5=Reset  F9=Swap    F12=Cancel                  |
  +---------------------------------------------------+

9. Complete the Access List for the partitions sharing the channel and press Enter. In our example, we defined the IQD CHPID as shared by one logical partition on channel subsystem 1 (see Figure 2-8) and three logical partitions on channel subsystem 2 (see Figure 2-9 on page 28).

Figure 2-8 Define Access List - screen 1

                             Define Access List           Row 21 of 45
  Command ===> _________________________________________ Scroll ===> CSR

  Select one or more partitions for inclusion in the access list.

  Channel subsystem ID : 2
  Channel path ID  . . : F4         Channel path type  . : IQD
  Operation mode . . . : SPAN       Number of CHPIDs . . : 1

  / CSS ID  Partition Name  Number  Usage  Description
  _ 1       A1F             F       CF     Trainer Sysplex FACIL06
  _ 1       A11             1       OS     Testplex SC75
  / 1       A12             2       OS     VMLINUX7
  _ 1       A13             3       OS     Trainer Sysplex #@$2
  _ 1       A14             4       OS     Trainer Sysplex #@$3
  _ 1       A15             5       OS     SC58
  _ 1       A16             6       OS
  _ 1       A17             7       OS     VMLINUX1
  _ 1       A18             8       OS     WTSCPLX1 SC53
  _ 1       A19             9       OS     WTSCPLX1 SC47

Figure 2-9 Define Access List - screen 2

                             Define Access List           Row 36 of 45
  Command ===> _________________________________________ Scroll ===> CSR

  Select one or more partitions for inclusion in the access list.

  Channel subsystem ID : 2
  Channel path ID  . . : F4         Channel path type  . : IQD
  Operation mode . . . : SPAN       Number of CHPIDs . . : 1

  / CSS ID  Partition Name  Number  Usage  Description
  _ 2       A2F             F       CF     WTSCPLX5 CF39
  _ 2       A21             1       OS     SC76
  _ 2       A22             2       OS     VMLINUX3
  / 2       A23             3       OS     WTSCPLX5 SC30
  / 2       A24             4       OS     WTSCPLX5 SC31
  / 2       A25             5       OS     WTSCPLX5 SC32
  _ 2       A26             6       OS     SC60
  _ 2       A27             7       OS
  _ 2       A28             8       OS     WTSCPLX1 SC69
  _ 2       A29             9       OS     WTSCPLX1 SCxx

Now we return to the Channel Path List, with CHPID F4 defined, as shown in Figure 2-10.

Figure 2-10 Channel Path List after the CHPID is defined

                            Channel Path List          Row 129 of 138 More:    >
  Command ===> _______________________________________________ Scroll ===> CSR

  Select one or more channel paths, then press Enter. To add use F11.

  Processor ID . . . . : SCZP101
  Configuration mode . : LPAR
  Channel Subsystem ID : 2

                                     DynEntry Entry +
  / CHPID Type+ Mode+  Switch +  Sw Port  Con Mngd  Description
  _ F2    IQD   SPAN   __        __ __        No    ________________________________
  _ F3    IQD   SPAN   __        __ __        No    ________________________________
  _ F4    IQD   SPAN   __        __ __        No    ________________________________
  _ F5    IQD   SPAN   __        __ __        No    ________________________________
  _ F6    IQD   SPAN   __        __ __        No    ________________________________
  _ F7    IQD   SPAN   __        __ __        No    ________________________________
  _ FC    IQD   SHR    __        __ __        No    ________________________________
  _ FD    IQD   SHR    __        __ __        No    ________________________________
  _ FE    IQD   SHR    __        __ __        No    ________________________________
  _ FF    IQD   SHR    __        __ __        No    ________________________________

10. Press F20 to scroll to the right to verify the access list (see Figure 2-11 on page 29 and Figure 2-12 on page 29).

Figure 2-11 Verify the Channel Path Access List - screen 1

                            Channel Path List          Row 131 of 138 More: < >
  Command ===> _______________________________________________ Scroll ===> CSR

  Select one or more channel paths, then press Enter. To add, use F11.

  Channel Subsystem ID : 2
  1=A21 2=A22 3=A23 4=A24 5=A25 6=A26 7=A27 8=A28
  9=A29 A=A2A B=A2B C=A2C D=A2D E=A2E F=A2F

                      I/O Cluster --------- Partitions 2x -----
  / CHPID Type+ Mode+ Mngd Name +   1 2 3 4 5 6 7 8 9 A B C D E F  PCHID
  _ F4    IQD   SPAN  No   ________ _ _ a a a _ _ _ _ _ _ _ _ _ _  ___
  _ F5    IQD   SPAN  No   ________ _ _ a a a _ _ _ _ _ _ _ _ _ _  ___
  _ F6    IQD   SPAN  No   ________ _ _ a a a _ _ _ _ _ _ _ _ _ _  ___
  _ F7    IQD   SPAN  No   ________ _ _ a a a _ _ _ _ _ _ _ _ _ _  ___
  _ FC    IQD   SHR   No   ________ a a a a a a a a a a a a _ _ _  ___
  _ FD    IQD   SHR   No   ________ a a a a a a a a a a a a _ _ _  ___
  _ FE    IQD   SHR   No   ________ a a a a a a a a a a a a _ _ _  ___
  _ FF    IQD   SHR   No   ________ a a a a a a a a a a a a _ _ _  ___

Figure 2-12 Verify the Channel Path Access List - screen 2

                            Channel Path List          Row 131 of 138 More: <
  Command ===> _______________________________________________ Scroll ===> CSR

  Select one or more channel paths, then press Enter. To add, use F11.

  Channel Subsystem ID : 2
  1=A11 2=A12 3=A13 4=A14 5=A15 6=A16 7=A17 8=A18
  9=A19 A=A1A B=A1B C=A1C D=A1D E=A1E F=A1F

                      I/O Cluster --------- Partitions 1x -----
  / CHPID Type+ Mode+ Mngd Name +   1 2 3 4 5 6 7 8 9 A B C D E F  PCHID
  _ F4    IQD   SPAN  No   ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _  ___
  _ F5    IQD   SPAN  No   ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _  ___
  _ F6    IQD   SPAN  No   ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _  ___
  _ F7    IQD   SPAN  No   ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _  ___
  _ FC    IQD   SHR   No   ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  ___
  _ FD    IQD   SHR   No   ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  ___
  _ FE    IQD   SHR   No   ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  ___
  _ FF    IQD   SHR   No   ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  ___

Note: Figure 2-11 and Figure 2-12 show that no PCHID association is required for an IQD CHPID.
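For reference, here is a hedged sketch of the IOCP CHPID statement that these HCD definitions would generate (column-72 continuation formatting is omitted for readability; the partition lists are taken from the access lists above):

   * Spanned IQD CHPID F4: shared by A12 in CSS 1 and A23, A24, A25 in CSS 2
   CHPID PATH=(CSS(1,2),F4),SHARED,
         PARTITION=((CSS(1),(A12)),(CSS(2),(A23,A24,A25))),TYPE=IQD

Note that, unlike other channel types, no PCHID keyword appears in the statement.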

2.2.2 Control unit definitions

Starting at the Channel Path List screen, select the CHPID to get to the control unit list, as shown in Figure 2-13.

    Figure 2-13 Select a control unit list

11. Press F11 to add a control unit; the screen shown in Figure 2-14 appears.

    Figure 2-14 Add a control unit

12. Enter the required information, as shown in Figure 2-15 on page 31, and then press Enter. In our example, we set:
- Control unit number to E800
- Control unit type to IQD (required for a HiperSockets control unit)

                            Channel Path List          Row 130 of 138 More:    >
  Command ===> _______________________________________________ Scroll ===> CSR