Sun Cluster 3 Configuration Guide

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only

The information contained in this document is considered Sun Proprietary/Confidential: Internal Use Only - Sun Employees and Authorized Resellers. Copyright laws must be respected and therefore this document must not be distributed to end user customers.

The information contained in this document may be used by the employees of Sun Authorized Resellers to create more informed positioning and proposals of Sun's products and strategies, and to present more convincing and forceful arguments when selling solutions on the Sun platform.

October 13, 2009


Contents

Preface ix

1. Sun Cluster 3 Introduction 1

2. Sun Cluster 3 Topologies 3

Clustered Pairs 4

N+1 (Star) 5

Pair + N 6

N*N (Scalable) 7

Diskless Cluster Configurations 8

Single-Node Cluster Configurations 9

3. Server Configuration 11

Boot Device for a Server 15

Heterogeneous Servers in Sun Cluster 15

Generic Server Configuration Rules 15

SPARC Servers 16

x64 Servers 25

4. Clusters with Heterogeneous Servers 35

Generic Rules 35


Mixing Different Types of Servers in a Cluster 36

Sharing Storage Among Different Types of Servers in a Cluster 36

5. Storage Overview 39

Local Storage (Single-Hosted Storage) 39

Heterogeneous Storage in Sun Cluster 39

Shared Storage (Multi-Hosted Storage) 40

Third-Party Storage 58

6. Fibre Channel Storage Support 59

SAN Configuration Support 59

Sun StorEdge A3500FC System 63

Sun StorEdge A5x00 Array 66

Sun StorEdge T3 Array (Single Brick) 74

Sun StorEdge T3 Array (Partner Pair) 78

Sun StorageTek 2540 RAID Array 81

Sun StorEdge 3510 RAID Array 83

Sun StorEdge 3511 RAID Array 88

Sun StorEdge 3910/3960 System 90

Sun StorEdge 6120 Array 92

Sun StorEdge 6130 Array 94

Sun StorageTek 6140 Array 97

Sun Storage 6180 Array 99

Sun StorEdge 6320 System 100

Sun StorageTek 6540 Array 103

Sun Storage 6580/6780 Arrays 105

Sun StorEdge 6910/6960 Arrays 107

Sun StorEdge 6920 System 109

Sun StorEdge 9910/9960 Arrays 111


Sun StorEdge 9970/9980 115

Sun StorageTek 9985/9990 119

Sun StorageTek 9985V/9990V 122

7. SCSI Storage Support 127

Netra st D130 Array 127

Netra st A1000 Array 128

Netra st D1000 Array 129

Sun StorEdge MultiPack 131

Sun StorEdge D2 Array 132

Sun StorEdge S1 Array 134

Sun StorEdge A1000 Array 137

Sun StorEdge D1000 Array 138

Sun StorEdge A3500 Array 140

Sun StorEdge 3120 JBOD Array 142

Sun StorEdge 3310 JBOD Array 148

Sun StorEdge 3310 RAID Array 153

Sun StorEdge 3320 JBOD Array 157

Sun StorEdge 3320 RAID Array 162

8. SAS Storage Support 167

Sun StorageTek 2530 RAID Array 167

Sun Storage J4200 and J4400 JBOD Arrays 169

Sun Storage J4400 JBOD Array 171

9. Ethernet Storage Support 173

Sun StorageTek 2510 RAID Array 173

Sun StorageTek 5000 NAS Appliance 175

Sun StorageTek 5210 NAS Appliance 177

Sun StorageTek 5220 NAS Appliance 177


Sun StorageTek 5310 NAS Appliance 178

Sun StorageTek 5320 NAS Appliance 178

Sun StorageTek 5320 NAS Cluster Appliance 178

Sun Storage 7000 Unified Storage System 179

Sun Storage 7110 Unified Storage System 181

Sun Storage 7210 Unified Storage System 181

Sun Storage 7310 Unified Storage System 181

Sun Storage 7410 Unified Storage System 181

10. Network Configuration 183

Cluster Interconnect 183

Public Network 202

11. Software Configuration 219

Solaris Releases 219

Application Services 222

Co-Existence Software 249

Restriction on Applications Running in Sun Cluster 250

Data Configuration 250

RAID in Sun Cluster 3 258

Support for Virtualized OS Environments 259

12. Managing Sun Cluster 3 263

Console Access 263

Cluster Administration and Monitoring 263

13. Sun Cluster 3 Ordering Information 265

Overview of Order Flow Chart 265

Order Flow Chart 266

Agents Edist Download Process 286


A. Campus Clusters 287

Number of Nodes 287

Campus Cluster Room Configurations 287

Applications 288

Guideline for Specs Based Campus Cluster Configurations 288

TrueCopy Support 291

SRDF Support 292

B. Sun Cluster Geographic Edition 295

C. Third-Party Agents 311

D. Revision History 313


Preface

This document is designed to be a high-level pre-sales guide. Given a set of customer requirements, the reader should be able to configure and order a Sun Cluster. This is a growing document: as new applications are supported, new releases are qualified, and newer hardware is introduced, this document is modified. Please make sure you have the latest version. Unless otherwise noted, support for Sun Cluster 3 encompasses the Sun Cluster 3.0, Sun Cluster 3.1, and Sun Cluster 3.2 versions.


Related Documentation

TABLE P-1 Sun Cluster 3.0 User Documentation

Title                                                                      Part Number
Sun Cluster 3.0 12/01 Software Installation Guide                          816-2022
Sun Cluster 3.0 12/01 Hardware Guide                                       816-2023
Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide   816-2024
Sun Cluster 3.0 12/01 Data Services Developers Guide                       816-2025
Sun Cluster 3.0 12/01 System Administration Guide                          816-2026
Sun Cluster 3.0 12/01 Concepts                                             816-2027
Sun Cluster 3.0 Error Messages Manual                                      816-2028
Sun Cluster 3.0 12/01 Release Notes                                        816-2029
Sun Cluster 3.0 12/01 Release Notes Supplement                             816-3753

TABLE P-2 Sun Cluster 3.1 User Documentation

Title                                                              Part Number
Sun Cluster 3.1 Software Installation Guide                        817-6543
Sun Cluster 3.1 Hardware Administration Guide                      817-0168
Sun Cluster 3.1 Data Services Planning and Administration Guide    817-6564
Sun Cluster 3.1 Data Services Developers Guide                     817-6555
Sun Cluster 3.1 System Administration Guide                        817-6546
Sun Cluster 3.1 Error Messages Guide                               817-6558
Sun Cluster 3.1 Release Notes Supplement                           816-3381


TABLE P-3 Sun Cluster 3.2 User Documentation

Title                                                                        Part Number
Sun Cluster Software Installation Guide for Solaris OS                      819-2970
Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS         819-2993
Sun Cluster Data Services Planning and Administration Guide for Solaris OS  819-2974
Sun Cluster Data Services Developer's Guide for Solaris OS                  819-2972
Sun Cluster System Administration Guide for Solaris OS                      819-2971
Sun Cluster Concepts Guide for Solaris OS                                   819-2969
Sun Cluster Error Messages Guide for Solaris OS                             819-2973
Sun Cluster 3.2 Release Notes for Solaris OS                                819-6611

Notes

Sun Cluster 3 poses restrictions in addition to those imposed by the base hardware and software components. Under no circumstances does Sun Cluster 3 relax the restrictions imposed by the base hardware and software components. It is also important to understand what we mean by REQUIRED and RECOMMENDED.

Configuration rules stated as REQUIRED must be followed to configure a valid Sun Cluster. It is REQUIRED that a configuration has no single point of failure that could bring the entire cluster down (for example, having mirrored storage).

Configuration rules stated as RECOMMENDED need not necessarily be followed to configure a valid Sun Cluster. It is RECOMMENDED that a configuration has redundancy within the node, so that if a component fails, the backup component can be used within the node without initiating application failover to the backup node (for example, redundant network adapters in a NAFO group prevent application failover in case of failure of the primary network adapter with Sun Cluster 3.0).


CHAPTER 1

Sun Cluster 3 Introduction

Sun Cluster 3 extends Solaris with the cluster framework, enabling the use of core Solaris services such as file systems, devices, and networks seamlessly across a tightly coupled cluster while maintaining full Solaris compatibility for existing applications.

Key Benefits

■ Higher/near-continuous availability of existing applications based on Solaris services such as highly available file system and network services.
■ Integrates/extends the benefits of Solaris scalability to dotCOM application architectures by providing scalable and available file and network services for horizontal applications.
■ Ease of management of the cluster platform by presenting a simple unified management view of shared system resources.

A Typical Sun Cluster 3 Configuration

A typical Sun Cluster configuration has the following components.

Hardware Components

■ Servers with local storage (storage devices hosted by one node).
■ Shared storage (storage devices hosted by more than one node).
■ Cluster interconnect for private communication among the cluster nodes.
■ Public network interfaces for connectivity to the outside world.
■ Administrative workstation for managing the cluster.


In order to be a supported Sun Cluster configuration, the configuration of hardware components in a cluster must first be supported by the corresponding base product groups for each hardware component. For example, in order for a Sun Cluster configuration composed of two Sun Fire V880 servers connected to two StorEdge 3510 storage devices to be supported, the V880 and SE 3510 base product groups must support connecting a SE 3510 to a V880 in a standalone configuration.

Software Components

■ Solaris Operating Environment running on each cluster node.
■ Sun Cluster 3 software running on each cluster node.
■ Data services - applications with agents and fault monitors - running on one or more cluster nodes.
■ Cluster file system providing global access to the application data.
■ Sun Management Center running on the administrative workstation providing ease of management.

FIGURE 1-1 A Typical Sun Cluster 3 Configuration (logical diagram: cluster nodes attached to shared storage and joined by redundant cluster interconnects, with public network connectivity and an administrative workstation running Sun Management Center providing console access; physical connections and number of units are dependent on the storage/interconnect used)


CHAPTER 2

Sun Cluster 3 Topologies

A topology is the connection scheme that connects the cluster nodes to the storage platforms used in the cluster. Sun Cluster supports any topology that adheres to the following guidelines:

■ Sun Cluster supports a maximum of sixteen nodes in a cluster, regardless of the storage configurations that are implemented.

■ A shared storage device can connect to as many nodes as the storage device supports.

■ There are common redundant interconnects between all nodes of the cluster.

Shared storage devices do not need to connect to all nodes of the cluster. However, these storage devices must connect to at least two nodes.

While Sun Cluster does not require you to configure a cluster by using specific topologies, the following topologies are described to provide the vocabulary to discuss a cluster's connection scheme. These topologies are typical connection schemes:

■ "Clustered Pairs" on page 4
■ "N+1 (Star)" on page 5
■ "Pair + N" on page 6
■ "N*N (Scalable)" on page 7
■ "Diskless Cluster Configurations" on page 8
■ "Single-Node Cluster Configurations" on page 9

For more information on these topologies, see the definitions and diagrams that follow.


Clustered Pairs

FIGURE 2-1 Clustered Pair Topology (logical diagram: four nodes in two pairs, each pair attached to its own shared storage, all joined by redundant cluster interconnects; physical connections and number of units are dependent on the storage/interconnect used)

Clustered Pair Features

■ Nodes are configured in pairs, i.e., possible configurations include two, four, six, or eight nodes.
■ Each pair has shared storage, connected to both the nodes of the pair.
■ A maximum of eight nodes is supported.

Clustered Pair Benefits

■ All nodes are part of the same cluster configuration, reducing cost and simplifying administration.
■ Since each pair has its own shared storage, no one node needs to be of significantly higher capacity than others.
■ The cost of the cluster interconnect is spread across all pairs.



N+1 (Star)

FIGURE 2-2 N+1 Topology (logical diagram: three primary nodes and one backup node joined by redundant cluster interconnects, with each storage device dual-hosted; physical connections and number of units are dependent on the storage/interconnect used)

N+1 Features

■ All shared storage is dual-hosted, and physically attached to exactly two cluster nodes.
■ A single server is designated as backup for all other nodes. The other nodes are called primary nodes.
■ A maximum of eight nodes is supported.

N+1 Benefits

The cost of the backup node is spread over all primary nodes.



N+1 Limitations

The capacity of the backup node is the limiting factor in growing the N+1 cluster. For example, in a 4-node E6x00 cluster, the growth of the cluster is limited by the number of slots available, for population with CPU/IO boards, in the backup node. Hence, the backup node should be equal to or larger in capacity than the largest primary node.

Pair + N

FIGURE 2-3 Pair + N Topology (N = 2 here) (logical diagram: shared storage attached to a single pair of nodes, with two additional nodes reached over the redundant cluster interconnects; physical connections and number of units are dependent on the storage/interconnect used)

Pair + N Features

■ All shared storage is dual-hosted and physically attached to a single pair of nodes.
■ A maximum of 16 SPARC nodes or 8 x64 nodes are supported.

Pair + N Benefits

Applications can access data from nodes which are not directly connected to the storage device.



Pair + N Limitations

There may be heavy data traffic on the cluster interconnect.

N*N (Scalable)

FIGURE 2-4 N*N (Scalable) Topology (N = 4 here) (logical diagram: every node attached to the shared storage and to the redundant cluster interconnects; physical connections and number of units are dependent on the storage/interconnect used)

N*N (Scalable) Features

■ Shared storage is connected to every node in the cluster.
■ All nodes have access to the same LUNs.
■ A maximum of 16 SPARC nodes or 8 x64 nodes are supported.

The maximum number of nodes sharing a LUN is specified by the shared storage device. Refer to the respective shared storage device section for the maximum number of nodes.



N*N (Scalable) Benefits

■ This topology enables support of up to 16-node Oracle Parallel Server/Real Application Cluster configurations. See "Oracle Real Application Cluster (OPS/RAC)" on page 245. OPS/RAC requires connectivity of shared storage to every node running an OPS/RAC instance.

■ Sun Cluster 3 allows failover of an HA/scalable application instance from any node to any other node in the cluster. This topology provides connectivity from every node to the shared storage device. Using this topology, in the event of a failover, the application can use the local path from the node to the storage device, rather than going through the interconnect.

N*N (Scalable) Limitations

■ The maximum number of N*N nodes supported depends upon the shared storage device. Some storage products support shared storage to only 2 nodes, others up to 8 or more. Refer to the shared storage device sections for details.

■ The data service may have restrictions on the maximum nodes supported. See the appropriate software sections for details.

Diskless Cluster Configurations

FIGURE 2-5 Diskless Cluster Configuration (N = 4 here) (logical diagram: four nodes joined only by redundant cluster interconnects, with no shared storage; physical connections and number of units are dependent on the storage/interconnect used)

Diskless Cluster Features

■ Shared storage is not part of this configuration.
■ A maximum of eight nodes is supported.



Diskless Cluster Benefits

This configuration allows clusters without shared storage to be supported. These configurations are ideal for deploying applications that require no shared storage.

Diskless Cluster Recommendations

For increased availability, the addition of a quorum device is recommended. The minimum number of nodes in a diskless cluster is 2 with a Quorum Server.
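For illustration, the following is a minimal sketch of configuring a Quorum Server as the quorum device of a diskless cluster with the Sun Cluster 3.2 command set; the host name, port number, and quorum device name are placeholder assumptions, and the authoritative procedure is in the Sun Cluster documentation.

    # On the quorum server host (a machine outside the cluster), start the
    # server instance defined in /etc/scqsd/scqsd.conf for port 9000:
    /usr/cluster/bin/clquorumserver start 9000

    # On one cluster node, register the quorum server as a quorum device:
    /usr/cluster/bin/clquorum add -t quorum_server \
        -p qshost=qserver.example.com -p port=9000 qd1

    # Confirm the quorum device is online and vote counts are as expected:
    /usr/cluster/bin/clquorum status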

Single-Node Cluster Configurations

Single-Node Cluster Features

One node or domain comprises the entire cluster.

Single-Node Cluster Benefits

This configuration allows a single node to run as a functioning cluster deployment, offering users application management functionality, application restart functionality, as well as the ability to start a cluster with one node and grow the size of the cluster as time progresses. HA storage is not required with single-node clusters.

Single-Node Cluster Limitations

■ Requires Sun Cluster 3.1 version 10/03 or later.
■ True failover is impossible due to the presence of only one node in the cluster.

Single-Node Cluster Recommendations

Single-node clusters are ideal for users learning how to manage a cluster, observing cluster behavior (for agent development purposes), or beginning a cluster with the intention of adding nodes as time goes on.


CHAPTER 3

Server Configuration

Table 3-1 and Table 3-2 below list the servers supported with Sun Cluster 3. All other components, like storage and network interfaces, may not be supported with all the servers. Refer to the other chapters to ensure you have a supported Sun Cluster configuration.

TABLE 3-1 Supported SPARC Servers

Servers

Sun Blade T6300 Server Module

Sun Blade T6320 Server Module

Sun Blade T6340 Server Module

Sun Enterprise 10K

Sun Enterprise 220R

Sun Enterprise 250

Sun Enterprise 3x00

Sun Enterprise 420R

Sun Enterprise 450

Sun Enterprise 4x00

Sun Enterprise 5x00

Sun Enterprise 6x00

Sun Fire 12K

Sun Fire 15K

Sun Fire 280R

Sun Fire 3800

Sun Fire 4800


Sun Fire 4810

Sun Fire 6800

Sun Fire E20K

Sun Fire E25K

Sun Fire E2900

Sun Fire E4900

Sun Fire E6900

Sun Fire T1000

Sun Fire T2000

Sun Fire V120

Sun Fire V125

Sun Fire V1280

Sun Fire V210

Sun Fire V215

Sun Fire V240

Sun Fire V245

Sun Fire V250

Sun Fire V440

Sun Fire V445

Sun Fire V480

Sun Fire V490

Sun Fire V880

Sun Fire V890

Sun Netra 120

Sun Netra 1280

Sun Netra 1290

Sun Netra 20

Sun Netra 210

Sun Netra 240 AC/DC

Sun Netra 440 AC/DC


Sun Netra CT 900 CP3010

Sun Netra CT 900 CP3060

Sun Netra CT 900 CP3260

Sun Netra t 1120

Sun Netra t 1125

Sun Netra t 1400

Sun Netra t 1405

Sun Netra T1 AC200/DC200

Sun Netra T2000

Sun Netra T5220

Sun Netra T5440

Sun SPARC Enterprise M3000

Sun SPARC Enterprise M4000

Sun SPARC Enterprise M5000

Sun SPARC Enterprise M8000

Sun SPARC Enterprise M9000

Sun SPARC Enterprise T1000

Sun SPARC Enterprise T2000

Sun SPARC Enterprise T5120

Sun SPARC Enterprise T5140

Sun SPARC Enterprise T5220

Sun SPARC Enterprise T5240

Sun SPARC Enterprise T5440

TABLE 3-2 Supported x64 Servers

Servers

Sun Blade X6220

Sun Blade X6240

Sun Blade X6250


Sun Blade X6270 Server Module

Sun Blade X6440 Server Module

Sun Blade X6450 Server Module

Sun Blade X8400 Server Module

Sun Blade X8420 Server Module

Sun Blade X8440 Server Module

Sun Blade X8450 Server Module

Sun Fire V20z

Sun Fire V40z

Sun Fire X2100 M2

Sun Fire X2200 M2

Sun Fire X4100

Sun Fire X4100 M2

Sun Fire X4140

Sun Fire X4150

Sun Fire X4170

Sun Fire X4200

Sun Fire X4200 M2

Sun Fire X4240

Sun Fire X4250

Sun Fire X4270

Sun Fire X4275

Sun Fire X4440

Sun Fire X4450

Sun Fire X4540

Sun Fire X4600

Sun Fire X4600 M2

Sun Netra X4200 M2

Sun Netra X4250

Sun Netra X4450


Boot Device for a Server

Any local storage device, supported by the base platform as a boot device, can be used as a boot device for the server in the cluster as well.

■ Boot-device LUN(s) cannot be visible to other nodes in the cluster.

■ It is recommended to mirror the root disk (see the sketch after this list).

■ Multipathed boot is supported with Sun Cluster when the drivers associated with SAN 4.3 (or later) are used in conjunction with an appropriate storage device (e.g., the local disks on a SF V880 or a SAN-connected fiber storage device).
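As a minimal illustration of the root-disk mirroring recommendation, the following uses Solaris Volume Manager; the disk and metadevice names (c0t0d0, c0t1d0, d0/d10/d20) are placeholder assumptions, and the second disk must have the same slice layout as the boot disk.

    # Create state database replicas on dedicated slices of both disks:
    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7

    # Build submirrors from the root slices and a one-way mirror on the boot disk:
    metainit -f d10 1 1 c0t0d0s0
    metainit d20 1 1 c0t1d0s0
    metainit d0 -m d10

    # Point /etc/vfstab and /etc/system at the mirrored root, then reboot:
    metaroot d0

    # After the reboot, attach the second submirror to start the resync:
    metattach d0 d20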

Heterogeneous Servers in Sun Cluster

The rules that describe which servers can participate in the same cluster have changed. We no longer have the server family definitions. Instead, we now have a new set of rules that define mixing at the level of the underlying networking/storage technologies. This change vastly increases the flexibility of configurations. Use the new set of rules described in "Clusters with Heterogeneous Servers" on page 35 to find out which servers can be clustered together.

Generic Server Configuration Rules

These configuration rules apply to any type of server in a cluster:

■ The rule for the minimum number of CPUs per node has changed. It is no longer required to have a minimum of 2 CPUs per node. Systems with only 1 CPU are now supported as cluster nodes.

■ Cluster node minimum-memory requirements (a quick check appears after this list):

  ■ Releases prior to Sun Cluster 3.2 1/09: 512MB

  ■ Starting with Sun Cluster 3.2 1/09: 1GB

■ Alternate pathing (AP) is not supported with Sun Cluster 3.
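As a quick way to confirm that a prospective node meets the memory minimum, the installed memory can be read from Solaris (a simple sketch; the exact output format varies by platform):

    # Report installed physical memory on the candidate node:
    prtconf | grep "Memory size"
    # Example: "Memory size: 1024 Megabytes" - at least 1GB is needed
    # starting with Sun Cluster 3.2 1/09 (512MB for earlier releases).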


SPARC Servers

Sun Blade 6000 and 6048

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Server Module in a Sun Blade 6000 or 6048 is used as a node in the cluster:

■ The following Sun Blade 6000 and 6048 Server Modules are supported as cluster nodes:

  ■ Sun Blade T6300 Server Module

  ■ Sun Blade T6320 Server Module

  ■ Sun Blade T6340 Server Module

■ Minimum Sun Cluster release: 3.1 8/05 (update 4)

■ Minimum Solaris release: See the Blade Server Module product info

Sun Netra 20, t 1120/1125, t 1400/1405 and T1 AC200/DC200

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Netra T1 AC200/DC200, t 1120/1125, t 1400/1405, or 20 is used as a node in the cluster:

■ Netra servers allow the use of the E1 PCI expander for provisioning extra PCI slots in the system. While the use of the expander with these systems for any other purpose is supported, its use for cluster connections (shared storage, cluster interconnect, public network interfaces) is supported only with the Netra T1 AC200/DC200.

Sun Netra 210

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Netra 210 is used as a node in the cluster:

■ Due to limited card support, only Diskless Cluster Configurations using the onboard Ethernet ports are currently supported. Additional card support is TBD.


Sun Netra CT 900 CP3010

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Netra CT 900 CP3010 is used as a node in the cluster:

■ Sun Cluster 3.2 is required.

■ Private interconnect auto discovery may not show all adapters. Private interconnect information can be manually entered during the Solaris Cluster install.

■ Oracle RAC is not supported as of August 2007.

■ Connection to storage should only be direct, as storage switches are not supported as of August 2007.

Sun Netra CT 900 CP3060

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Netra CT 900 CP3060 is used as a node in the cluster:

■ For all configurations, the built-in network switches should be port VLANed or tag VLANed to separate traffic on each of the cluster interconnects and for Sun Cluster auto discovery to work properly during installation.

■ Restrictions associated with the SANBlaze HBA:

  ■ Connection to storage should only be direct, as storage switches are not supported as of September 2007.

  ■ The global_fencing setting in Sun Cluster 3.2 must not be changed from its default value of "pathcount" (see the check below). See Table 3-1 for additional storage restrictions.

  ■ MPxIO is not supported as of September 2007 due to limitations of third-party HBAs and third-party drivers.

The Sun HBAs do not have any specific restrictions.
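A minimal way to confirm that the property still holds its default value, using the Sun Cluster 3.2 command-line interface (run on any cluster node):

    # Display the cluster-wide properties and verify global_fencing=pathcount:
    /usr/cluster/bin/cluster show -t global | grep -i global_fencing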

Sun Netra CT 900 CP3260

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Netra CT 900 CP3260 is used as a node in the cluster:

■ For all configurations, the built-in network switches should be port VLANed or tag VLANed to separate traffic on each of the cluster interconnects and for Sun Cluster auto discovery to work properly during installation.

■ Only the Sun Netra CP3200 ARTM-FC (XCP32X0-RTM-FC-Z) is supported for shared storage connectivity. This also is a single point of failure.


Sun Netra T5220

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Netra T5220 is used as a node in the cluster:

■ As of January 2009, only a limited set of NICs and storage devices are supported for Sun Cluster shared storage.

Sun Enterprise 3x00-6x00 and 10K

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Enterprise 3x00-6x00 or 10K server/domain is used as a node in the cluster:

■ Only SBus system boards are supported in Sun Enterprise 3x00, 4x00, 5x00, 6x00, and 10K servers. As an exception, PCI I/O boards can be used for SCI-PCI connectivity only.

■ For Sun Enterprise 3x00, 4x00, 5x00, and 6x00 servers, it is recommended to have a minimum of 2 CPU/Memory boards and a minimum of 2 I/O boards in each server. For the Sun Enterprise 10K server, it is recommended to have a minimum of 2 System boards in each domain.

■ For Sun Enterprise 3x00, 4x00, 5x00, 6x00, and 10K servers, it is recommended to have the mirrored components of a storage set attach to different system boards in a server/domain. This provides protection from the failure of a board.

■ For Sun Enterprise 3x00, 4x00, 5x00, 6x00, and 10K servers, when two network interfaces are configured as part of a NAFO group, it is recommended to have each interface attach to a separate system board in the server/domain.

■ Dynamic reconfiguration (DR) is now supported with the Sun Enterprise 10K. This support requires Sun Cluster 3.0 12/01, Solaris 8 10/01, and SSP 3.5.

Sun Fire T1000/T2000, Sun SPARC Enterprise T1000/T2000 and Netra T2000

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire T2000 server is used as a node in the cluster:

■ Two-node Sun Fire T2000 and Sun Netra T2000 clusters installed with Solaris 10 11/06 (or later) and KU 118833-30 (or later) can configure e1000g cluster interconnects using back-to-back cabling; otherwise, Ethernet switches are required. See Info Doc number 88928 for more info.

■ For the T1000 server, only SVM is supported. Support for VxVM is planned for a future date.


■ Sun Cluster supports SCSI storage on the T2000 and requires two PCI-X slots for HBAs. Some T2000 servers shipped with a disk controller that occupies one of the PCI-X slots and some ship with a disk controller that is integrated onto the motherboard. In order to have SCSI storage supported with Sun Cluster, it is required to have two open PCI-X slots for SCSI HBAs. SCSI storage is not supported with Sun Cluster and the T1000 because the T1000 has only one PCI-X slot.

■ To configure internal disk mirroring in the T2000 servers, follow the special instructions in the Sun Fire T2000 Server Product Notes (a sketch of the typical hardware-RAID command appears below). However, when the procedure instructs you to install the Solaris OS, do not do so. Instead, return to the cluster installation guide and follow those instructions for the Solaris OS installation.
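By way of illustration, internal hardware mirroring on the T2000's on-board disk controller is typically created with the Solaris raidctl utility before the OS installation; the disk targets below are placeholder assumptions, and the authoritative steps are in the Sun Fire T2000 Server Product Notes.

    # Create a hardware RAID 1 volume from the two internal disks
    # (this destroys any data currently on the member disks):
    raidctl -c c0t0d0 c0t1d0

    # Display the resulting volume and its synchronization status:
    raidctl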

Please note that, in this config guide, the name "Sun Fire T1000" refers to the Sun Fire T1000 or the Sun SPARC Enterprise T1000 server. Likewise, the name "Sun Fire T2000" refers to the Sun Fire T2000 or the Sun SPARC Enterprise T2000 server.

Sun Fire V125

Operating System Requirements:

■ Solaris 8 beginning with HW 7/03 OS (with mandatory patch 109885-15)

■ Solaris 9 beginning with 9/04 OS

■ Solaris 10 OS

Solaris Cluster for this server may be configured differently for Sun Cluster 3.0, Sun Cluster 3.1, or Sun Cluster 3.2. Tagged VLAN is supported in SC 3.1U4 and later releases. For a server with only 2 onboard Ethernet ports and no other Ethernet cards, tagged VLAN must be used.
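For illustration, a tagged VLAN interface on Solaris legacy network drivers is plumbed by encoding the VLAN ID into the interface instance number (VLAN ID x 1000 + device instance); the driver name, VLAN ID, and address below are placeholder assumptions, and the cluster installation tools normally handle the interconnect side.

    # Plumb VLAN 123 on bge0 (instance = 123 * 1000 + 0 = 123000)
    # against a switch port configured to carry tagged VLAN 123:
    ifconfig bge123000 plumb 172.16.1.1 netmask 255.255.255.0 up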

For use of a single dual-port HBA, please follow the guideline under "Shared Storage (Multi-Hosted Storage)" and the configuration requirements for its use.

Sun Fire V210 and V240

Sun Cluster 3.0/3.1 support for these servers may require a patch (depending on the version of Solaris involved in the configuration).

For Sun Cluster 3.0 configurations:

■ No patch is required for Solaris 8 configurations.
■ For Solaris 9 support, patches 112563-10 and 114189-01 are required.

For Sun Cluster 3.1 configurations prior to 3.1 10/03 (update 1):


■ For Solaris 8 support, patch #113800-03 is required.
■ For Solaris 9 support, patch #113801-03 is required.

For Sun Cluster 3.1 10/03 (update 1) configurations or later:

■ No patch required.

Sun Fire V215 and V245

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire V215 or V245 is used as a node in the cluster:

■ To configure internal disk mirroring in the Sun Fire V215 and V245 servers, follow the special instructions in the Sun Fire V215/V245 Server Product Notes. However, when the procedure instructs you to install the Solaris OS, do not do so. Instead, return to the cluster installation guide and follow those instructions for the Solaris OS installation.

■ With Solaris 9, Sun Cluster support for the V215 and V245 requires KU patch 122300-10 and SAN 4.4.13 or later. Please note that Solaris 9 does not support PCI-E adapters.

Sun Fire V440/Netra 440

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire V440 is used as a node in the cluster:

■ The hardware RAID 1 functionality of the Sun Fire V440 and Netra 440 requires the following patches:

  ■ Solaris 8: No patch requirement

  ■ Solaris 9: Patch 113277-33 or later

  ■ Solaris 10: Patch 119374-02 or later

Sun Fire V445

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire V445 is used as a node in the cluster:

■ To configure internal disk mirroring in the Sun Fire V445 servers, follow the special instructions in the Sun Fire V445 Server Product Notes. However, when the procedure instructs you to install the Solaris OS, do not do so. Instead, return to the cluster installation guide and follow those instructions for the Solaris OS installation.


■ Both Solaris 9 and 10 are supported with Sun Cluster for the V445. Please note that Solaris 9 supports only PCI-X (and not PCI-Express) cards.

Sun Fire V480/V880 and V490/V890

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire V480/V490 or a Sun Fire V880/V890 is used as a node in the cluster:

■ Using MPxIO for multipathing to the local disks of a V480/V490 or V880/V890 is supported as long as the SAN 4.3 (or later) drivers are being used (see the note below on enabling it). All other multipathing solutions (such as DMP, DLM or SEDLM) to the local disks in a V480/V490 or a V880/V890 are NOT supported.

■ The "Hot-Plug" feature of the V880/V890 is supported. To get more information on this feature, including a list of hot-pluggable cards, please see the Sun Fire V880 and the Sun Fire V890 Product Notes at http://www.sun.com/products-n-solutions/hardware/docs
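As an illustrative note, MPxIO is commonly enabled on Solaris 10 with the stmsboot utility (earlier SAN 4.x stacks toggle the mpxio-disable property in the driver configuration files); this is a sketch under those assumptions, not the product-notes procedure.

    # Enable Solaris I/O multipathing (MPxIO) on supported controllers;
    # the system reboots and devices move to scsi_vhci names:
    stmsboot -e

    # Afterwards, list the mapping between old and multipathed device names:
    stmsboot -L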

Sun Fire V1280/Netra 1280, Netra 1290 and E2900

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire V1280/Netra 1280 is used as a node in the cluster:

■ Dynamic reconfiguration of the SF V1280/Netra 1280's CPU and memory boards while the system remains online is supported with Sun Cluster 3. For more information on this feature as well as its requirements, please consult the SF V1280/Netra 1280 base product documentation.

Sun Fire 3800, 4800/4810 and 6800

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire 3800, 4800, 4810, or 6800 domain is used as a node in the cluster:

■ For Sun Fire 3800, 4800, 4810, and 6800 servers, it is required to configure the Sun Fireplane Interconnect System as two segments when the server is divided into 2 or more domains. For the Sun Fire 6800 server, it is required that the segments be implemented along the power boundary.

■ It is supported to have multiple domains from a server in the same cluster. Clustering in a box - a cluster where all the nodes are domains from the same server - is supported. However, there can be single points of failure for the whole cluster in such configurations. For example, a 2-node cluster across two domains of a Sun Fire 3800, or a cluster with primary and backup domain in the same segment of a Sun Fire 6800, will have the common powerplane as the single point of failure. A 2-node cluster on a single Sun Fire 6800, where each node is a domain in a different segment implemented across the power boundary, is a good cluster-in-a-box solution with appropriate fault isolation built in.

■ It is recommended to have a minimum of 2 CPU/Memory boards and a minimum of 2 I/O assemblies in each domain, whenever possible.

■ For the cluster interconnect, it is recommended that at least two independent interconnects attach to different I/O assemblies in a domain. When all the independent interconnects of a cluster interconnect attach to the same I/O assembly, it is required that at least two independent interconnects attach to different controllers in the I/O assembly.

■ It is recommended to have the mirrored components of a storage set attach to different I/O assemblies in a domain. When the mirrored components of a storage set attach to the same I/O assembly, it is recommended that they attach to different controllers in the I/O assembly.

■ When two or more network interfaces are configured as part of a NAFO group, it is recommended to have each interface attach to different I/O assemblies in a domain. When the different interfaces of a NAFO group are attached to the same I/O assembly, it is recommended that they attach to different controllers in the I/O assembly.

■ Dynamic reconfiguration (DR) is now supported. This support requires Sun Cluster 3.0 12/01 (or later). Jaguar or other multi-core CPUs require patch 111335-26 (or later) or patch 117124-05 (or later).

■ XMITS PCI I/O boats are supported.

Sun Fire E4900/E6900

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire Enterprise 4900 or 6900 domain is used as a node in the cluster:

■ For Sun Fire Enterprise 4900 and 6900 servers, it is required to configure the Sun Fireplane Interconnect System as two segments when the server is divided into 2 or more domains. For the Sun Fire Enterprise 6900 server, it is required that the segments be implemented along the power boundary.

■ It is supported to have multiple domains from a server in the same cluster. Clustering in a box - a cluster where all the nodes are domains from the same server - is supported. However, there can be single points of failure for the whole cluster in such configurations. For example, a 2-node cluster across two domains of a Sun Fire 3800, or a cluster with primary and backup domain in the same segment of a Sun Fire Enterprise 6900, will have the common powerplane as the single point of failure. A 2-node cluster on a single Sun Fire Enterprise 6900, where each node is a domain in a different segment implemented across the power boundary, is a good cluster-in-a-box solution with appropriate fault isolation built in.


■ It is recommended to have a minimum of 2 CPU/Memory boards and a minimum of 2 I/O assemblies in each domain, whenever possible.

■ For the cluster interconnect, it is recommended that at least two independent interconnects attach to different I/O assemblies in a domain. When all the independent interconnects of a cluster interconnect attach to the same I/O assembly, it is required that at least two independent interconnects attach to different controllers in the I/O assembly.

■ It is recommended to have the mirrored components of a storage set attach to different I/O assemblies in a domain. When the mirrored components of a storage set attach to the same I/O assembly, it is recommended that they attach to different controllers in the I/O assembly.

■ When two or more network interfaces are configured as part of a NAFO group, it is recommended to have each interface attach to different I/O assemblies in a domain. When the different interfaces of a NAFO group are attached to the same I/O assembly, it is recommended that they attach to different controllers in the I/O assembly.

■ Dynamic reconfiguration (DR) is now supported. This support requires Sun Cluster 3.0 12/01 (or later). Jaguar or other multi-core CPUs require patch 111335-26 (or later) or patch 117124-05 (or later).

■ XMITS PCI I/O boats are supported.

Sun Fire 12K, 15K, E20K and E25K

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun Fire 12K/15K domain is used as a node in the cluster:

■ It is supported to have multiple domains from a server in the same cluster. Clustering in a box - a cluster where all the nodes are domains from the same server - is supported.

■ It is recommended to have a minimum of 2 CPU/Memory boards and a minimum of 2 I/O boards in each domain.

■ For the cluster interconnect, it is recommended that at least two independent interconnects attach to different I/O boards in a domain.

■ It is recommended to have the mirrored components of a storage set attach to different I/O boards in a domain.

■ When two or more network interfaces are configured as part of a NAFO group, it is recommended to have each interface attach to different I/O boards in a domain.

■ Dynamic reconfiguration (DR) is supported. This support requires Sun Cluster 3.0 12/01 (or later). Jaguar or other multi-core CPUs require patch 111335-26 (or later) or patch 117124-05 (or later).

■ Slot 1 Dynamic Reconfiguration is supported. This allows SF 12K/15Ks that are clustered to dynamically reconfigure the boards in Slot 1 while the systems remain online. For Solaris 8 support, Solaris 8 2/02 and SMS version 1.3 or higher are required. For Solaris 9 support, Solaris 9 4/03 and patch #114271-02 are required. For more information, please see Sun Product Intro #Q3FY2003-30I.

■ XMITS PCI I/O boats are supported.


Sun SPARC Enterprise M4000, M5000, M8000 and M9000

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun SPARC Enterprise M4000, M5000, M8000 or M9000 is used as a node in the cluster:

■ It is supported to have multiple domains from a server in the same cluster. Clustering in a box - a cluster where all the nodes are domains in the same server - is supported. However, there can be a single point of failure for the whole cluster in such configurations.

■ It is recommended to have a minimum of 2 CPU/Memory boards in a domain whenever possible.

■ It is recommended to have separate I/O Units per cluster node (domain) whenever possible. It is possible to create cluster nodes that share the same I/O Unit, and this is supported. However, there can be a single point of failure for the whole cluster in such configurations.

■ It is recommended to have a minimum of 2 I/O Units in a domain whenever possible.

■ Dynamic reconfiguration is supported.

Sun SPARC Enterprise T5120 and T5220

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun SPARC Enterprise T5120 or T5220 is used as a node in the cluster:

■ For LDom configurations, Sun Cluster is supported in the control domain only.
■ For nxge drivers, please refer to the base product documentation for the proper /etc/system parameters.

Sun SPARC Enterprise T5140 and T5240

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun SPARC Enterprise T5140 or T5240 is used as a node in the cluster:

■ For LDom configurations, Sun Cluster is supported in the control domain only.
■ For nxge drivers, please refer to the base product documentation for the proper /etc/system parameters.
■ As of April 2008, InfiniBand HCAs/switches are not yet supported with the T5140 or T5240.
■ As of April 2008, SCSI storage is not yet enabled with the T5140 or T5240.


Sun SPARC Enterprise T5440

These configuration rules apply in addition to the "Generic Server Configuration Rules" on page 15 when a Sun SPARC Enterprise T5440 is used as a node in the cluster:

■ Please refer to the "Support for Virtualized OS Environments" section in the Software chapter for LDoms support.

x64 Servers

Please note that x64 requires the following patches: 120501-04, 120490-01, 120498-01.
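A quick way to verify whether these patches (or later revisions) are already installed on a node, using standard Solaris commands (the patch directory path is illustrative):

    # List installed revisions of the required patches, if any:
    for p in 120501 120490 120498; do
        showrev -p | grep "Patch: $p"
    done

    # Install a missing patch from its unpacked directory:
    patchadd /var/tmp/120501-04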

Sun Blade 6000 and 6048

TABLE 3-3 Sun Blade 6000 and 6048 Support Matrix

Solaris: Starting with Solaris 10 11/06. Sun Blade X6240, X6440: starting with Solaris 10 5/08. Sun Blade X6270: starting with Solaris 10 10/08.
Solaris Cluster: Starting with Solaris Cluster 3.1 8/05.
Supported Server Modules: Sun Blade X6220, X6240, X6250, X6270, X6440 (excludes 'Barcelona' processors, e.g., model 8354), X6450.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-5, "Cluster Interconnects: PCI-E ExpressModule Network Interfaces for x64 Servers," on page 198 and Table 10-15, "Public Network: PCI-E ExpressModule Network Interfaces for x64 Servers," on page 213.


Sun Blade 8000

TABLE 3-4 Sun Blade 8000 Support Matrix

Solaris: Starting with Solaris 10 6/06. Sun Blade X8450: starting with Solaris 10 8/07.
Supported Server Modules: Sun Blade X8400, X8420, X8440, X8450.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: X1028A-Z, X4731A, X5040A-Z, X7282A-Z, X7283A-Z, X7284A-Z, X7287A-Z, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z.
Infiniband Interconnect: X1288A-Z.

Sun Blade 8000 P

TABLE 3-5 Sun Blade 8000 P Support Matrix

Solaris: Starting with Solaris 10 6/06.
Supported Server Modules: Sun Blade X8400, X8420, X8440, X8450 (starting with Solaris 10 8/07).
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: X5040A-Z.


Sun Fire V20z

TABLE 3-6 Sun Fire V20z Support Matrix

Solaris: Starting with Solaris 9 9/04. (The onboard hardware RAID disk mirroring of the V20z requires Solaris 9 patch 119443-02 or later.)
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.
Special Notes: The Sun Fire V20z currently requires two X4422A cards. The earlier V20z revisions only support a single X4422A; these are the A55 marketing numbers (380-0979 chassis assembly/FRU) and A55*L marketing numbers (380-1168 chassis assembly/FRU). Later revisions, e.g., the 380-1194 chassis assembly/FRU using marketing number A55*M, are supported. For more information, see the Sun Fire V20z Server Just the Facts, SunWIN token #400844.

Sun Fire V40z

TABLE 3-7 Sun Fire V40z Support Matrix

Solaris: Starting with Solaris 9 4/04. (The onboard hardware RAID disk mirroring of the V40z requires Solaris 9 patch 119443-02 or later, or Solaris 10 patch 119375-02 or later.)
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.


Sun Fire X2100 M2 and X2200 M2

TABLE 3-8 Sun Fire X2100 M2 and X2200 M2 Servers

Solaris: Starting with Solaris 10 11/06.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4100 and X4200

TABLE 3-9 Sun Fire X4100/X4200 Support Matrix

Solaris: Starting with Solaris 10 HW1.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4100 M2

TABLE 3-10 Sun Fire X4100 M2 Support Matrix

Solaris: Starting with Solaris 10 HW1.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.


Sun Fire X4140 and X4240

TABLE 3-11 Sun Fire X4140 and X4240 Support Matrix

Solaris: Starting with Solaris 10 8/07.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4150

TABLE 3-12 Sun Fire X4150 Support Matrix

Solaris: Starting with Solaris 10 8/07.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4170

TABLE 3-13 Sun Fire X4170 Support Matrix

Solaris: Starting with Solaris 10 10/08.
Sun Cluster: Starting with Sun Cluster 3.1 8/05.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.


Sun Fire X4200 M2

TABLE 3-14 Sun Fire X4200 M2 Support Matrix

Solaris: Starting with Solaris 10 HW1.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4250

TABLE 3-15 Sun Fire X4250 Support Matrix

Solaris: Starting with Solaris 10 8/07.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4270

TABLE 3-16 Sun Fire X4270 Support Matrix

Solaris: Starting with Solaris 10 10/08.
Sun Cluster: Starting with Sun Cluster 3.1 8/05.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.


Sun Fire X4275

TABLE 3-17 Sun Fire X4275 Support Matrix

Solaris: Starting with Solaris 10 10/08.
Sun Cluster: Starting with Sun Cluster 3.1 8/05.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4440

TABLE 3-18 Sun Fire X4440 Support Matrix

Solaris: Starting with Solaris 10 8/07.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4450

TABLE 3-19 Sun Fire X4450 Support Matrix

Solaris: Starting with Solaris 10 8/07.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.


Sun Fire X4540

TABLE 3-20 Sun Fire X4540 Support Matrix

Solaris: Starting with Solaris 10 8/07.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4600

TABLE 3-21 Sun Fire X4600 Support Matrix

Solaris: Starting with Solaris 10 1/06.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.

Sun Fire X4600 M2

TABLE 3-22 Sun Fire X4600 M2 Support Matrix

Solaris: Starting with Solaris 10 1/06.
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, "Cluster Interconnects: PCI Network Interfaces for x64 Servers," on page 194 and Table 10-12, "Public Network: PCI Network Interfaces for x64 Servers," on page 209.


Sun Netra X4200 M2

Sun Netra X4250

Sun Netra X4450

TABLE 3-23 Sun Netra X4200 M2 Support Matrix

Solaris Starting with Solaris 10 11/06

Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.

Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64Servers,” on page 194 and Table 10-12, “Public Network: PCI NetworkInterfaces for x64 Servers,” on page 209.

TABLE 3-24 Sun Netra X4250 Support Matrix

Solaris Starting with Solaris 10 8/07

Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.

Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64 Servers,” on page 194 and Table 10-12, “Public Network: PCI Network Interfaces for x64 Servers,” on page 209.

TABLE 3-25 Sun Netra X4450 Support Matrix

Solaris Starting with Solaris 10 8/07 + patches

Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.

Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64 Servers,” on page 194 and Table 10-12, “Public Network: PCI Network Interfaces for x64 Servers,” on page 209.


CHAPTER 4

Clusters with Heterogeneous Servers

Note – The rules that describe which servers can participate in the same cluster have changed. We no longer have the server family definitions. Instead, we now have a new set of rules that define mixing at the level of the underlying networking/storage technologies. This change vastly increases the flexibility of configurations. Use the new set of rules described below to find out which servers can be clustered together.

Generic Rules

These rules must be followed while configuring clusters with heterogeneous servers:

■ Cluster configurations must comply with the topology definitions specified in “Sun Cluster 3 Topologies” on page 3.

■ Cluster configurations must comply with the support matrices listed in other sections (for example, “Server Configuration” on page 11, “Storage Overview” on page 39, and “Network Configuration” on page 183) of the configuration guide.

■ If there are any restrictions placed on server/storage connectivity or server/network connectivity by the base platforms and the individual networking/storage components, then these restrictions override the Sun Cluster configuration rules.

■ SCSI storage can be connected to a maximum of two nodes simultaneously.

■ Fibre Channel storage can be connected to a maximum of four nodes simultaneously (with the exception of the SE 99x0 storage, which can be connected to a maximum of 8 nodes simultaneously).


Mixing Different Types of Servers in a Cluster

All nodes in the cluster share the cluster interconnect. Hence, whether two or more servers can participate in the same cluster is completely defined by the technology used for the cluster interconnect. Note that these servers may or may not be able to share a storage device (please check the storage configuration section of the configuration guide for more information). Cluster interconnects between the various nodes in a cluster must be of the same interconnect technology (i.e., Fast Ethernet, Gigabit Ethernet, SCI). For Ethernet, all interconnects must operate at the same speed.
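The interconnect requirement can be sanity-checked from the OS. The commands below are an illustrative sketch assuming Solaris 10; the adapter name ce1 and the node names are placeholders, not part of this guide's support matrices.

    # On every node, confirm the interconnect adapters report the same
    # technology and link speed (ce1 is a placeholder adapter name):
    node1# dladm show-dev ce1
    node2# dladm show-dev ce1
    # Once the cluster is running, verify all transport paths are online:
    node1# scstat -W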

Sharing Storage Among Different Types of Servers in a Cluster

Whether a storage device can be shared among different types of servers in a cluster is dictated by the underlying technology used by the storage device, any storage networking infrastructure in between, and the I/O bus type in the servers.

Parallel SCSI Devices

The rules for sharing parallel SCSI devices among different cluster nodes are:

■ Node I/O bus types (SBus, PCI, cPCI) cannot be mixed on the same SCSI bus.

■ Similar SCSI technology (SE SCSI, HVD, LVD, etc.) must be used on the same SCSI bus. The groupings that define similar SCSI technology are listed in Table 4-1, “SCSI Interface Groupings,” on page 37.


Table 4-1, “SCSI Interface Groupings,” on page 37 gives the SCSI interfaces, supported in different servers in Sun Cluster 3, grouped by the underlying SCSI technology. Each grouping also defines the mixing scope of the servers using these interfaces in Sun Cluster 3.

■ Example 1: 40MB/s single-ended Ultra SCSI is only supported on PCI cluster nodes (due to the set of Sun Cluster 3 supported servers) and allows mixing of any HBAs within this set.

■ Example 2: Cluster configs would not support mixing of HVD across node bus types, e.g., an SBus 1065A HBA with a PCI 6541A HBA.

■ Example 3: Cluster configs would not support LVD SCSI mixing of speeds (e.g., an 80MB/s HBA with a 160MB/s HBA) nor SCSI type (e.g., a 40MB/s SE SCSI HBA with a 160MB/s LVD SCSI HBA).

■ Example 4: 4 nodes of V480, clustered pair topology with 4 A1000s. This shows a config that was supported under previous rules as well.

■ Example 5: Two V880s and one 6800 with 2 S1s, using the Pair + N topology with the 6800 as the + N node.

Fibre-Channel Host-Connected Storage

This section describes the general rules for Fibre Channel connected storage.

TABLE 4-1 SCSI Interface Groupings

Group: 40MB/s SE Ultra SCSI - PCI
SCSI interfaces: Netra T1 AC200/DC200 onboard SCSI; Netra t 1120/1125 onboard SCSI; Netra t 1400/1405 onboard SCSI; Netra 20 onboard SCSI; 1032A SunSwift PCI; 6540A Dual-channel single-ended UltraSCSI [US2S]

Group: HVD SCSI - SBus
SCSI interfaces: 1065A SBus-to-differential Ultra SCSI [UDWIS/S]

Group: HVD SCSI - PCI
SCSI interfaces: 6541A Dual-channel differential UltraSCSI [UD2S]

Group: 320MB/s LVD SCSI - PCI
SCSI interfaces: SG-(X)PCI1SCSI-LM320; SG-(X)PCI1SCSI-LM320-Z; SG-XPCI2SCSI-LM320 [Jasper 320]; SG-XPCI2SCSI-LM320-Z; Sun Fire V440 onboard SCSI; Netra 440 onboard SCSI

Group: 160MB/s LVD SCSI - PCI
SCSI interfaces: 6758A StorEdge PCI Dual Ultra3 SCSI [Jasper]

Group: 80MB/s LVD SCSI - PCI
SCSI interfaces: 2222A Dual FE + Dual SCSI Adapter [Cauldron]


■ Storage, HBA, server and other component requirements take precedence over any Sun Cluster rules.

■ Both SAN and direct-connected FC storage are supported.

■ Node I/O bus type mixing is allowed, e.g., PCIe and PCI-X, or SBus and PCI.

■ FC speeds may be mixed.

■ Connectivity between the nodes of a cluster and shared data must use logically separate paths. It is recommended to use physically separate paths. “Paths” in this context refers to connections to the submirrors of an SVM mirrored volume or the MPxIO paths to a highly available RAID volume, for example.

Please refer to Chapter 5, Storage Overview, for additional Sun Cluster details, including any exceptions to the above rules.

Also refer to the specific storage and SAN product documentation for product details.

Note on Multipathing

Multipathed vs. non-multipathed connections are assumed to be consistent for all nodes logically connected to the shared storage device. For example, if MPxIO is used to connect the shared storage to one node, MPxIO must also be used to connect this shared storage to the other cluster nodes. Similarly, for non-multipathed connections, all such shared connections must be non-multipathed for all logically connected cluster nodes.
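As a hedged illustration of keeping multipathing consistent, the sketch below assumes Solaris 10 and MPxIO; the node names are placeholders.

    # Enable MPxIO on every node logically connected to the shared storage
    # (Solaris 10); stmsboot prompts for the required reconfiguration reboot:
    node1# stmsboot -e
    node2# stmsboot -e
    # On Solaris 8/9 with SAN Foundation software, instead set
    # mpxio-disable="no" in /kernel/drv/scsi_vhci.conf on every node.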


CHAPTER 5

Storage Overview

Any storage device (a single disk, tape, or CD-ROM, or an array enclosure consisting of several disks) connected to the nodes in a Sun Cluster is a global device, accessible by all the cluster nodes through the global namespace.

Any storage inside a node, including internal disks and tape storage, is local storageand cannot be shared.
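The global device view can be inspected with the DID administration utility; the following is a minimal illustrative sketch (the node name is a placeholder).

    # List DID instances and their per-node device paths; a multi-hosted
    # LUN appears under the same /dev/did/rdsk/dN name from every
    # connected node:
    node1# scdidadm -L
    # The global device namespace is mounted under /global/.devices/node@N.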

Local Storage (Single-Hosted Storage)

Local storage consists of storage devices connected to only one node. Such storage devices are not considered highly available. They can be used for:

■ Setting up root, usr, swap, /globaldevices.

■ Hosting application binaries and configuration files.

■ Storing anything other than the application data.

Any storage device, along with its cable, junction, and host bus adapter, supported by the base server can be used for local storage in Sun Cluster.

Heterogeneous Storage in Sun Cluster

All storage devices shown as supported for shared storage in “Fibre Channel Storage Support” on page 59 and “SCSI Storage Support” on page 127 can be used in any combination in Sun Cluster 3. No restrictions are imposed on combinations of shared storage in Sun Cluster beyond those imposed by the interoperability of the individual storage devices themselves.


Shared Storage (Multi-Hosted Storage)

Shared storage consists of storage devices connected to more than one node such that one or more LUNs or volumes are accessible from each connected cluster node. Such devices are considered highly available. They can be used for:

■ Hosting application data

■ Hosting application binaries, configuration files

■ Setting up quorum devices

Please consult each storage device’s section for maximum node connectivity and other guidelines. The following are general guidelines:

■ Some parallel SCSI devices can be split into two functionally separate devices. See each specific storage device for details.

■ Parallel SCSI devices can only share a LUN or volume between two nodes in the same cluster.

■ Fibre Channel (FC) devices can share a LUN, or volume, between two or more cluster nodes within the same cluster.

■ In some cases, FC devices may present different LUNs to different clusters or non-clustered nodes.

■ FC devices may be connected to FC switches or attached directly to cluster nodes via HBAs. See the specific storage device in question for restrictions.

■ Sun Cluster highly recommends that each sub-mirror of a mirrored volume, or each path of a multipath I/O connection, use separate host adapter cards and controller chips.

■ Sun Cluster now supports the use of a single dual-port HBA in supported configurations as the single adapter used to connect shared storage devices. Note that the use of a single adapter decreases the availability and reliability of the cluster; while we don’t require two HBAs, two are still strongly recommended.

Storage products are supported with a specific set of servers, as listed in the tables later in this chapter. See Table 5-1, “FC Storage for SPARC Servers,” on page 42 and Table 5-4, “SCSI Storage for SPARC Servers,” on page 50.

For a storage configuration (storage device, HBA, switch) to be considered for support with Sun Cluster, it MUST be supported by Network Storage. If Network Storage does not support a given configuration, then Sun Cluster cannot support the configuration.


Configuration Requirements for Use of a Single Dual-Port HBA for Storage Connectivity

■ Solaris Volume Manager and Solstice DiskSuite only

■ Dual-string mediators are not supported

■ Disksets must have a minimum of 2 disks (a minimal diskset sketch follows)

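A minimal Solaris Volume Manager sketch that satisfies these requirements is shown below; the diskset name ds1, the node names, and the DID devices d4/d5 are hypothetical.

    # Create a diskset owned by both nodes, then add at least two disks:
    node1# metaset -s ds1 -a -h node1 node2
    node1# metaset -s ds1 -a /dev/did/rdsk/d4 /dev/did/rdsk/d5
    # Do not add dual-string mediators (metaset ... -a -m) in this
    # single dual-port HBA configuration.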

Quorum Devices in Sun Cluster

Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices. Except for these arrays, all supported shared storage devices can act as quorum devices.

If you use Sun StorEdge A3500 or A3500FC arrays for shared storage in your cluster, you must use a different device if you need a quorum device.
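On a supported shared device, a quorum device can be added with the Sun Cluster 3.0/3.1 command syntax sketched below; d20 is a placeholder DID device name.

    # Configure a shared DID device as a quorum device:
    node1# scconf -a -q globaldev=d20
    # Verify the quorum configuration and vote counts:
    node1# scstat -q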

Supported Fibre Channel (FC) Storage Devices

Table 5-1 lists the FC storage devices supported with Sun Cluster and the server types that can share these storage devices in clusters. Once you have determined whether your server and storage combination is supported, refer to the storage details section to find other supported components. If you have mixed types of servers in your cluster, refer to “Sharing Storage Among Different Types of Servers in a Cluster” on page 36 for additional restrictions.

TABLE 5-1 FC Storage for SPARC Servers

Server, followed by array support columns (left to right): Sun StorageTek 2540 RAID Array, Sun StorEdge 3510 RAID Array, Sun StorEdge 3511 RAID Array, Sun StorEdge 3910/3960 System, Sun StorEdge 6120 Array, Sun StorEdge 6130 Array, Sun StorageTek 6140 Array, Sun Storage 6180 Array, Sun StorEdge 6320 System, Sun StorageTek 6540 Array, Sun Storage 6580/6780 Arrays, Sun StorEdge 6910/6960 Arrays, Sun StorEdge 6920 System, Sun StorEdge 9910/9960 Arrays, Sun StorEdge 9970/9980, Sun StorageTek 9985/9990, Sun StorageTek 9985V/9990V. Each bullet (•) in a server row indicates support for one of these arrays, in column order.

Sun Blade T6300 • • • • • • • • • • • • • •

Sun Blade T6320 • • • • • • • • • • • •

Sun Blade T6340 • • • • • • • •

Sun Enterprise 10Ka • • • • • • • • • • • •

Sun Enterprise 220R • • • • • • • • • • • • • •

Sun Enterprise 250 • • • • • • • • • •

Sun Enterprise 3000a • • • • • • • • • •

Sun Enterprise 3500a • • • • • • • • • • • •

Sun Enterprise 4000a • • • • • • • • • •

Sun Enterprise 420R • • • • • • • • • • • • • •

Sun Enterprise 450 • • • • • • • • • • • • • •

Sun Enterprise 4500a • • • • • • • • • • • •

Sun Enterprise 5000a • • • • • • • • • •

Sun Enterprise 5500a • • • • • • • • • • • •

Sun Enterprise 6000a • • • • • • • • • •

Sun Enterprise 6500a • • • • • • • • • • • •

Sun Fire 12K • • • • • • • • • • • • • •

Sun Fire 15K • • • • • • • • • • • • • •

Sun Fire 280R • • • • • • • • • • • • • •

Sun Fire 3800 • • • • • • • • • • • •


Sun Fire 4800 • • • • • • • • • • • • • •

Sun Fire 4810 • • • • • • • • • • • • • •

Sun Fire 6800 • • • • • • • • • • • • • •

Sun Fire E20K • • • • • • • • • • • • •

Sun Fire E25K • • • • • • • • • • • • •

Sun Fire E2900 • • • • • • • • • • • • • • •

Sun Fire E4900 • • • • • • • • • • • • • • •

Sun Fire E6900 • • • • • • • • • • • • • • •

Sun Fire T1000 • • • • • • • • • • • • • • • •

Sun Fire T2000 • • • • • • • • • • • • • • • •

Sun Fire V120

Sun Fire V125 • • •

Sun Fire V1280 • • • • • • • • • • • • • • •

Sun Fire V210 • • • •

Sun Fire V215 • • • • • • • • • • • •

Sun Fire V240 • • • • • • • • • • • •

Sun Fire V245 • • • • • • • • • • • •

Sun Fire V250 • • • •

Sun Fire V440 • • • • • • • • • • • • •

Sun Fire V445 • • • • • • • • • • • • •

Sun Fire V480 • • • • • • • • • • • • • •

Sun Fire V490 • • • • • • • • • • • • • • •

Sun Fire V880 • • • • • • • • • • • • • •


Sun Fire V890 • • • • • • • • • • • • •

Sun Netra 120

Sun Netra 1280 • • • • • • • • • • •

Sun Netra 1290 • • • • • • • • • • •

Sun Netra 20 • • • • •

Sun Netra 240 • • • • •

Sun Netra 440 • • • • • • • • •

Sun Netra CT 900 CP3010 • •

Sun Netra CT 900 CP3060 • • • •

Sun Netra CT 900 CP3260 • •

Sun Netra t 1120/1125 • • • • •

Sun Netra t 1400/1405 • • • • • • •

Sun Netra T1 AC200/DC200

Sun Netra T2000 • • • • • • • • • •

Sun Netra T5220 • • • •

Sun Netra T5440 • • • •

Sun SPARC Enterprise M3000 • • • • • • • • • •

Sun SPARC Enterprise M4000 • • • • • • • • • • • • •

Sun SPARC Enterprise M5000 • • • • • • • • • • • •

Sun SPARC Enterprise M8000 • • • • • • • • • • • •


Sun SPARC Enterprise M9000 • • • • • • • • • • •

Sun SPARC Enterprise T5120 • • • • • • • • • • • • • •

Sun SPARC Enterprise T5140 • • • • • • • • • • • • • •

Sun SPARC Enterprise T5220 • • • • • • • • • • • • • •

Sun SPARC Enterprise T5240 • • • • • • • • • • • • • •

Sun SPARC Enterprise T5440 • • • • • • • • • • • • • •

External I/O Expansion Unit for Sun SPARC Enterprise M4000, M5000, M8000, M9000 • • • • • • • • • • •b •b •b •b

External I/O Expansion Unit for Sun SPARC Enterprise T5120, T5140, T5220, T5240 • • • • • • • • •b •b •b •b

USBRDT-5240 Uniboard for Sun Fire 4800, E4900, 6800, E6900, 12K, 15K, E20K, E25K • • • • • •

a. Only these servers’ SBus I/O boards are supported for shared cluster storage.

b. The SE 9900 “what works with what” (WWWW) matrix includes External I/O Expansion Unit support under the base server.


TABLE 5-2 FC Storage for x64 Servers

Server, followed by array support columns (left to right): Sun StorageTek 2540 RAID Array, Sun StorEdge 3510 RAID Array, Sun StorEdge 3511 RAID Array, Sun StorEdge 6120 Array, Sun StorEdge 6130 Array, Sun StorageTek 6140 Array, Sun Storage 6180 Array, Sun StorEdge 6320 System, Sun StorageTek 6540 Array, Sun Storage 6580/6780 Arrays, Sun StorEdge 6920 System, Sun StorEdge 9910/9960 Arrays, Sun StorEdge 9970/9980, Sun StorageTek 9985/9990, Sun StorageTek 9985V/9990V. Each bullet (•) in a server row indicates support for one of these arrays, in column order.

Sun Blade X6220 • • • • • • • • • • •

Sun Blade X6240 • • • • • • • • • • • • •

Sun Blade X6250 • • • • • • • • • • • • •

Sun Blade X6270 • • • • • • • •

Sun Blade X6440 • • • • • • • • • • • • •

Sun Blade X6450 • • • • • • • • • • • • •

Sun Blade X8400 • • • • • • • • •

Sun Blade X8420 • • • • • • • • • • • • • • •

Sun Blade X8440 • • • • • • • • • • • • • • •

Sun Blade X8450 • • • • • • • • • • • • • • •

Sun Fire V40z • • • • • • • • • •

Sun Fire X2100 M2 • • • • • • • • • • • •

Sun Fire X2200 M2 • • • • • • • • • • • • • •

Sun Fire X4100 • • • • • • • • • • • •

Sun Fire X4100 M2 • • • • • • • • • • • • • • •

Sun Fire X4140 • • • • • • • • • • • • • •

Sun Fire X4150 • • • • • • • • • • • • • •

Sun Fire X4170 • • • •

Sun Fire X4200 • • • • • • • • • • • •

Sun Fire X4200 M2 • • • • • • • • • • • • • • •

Sun Fire X4240 • • • • • • • • • • • • • •

Sun Fire X4250 • • • • • •

Sun Fire X4270 • • • •


Sun Fire X4275 • • • •

Sun Fire X4440 • • • • • • • • • • • • • •

Sun Fire X4450 • • • • • • • • • • • • • •

Sun Fire X4540 • • • • • •

Sun Fire X4600 • • • • • • • • • • • • •

Sun Fire X4600 M2 • • • • • • • • • • • • • • •

Sun Netra X4200 M2 • • • • • • • • •

Sun Netra X4250 • • • •

Sun Netra X4450 • • • •


For other storage arrays and other x64 servers, please refer to the specific server discussion in Chapter 3.

TABLE 5-3 Older FC Storage and Platform Compatibility Matrix

Each row lists two servers side by side (the left and right halves of the table). For each server, the array support columns, left to right, are: Sun StorEdge A3500FC System, Sun StorEdge A5x00 Array, Sun StorEdge T3 Array (Single Brick), Sun StorEdge T3 Array (Partner Pair). A bullet (•) indicates support; letter suffixes on bullets refer to the table footnotes.

Netra t 1120/1125 Sun Fire T2000 • •

Netra t 1400/1405 Sun Fire V120

Netra T1 AC200/DC200 Sun Fire V210

Netra 20 • Sun Fire V215 • •

Netra 120 Sun Fire V240 • •

Netra 240 Sun Fire V245 • •

Netra 440 • • Sun Fire V250 • •

Netra 1280 •d • • Sun Fire 280R •c • •

Netra 1290 • • • Sun Fire V440 • •

Sun Enterprise 220R • • • Sun Fire V445 • •

Sun Enterprise 250 • • • Sun Fire V480 •c • •

Sun Enterprise 420R • • • Sun Fire V490 •c • •

Sun Enterprise 450 • • • Sun Fire V880 •d • •

Sun Enterprise 3000a • • • • Sun Fire V890 •d • •

Sun Enterprise 3500a • • • • Sun Fire V1280 • • • •

Sun Enterprise 4000a • • • • Sun Fire E2900 • •

Sun Enterprise 4500a • • • • Sun Fire 3800 •c • •

Sun Enterprise 5000a • • • • Sun Fire 4800/4810 •c • •

Sun Enterprise 5500a • • • • Sun Fire E4900 • •

Sun Enterprise 6000a • • • • Sun Fire 6800 •c • •


Sun Enterprise 6500a • • • • Sun Fire E6900 • •

Sun Enterprise 10Ka • • • • Sun Fire 12K/15K •c • •

Sun Fire T1000 Sun Fire E20K/E25K • •

Sun SPARC Enterprise T5440 • •

a Only these servers’ SBus I/O boards are supported for cluster shared storage

b The T2000 is supported with the T3+ only

c Only Sun StorEdge A5200 supported

d Only Sun StorEdge A5100/A5200 supported

e The T2000 is supported with the T3+ only


Supported SCSI Storage Devices

Table 5-4 lists the SCSI storage devices supported with Sun Cluster and the server types that can share these storage devices. Once you have determined whether your server and storage combination is supported, refer to the storage details section to find other supported components. If you have mixed types of servers in your cluster, refer to “Sharing Storage Among Different Types of Servers in a Cluster” on page 36 for additional restrictions.

TABLE 5-4 SCSI Storage for SPARC Servers

Server, followed by array support columns (left to right): Netra st D130 Array, Netra st A1000 Array, Netra st D1000 Array, Sun StorEdge S1 Array, Sun StorEdge D2 Array, Sun StorEdge A3500 Array, Sun StorEdge 3120 JBOD Array, Sun StorEdge 3310 JBOD Array, Sun StorEdge 3310 RAID Array, Sun StorEdge 3320 JBOD Array, Sun StorEdge 3320 RAID Array. Each bullet (•) in a server row indicates support for one of these arrays, in column order.

Sun Enterprise 10K

Sun Enterprise 220R • • • • • • •

Sun Enterprise 250 • • • • • • •

Sun Enterprise 3x00 • • •

Sun Enterprise 420R • • • • • • •

Sun Enterprise 450 • • • • • • •

Sun Enterprise 4x00 • • •

Sun Enterprise 5x00 • • •

Sun Enterprise 6x00 • • •

Sun Fire 12K • • • • •

Sun Fire 15K • • • • •

Sun Fire 280R • • • • • • •

Sun Fire 3800

Sun Fire 4800 • • • • •

Sun Fire 4810 • • • • •

Sun Fire 6800 • • • • •

Sun Fire E20K • • • •

Sun Fire E25K • • • •

Sun Fire E2900 • • • • •

Sun Fire E4900 • • •


Sun Fire E6900 • • •

Sun Fire T1000 • • • • • • •

Sun Fire T2000a • • • • • • •

Sun Fire V120 •

Sun Fire V125 • • • • • •

Sun Fire V1280 • • • • • • •

Sun Fire V210 • • • • • • •

Sun Fire V215 • • • • • • •

Sun Fire V240 • • • • • • •

Sun Fire V245 • • • • • • •

Sun Fire V250 • • • • • • •

Sun Fire V440 • • • • • • •

Sun Fire V445 • • • • • • •

Sun Fire V480 • • • • • • •

Sun Fire V490 • • • • • • •

Sun Fire V880 • • • • • • •

Sun Fire V890 • • • • • • •

Sun Netra 120 •

Sun Netra 1280 • • • • • • •

Sun Netra 1290 • • • • • • •

Sun Netra 20 • • • • • • • • •

Sun Netra 240 • • • • • • • •

Sun Netra 440 • • • • • •


Sun Netra t 1120/1125 • • • • • • • •

Sun Netra t 1400/1405 • • • • • • • • • •

Sun Netra T1 AC200/DC200 • •

Sun Netra T2000 • • • •

Sun Netra T5220 • • • • •

Sun Netra T5440 • • •

Sun SPARC Enterprise M3000 • • • •

Sun SPARC Enterprise M4000 • • • • •

Sun SPARC Enterprise M5000 • • • • •

Sun SPARC Enterprise M8000 • • • • •

Sun SPARC Enterprise M9000 • • • • •

Sun SPARC Enterprise T5120 • • • • •

Sun SPARC Enterprise T5140 • • • • •

Sun SPARC Enterprise T5220 • • • • •


Sun SPARC Enterprise T5240 • • • • •

Sun SPARC Enterprise T5440 • • • • •

External I/O Expansion Unit for Sun SPARC Enterprise M4000, M5000, M8000, M9000 Servers • • •

a. Support for SCSI storage with the Sun Fire T2000 server requires two PCI-X slots for HBAs. T2000 servers with a disk controller that occupies one of the PCI-X slots are not supported with Sun Cluster and SCSI storage.

TABLE 5-5 SCSI Storage for x64 Servers

Server, followed by array support columns (left to right): Sun StorEdge 3120 JBOD Array, Sun StorEdge 3310 JBOD Array, Sun StorEdge 3310 RAID Array, Sun StorEdge 3320 JBOD Array, Sun StorEdge 3320 RAID Array. Each bullet (•) indicates support, in column order.

Sun Fire V20z •

Sun Fire V40z • • • • •

Sun Fire X2100 M2 • • • • •

Sun Fire X2200 M2 • • • • •


Sun Fire X4100 • • • • •

Sun Fire X4100 M2 • • • • •

Sun Fire X4140 • • • • •

Sun Fire X4150

Sun Fire X4170

Sun Fire X4200 • • • • •

Sun Fire X4200 M2 • • • • •

Sun Fire X4240 • • • • •

Sun Fire X4250 • • •

Sun Fire X4270

Sun Fire X4275

Sun Fire X4440 • • • • •

Sun Fire X4450 • • • • •

Sun Fire X4540 • • •

Sun Fire X4600 • • • • •

Sun Fire X4600 M2 • • • • •

Sun Netra X4200 M2 • • • • •

Sun Netra X4250 • • • • •

Sun Netra X4450 • • • • •


TABLE 5-6 SAS Storage for SPARC Servers

Server, followed by array support columns (left to right): Sun StorageTek 2530 RAID Array, Sun Storage J4400 JBOD Array, Sun Storage J4200 JBOD Array. Each bullet (•) indicates support, in column order.

Sun Fire E2900 •

Sun Fire T1000 • • •

Sun Fire T2000 • • •

Sun Fire V125 •

Sun Fire V1280 •

Sun Fire V215 • • •

Sun Fire V245 • • •

Sun Fire V445 • • •

Sun Fire V480 •

Sun Fire V490 •

Sun Fire V880 •

Sun Fire V890 •

Sun Netra T2000 •

Sun Netra T5440 •

Sun SPARC Enterprise M4000 • • •

Sun SPARC Enterprise M5000 • • •

Sun SPARC Enterprise M8000 • • •

Sun SPARC Enterprise M9000 • • •

Sun SPARC Enterprise T1000 - See Sun Fire T1000

Sun SPARC Enterprise T2000 - See Sun Fire T2000


Sun SPARC Enterprise T5120 • • •

Sun SPARC Enterprise T5140 • • •

Sun SPARC Enterprise T5220 • • •

Sun SPARC Enterprise T5240 • • •

Sun SPARC Enterprise T5440 • •

External I/O Expansion Unit for Sun SPARC Enterprise T5120, T5140, T5220 and T5240 Servers

TABLE 5-7 SAS Storage for x64 Servers

Server, followed by array support columns (left to right): Sun StorageTek 2530 RAID Array, Sun Storage J4400 JBOD Array, Sun Storage J4200 JBOD Array. Each bullet (•) indicates support, in column order.

Sun Fire X2100 M2 • • •

Sun Fire X2200 M2 • • •

Sun Fire X4100 •

Sun Fire X4100 M2 • • •

Sun Fire X4140 • • •



Sun Fire X4150 • • •

Sun Fire X4170 • • •

Sun Fire X4200 •

Sun Fire X4200 M2 • • •

Sun Fire X4240 • • •

Sun Fire X4250 • • •

Sun Fire X4270 • • •

Sun Fire X4275 • • •

Sun Fire X4440 • • •

Sun Fire X4450 • • •

Sun Fire X4600 • • •

Sun Fire X4600 M2 • • •

Sun Netra X4200 M2 •

Sun Netra X4450 •

Supported Ethernet-Connected Storage Devices

Please see the indicated sections for the following products:

■ “Sun StorageTek 2510 RAID Array” on page 173

■ “Sun StorageTek 5000 NAS Appliance” on page 175

■ “Sun Storage 7000 Unified Storage System” on page 179

Third-Party Storage

Please see the following link for information on supported third-party storage: http://www.sun.com/software/cluster/osp/index.html


CHAPTER 6

Fibre Channel Storage Support

This chapter discusses Fibre Channel storage support in Sun Cluster, in both direct-attach and SAN configurations.

SAN Configuration Support

This section pertains to SAN-switch-connected shared storage support.

Server/Switch/Storage Support

Using supported storage switches, it is possible to connect supported Fibre Channel storage devices and supported servers in a Storage Area Network (SAN) configuration. These configurations are supported with Sun Cluster as long as they are within the range of supported devices and limitations listed below. Supported configurations consist of supported SAN HBAs, switches, and storage devices (all listed below) configured according to the SAN support rules (also listed below).

SAN Support Rules

In order to create a supported SAN connected cluster configuration, the following rules must be followed:

■ The HBA/SAN/Storage configuration must be listed in the HBA, Storage, and Switch sections below.

■ Cascading up to two layers of switches is supported.


■ The configuration must be supported by Network Storage. Please see the NWS “what works with what” matrices, particularly the latest SAN matrix or the SE 9900 series matrix (if you are using an SE 9900 series storage array). You can find these matrices at http://mysales.central/public/storage/products/matrix.html

Supported SAN Software

SAN software is supported as follows, unless noted otherwise by storage array, FC switch, HBA, server, or other documentation.

■ Solaris 10: With associated SAN related patches.

■ Solaris 9: Sun StorEdge SAN Foundation Software - release 4.4.15 is supported.

■ Solaris 8: Sun StorEdge SAN Foundation Software - release 4.4.12 is supported.

Supported SAN Storage

Please refer to the section on the storage device of interest for support details. In order to put together a supported configuration, please match a supported server/HBA combination with a supported SAN switch below and a supported SAN storage device from Table 5-1, “FC Storage for SPARC Servers,” on page 42. This configuration must adhere to the SAN support rules listed above. Once this combination is complete, please check it against the Network Storage “what works with what” matrices to ensure that both groups support the configuration. If they do, the configuration is supported; if not, additional testing will need to be done to enable this support.

Supported SAN Host Bus Adapters (HBAs)

To find out if a given server and HBA combination can be supported in a SAN environment, please see the specific storage device details in this chapter to check for the appropriate storage/HBA configuration. If a given combination of storage, server, and HBA is supported, then you may proceed to choosing a SAN switch. The HBAs supported in a Sun Cluster SAN are listed in the following sections. Refer to individual storage device sections for exceptions.

1Gb HBAs

■ SBus: (X)6757A Sun StorEdge SBus Dual FC Network Adapter

■ PCI:


■ (X)6727A Sun StorEdge PCI Dual Fibre Channel Network Adapter

■ (X)6799A Sun StorEdge PCI Single Fibre Channel Network Adapter

■ cPCI: (X)6748A Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter

2Gb HBAs

■ SBus: none

■ PCI:

■ SG-(X)PCI1FC-QF2 ((X)6767A) Sun StorEdge 2G FC PCI Single Fibre Channel HBA

■ SG-(X)PCI2FC-QF2 ((X)6768A) Sun StorEdge 2G FC PCI Dual Fibre Channel HBA

■ SG-(X)PCI1FC-JF2 JNI 2Gb PCI Single Port Fibre Channel HBA

■ SG-(X)PCI2FC-JF2 JNI 2Gb PCI Dual Port Fibre Channel HBA

■ SG-(X)PCI1FC-EM2 Emulex 2Gb PCI

■ SG-(X)PCI2FC-EM2 Emulex 2Gb PCI

■ SG-(X)PCI1FC-QL2 Sun StorEdge 2G FC PCI Single Fibre Channel HBA

■ SG-(X)PCI2FC-QF2-Z Sun StorEdge 2G FC PCI Dual Fibre Channel HBA

■ cPCI: none

4Gb HBAs■ SBus: none

■ PCI:

■ SG-(X)PCI1FC-QF4 Sun StorEdge 4G FC PCI Single Fibre Channel Network Adapter

■ SG-(X)PCI2FC-QF4 Sun StorEdge 4G FC PCI Dual Fibre Channel Network Adapter

■ SG-(X)PCI1FC-EM4 Emulex Single Port 4Gb Fibre Channel HBA

■ SG-(X)PCI2FC-EM4 Emulex Dual Port 4Gb Fibre Channel HBA

■ cPCI: none

■ PCI-E

■ SG-(X)PCIE1FC-QF4 Sun StorEdge 4G FC PCI-E Single Fibre Channel Network Adapter

■ SG-(X)PCIE2FC-QF4 Sun StorEdge 4Gb PCI-E Dual Port Fibre Channel HBA

■ SG-(X)PCIE1FC-EM4 Emulex 4Gb Single Port PCI-E

■ SG-(X)PCIE2FC-EM4 Emulex 4Gb Dual Port PCI-E


■ PCI-E ExpressModules

■ SG-XPCIE2FC-QB4-Z

■ SG-XPCIE2FC-EB4-Z

■ SG-XPCIE2FCGBE-Q-Z

■ SG-XPCIE2FCGBE-E-Z

■ Sun Blade 8000/8000 P NEM

■ SG-XPCIE20FC-NEM-Z Sun StorageTek 4Gb FC NEM 20-Port HBA

■ Sun Netra CT 900

■ SG-XPCIE2FC-ATCA-Z Sun StorageTek 4Gb Fibre Channel ATCA HBA

■ XCP32X0-RTM-FC-Z Sun Netra CP3200 ARTM-FC

8Gb HBAs

■ PCI-E

■ SG-XPCIE1FC-EM8-Z

■ SG-XPCIE2FC-EM8-Z

■ SG-XPCIE1FC-QF8-Z

■ SG-XPCIE2FC-QF8-Z

Supported SAN Switches

The following switches are supported in a Sun Cluster SAN environment. In order to put together a supported configuration, please match a supported server/HBA combination with a switch from the following list and a supported SAN storage device (listed below). This configuration must adhere to the SAN support rules listed above. Once this combination is complete, please check it against the Network Storage “what works with what” matrices to ensure that both groups support the configuration. If they do, the configuration is supported; if not, additional testing will need to be done to enable this support.

■ Sun 8 and 16 port 1Gb switches

■ Sun 8, 16 and 64 port 2Gb switches

■ Brocade 200E, 300, 310 (1) (4G, 8G bps), 2800, 3200, 3250, 3800, 3850, 3900, 4100, 4900, 5000, 6400, 12000, 24000, 48000, 5100, 5300, DCX (4G, 8G bps), DCX-4S (4G, 8G bps) switches

■ McData 4300, 4400, 4500, 4700, 6064, 6140, Intrepid 10000 switches

■ QLogic 5200, 5202, 5600, 5602, 5802V (4G and 8G), 9100, 9200 switches

(1) Does not support distance solutions, e.g., campus or metro cluster.

■ Cisco MDS 9020, 9120, 9124, 9134, 9140, 9216A, 9216i, 9222i (2), 9506, 9509, 9513 switches

(2) For the 9222i, iSCSI/FCIP options are not yet supported as of Dec ’07.

Please see SAN WWWW for any possible constraints or limitations.

Sun StorEdge A3500FC System

SE A3500FC Configuration Rules

Daisy-chaining of the controller modules is not supported.

Node Connectivity Limits

SE A3500FC systems can connect to 2 cluster nodes.

Hub Support

Hubs are required to connect hosts to the A3500FC in cluster configurations. An A3500FC controller module is connected to two hosts via hubs. Each StorEdge A3500FC controller module is connected to two hubs, and both hosts are connected to both hubs. The two hubs must be connected to different host bus adapters on each node. Figure 6-1 on page 66 shows how to configure an A3500FC unit as shared storage.

Up to four A3500FC controller modules can be connected to a hub. You can connect controller modules in the same or separate cabinets.

RAID Requirements

An SE A3500FC controller module with redundant controllers provides appropriate hardware redundancy. An SE A3500FC controller also has hardware RAID capabilities built in. Hence, software mirroring of data is not required.

However, a software volume manager can be used for managing the data. Also, a cluster configuration with an SE A3500FC array with a single controller module is supported and requires volume management or software mirroring.


Multipathing

Only the Redundant Disk Array Controller (RDAC) driver from Sun StorEdge RAID Manager 6.22 is supported.

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the A3500FC are supported.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

Sharing SE A3500FC Systems

There are no Sun Cluster 3 specific requirements.

Quorum Devices

Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices.

Campus Cluster

Campus clusters are not supported.


SE A3500FC Support Matrix

To select a supported configuration, first check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your server and storage combination is supported. If it is, select your host adapters from TABLE 6-1.

TABLE 6-1 SE A3500FC Support Matrix

■ Sun Enterprise 3x00, 4x00, 5x00, 6x00: onboard FC-AL socket; FC-AL SBus Host Adapter (part number 6730A)

■ Sun Enterprise 10K: FC-AL SBus Host Adapter (part number 6730A)


SE A3500FC Other Components


TABLE 6-2 SE A3500FC Supported Components

Component Part Number

FC controller module 6538A

FC-AL seven-port Hub 6732A

FC-AL GBIC 6731A

2-meter, fiber-optic cable 973A

15-meter, fiber-optic cable 978A

SE A3500FC Sample Configuration Diagrams

FIGURE 6-1 Sun StorEdge A3500FC as Shared Storage (diagram: two nodes, each with two host adapters, connected through two seven-port hubs to the FC-AL ports of controllers A and B of the A3500FC)

Sun StorEdge A5x00 Array


This section covers Sun Cluster requirements when configured with the Sun StorEdge A5000, A5100, or A5200.

SE A5x00 Configuration Rules

Daisy-chaining of A5x00s is not supported.

Both full- and split-loops are supported.

Node Connectivity Limits

SE A5x00 arrays can connect to 2 cluster nodes.

Switch and Hub Support

The Sun StorEdge FC network switches (6746A, SG-XSW16-32P) are supported in python mode. The SE A5x00 arrays can also be directly attached without hubs or switches.

RAID Requirements

In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required. Mirroring of data between the two halves of the same A5x00 unit is not supported.
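A minimal host-based mirroring sketch with Solaris Volume Manager follows; the diskset ds1 and the DID devices d4 (first A5x00) and d9 (second A5x00) are hypothetical names.

    # Build each submirror from a disk in a different A5x00 enclosure:
    node1# metainit -s ds1 d11 1 1 /dev/did/rdsk/d4s0
    node1# metainit -s ds1 d12 1 1 /dev/did/rdsk/d9s0
    # Create a one-way mirror, then attach the second submirror:
    node1# metainit -s ds1 d10 -m d11
    node1# metattach -s ds1 d10 d12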

Multipathing

Multipathing (for example, using DMP, MPxIO, etc.) is not supported with A5x00s.

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the A5x00 are supported.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.


Sharing SE A5x00 Arrays

Box sharing is not supported.

SE A5x00 Support Matrix

To select a supported configuration, first check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your server and storage combination is supported. If it is, select your host adapters from TABLE 6-3, TABLE 6-4, or TABLE 6-5.

TABLE 6-3 Sun Cluster 3 and SE A5000 Support Matrix

■ Sun Enterprise 220R, 250, 420R, 450: HBA 6729A; Direct-Attached (full-loop only, each host must be on a different loop) or Hub 6732A (full-loop only)

■ Sun Enterprise 3x00-6x00: onboard FC socket, 6730A, or 6757A for Direct-Attached or Hub (6732A); 6757A for Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Enterprise 10K: 6730A or 6757A for Direct-Attached or Hub (6732A); 6757A for Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Fire 280R, 4800, 4810, 6800: HBAs 6799A, 6727A; Direct-Attached, Hub (6732A), or Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Fire 3800: HBA 6748A; Direct-Attached, Hub (6732A), or Switch (6746A, SG-XSW16-32P); full-loop, split-loop


TABLE 6-4 Sun Cluster 3 and SE A5100 Support Matrix

■ Netra 1280; Sun Fire 280R, V480/V490, V880/V890, V1280, 4800, 4810, 6800: HBAs 6799A, 6727A; Direct-Attached, Hub (6732A), or Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Enterprise 220R, 250, 420R, 450: HBA 6729A; Direct-Attached (full-loop only, each host must be on a different loop) or Hub 6732A (full-loop only)

■ Sun Enterprise 3x00-6x00: onboard FC socket, 6730A, or 6757A for Direct-Attached or Hub (6732A); 6757A for Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Enterprise 10K: 6730A or 6757A for Direct-Attached or Hub (6732A); 6757A for Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Fire 3800: HBA 6748A; Direct-Attached, Hub (6732A), or Switch (6746A, SG-XSW16-32P); full-loop, split-loop

TABLE 6-5 Sun Cluster 3 and SE A5200 Support Matrix

■ Netra 1280; Sun Fire 280R, V480/V490, V880/V890, V1280, 4800, 4810, 6800, 12K/15K: HBAs 6799A, 6727A; Direct-Attached, Hub (6732A), or Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Enterprise 220R, 250, 420R, 450: HBA 6729A; Direct-Attached (full-loop only, each host must be on a different loop) or Hub 6732A (full-loop only)

■ Sun Enterprise 3x00-6x00: onboard FC socket, 6730A, or 6757A for Direct-Attached or Hub (6732A); 6757A for Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Enterprise 10K: 6730A or 6757A for Direct-Attached or Hub (6732A); 6757A for Switch (6746A, SG-XSW16-32P); full-loop, split-loop

■ Sun Fire 3800: HBA 6748A; Direct-Attached, Hub (6732A), or Switch (6746A, SG-XSW16-32P); full-loop, split-loop

SE A5x00 Other Components

The part numbers referenced in the support matrix tables are:

TABLE 6-6 SE A5x00 Part Number Descriptions

Part # Description

6729A FC-100 Host Adapter

6730A FC-AL SBus Host Adapter

6748A Sun StorEdge cPCI Dual FC Network Adapter

6799A Sun StorEdge PCI Single FC Network Adapter

6727A Sun StorEdge PCI Dual FC Network Adapter

6757A Sun StorEdge SBus Dual FC Network Adapter

6732A FC-AL seven-port Hub

6746A Sun StorEdge Network FC Switch -8

SG-XSW16-32P Sun StorEdge Network FC Switch -16

Other components supported with SE A5x00 are listed below:

TABLE 6-7 SE A5x00 Supported Components

Component Part Number

FC-AL GBIC 6731A

Interface Board 6734A

2-meter, fiber-optic cable 973A

15-meter, fiber-optic cable 978A

5-meter, fiber-optic cable 9715A

SE A5x00 Sample Configuration Diagrams

Some sample configurations for connecting SE A5x00 as shared storage are:

■ Direct-attached, full-loop A5x00 configuration: Figure 6-2 on page 72.

■ Direct-attached, split-loop A5x00 configuration: Figure 6-3 on page 72.

■ Hub-attached, full-loop, single-loop A5x00 configuration: Figure 6-4 on page 73.

■ Hub-attached, full-loop, dual-loop A5x00 configuration: Figure 6-5 on page 74.


FIGURE 6-2 Direct-Attached, Full-Loop A5x00 Configuration

FIGURE 6-3 Direct-Attached, Split-Loop A5x00 Configuration


FIGURE 6-4 Hub-Attached, Full-Loop, Single-Loop A5x00 Configuration


FIGURE 6-5 Hub-Attached, Full-Loop, Dual-Loop A5x00 Configuration

Sun StorEdge T3 Array (Single Brick)

SE T3 Single Brick Configuration Rules

Node Connectivity Limits

T3A arrays can connect to two nodes. T3B arrays can connect to up to 4 nodes.


Hub and Switch Support

Hubs or switches are required to connect a T3 brick to multiple nodes in the cluster.

If a T3 is connected to more than two nodes, switches are mandatory.

RAID Requirements

In order to ensure data redundancy and hardware redundancy, host-based mirroring between two arrays is required.

Multipathing

Multipathing (for example, using DMP, MPxIO, etc.) is not supported with T3 single-brick configurations.

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the T3 are supported.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

Sharing T3 Single Bricks with Multiple Clusters/Non-Clustered Nodes

Sun Cluster 3 requires exclusive access to the LUNs that store its shared data. The Sun StorEdge T3 array supports LUN masking and LUN mapping with FW 2.1. With this feature, a LUN can be assigned exclusively to a cluster of nodes, so a Sun StorEdge T3 storage device can be shared among multiple clusters and non-clustered hosts.

SE T3 Single Brick Support Matrix and Exceptions

To determine whether your configuration is supported:


1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-8 or Table 6-9 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters listed in the SAN WWWW (http://mysales.central/public/storage/products/matrix.html).

TABLE 6-8 T3A Array/Server Combinations with Limited HBA Support

■ Netra 20, Netra 1290: 6727A

■ Netra 440, Netra 1280; Sun Fire V240, V250, 280R, V440, V480, V880, V1280, E2900, E4900, E6900: 6799A, 6727A

■ Sun Enterprise 220R, 250, 420R, 450: 6799A, 6727A

■ Sun Enterprise 3x00-6x00: onboard FCAL socket (a), 6730A (a), 6757A

■ Sun Enterprise 10K: 6730A (a), 6757A

(a) Supported in arbitrated loop configurations only (no SAN configurations).

TABLE 6-9 T3B Array/Server Combinations with Limited HBA Support

■ Netra 20: 6727A

■ Sun Enterprise 3x00-6x00: onboard FCAL socket (a), 6730A (a)

■ Sun Enterprise 10K: 6730A (a), 6757A

(a) Supported in arbitrated loop configurations only (no SAN configurations).


SE T3 Single Brick Other Components

The part numbers referenced in the support matrix tables above are:

TABLE 6-10 SE T3 Single Brick Part Number Descriptions

Part Number Description

6730A FC-AL SBus Host Adapter

6748A Sun StorEdge cPCI Dual FC Network Adapter

6799A Sun StorEdge PCI Single FC Network Adapter

6727A Sun StorEdge PCI Dual FC Network Adapter

6757A Sun StorEdge SBus Dual FC Network Adapter

6732A FC-AL seven-port Hub

6746A Sun StorEdge Network FC Switch -8 (1GB)

SG-XSW16-32P Sun StorEdge Network FC Switch -16 (1GB)

Brocade 2800 1GB Brocade 2800 Switch

SG-XSW8-2GB Sun StorEdge Network FC Switch -8 (2GB)

SG-XSW16-2GB Sun StorEdge Network FC Switch -16 (2GB)

Brocade 3800 2GB Brocade 3800 Switch

Other components supported with T3 are listed below:

TABLE 6-11 SE T3 Single Brick Supported Components

Component Part # of the Component

FC-AL GBIC 6731A

2-meter, fiber-optic cable 973A

15-meter, fiber-optic cable 978A

5-meter, fiber-optic cable 9715A

SE T3 Single Brick Sample Configuration Diagrams

The figure below shows how to configure a T3 in a single-brick configuration as shared storage.

FIGURE 6-6 Sun StorEdge T3 in Single-Brick Configuration as Shared Storage

Sun StorEdge T3 Array (Partner Pair)

SE T3 Partner Pair Configuration Rules

Node Connectivity Limits

T3 (T3A) arrays can connect to two nodes. T3+ (T3B) arrays can connect to up to 4 nodes.

Hub and Switch Support

Hubs or switches are required when connecting to 2 nodes. FC switches are required when connecting to more than 2 nodes.

RAID Requirements

A T3 partner pair has full hardware redundancy built in. Hence, using hardware RAID 5 for data availability is supported. This automatically implies that a cluster configuration with a single T3 partner pair is supported.


Multipathing

Use of Sun StorEdge Traffic Manager (MPxIO) is required for dual paths from the servers to the T3 partner-pair arrays. No other multipathing solution (for example, Veritas DMP) is supported.

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the T3 are supported.

Software, Firmware, and Patches

T3 partner pair support requires Sun Cluster 3.0 7/01 (or later) and Solaris 8 7/01 (or later).

FW 2.1 is required to use LUN masking and LUN mapping.

Sharing T3 Partner Pairs with Multiple Clusters/Non-Clustered Nodes

Sun Cluster 3 requires exclusive access to the LUNs that store its shared data. The Sun StorEdge T3 array supports LUN masking and LUN mapping with FW 2.1. With this feature, a LUN can be assigned exclusively to a cluster of nodes, so a Sun StorEdge T3 storage device can be shared among multiple clusters and non-clustered nodes.

SE T3 Partner Pair Support Matrix

To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Refer to the SAN WWWW (http://mysales.central/public/storage/products/matrix.html) for additional information and restrictions.
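For illustration only, the procedure above can be encoded as a lookup; the toy tables below stand in for Table 5-1 and the supported HBA list (all entries hypothetical):

# Sketch of the support-matrix check. Real answers come from the referenced
# tables and the SAN WWWW matrix, not from this code.
SUPPORTED_COMBOS = {("Sun Fire V480", "SE T3 partner pair")}   # stand-in for Table 5-1
SUPPORTED_HBAS = {"6757A", "6799A", "6727A"}                   # stand-in for the HBA list

def is_supported(server, storage, hba):
    if (server, storage) not in SUPPORTED_COMBOS:
        return False, "server/storage combination not in Table 5-1"
    if hba not in SUPPORTED_HBAS:
        return False, "HBA not in the supported SAN HBA list"
    return True, "also check the SAN WWWW matrix for remaining restrictions"

print(is_supported("Sun Fire V480", "SE T3 partner pair", "6757A"))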


SE T3 Partner Pair Other Components

The following table lists part numbers of components that you might use in your cluster configuration.

Other components supported with T3 are listed below:

TABLE 6-12 SE T3 Partner-Pair Part Number Descriptions

Part Number Description

6730A FC-AL SBus Host Adapter

6748A Sun StorEdge cPCI Dual FC Network Adapter

6799A Sun StorEdge PCI Single FC Network Adapter

6727A Sun StorEdge PCI Dual FC Network Adapter

6757A Sun StorEdge SBus Dual FC Network Adapter

6732A FC-AL seven-port Hub

6767A 2GB Sun StorEdge PCI Dual FC Network Adapter

6768A 2GB Sun StorEdge PCI Single FC Network Adapter

6746A Sun StorEdge Network FC Switch -8

SG-XSW16-32P Sun StorEdge Network FC Switch -16

Brocade 2800 1GB Brocade 2800 Switch

SG-XSW8-2GB Sun StorEdge Network FC Switch -8 (2GB)

SG-XSW16-2GB Sun StorEdge Network FC Switch -16 (2GB)

Brocade 3800 2GB Brocade 3800 Switch

TABLE 6-13 SE T3 Partner-Pair Supported Components

Component Part # of the Component

FC-AL GBIC 6731A

2-meter, fiber-optic cable 973A

15-meter, fiber-optic cable 978A

5-meter, fiber-optic cable 9715A


SE T3 Partner Pair Sample Configuration Diagrams

The following illustration shows how to configure 2 T3 partner pairs as shared storage.

FIGURE 6-7 Sun StorEdge T3 Partner Pair as Shared Storage (two nodes, each with two host adapters, connected through two switches to two partner pairs holding RAID 5 data)

Sun StorageTek 2540 RAID Array

ST 2540 Configuration Rules

■ Sun Cluster supports both Simplex (ST 2540 with 1x controller) and Duplex (ST 2540 with 2x controllers) configurations.



Node Connectivity Limits

■ A maximum of 4 nodes can be connected to any one LUN using DAS cabling, or 8 nodes when connected through a SAN.

Hubs and Switches

■ FC switches are supported. The ST 2540 can also be directly attached.

RAID Requirements

■ Simplex configuration:

■ Two ST 2540 arrays are required.

■ Data must be mirrored across arrays using volume manager software (host-based mirroring).

■ Duplex configuration:

■ A single ST 2540 array is supported with properly configured dual controllers, multipathing, and hardware RAID (see the sketch after this list).
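The simplex/duplex distinction above reduces to a short validation rule. The following is a minimal illustrative sketch in Python; the function and parameter names are invented, not part of Sun Cluster or the array firmware:

# Hypothetical check of the ST 2540 RAID rules described above.
def st2540_config_ok(controllers_per_array, array_count,
                     host_based_mirroring, mpxio_enabled, hardware_raid):
    if controllers_per_array == 1:       # Simplex: two arrays, host-based mirror
        return array_count >= 2 and host_based_mirroring
    if controllers_per_array == 2:       # Duplex: a single array is enough
        return array_count >= 1 and mpxio_enabled and hardware_raid
    return False

print(st2540_config_ok(1, 2, True, False, False))   # simplex pair, mirrored: True
print(st2540_config_ok(2, 1, False, True, True))    # single duplex array: True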

Multipathing

■ Sun StorEdge Traffic Manager (MPxIO) is required in the Duplex configuration (ST 2540 with 2x controllers).

ST 2540 Volume Manager Support

■ There are no Sun Cluster specific requirements. See the base product documentation regarding volume manager support.

Software, Firmware, and Patches

■ Please see the ST 2540 release notes.

Sharing ST 2540 Arrays

■ LUN masking enables sharing across multiple platforms. See the product documentation for further details.


ST 2540 Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-14 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters in the “Server Search” under the “Searches” tab of the Interop Tool, https://interop.central.sun.com/interop/interop

Sun StorEdge 3510 RAID Array

This section describes the configuration rules for using Sun StorEdge 3510 RAID. Only SE 3510 RAID units can be used as shared storage devices with Sun Cluster 3. SE 3510 JBOD units can be attached to SE 3510 RAID units for additional storage, but cannot be used independently of the SE 3510 RAID units in a Sun Cluster 3 configuration.

SE 3510 RAID Configuration Rules

■ Both AC and DC power supplies are supported.

■ Up to 8 additional 3510 JBOD units can be connected to an existing clustered 3510 RAID device. SE 3510 JBOD units are NOT supported in a clustered configuration unless they are connected to an SE 3510 RAID unit.

TABLE 6-14 ST 2540 Array/Server Combinations with Limited HBA Support

Server Host Adapter

Netra CT 900 CP3060 SG-XPCIE2FC-ATCA-Z

Netra CT 900 CP3260 XCP32X0-RTM-FC-Z

Netra T2000 SG-XPCI2FC-QF4

Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4, SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4


■ Logical Volumes are NOT supported.

■ Connecting up to eight initiators to one channel is supported when using the SAN 4.3 (or later) drivers. The SE 3510 has a total of 8 host ports set up in pairs (4 channels). Driver versions predating SAN 4.3 limit the number of initiators connected to a single SE 3510 channel to 1.

■ A single SE 3510 can be used for shared storage as long as it is configured with dual controllers.

■ A maximum of 8 target IDs can be configured per channel (256 LUNs). This restriction is removed when running SAN 4.3 and 3.27r (or later) controller firmware, as sketched below.
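As an illustration, the channel limits in the rules above can be modeled as follows (hypothetical Python; the firmware string comparison is deliberately simplified):

# SAN 4.3 or later allows 8 initiators per channel; older drivers allow 1.
def max_initiators_per_channel(san_driver):
    major, minor = (int(x) for x in san_driver.split("."))
    return 8 if (major, minor) >= (4, 3) else 1

# The 8-target-ID (256 LUN) cap is lifted with SAN 4.3 plus 3.27r firmware.
def target_id_cap(san_driver, controller_fw):
    lifted = (max_initiators_per_channel(san_driver) == 8
              and controller_fw >= "3.27r")     # naive version compare
    return None if lifted else 8                # None: no fixed cap

print(max_initiators_per_channel("4.3"), max_initiators_per_channel("4.2"))  # 8 1
print(target_id_cap("4.3", "3.27r"))   # None (restriction removed)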

Node Connectivity Limits

The SE 3510 RAID array can connect to up to 8 nodes.

Hub and Switch Support

FC switches are supported. SE 3510 RAID arrays can also be directly attached.

RAID Requirements

■ SE 3510 RAID arrays can be used without a software volume manager if you have correctly configured dual controllers, multipathing, and hardware RAID.

■ A single 3510 is supported with properly configured dual controllers, multipathing, and hardware RAID.

■ Single controller SE 3510 RAID units are supported as long as they are mirrored to another array.

■ Hardware RAID is supported with the SE 3510 RAID array, with or without software mirroring.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required with dual-controller SE 3510 configurations.


Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the SE 3510 RAID array.

Software, Firmware, and Patches

SE 3510 RAID support requires Sun StorEdge SAN Foundation 4.2 software and firmware patch ID #113723-03 or later. The latest supported firmware is 4.21.

Sharing an SE 3510 RAID Array

With the use of LUN masking/mapping, several clustered and non-clustered devices can share an SE 3510.

SE 3510 RAID Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-15 to determine if there is limited HBA support.

TABLE 6-15 SE 3510 Array/Server Combinations with Limited HBA Support

Server Host Adapter

Netra CT 900 CP3010 FC2312-PMC-FF (an SBS PCI mezzanine card)

Netra CT 900 CP3060 SB-AMC55 (a) (a SANBlaze advanced mezzanine card), SG-XPCIE2FC-ATCA-Z

Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4, SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4

Sun Enterprise 3500-6500, E10k 6757A

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4


4. If HBA support is not limited, you can use your server and storage combination with host adapters listed at the SAN WWWW (http://mysales.central/public/storage/products/matrix.html). Additionally, use the “Server Search” under the “Searches” tab of the Interop Tool, https://interop.central.sun.com/interop/interop

In FIGURE 6-9, the same set of LUNs is mapped to channels 0 and 5; a different set ofLUNs is mapped to channels 1 and 4.

a. The Netra CT 900 ATCA Blade Server supports any ATCA card that complies with PICMG 3.x specifications. The third-party HBA has been tested with the Sun Netra CT 900 using the CP3060 blade under Sun Cluster, but this HBA is not a Sun product and thus not supported by Sun. A Sun-branded HBA is scheduled to be qualified and supported in the Q1CY08 time frame.


SE 3510 RAID Sample Configuration Diagrams

FIGURE 6-8 Direct-Attached, 4-Node, Dual-Controller SE 3510 RAID Configuration

FIGURE 6-9 Switch-Attached, Dual-Controller SE 3510 RAID Configuration


Sun StorEdge 3511 RAID Array

This section describes the configuration rules for using Sun StorEdge 3511 RAID.

Only SE 3511 RAID units can be used as shared storage devices with Sun Cluster 3. SE 3511 JBOD units can be attached to SE 3511 RAID units for additional storage, but cannot be used independently of the SE 3511 RAID units in a Sun Cluster 3 configuration. Please read the recommended uses and limitations of the SE 3511 in the SE 3511 base product documentation.

SE 3511 RAID Configuration Rules

■ Both AC and DC power supplies are supported.

■ Up to 5 SE 3511 JBOD arrays can be connected to an SE 3511 RAID array in a Sun Cluster configuration.

■ It is highly recommended that the 4.11 or later array firmware be used with the SE 3511. In particular, SE 3511 firmware releases earlier than 4.11 are exposed to CR 5059398, which can lead to data corruption. In general, it is recommended that the latest supported firmware level be installed to benefit from the available fixes.

■ Logical Volumes are NOT supported.

Node Connectivity Limits

The SE 3511 RAID array can connect to up to 8 nodes. A maximum of 8 nodes can directly connect to a LUN on an SE 3511 RAID array, and a maximum of 8 nodes can be connected through a switch to a LUN on an SE 3511 RAID array.

Hub and Switch Support

FC switches are supported. The SE 3511 array can also be directly attached, without switches.

RAID Requirements

■ SE 3511 arrays can be used without a software volume manager with properly configured dual controllers, multipathing, and hardware RAID.

■ A single SE 3511 array is supported with properly configured dual controllers, multipathing, and hardware RAID.


■ Single controller SE 3511 RAID arrays are supported as long as they are mirrored to another array.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required with dual-controller SE 3511 configurations.

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the SE 3511 RAID array.

Software, Firmware, and Patches

SE 3511 RAID array support requires Sun StorEdge SAN Foundation 4.4 (or later) software. The latest supported firmware is 4.21.

Sharing an SE 3511 RAID Array

Using LUN masking/mapping, several clustered and non-clustered devices can share an SE 3511 RAID array.

SE 3511 RAID Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.


3. Check Table 6-16 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters listed at the SAN WWWW (http://mysales.central/public/storage/products/matrix.html). Additionally, use the “Server Search” under the “Searches” tab of the Interop Tool, https://interop.central.sun.com/interop/interop

Sun StorEdge 3910/3960 System

This section describes the configuration rules for using Sun StorEdge 3910/3960 as shared storage.

SE 3910/3960 Configuration Rules

Node Connectivity Limits

SE 3910/3960 systems can connect to up to 4 nodes.

TABLE 6-16 SE 3511 Array/Server Combinations with Limited HBA Support

Server Host Adapter

Netra CT 900 CP3010 FC2312-PMC-FF (an SBS PCI mezzanine card)

Netra CT 900 CP3060 SB-AMC55 (a) (a SANBlaze advanced mezzanine card), SG-XPCIE2FC-ATCA-Z

Sun Enterprise 3500-6500, E10k 6757A

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4

a. The Netra CT 900 ATCA Blade Server supports any ATCA card that complies with PICMG 3.x specifications. The third-party HBA has been tested with the Sun Netra CT 900 using the CP3060 blade under Sun Cluster, but this HBA is not a Sun product and thus not supported by Sun. A Sun-branded HBA is scheduled to be qualified and supported in the Q1CY08 time frame.


Hub and Switch Support

FC switches are supported. SE 3910/3960 systems can also be directly attached, without switches.

RAID Requirements

■ SE 3910/3960 systems can be used without software volume management with properly configured dual controllers, multipathing, and hardware RAID.

■ T3 single bricks require software mirroring.

Multipathing

SE 3910/3960 systems require Sun StorEdge Traffic Manager (MPxIO).

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the SE 3910/3960.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

Sharing an SE 3910/3960 System

An SE 3910/3960 system can be shared among multiple clustered and non-clustered nodes. If the 3900 series system uses a T3 firmware older than 2.1, each cluster connection should be in its own zone. If the 3900 series system uses T3 FW2.1 (or later), the LUN masking and LUN mapping capabilities can be used to provide exclusive access.
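Illustratively, the choice of isolation mechanism reduces to a firmware check (hypothetical Python sketch, not a Sun tool):

# Pre-2.1 T3 firmware: put each cluster connection in its own zone.
# FW2.1 or later: LUN masking and LUN mapping can provide exclusive access.
def isolation_method(t3_firmware):
    return ("zone per cluster connection" if t3_firmware < (2, 1)
            else "LUN masking/mapping")

print(isolation_method((1, 18)))   # -> zone per cluster connection
print(isolation_method((2, 1)))    # -> LUN masking/mapping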

SE 3910/3960 Support Matrix

To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your chosen server/storage combination is supported with Sun Cluster.


2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Refer to the SAN WWWW (http://mysales.central/public/storage/products/matrix.html) for additional information and restrictions.

Sun StorEdge 6120 Array

SE 6120 Configuration Rules

There is a maximum limit of 64 LUNs for any 6120/6130 cluster configuration. Support for more than 16 LUNs requires SE 6120 firmware version 3.1 or higher.
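These two limits can be sketched as a single check (illustrative Python; names invented):

# Hard ceiling of 64 LUNs per 6120/6130 cluster configuration; more than
# 16 LUNs additionally requires firmware 3.1 or higher.
def se6120_lun_count_ok(lun_count, firmware):
    if lun_count > 64:
        return False
    if lun_count > 16:
        return firmware >= (3, 1)
    return True

print(se6120_lun_count_ok(32, (3, 1)))   # True
print(se6120_lun_count_ok(32, (3, 0)))   # False
print(se6120_lun_count_ok(70, (3, 1)))   # False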

Node Connectivity Limits

The SE 6120 can connect to up to 8 nodes.

A maximum of 4 nodes can be connected to any one LUN.

Hubs and Switches

FC switches are required.

RAID Requirements

■ SE 6120 arrays are supported without software volume management, if you have properly configured 6120 partner pairs, multipathing, and hardware RAID.

■ A single 6120 partner pair is supported with properly configured multipathing and hardware RAID.

■ 6120 single bricks require software mirroring.

■ RAID 5 is supported for use with SE 6120 partner pair configurations.


Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required with SE 6120 partner pair configurations.

Volume Manager Support

■ All volume manager releases supported by Sun Cluster 3 and the SE 6120.

■ SE 6120 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches

SE 6120 firmware version 3.1 or higher is required to support more than 16 LUNs.

Sharing SE 6120 Arrays

Using LUN masking, several clustered and non-clustered nodes can share an SE 6120.

SE 6120 Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-17 to determine if there is limited HBA support.

TABLE 6-17 SE 6120/6130 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20 6799A, 6727A

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4


4. If HBA support is not limited, you can use your server and storage combination with host adapters listed in the SAN WWWW (http://mysales.central/public/storage/products/matrix.html).

Sun StorEdge 6130 Array

SE 6130 Configuration Rules

Node Connectivity Limits

■ The maximum limit of 64 LUNs does not apply to the 6130 cluster configuration.

■ The SE 6130 array can connect to up to 8 nodes. However, the SE 6130 is not compatible with RAC or CVM in configurations of more than 4 nodes.

■ The SE 6130 supports up to 8 nodes per LUN.

Hubs and Switches

■ FC switches are required if more than two nodes are connected to the same SE 6130. The SE 6130 can be direct attached if it is connected to only two nodes.

RAID Requirements

■ SE 6130 arrays are supported without software volume management, if you have a properly configured 6130, multipathing, and hardware RAID.

■ A single SE 6130 is supported with properly configured multipathing and hardware RAID.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required with the SE 6130.

SE 6130 Volume Manager Support

■ All volume manager releases supported by Sun Cluster 3 and the SE 6130.


■ SE 6130 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches

■ Please see the SE 6130 release notes.

■ SE 6130 Updates 1, 2, and 3 are supported.

Sharing SE 6130 Arrays

LUN masking enables sharing among clustered and non-clustered systems.

SE 6130 Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-18 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters listed in the SAN WWWW (http://mysales.central/public/storage/products/matrix.html).

SE 6130 Sample Configuration Diagrams

The figures that follow show how to configure an SE 6130 as shared storage.

TABLE 6-18 SE 6120/6130 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20 6799A, 6727A

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4


FIGURE 6-10 Sun StorEdge 6130 as Direct-Attached Storage

FIGURE 6-11 Sun StorEdge 6130 as SAN Storage


Sun StorageTek 6140 Array

ST 6140 Configuration Rules

Node Connectivity Limits

■ The ST 6140 array can connect to up to 8 nodes, including Oracle RAC configurations.

■ The ST 6140 supports up to 8 nodes per LUN.

Hubs and Switches

■ FC switches are required if more than four nodes are connected to the same ST 6140. The ST 6140 can be direct attached if it is connected to four or fewer nodes, as sketched below.
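The attachment rule can be sketched as follows (hypothetical Python, illustration only):

# ST 6140: direct attach is allowed up to four nodes; five to eight nodes
# require FC switches; more than eight nodes is out of range.
def st6140_topology(node_count):
    if node_count > 8:
        raise ValueError("ST 6140 supports at most 8 nodes")
    return ("direct attach or FC switches" if node_count <= 4
            else "FC switches required")

for n in (2, 4, 6):
    print(n, st6140_topology(n))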

RAID Requirements

■ ST 6140 arrays are supported without software volume management, if you have a properly configured ST 6140, multipathing, and hardware RAID.

■ A single ST 6140 is supported with properly configured multipathing and hardware RAID.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required with the ST 6140.

ST 6140 Volume Manager Support

■ There are no Sun Cluster specific requirements. See the base product documentation regarding volume manager support.

■ ST 6140 arrays are supported without software volume management with properly configured multipathing and hardware RAID.


Software, Firmware, and Patches

■ Please see the ST 6140 release notes.

Sharing ST 6140 Arrays

LUN masking enables sharing among clustered and non-clustered systems.

ST 6140 Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42 and Table 5-2, “FC Storage for x64 Servers,” on page 46 to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-19 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters listed at the SAN WWWW (http://mysales.central/public/storage/products/matrix.html). Additionally, use the “Server Search” under the “Searches” tab of the Interop Tool, https://interop.central.sun.com/interop/interop

TABLE 6-19 ST 6140 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra CT 900 CP3060 SG-XPCIE2FC-ATCA-Z

Netra CT 900 CP3260 XCP32X0-RTM-FC-Z

Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4, SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4


Sun Storage 6180 Array

SS 6180 Configuration Rules

Node Connectivity Limits

■ The SS 6180 array can connect to up to 8 nodes, including Oracle RAC configurations.

■ The SS 6180 supports up to 8 nodes per LUN.

Hubs and Switches

■ The SS 6180 may be direct attached when connected to up to four nodes.

■ FC switches are required if more than four nodes are connected to the SS 6180.

RAID Requirements

■ SS 6180 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

■ A single SS 6180 is supported with properly configured multipathing and hardware RAID.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required with the SS 6180.

SS 6180 Volume Manager Support

■ There are no Sun Cluster specific requirements. See the base product documentation regarding volume manager support.

■ SS 6180 arrays are supported without software volume management with properly configured multipathing and hardware RAID.


Software, Firmware, and Patches

■ Supported starting with Sun Cluster 3.1u4.

■ For Solaris 9 and Solaris 10, see the SS 6180 release notes for details.

Sharing SS 6180 Arrays

LUN masking enables sharing among clustered and non-clustered systems.

SS 6180 Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42 and Table 5-2, “FC Storage for x64 Servers,” on page 46 to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-20 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination as listed in the “Server Search” under the “Searches” tab of the Interop Tool, https://interop.central.sun.com/interop/interop

Sun StorEdge 6320 System

This section describes the configuration rules for using Sun StorEdge 6320 as shared storage.

TABLE 6-20 SS 6180 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

No limited HBA support at this time


SE 6320 Configuration Rules

There is a maximum limit of 64 LUNs in any 6320 configuration. Support for more than 16 LUNs requires SE 6320 firmware version 3.1 or higher.

Node Connectivity Limits

SE 6320 systems can connect to up to 8 nodes. However, they are not compatible with RAC or CVM in configurations of more than 4 nodes. Multiple 8-node N*N clusters may be connected to an SE 6320 array.

Currently, a maximum of 4 nodes can be connected to an SE 6320 LUN.

Hub and Switch Support

FC switches are required. These switches can be the optional 6320 front-end switches or compatible external switches.

External switches are supported with the “switchless” version of the 6320 (SE 6320 SL).

RAID Requirements

■ SE 6320 systems can be used without software volume management with properly configured multipathing and hardware RAID.

■ A single 6320 is supported with properly configured multipathing and RAID.

■ Otherwise, data must be mirrored to another array.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support

■ All volume manager releases supported by Sun Cluster 3 and the SE 6320.

■ SE 6320 arrays are supported without software volume management with properly configured multipathing and hardware RAID.


Software, Firmware, and Patches

SE 6320 systems require firmware version 3.1 (or later) to support more than 16 LUNs.

Sharing SE 6320 Systems with Multiple Clusters and Non-Clustered Hosts

Sun Cluster 3 requires exclusive access to the LUNs that store its shared data. Using LUN masking, an SE 6320 LUN can be assigned to multiple nodes. This facility can be used to share an SE 6320 among multiple clustered and non-clustered nodes.

SE 6320 Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-21 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters listed in the SAN WWWW (http://mysales.central/public/storage/products/matrix.html).

TABLE 6-21 SE 6320 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20, Netra 1120/1225, Netra t 1400/1405 6799A, 6727A

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4


SE 6320 Sample Configuration Diagrams

FIGURE 6-12 Sun StorEdge SE 6320 Connected Through Switches to Cluster Nodes

FIGURE 6-13 Sun StorEdge SE 6320 Directly Connected to Cluster Nodes

Sun StorageTek 6540 Array



ST 6540 Configuration Rules

Node Connectivity Limits

■ The ST 6540 array can connect to up to 8 nodes, including Oracle RAC clusters.

Hubs and Switches

■ FC switches are required if more than four nodes are connected to the same ST 6540.

RAID Requirements

■ ST 6540 arrays are supported without software volume management, if you have a properly configured ST 6540, multipathing, and hardware RAID.

■ A single ST 6540 is supported with properly configured multipathing and hardware RAID.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required with the ST 6540.

ST 6540 Volume Manager Support

■ There are no Sun Cluster 3 specific requirements. See the base product documentation.

■ ST 6540 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches

■ Please see the ST 6540 release notes.

Sharing ST 6540 Arrays

LUN masking enables sharing among clustered and non-clustered systems.


ST 6540 Support Matrix and Exceptions

To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-22 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters as listed in the “Server Search” under the “Searches” tab of the Interop Tool, https://interop.central.sun.com/interop/interop

Sun Storage 6580/6780 Arrays

This section describes the configuration rules for using Sun Storage 6580/6780 as shared storage.

SS 6580/6780 Configuration Rules

Node Connectivity Limits

These arrays can connect to up to 8 nodes.

TABLE 6-22 ST 6540 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4

Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4, SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4


Hub and Switch Support

FC switches are supported.

RAID Requirements

■ SS 6580/6780 systems can be used without software volume management if you have properly configured multipathing and hardware RAID.

■ A single SS 6580/6780 system is supported with properly configured multipathing and hardware RAID.

Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the SS 6580/6780.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

Sharing SS 6580/6780 Systems with Multiple Clusters and Non-Clustered Hosts

Sun Cluster 3 requires exclusive access to the LUNs that store its shared data. Using the LUN masking capabilities in the SVE, an SS 6580/6780 LUN can be assigned to multiple nodes. This facility can be used to share an SS 6580/6780 among multiple clustered/non-clustered nodes.

SS 6580/6780 Support Matrix

To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42, or Table 5-2, “FC Storage for x64 Servers,” on page 46, to see if your chosen server/storage combination is supported with Sun Cluster.


2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-23 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters listed by the “Server Search” under the “Searches” tab of the Interop Tool, https://interop.central.sun.com/interop/interop

Sun StorEdge 6910/6960 Arrays

This section describes the configuration rules for using Sun StorEdge 6910/6960 as shared storage.

SE 6910/6960 Configuration Rules

Node Connectivity Limits

These arrays can connect to up to 2 nodes.

Hub and Switch Support

FC switches are supported.

RAID Requirements

■ SE 6910/6960 systems can be used without software volume management if you have properly configured multipathing and hardware RAID.

■ A single 6910/6960 system is supported with properly configured multipathing and hardware RAID.

TABLE 6-23 SS 6580/6780 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

No limited HBA support at this time


Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support

All volume manager releases supported by Sun Cluster 3 and the SE 6910/6960.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

Sharing SE 6910/6960 Systems with Multiple Clusters and Non-Clustered Hosts

Sun Cluster 3 requires exclusive access to the LUNs that store its shared data. Using the LUN masking capabilities in the SVE, an SE 69x0 LUN can be assigned to multiple nodes. This facility can be used to share an SE 69x0 among multiple clustered/non-clustered nodes.

SE 6910/6960 Support Matrix

To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-24 to determine if there is limited HBA support.

TABLE 6-24 SE 6910/6960 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4


4. Refer to the SAN WWWW (http://mysales.central/public/storage/products/matrix.html) for additional information and restrictions.

Sun StorEdge 6920 System

This section describes the configuration rules for using Sun StorEdge 6920 as shared storage.

SE 6920 Configuration Rules

Node Connectivity Limits

These arrays can connect to up to 8 nodes.

Hub and Switch Support

FC switches are supported.

RAID Requirements

■ SE 6920 systems can be used without software volume management if you have properly configured multipathing and hardware RAID.

■ A single 6920 system is supported with properly configured multipathing and hardware RAID.

SE 6920 Multipathing

Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support

■ All volume manager releases supported by Sun Cluster 3 and the SE 6920.


■ SE 6920 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

Sharing SE 6920 with Multiple Clusters and Non-Clustered Hosts

Sun Cluster 3 requires exclusive access to the LUNs that store its shared data. Using the LUN masking capabilities in the SVE, an SE 6920 LUN can be assigned to multiple nodes. This facility can be used to share an SE 6920 among multiple clustered and non-clustered nodes.

Sun StorEdge 6920 System V. 3.0.0 Support with Sun Cluster

The Remote Replication, Snapshot, and Local Mirroring features of system V. 3.0.0 are supported with Sun Cluster 3 and the SE 6920. The SE 6920’s virtualization feature is supported with the use of the following storage arrays as back-end non-VLV LUN storage: T3B and the SE 6020/6120. For information on third-party storage, please consult http://www.sun.com/software/cluster/osp/

SE 6920 Support Matrix

To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Refer to the SAN WWWW (http://mysales.central/public/storage/products/matrix.html) for additional information and restrictions.


4. Check Table 6-25 to determine if there is limited HBA support.

Sun StorEdge 9910/9960 Arrays

This section describes the configuration rules for using Sun StorEdge 9910/9960 as shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must also be supported by the SE 9900 team. Please check the SE 9900 series “what works with what” matrix first to ensure a given configuration is supported by the SE 9900 team. Also note that new server support is typically not released by the SE 9900 team/Hitachi until after the server’s GA.

SE 9910/9960 Configuration Rules

Sun Cluster 3 requires HOST MODE=09 and System Option Mode 185=O. See HDS MK-90RD017-7, “9900 Sun Solaris Configuration Guide,” for more information.

Node Connectivity Limits

A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be connected simultaneously to an SE 9910/9960 LUN.

Hub and Switch Support

FC switches are supported.

TABLE 6-25 SE 6920 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Sun Fire T1000 SG-(X)PCIE2FC-QF4, SG-(X)PCIE2FC-EM4


RAID Requirements

■ The SE 9910/9960 can be used without software volume management if you have properly configured multipathing and hardware RAID.

■ A single 9910/9960 volume is supported with properly configured multipathing and hardware RAID.

■ Without multipathing, data must be mirrored to another array or to another volume within the array using an independent I/O path.

Multipathing

Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun Dynamic Link Manager (SDLM, formerly HDLM) for multiple paths from a cluster node to the SE 99x0 array. MPxIO is the multipathing solution applicable to Sun HBAs; SDLM is the multipathing solution applicable to both JNI HBAs and Sun HBAs (Sun HBA support with SDLM is limited to SDLM 5.0/5.1/5.4). SDLM supports both Solaris 8 and Solaris 9 (Solaris 9 support is limited to SDLM 4.1, 5.0, 5.1, and 5.4).

No other storage multipathing solutions (for example, Veritas DMP) are supported with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware RAID, the requirement to host-based mirror the data on an SE 9910/9960 is removed.

Please note that only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2 and 3.5).

SDLM/HDLM support is limited to SPARC and to sharing a LUN with only 2 cluster nodes. There is no SDLM/HDLM for Solaris x86.
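For illustration, these constraints can be encoded as follows (hypothetical Python, not a Sun tool; the version list mirrors the text above):

# Which multipathing solutions are admissible for a given configuration.
def multipathing_options(hba_vendor, platform, nodes_sharing_lun, sdlm_version):
    options = []
    if hba_vendor == "Sun":
        options.append("MPxIO")
    sdlm_ok = (platform == "SPARC" and nodes_sharing_lun <= 2 and
               (hba_vendor == "JNI" or
                (hba_vendor == "Sun" and sdlm_version in ("5.0", "5.1", "5.4"))))
    if sdlm_ok:
        options.append("SDLM")
    return options    # Veritas DMP is never an option with Sun Cluster

print(multipathing_options("Sun", "SPARC", 2, "5.4"))   # ['MPxIO', 'SDLM']
print(multipathing_options("Sun", "x64", 4, "5.4"))     # ['MPxIO']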

Volume Manager Support

■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the SE 9910/9960.

■ SDLM: Please refer to “Multipathing” on page 112.

■ SE 9910/9960 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.


Sharing an SE 9910/9960 Among Several Clusters or Non-Clustered Systems

A single SE 9910/9960 can be utilized by several separately clustered or non-clustered devices. The main requirement for this functionality is that the ports of the SE 9910/9960 be assigned properly so that no two clusters can see each other’s storage. This can be done either through physical cabling or by using SANtinel.

SE 9910/9960 Special Features

TrueCopy

Sun StorEdge 9910/9960 TrueCopy is supported with Sun Cluster 3 with the following configuration details:

■ Both synchronous and asynchronous modes of operations are supported.

■ When using an MPxIO LUN as the Command Control Interface (CCI) command device, CCI 01-10-03/02 and microcode 01-18-09-00/00 or better must be used.

■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.

■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate data within the same cluster as an alternative to host-based mirroring. See “TrueCopy Support” on page 291 for more info.

■ TrueCopy pair LUNs cannot be used as a quorum device.

■ Command Device LUNs cannot be used as a quorum device.
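The two quorum restrictions above amount to a simple eligibility test (hypothetical Python sketch):

# A LUN that participates in a TrueCopy pair, or that serves as a CCI
# command device, is never eligible as a Sun Cluster quorum device.
def quorum_eligible(lun_roles):
    return not ({"truecopy_pair", "command_device"} & set(lun_roles))

print(quorum_eligible({"shared_data"}))                    # True
print(quorum_eligible({"truecopy_pair"}))                  # False
print(quorum_eligible({"shared_data", "command_device"}))  # False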

SANtinel and LUSE

SANtinel and LUSE are both supported for usage within a Sun Cluster 3 environment. Please see the SE 9900 series documentation for more information on SANtinel and LUSE.

ShadowImage

Sun StorEdge 9900 ShadowImage is now supported with Sun Cluster 3 with the following configuration details:

■ When using an MPxIO LUN as the Command Control Interface (CCI) command device, CCI 01-10-03/02 and microcode 01-18-09-00/00 or better must be used.


■ The Remote Console may be used.

Caution – This note applies to configurations using host-based mirroring with SE 9910/9960 arrays. If ShadowImage is used to restore data from a suspended pair (PSUS), make sure that you perform the relevant volume-manager steps prior to executing either a reverse-copy or a quick-restore. This will ensure that you don’t corrupt your mirror.

Graphtrack and LUN Manager

Graphtrack and LUN Manager are supported.

SE 9910/9960 Support Matrix

To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC Storage for x64 Servers,” on page 46 to see if your chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Choose a supported FC switch from the list in “Supported SAN Switches” on page 62.

TABLE 6-26 SE 9910/9960 Array/Server Combinations with Additional HBA Support

Server Host Adapter (a)

Sun Enterprise 220R, 250, 420R, 450; Sun Fire V880, V1280, 4800/4810, E4900, 6800, E6900, 12K/15K, E20K/E25K XT8-FCE-6460-N, XT8-FCI-1063-N

Sun Enterprise 3x00-6x00 XT8-FC64-1063-N, XT8-FCE-1473-N

Sun Enterprise 10K XT8-FC64-1063-N, XT8-FCE-1473-N, XT8-FCE-6460-N

Sun Fire 280R, V480 XT8-FCE-6460-N

Sun Fire 3800 XT8-FCC-6460-N

a. When selecting one of these “XT8-FC” HBAs, all HBAs sharing the LUN are required to be an “XT8-FC” HBA, although not necessarily the same model.
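Footnote (a) above is itself a checkable rule; the following hypothetical Python sketch flags a mixed set of HBAs sharing one LUN:

# If any HBA sharing a LUN is an "XT8-FC" model, all HBAs sharing that LUN
# must be "XT8-FC" models (not necessarily the same one).
def xt8_rule_ok(hbas_sharing_lun):
    xt8 = [h for h in hbas_sharing_lun if h.startswith("XT8-FC")]
    return len(xt8) in (0, len(hbas_sharing_lun))

print(xt8_rule_ok(["XT8-FCE-6460-N", "XT8-FCI-1063-N"]))  # True (all XT8-FC)
print(xt8_rule_ok(["XT8-FCE-6460-N", "6757A"]))           # False (mixed)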


4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support Matrix,” SunWIN Token Number 344150, for additional details.

Sun StorEdge 9970/9980

This section describes the configuration rules for using Sun StorEdge 9970/9980 as shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must also be supported by the SE 9900 team. Please check the SE 9900 series “what works with what” matrix first to ensure a given configuration is supported by the SE 9900 team. Also note that new server support is typically not released by the SE 9900 team/Hitachi until after the server’s GA.

SE 9970/9980 Configuration Rules

For the 9970/9980, Sun Cluster 3 requires HOST MODE=09. See HDS MK-92RD123-5, “9900 Series Sun Solaris Configuration Guide,” for more information.

Node Connectivity Limits

A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be connected simultaneously to an SE 9970/9980 LUN.

Hub and Switch Support

FC switches are supported.

RAID Requirements

■ SE 9970/9980 arrays can be used without software volume management if you have properly configured multipathing and hardware RAID.

■ A single SE 9970/9980 array is supported with properly configured multipathing and hardware RAID.


■ Without multipathing, data must be mirrored to another array or to another volume within the 9970/9980 array using an independent I/O path.

Multipathing

Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun Dynamic Link Manager (SDLM, formerly HDLM) for multiple paths from a cluster node to the SE 99x0 array. MPxIO is the multipathing solution applicable to Sun HBAs; SDLM is the multipathing solution applicable to both JNI HBAs and Sun HBAs (Sun HBA support with SDLM is limited to SDLM 5.0/5.1/5.4). SDLM supports both Solaris 8 and Solaris 9 (Solaris 9 support is limited to SDLM 4.1, 5.0, 5.1, and 5.4).

No other storage multipathing solutions (for example, Veritas DMP) are supported with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware RAID, the requirement to host-based mirror the data on an SE 9970/9980 is removed.

SDLM/HDLM support is limited to SPARC and to sharing a LUN with only 2 cluster nodes. There is no SDLM/HDLM for Solaris x86.

Note – Only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2 and 3.5).

Volume Manager Support

■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the SE 9970/9980.

■ SDLM: Please refer to “Multipathing” on page 116.

■ SE 9970/9980 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Sharing an SE 9970/9980 Among Several Clusters or Non-Clustered Systems

A single SE 9970/9980 can be utilized by several separately clustered or non-clustered devices. The main requirement for this functionality is that the ports of the SE 9970/9980 be assigned properly so that no two clusters can see each other’s storage. This can be done either through physical cabling or by using SANtinel.


Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

SE 9970/9980 Special Features

TrueCopy

Sun StorEdge 9970/9980 TrueCopy is supported with Sun Cluster 3 with the following configuration details:

■ Both synchronous and asynchronous modes of operations are supported.

■ When using an MPxIO LUN as the Command Control Interface (CCI) command device, CCI 01-10-03/02 and microcode 21-02-23-00/00 or better must be used.

■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.

■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate data within the same cluster as an alternative to host-based mirroring. See “TrueCopy Support” on page 291 for more info.

■ TrueCopy pair LUNs cannot be used as a quorum device.

■ Command Device LUNs cannot be used as a quorum device.

SANtinel and LUSE

SANtinel and LUSE are both supported for usage within a Sun Cluster 3 environment. Please see the SE 9970/9980 series documentation for more information on SANtinel and LUSE.

ShadowImage

Sun StorEdge 9970/9980 ShadowImage is now supported with Sun Cluster 3 with the following configuration details:

■ When using an MPxIO LUN as the Command Control Interface (CCI) command device, CCI 01-10-03/02 and microcode 21-02-23-00/00 or better must be used.

■ The Remote Console may be used.


Caution – This note applies to configurations using host-based mirroring with SE 9970/9980 arrays. If ShadowImage is used to restore data from a suspended pair (PSUS), make sure that you perform the relevant volume-manager steps prior to executing either a reverse-copy or a quick-restore. This will ensure that you don’t corrupt your mirror.

Graphtrack and LUN Manager

Graphtrack and LUN Manager are supported.

SE 9970/9980 Support Matrix

To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC Storage for x64 Servers,” on page 46 to see if your chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in “Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Choose a supported FC switch from the list in “Supported SAN Switches” on page 62.

4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support Matrix,” SunWIN Token Number 344150, for additional details.

TABLE 6-27 SE 9970/9980 Array/Server Combinations with Additional HBA Support

Server Host Adapter (a)

Sun Enterprise 3x00-6x00 XT8-FC64-1063-N, XT8-FCE-1473-N

Sun Enterprise 10K XT8-FCE-1473-N

Sun Fire 3800 XT8-FCC-6460-N

Sun Fire 12K/15K, E20K/E25K XT8-FCE-6460-N, XT8-FCI-1063-N

a. When selecting one of these “XT8-FC” HBAs, all HBAs sharing the LUN are required to be an “XT8-FC” HBA, although not necessarily the same model.


Sun StorageTek 9985/9990

This section describes the configuration rules for using Sun StorageTek 9985/9990 as shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must also be supported by the ST 9900 team. Please check the ST 9900 series “what works with what” matrix first to ensure a given configuration is supported by the ST 9900 team. Also note that new server support is typically not released by the ST 9900 team/Hitachi until after the server’s GA.

ST 9985/9990 Configuration Rules

Node Connectivity Limits

A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be connected simultaneously to an ST 9985/9990 LUN.

Hub and Switch Support

FC switches are supported.

RAID Requirements

■ ST 9985/9990 arrays can be used without software volume management if you have properly configured multipathing and hardware RAID.

■ A single ST 9985/9990 array is supported with properly configured multipathing and hardware RAID.

■ Without multipathing, data must be mirrored to another array or to another volume within the ST 9985/9990 array using an independent I/O path.


Multipathing

Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun Dynamic Link Manager (SDLM, formerly HDLM) for multiple paths from a cluster node to the ST 9985/9990 array. MPxIO is the multipathing solution applicable to Sun HBAs; SDLM is the multipathing solution applicable to both JNI HBAs and Sun HBAs (Sun HBA support with SDLM is limited to SDLM 5.0/5.1/5.4). SDLM supports both Solaris 8 and Solaris 9 (Solaris 9 support is limited to SDLM 4.1, 5.0, 5.1, and 5.4).

No other storage multipathing solutions (for example, Veritas DMP) are supported with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware RAID, the requirement to host-based mirror the data on an ST 9985/9990 is removed.

SDLM/HDLM support is limited to SPARC and to sharing a LUN with only 2 cluster nodes. There is no SDLM/HDLM for Solaris x86.

Note – Only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2 and 3.5).

Volume Manager Support

■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the ST 9985/9990.

■ SDLM: Please refer to “Multipathing” on page 120.

■ ST 9985/9990 arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches

There are no Sun Cluster 3 specific requirements.

Sharing an ST 9985/9990 Among Several Clusters or Non-Clustered Systems

A single ST 9985/9990 can be utilized by several separately clustered or non-clustered devices. The main requirement for this functionality is that the ports of the ST 9985/9990 be assigned properly so that no two clusters can see each other’s storage. This can be done either through physical cabling or by using SANtinel.


ST 9985/9990 Special Features

TrueCopy

Sun StorageTek 9985/9990 TrueCopy is supported with Sun Cluster 3 with the following configuration details:

■ Both synchronous and asynchronous modes of operations are supported.

■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.

■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate data within the same cluster as an alternative to host-based mirroring. See “TrueCopy Support” on page 291 for more info.

■ TrueCopy pair LUNs cannot be used as a quorum device.

■ Command Device LUNs cannot be used as a quorum device.

Universal Replicator

Universal Replicator is supported with Sun Cluster 3 as follows:

■ Universal Replicator can be used with Sun Cluster to replicate data outside of the cluster.

■ Using Universal Replicator to replicate data within a cluster is not supported.

■ Universal Replicator pair LUNs cannot be used as a quorum device.

■ Command Device LUNs cannot be used as a quorum device.

SANtinel and LUSE

SANtinel and LUSE are both supported for usage within a Sun Cluster 3 environment. Please see the ST 9985/9990 series documentation for more information on SANtinel and LUSE.

ShadowImage

Sun StorageTek 9985/9990 ShadowImage is now supported with Sun Cluster 3 with the following configuration details:

■ Microcode versions TBD

■ The Remote Console may be used.


Caution – This note applies to configurations using host-based mirroring with ST9985/9990arrays. If ShadowImage is used to restore data from a suspended pair(PSUS), make sure that you perform the relevant volume-manager steps prior toexecuting either a reverse-copy or a quick-restore. This will ensure that you don’tcorrupt your mirror.

Graphtrack and LUN ManagerGraphtrack and LUN Manager are supported

ST 9985/9990 Support Matrix
To determine whether your configuration is supported:

1. Check Table 5-1, "FC Storage for SPARC Servers," on page 42 or Table 5-2, "FC Storage for x64 Servers," on page 46 to see if your chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in "Supported SAN Host Bus Adapters (HBAs)" on page 60.

3. Choose a supported FC switch from the list in "Supported SAN Switches" on page 62.

4. Refer to the "Sun StorEdge 9900 Systems: What Works With What Support Matrix," SunWIN Token Number 344150, for additional details.

Sun StorageTek 9985V/9990V
This section describes the configuration rules for using the Sun StorageTek 9985V/9990V as shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must also be supported by the ST 9900 team. Please check the ST 9900 series "what works with what" matrix first to ensure a given configuration is supported by the ST 9900 team. Also note that new server support is typically not released by the ST 9900 team/Hitachi until after the server's GA.


ST 9985V/9990V Configuration Rules

Node Connectivity Limits
■ A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be connected simultaneously to an ST 9985V/9990V LUN.

■ The ST 9985V/9990V also supports up to 16 SPARC nodes to a LUN when used with Oracle RAC. See Table 11-13, "Oracle RAC Support with Sun Cluster 3.2 for SPARC," on page 247 for more information.

Hub and Switch Support
FC switches are supported.

RAID Requirements
■ ST 9985V/9990V arrays can be used without software volume management if you have properly configured multipathing and hardware RAID.

■ A single ST 9985V/9990V array is supported with properly configured multipathing and hardware RAID.

■ Without multipathing, data must be mirrored to another array or to another volume within the ST 9985V/9990V array using an independent I/O path.

Multipathing
Sun Cluster 3 supports the use of Sun StorEdge Traffic Manager (MPxIO) and Sun Dynamic Link Manager (SDLM, formerly HDLM) to provide multiple paths from a cluster node to the ST 9985V/9990V array. MPxIO is the multipathing solution applicable to Sun HBAs; SDLM is the multipathing solution applicable to both JNI HBAs and Sun HBAs (Sun HBA support with SDLM is limited to SDLM 5.0/5.1/5.4). SDLM supports both Solaris 8 and Solaris 9 (Solaris 9 support is limited to SDLM 4.1, 5.0, 5.1, and 5.4).

No other storage multipathing solutions (for example, Veritas DMP) are supported with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware RAID, the requirement to host-based mirror the data on an ST 9985V/9990V is removed.


SDLM/HDLM support is limited to SPARC and to sharing a LUN with only 2 cluster nodes. There is no SDLM/HDLM for Solaris x86.

Note – Only SDLM versions 5.0, 5.1 and 5.4 support VxVM (versions 3.2 and 3.5).
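As an illustrative check (assuming MPxIO on Solaris 10; the logical-unit name below is a hypothetical example), path states to a 9985V/9990V LUN can be inspected with mpathadm:

    # List all multipathed logical units seen by the node
    mpathadm list lu

    # Show controller paths and their states for one logical unit
    mpathadm show lu /dev/rdsk/c4t60060E8005ABCD000000ABCD00000010d0s2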

Volume Manager Support
■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the ST 9985V/9990V.

■ SDLM: Please refer to “Multipathing” on page 120.

■ ST 9985V/9990V arrays are supported without software volume management with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches
There are no Sun Cluster 3-specific requirements.

Sharing an ST 9985V/9990V Among Several Clusters or Non-Clustered Systems
A single ST 9985V/9990V can be utilized by several separately clustered or non-clustered devices. The main requirement for this functionality is that the ports of the ST 9985V/9990V must be assigned properly so that no two clusters can see each other's storage. This can be done either through physical cabling or by using SANtinel.

ST 9985V/9990V Special Features

TrueCopy
Sun StorageTek 9985V/9990V TrueCopy is supported with Sun Cluster 3 with the following configuration details:

■ Both synchronous and asynchronous modes of operation are supported.

■ CCI package version 01-19-03/04 and later can be used on the host side (see the sketch after this list).

■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.

■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate data within the same cluster as an alternative to host-based mirroring. See "TrueCopy Support" on page 291 for more information.

■ TrueCopy pair LUNs cannot be used as a quorum device.

■ Command Device LUNs cannot be used as a quorum device.
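A minimal sketch of checking TrueCopy pair status from a cluster node with the CCI package (the HORCM instance number 0 and device group SCDATA are hypothetical; your horcm.conf defines the real ones):

    # Select the local HORCM instance, then query the pair group
    HORCMINST=0; export HORCMINST
    pairdisplay -g SCDATA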

Universal Replicator
Universal Replicator is supported with Sun Cluster 3 as follows:

■ Universal Replicator can be used with Sun Cluster to replicate data outside of the cluster.

■ Using Universal Replicator to replicate data within a cluster is not supported.

■ Universal Replicator pair LUNs cannot be used as a quorum device.

■ Command Device LUNs cannot be used as a quorum device.

SANtinel and LUSE
SANtinel and LUSE are both supported for use within a Sun Cluster 3 environment. Please see the ST 9985V/9990V series documentation for more information on SANtinel and LUSE.

ShadowImage
Sun StorageTek 9985V/9990V ShadowImage is supported with Sun Cluster 3 with the following configuration details:

■ Microcode versions TBD

■ The Remote Console may be used.

Caution – This note applies to configurations using host-based mirroring with ST 9985V/9990V arrays. If ShadowImage is used to restore data from a suspended pair (PSUS), make sure that you perform the relevant volume-manager steps before executing either a reverse-copy or a quick-restore. This ensures that you do not corrupt your mirror.

Graphtrack and LUN Manager
Graphtrack and LUN Manager are supported.


ST 9985V/9990V Support Matrix
To determine whether your configuration is supported:

1. Check Table 5-1, "FC Storage for SPARC Servers," on page 42 or Table 5-2, "FC Storage for x64 Servers," on page 46 to see if your chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in "Supported SAN Host Bus Adapters (HBAs)" on page 60. Note: Only Sun-branded Emulex and QLogic HBAs are supported for N*N configurations larger than 8 nodes.

3. Choose a supported FC switch from the list in "Supported SAN Switches" on page 62.

4. Refer to the "Sun StorEdge 9900 Systems: What Works With What Support Matrix," SunWIN Token Number 344150, for additional details.


CHAPTER 7

SCSI Storage Support

This chapter covers Sun Cluster supported SCSI storage devices.

Netra st D130 Array

Netra st D130 RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required.

The other configuration rules for using the Netra st D130 as shared storage are listed below.

■ Daisy chaining of Netra st D130 arrays is not supported.

■ Host adapters supported with the Netra st D130 are listed below:

TABLE 7-1 Sun Cluster and Netra st D130 Support Matrix for SPARC

Host: Netra T1 AC200/DC200
  Host Adapters: onboard UltraSCSI port (a); SunSwift Adapter, PCI (part # 1032A)
  Maximum Node Connectivity: 2

Host: Netra t 1400/1405
  Host Adapter: SunSwift Adapter, PCI (part # 1032A)
  Maximum Node Connectivity: 2

(a) The onboard SCSI port must be used for one storage connection due to the limited number of PCI slots on the server.


■ Cables supported with the Netra st D130 are listed below:

TABLE 7-2 Netra st D130 Supported Cables

Cable                                                     Part # of Cable
2-meter Ultra SCSI-3/SCSI-3 cable                         1139A
2-meter SCSI-3/VHDCI cable with right-angle connector     959A
0.36-meter SCSI-3 cable with right-angled connector       6917A
0.8-meter Ultra SCSI-3/SCSI-3 cable                       1134A

Figure 7-1 below shows how to configure the Netra st D130 as shared storage.

FIGURE 7-1 Netra st D130 as Shared Storage
[Figure: two Netra T1 200 nodes, each using its onboard port and an HBA, connect to two Netra st D130 arrays, one holding the data and the other the mirror.]

Netra st A1000 Array

Netra st A1000 RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required.

The other configuration rules for using the Netra st A1000 as shared storage are listed below.


■ Daisy-chaining of Netra st A1000 arrays is not supported.

■ The support matrix for the Netra st A1000 with Sun Cluster 3 is:

TABLE 7-3 Netra st A1000 and Sun Cluster 3 Support Matrix for SPARC

Servers                                        Host Bus Adapters           Connectivity      Maximum Node Connectivity
Netra t 1120/1125, Netra 1400/1405, Netra 20   UltraSCSI adapter (6541A)   Direct Attached   2

■ Cables supported with the Netra st A1000 are listed below:

TABLE 7-4 Netra st A1000 Supported Cables

Cable                                                                  Part # of the Cable
0.16-meter, SCSI-3 cable, SCSI-3/SCSI-3 with right-angled connector    991A
2-meter, SCSI-3 cable, SCSI-3/VHDCI with right-angled connector        992A
4-meter, SCSI-3 cable, SCSI-3/VHDCI with right-angled connector        993A
4.0-meter, 68-pin to VHDC differential SCSI cable                      3830A
10.0-meter, 68-pin to VHDC differential SCSI cable                     3831A

Netra st D1000 Array

Netra st D1000 RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required. Hence, a cluster configuration with a single st D1000 in a split-bus configuration, with data mirrored across the two halves of the st D1000, is not supported.

The other configuration rules for using the Netra st D1000 as shared storage are listed below.

■ Daisy chaining of Netra st D1000s is not supported.

■ A single Netra st D1000 in a split-bus configuration is not supported.


■ The host adapter supported with the Netra st D1000 is listed below:

TABLE 7-5 Sun Cluster 3 and Netra st D1000 Support Matrix for SPARC

Server: Netra 1120/1125, Netra t 1400/1405, Netra 20, Netra 240
  Host Adapter: PCI-to-differential UltraSCSI Host Adapter (UD2S), part # 6541A
  Maximum Node Connectivity: 2

■ Cables supported with the Netra st D1000 are listed below:

TABLE 7-6 Netra st D1000 Supported Cables

Cable                                                                  Part # of the Cable
0.16-meter, SCSI-3 cable, SCSI-3/SCSI-3 with right-angled connector    991A
2-meter, SCSI-3 cable, SCSI-3/VHDCI with right-angled connector        992A
4-meter, SCSI-3 cable, SCSI-3/VHDCI with right-angled connector        993A
4.0-meter, 68-pin to VHDC differential SCSI cable                      3830A
10.0-meter, 68-pin to VHDC differential SCSI cable                     3831A

The figure below shows how to configure the Netra st D1000 as shared storage.


FIGURE 7-2 Two Netra st D1000s, in Single-Bus Configuration, as Shared Storage
[Figure: Node 1 and Node 2 each connect through HA1 and HA2 to Netra st D1000 #1 (data) and Netra st D1000 #2 (mirror).]

Sun StorEdge MultiPack

SE MultiPack RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required.

The other configuration rules for using the MultiPack as shared storage are listed below.

■ Daisy chaining of MultiPacks is not supported.

■ Host adapters supported with the MultiPack are listed below:

TABLE 7-7 Sun Cluster 3 and SE MultiPack Support Matrix for SPARC

Host: Sun Enterprise 220R, 250, 420R, 450
  Host Adapters:
    Dual-channel single-ended UltraSCSI host adapter, PCI (US2S), part # 6540A; SCSI loop length must not exceed 3 m (1.5 m if using 3-6 disks)
    SunSwift Adapter, PCI, part # 1032A; SCSI loop length must not exceed 6 m
  Maximum Node Connectivity: 2


■ Cables supported with the MultiPack are listed below:

TABLE 7-8 SE MultiPack Supported Cables

Cable                                          Part # of Cable
68-pin, 0.8-meter, external SCSI cable         901A
68-to-68-pin, 2.0-meter, external SCSI cable   902A

Figure 7-3 below shows how to configure the MultiPack as shared storage:

FIGURE 7-3 Sun StorEdge MultiPack as Shared Storage
[Figure: Node 1 and Node 2 each connect through HA1 and HA2 to MultiPack #1 (data) and MultiPack #2 (mirror).]

Sun StorEdge D2 Array

SE D2 RAID Requirements
Since the D2 does not have built-in RAID capabilities, host-based mirroring using VxVM/SDS is required.

This host-based mirroring requirement ensures physical path redundancy. With dual ESM modules, there are no single points of failure in a D2 array. Hence, a cluster configuration with a single D2 in a split-bus configuration, with data mirrored across the two halves of the D2, is supported.
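As a hedged illustration of such host-based mirroring with Solaris Volume Manager (SDS/SVM), mirroring a slice in one array (or one half of a split-bus D2) against a slice in the other (all metadevice and slice names are hypothetical):

    # One submirror on each array or bus half
    metainit d11 1 1 c1t0d0s0
    metainit d12 1 1 c2t0d0s0

    # Build the mirror from the first submirror, then attach the second;
    # SVM resynchronizes d12 from d11 in the background
    metainit d10 -m d11
    metattach d10 d12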


SE D2 Support Matrix
The support matrix for the D2 with Sun Cluster 3 is:

TABLE 7-9 Sun StorEdge D2 and Sun Cluster 3 Support Matrix for SPARC

Maximum node connectivity (all entries): 2.

Host: Netra t 1120/1125, Netra 1400/1405, Netra 20, Netra 240 AC/DC, Netra 1280; Sun Enterprise 220R, 250, 420R; Sun Fire 280R, V480/V490, V880/V890, V1280
  Host Adapter: Sun StorEdge PCI dual Ultra 3 SCSI (6758A); SG-XPCI2SCSI-LM320
  Cable: 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
  Max. SCSI Bus Length (e): 25m

Host: Netra 440 (a)
  Host Adapter: Onboard SCSI Port; 6757A
  Cable: 0.8m (1132A), 2m (3832A), 4m (3830A), 10m (3831A)
  Max. SCSI Bus Length (e): 25m

Host: Sun Fire V210, V240 (b), V250, V440 (c)
  Host Adapter: Onboard SCSI port; Sun 6758; SG-XPCI2SCSI-LM320
  Cable: 0.8m (1132A), 2m (3832A), 4m (3830A), 10m (3831A)
  Max. SCSI Bus Length (e): 25m

Host: Sun Fire V215/V245, V445
  Host Adapter: SGXPCI1SCSI-LM320-Z; SGXPCI2SCSI-LM320-Z; SGXPCIE2SCSIU320Z (d); (x)4422A-2
  Cable: 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
  Max. SCSI Bus Length (e): 25m

Host: Sun Fire T1000
  Host Adapter: SG-(X)PCIE2SCSIU320Z
  Cable: 0.8m (1132A), 2m (3832A), 4m (3830A), 10m (3831A)
  Max. SCSI Bus Length (e): 12m

Host: Sun Fire T2000
  Host Adapter: SG-XPCIE2SCSIU320Z
  Cable: 0.8m (1132A), 2m (3832A), 4m (3830A), 10m (3831A)
  Max. SCSI Bus Length (e): 12m

(a) In order to use the Netra 440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
(b) The onboard SCSI port must be used for one shared storage connection due to the server only having one PCI slot.
(c) In order to use the SF V440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
(d) This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
(e) From each host to the D2, including the internal bus lengths.


Sun StorEdge S1 Array

SE S1 Array RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required.

The other configuration rules for using the Sun StorEdge S1 as shared storage are listed below.

■ Daisy chaining of the Sun StorEdge S1 is not supported.

■ The Sun StorEdge S1 is supported in direct-attached configurations.


■ Host adapters supported with Sun StorEdge S1 are listed below:


TABLE 7-10 Sun StorEdge S1 and Sun Cluster 3 Support Matrix for SPARC

Maximum node connectivity (all entries): 2.

Host: Netra T1 AC200/DC200 (a); Netra t 1400/1405, Netra 20
  Host Adapter: onboard UltraSCSI port; SunSwift Adapter, PCI (1032A)
  Cable: 0.8m (1134A), 2m (1139A)
  Max. SCSI Bus Length (g): 3m

Host: Netra t 1400/1405, Netra 20; Sun Enterprise 220R, 250, 420R, 450
  Host Adapter: Sun StorEdge PCI dual Ultra 3 SCSI (6758A); Sun StorEdge Dual Fast Ethernet + SCSI Adapter (2222A); Sun 4422; SG-XPCI2SCSI-LM320
  Cable: 0.8m (1132A), 2m (3832A), 4m (3830A), 10m (3831A)
  Max. SCSI Bus Length (g): 12m

Host: Netra 120 (b)/Sun Fire V120
  Host Adapter: Sun StorEdge Dual Fast Ethernet + SCSI Adapter (2222A); 4422; onboard SCSI port

Host: Netra 240 AC/DC, Netra 1280, Netra 1290; Sun Fire V210 (c), V240, V250, 280R, V440 (d), V480/V490, V880/V890, V1280
  Host Adapter: onboard SCSI port; Sun 2222A; Sun 4422; Sun 6758; SG-XPCI2SCSI-LM320; SG-XPCI2SCSI-LM320-Z

Host: Netra 440 (e)
  Host Adapter: onboard SCSI port; Sun 6758; X4422A; SG-PCI2SCSI-LM320; SG-XPCI2SCSI-LM320-Z

Host: Sun Fire V125
  Host Adapter: onboard SCSI; X4422A-2; SGXPCI2SCSILM320-Z; SGXPCI1SCSILM320-Z

Host: Sun Fire V215/V245, V445
  Host Adapter: SGXPCI1SCSI-LM320-Z; SGXPCI2SCSI-LM320-Z; SGXPCIE2SCSIU320Z (f); (x)4422A-2

Host: Sun Fire T1000
  Host Adapter: SG-(X)PCIE2SCSIU320Z

Host: Sun Fire T2000
  Host Adapter: SG-XPCIE2SCSIU320Z

(a) The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
(b) The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
(c) The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
(d) In order to use the SF V440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
(e) In order to use the Netra 440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
(f) This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
(g) This includes connectivity to both the hosts.

The figure below shows how to configure the Sun StorEdge S1 as shared storage in a Netra T1 200 cluster:

FIGURE 7-4 Sun StorEdge S1 as Shared Storage
[Figure: two Netra T1 200 nodes, each using its onboard port and an HBA, connect to two Sun StorEdge S1 arrays, one holding the data and the other the mirror.]

Sun StorEdge A1000 Array

SE A1000 RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required.

The configuration rules for using the Sun StorEdge A1000 as shared storage are listed below.

■ Daisy-chaining of A1000 arrays is supported.


■ The support matrix for the A1000 with Sun Cluster 3 is:

TABLE 7-11 SE A1000 and Sun Cluster Support Matrix for SPARC

Servers: Netra 440; Sun Enterprise 220R, 250, 420R, 450; Sun Fire 280R, V440, V480/V490, V880/V890, V1280
  Host Bus Adapter: PCI-to-differential UltraSCSI adapter (6541A)
  Connectivity: Direct Attached, 2 Nodes

Servers: Sun Enterprise 3x00-6x00 (SBus only)
  Host Bus Adapter: SBus-to-differential UltraSCSI adapter, UDWIS/S (1065A)
  Connectivity: Direct Attached, 2 Nodes

■ Cables supported with the Sun StorEdge A1000 are listed below:

TABLE 7-12 SE A1000 Supported Cables

Cable                                                                  Part # of the Cable
0.16-meter, SCSI-3 cable, SCSI-3/SCSI-3 with right-angled connector    991A
2-meter, SCSI-3 cable, SCSI-3/VHDCI with right-angled connector        992A
4-meter, SCSI-3 cable, SCSI-3/VHDCI with right-angled connector        993A
4.0-meter, 68-pin to VHDC differential SCSI cable                      3830A
10.0-meter, 68-pin to VHDC differential SCSI cable                     3831A

Sun StorEdge D1000 Array

SE D1000 RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring across boxes is required. Hence, a cluster configuration with a single D1000 in a split-bus configuration, with data mirrored across the two halves of the D1000, is not supported.

The other configuration rules for using the D1000 as shared storage are listed below.

■ Daisy chaining of D1000s is not supported.


■ Host adapters supported with the D1000 are listed below:

TABLE 7-13 Sun Cluster and SE D1000 Support Matrix for SPARC

Server: Netra 440, Netra 1280; Sun Enterprise 220R, 250, 420R, 450; Sun Fire 280R, V440, V480/V490, V880/V890, V1280
  Host Adapter: PCI-to-differential UltraSCSI Host Adapter (UD2S), part # 6541A
  Maximum Node Connectivity: 2

Server: Sun Enterprise 3x00, 4x00, 5x00, 6x00
  Host Adapter: SBus-to-differential UltraSCSI Host Adapter (UDWIS/S), part # 1065A
  Maximum Node Connectivity: 2

■ Cables supported with the D1000 are listed below:

TABLE 7-14 SE D1000 Supported Cables

Cable                                                 Part # of the Cable
0.8-meter, UltraSCSI differential jumper cable        Shipped with D1000 array
2.0-meter, 68-pin to VHDC differential SCSI cable     3832A
4.0-meter, 68-pin to VHDC differential SCSI cable     3830A
10.0-meter, 68-pin to VHDC differential SCSI cable    3831A
12-meter, external differential UltraSCSI cable       979A

The figure below shows how to configure the D1000 as shared storage.


FIGURE 7-5 Two Sun StorEdge D1000s, in Single-Bus Configuration, as Shared Storage
[Figure: Node 1 and Node 2 each connect through HA1 and HA2 to D1000 #1 (data) and D1000 #2 (mirror).]

Sun StorEdge A3500 Array

SE A3500 RAID Requirements
An A3500 controller module with redundant controllers provides appropriate hardware redundancy. An A3500 controller also has built-in hardware RAID capabilities. Hence, software mirroring of data is not required; however, a software volume manager can be used for managing the data. Also, a cluster configuration with an A3500 array with a single controller module is supported.

The other configuration rules for using the Sun StorEdge A3500 as shared storage are listed below:

■ Daisy-chaining of the controller modules is not supported.

■ Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices.

■ The A3500 Light is supported.

■ The two SCSI ports of a controller module must be connected to different host adapters on a node.



■ The host adapter supported with the A3500 is listed below:

TABLE 7-15 Sun Cluster 3 and SE A3500 Support Matrix for SPARC

Servers: Sun Enterprise 3x00, 4x00, 5x00, 6x00
  Host Adapter: SBus-to-differential UltraSCSI Host Adapter (UDWIS/S), part # 1065A
  Maximum Node Connectivity: 2

■ Cables supported with the A3500 are listed in the table below.

TABLE 7-16 A3500 Supported Cables

Cable                                           Part # for Cable
4m, 68-pin to VHDC differential SCSI cable      3830A
10m, 68-pin to VHDC differential SCSI cable     3831A
12m, external differential UltraSCSI cable      979A

Figure 7-6 on page 142 shows how to configure the A3500 as shared storage.


FIGURE 7-6 Single A3500 Configuration
[Figure: host adapters from each node connect to Controller A and Controller B of a single A3500 controller module; one device is labeled as the quorum device.]

Sun StorEdge 3120 JBOD Array

SE 3120 JBOD Array Configuration Details
■ Daisy chaining is not supported.

■ The SCSI bus length is 12 meters.

■ Data may be mirrored between the halves of a single dual-bus SE 3120 JBOD array. This enables Sun Cluster configurations with a single dual-bus SE 3120 JBOD array.

■ Data in single-bus SE 3120 JBOD arrays must be mirrored against another storage array.

■ NOTE: The single-bus SE 3120 array in Figure 7-7 must have its data mirrored to another array.



The support matrix for the SE 3120 JBOD with Sun Cluster 3 is listed below:

TABLE 7-17 Sun Cluster 3 and SE3120 JBOD Support Matrix for SPARC

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Sun Enterprise 220R: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Enterprise 250: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Enterprise 420R: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Enterprise 450: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire 12K/15K: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire 280R: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire 4800: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire 6800: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire E2900: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire E6900: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire T1000: SG-(X)PCIE2SCSIU320Z
Sun Fire T2000: SG-(X)PCI1SCSI-LM320, SG-(X)PCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun Fire V125: Onboard SCSI, X4422A-2, SGXPCI2SCSILM320-Z, SGXPCI1SCSILM320-Z
Sun Fire V1280: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V210: onboard SCSI port, x2222, 4422, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V215: SGXPCI1SCSILM320-Z, SGXPCI2SCSILM320-Z, SGXPCIE2SCSIU320Z (a), (x)4422A-2
Sun Fire V240: onboard SCSI port, x2222, 4422, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V245: SGXPCI1SCSILM320-Z, SGXPCI2SCSILM320-Z, SGXPCIE2SCSIU320Z (a), (x)4422A-2
Sun Fire V250: onboard SCSI port, x2222, 4422, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V440: onboard SCSI port, x2222, 4422, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V445: SGXPCI1SCSILM320-Z, SGXPCI2SCSILM320-Z, SGXPCIE2SCSIU320Z (a), (x)4422A-2
Sun Fire V480: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V490: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V880: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire V890: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Netra 1120/1125: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Netra 1280/1290: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Netra 1400/1405: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Netra 20: x2222, 4422, 6758, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Netra 240 AC/DC: onboard SCSI port, x2222, 4422, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Netra 440: onboard SCSI port, 6758A, X4422A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Netra T5220: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Netra T5440: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M4000: SG-XPCI2SCSI-LM320-Z, SG-XPCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M5000: SG-XPCI2SCSI-LM320-Z, SG-XPCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M8000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M9000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5120: SG-(X)PCIE2SCSIU320Z
Sun SPARC Enterprise T5220: SG-(X)PCIE2SCSIU320Z
Sun SPARC Enterprise T5140: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5240: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440: SG-XPCIE2SCSIU320Z
External I/O Expansion Unit for Sun SPARC Enterprise M4000, M5000, M8000 and M9000 Servers: SG-(X)PCI2SCSILM320-Z

(a) This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.


TABLE 7-18 Sun Cluster 3 and SE3120 JBOD Support Matrix for x64

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Sun Fire V40z: SG-XPCI1SCSI-LM320, SG-XPCI1SCSI-LM320-Z, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2: SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2: SG-XPCIE2SCSIU320Z
Sun Fire X4100: SG-XPCI1SCSI-LM320
Sun Fire X4100 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4140: SG-XPCIE2SCSIU320Z
Sun Fire X4200: SG-XPCI1SCSI-LM320
Sun Fire X4200 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4240: SG-XPCIE2SCSIU320Z
Sun Fire X4250: SG-XPCIE2SCSIU320Z
Sun Fire X4440: SG-XPCIE2SCSIU320Z
Sun Fire X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Fire X4540: SG-XPCIE2SCSIU320Z
Sun Fire X4600: SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2: SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2: SG-XPCI2SCSI-LM320-Z
Sun Netra X4250: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Netra X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z


FIGURE 7-7 SE 3120 Single-Bus Configuration

FIGURE 7-8 SE 3120 Dual-Bus Configuration

Sun StorEdge 3310 JBOD Array
This section describes the configuration rules for using the Sun StorEdge 3310 JBOD (an SE 3310 without RAID controllers) as shared storage.

148 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only

Page 161: sun cluster 3 configuration guide 412383

SCSI STORAGE SUPPORT

SE 3310 JBOD Configuration Details
■ Both AC and DC power supplies are supported.

■ It IS supported to have a single dual-bus 3310 JBOD cluster configuration which is split into two separate halves that are then mirrored against each other. This configuration makes a single SE 3310 JBOD act like two separate storage devices; it requires the "-02" revision of the I/O boards. Please contact the SE 3310 Product Manager for more information.

■ Connecting an expansion 3310 JBOD unit to an existing 3310 JBOD in a cluster configuration is NOT supported.

■ There is a 12 m SCSI loop length limitation on a single SCSI loop with the SE 3310 JBOD (the loop length is the length of the cables to both hosts, plus 0.5 m of internal 3310 bus length, plus a 0.3 m jumper cable if using a single-bus configuration). For example, two 4 m host cables in a single-bus configuration give 4 + 4 + 0.5 + 0.3 = 8.8 m, which is within the limit.

■ For additional configuration information, please see the "SE 3310 Release Notes," doc # 816-7290, at http://docs.sun.com

■ For questions concerning support of specific configurations of the SF 2900, please contact product marketing directly.

■ The SE 3310 JBOD with the V440/Netra 440's shared on-board SCSI is supported. That is, the V440's on-board SCSI can be used for connecting the SE 3310 JBOD as cluster shared storage.


The support matrix for the SE 3310 JBOD with Sun Cluster 3 is listed below:

TABLE 7-19 Sun Cluster 3 and SE3310 JBOD Support Matrix for SPARC

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Netra 1120/1125, Netra 1400/1405; Sun Enterprise 220R, 250, 420R, 450: x2222, 4422/4422A-2, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 20, Netra 1280, Netra 1290; Sun Fire 280R, V440, V480/V490, V880/V890, V1280, E2900, 4800, 6800, 12K/15K, E20K/E25K: x2222, 4422A/4422A-2, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 240 AC/DC; Sun Fire V210 (a), V240, V250, V440: onboard SCSI port, x2222, 4422A/4422A-2, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 440: onboard SCSI port, 6758A, X4422A/4422A-2, SG-PCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra T5220: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Fire V125: Onboard SCSI, X4422A-2, SGXPCI2SCSILM320-Z, SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445: SGXPCI1SCSI-LM320-Z, SGXPCI2SCSI-LM320-Z, SGXPCIE2SCSIU320Z (b), (x)4422A-2
Sun Fire T1000: SG-(X)PCIE2SCSIU320Z
Sun Fire T2000: SG-(X)PCI1SCSI-LM320, SG-(X)PCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Netra T2000: SGXPCI2SCSILM320-Z
Sun SPARC Enterprise M3000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M4000/M5000: SG-XPCI2SCSI-LM320-Z, SG-XPCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M8000/M9000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5120/T5220: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5140/T5240: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440: SG-XPCIE2SCSIU320Z

(a) The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
(b) This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.


TABLE 7-20 Sun Cluster 3 and SE3310 JBOD Support Matrix for x64

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Sun Fire V40z: SG-XPCI1SCSI-LM320, SG-XPCI1SCSI-LM320-Z, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2: SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2: SG-XPCIE2SCSIU320Z
Sun Fire X4100: SG-XPCI1SCSI-LM320
Sun Fire X4100 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4140: SG-XPCIE2SCSIU320Z
Sun Fire X4200: SG-XPCI1SCSI-LM320
Sun Fire X4200 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4240: SG-XPCIE2SCSIU320Z
Sun Fire X4440: SG-XPCIE2SCSIU320Z
Sun Fire X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Fire X4600: SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2: SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2: SG-XPCI2SCSI-LM320-Z
Sun Netra X4250: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Netra X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z


FIGURE 7-9 Direct-Attached SE 3310 JBOD Configuration
[Figure: Node 1 and Node 2 each connect through HA 1 and HA 2 to SE 3310 #1 and SE 3310 #2; data on each array is mirrored to the other.]

Sun StorEdge 3310 RAID Array
This section describes the configuration rules for using the Sun StorEdge 3310 RAID (an SE 3310 with either one or two RAID controllers) as shared storage.

SE 3310 RAID Configuration Details
■ Both AC and DC power supplies are supported.

■ The SE 3310 RAID version (a 3310 with either single or dual RAID controllers) must be mirrored against another storage array in Sun Cluster configurations.

■ Connecting a maximum of 1 additional expansion 3310 JBOD unit to an existing 3310 RAID device in a cluster configuration IS supported. This brings the expansion JBOD under the control of the RAID controller, enabling the cluster to see both the 3310 RAID device and the expansion JBOD as one device.

■ There is a SCSI cable length (length of cables to both hosts) limitation of 25 m per SCSI loop with the SE 3310 RAID.

■ For additional configuration information, please see the "SE 3310 Release Notes," doc # 816-7292, at http://docs.sun.com

■ The SE 3310 RAID with the V440/Netra 440's shared on-board SCSI is supported and requires minimum patch release 113722-06.

■ Logical Volumes are NOT supported. For more information, please see bug ID 4881785.



■ The latest supported firmware is 4.21.

TABLE 7-21 Sun Cluster 3 and SE3310 RAID Support Matrix for SPARC

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Netra 1120/1125, Netra 1400/1405; Sun Enterprise 220R, 250, 420R, 450: 6758A, x2222, 4422, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 20, Netra 1280, Netra 1290; Sun Fire 280R, V440, V480/V490, V880/V890, V1280, E2900, 4800, 6800, 12K/15K, E20K/E25K: 6758A, x2222, 4422, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 240; Sun Fire V210 (a), V240, V250: onboard SCSI port, x2222, 4422, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 440: 6758A, X4422A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra T5220: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Fire V125: Onboard SCSI, X4422A-2, SGXPCI2SCSILM320-Z, SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445: SGXPCI1SCSI-LM320-Z, SGXPCI2SCSI-LM320-Z, SGXPCIE2SCSIU320Z (b), (x)4422A-2
Sun Fire T1000: SG-(X)PCIE2SCSIU320Z
Sun Fire T2000: SG-(X)PCI1SCSI-LM320, SG-(X)PCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Netra T2000: SGXPCI2SCSILM320-Z
Sun SPARC Enterprise M3000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M4000/M5000: SG-XPCI2SCSI-LM320-Z, SG-XPCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M8000/M9000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5120/T5220: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5140/T5240: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440: SG-XPCIE2SCSIU320Z

(a) The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
(b) This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.

TABLE 7-22 Sun Cluster 3 and SE3310 RAID Support Matrix for x64

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Sun Fire V20z: X4422A (a)
Sun Fire V40z: SG-XPCI1SCSI-LM320, SG-XPCI1SCSI-LM320-Z, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2: SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2: SG-XPCIE2SCSIU320Z
Sun Fire X4100: SG-XPCI1SCSI-LM320
Sun Fire X4100 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4140: SG-XPCIE2SCSIU320Z
Sun Fire X4200: SG-XPCI1SCSI-LM320
Sun Fire X4200 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4240: SG-XPCIE2SCSIU320Z
Sun Fire X4440: SG-XPCIE2SCSIU320Z
Sun Fire X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Fire X4600: SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2: SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2: SG-XPCI2SCSI-LM320-Z
Sun Netra X4250: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Netra X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z

(a) Requires "GLM Device Driver for the Sun Dual Gigabit Ethernet and Dual SCSI/P Adapter 1.0" for Solaris 9 x86 at http://www.sun.com/software/download/products/4134849d.html or Solaris 10 x86 at http://www.sun.com/software/download/products/441b0a3e.html. The glm driver for Solaris 10 x86 is bundled starting with Update 2, making this SDLC download unnecessary.


FIGURE 7-10 Direct-Attached SE 3310 RAID Configuration
[Figure: Node 1 and Node 2 each connect through HA 1 and HA 2 to SE 3310 #1 (data) and SE 3310 #2 (mirror).]

Sun StorEdge 3320 JBOD Array
This section describes the configuration rules for using the Sun StorEdge 3320 JBOD (an SE 3320 without RAID controllers) as shared storage.

SE 3320 JBOD Configuration Details
■ Both AC and DC power supplies are supported.

■ It IS supported to have a single dual-bus 3320 JBOD cluster configuration which is split into two separate halves that are then mirrored against each other. This configuration makes a single SE 3320 JBOD act like two separate storage devices.

■ Connecting an expansion 3320 JBOD unit to an existing 3320 JBOD in a cluster configuration is NOT supported.

■ Effective July 14, 2008, new dual-hosted single-bus SE3320 JBOD configurations are not supported. Cabling the array into Split-Bus mode using 2-meter cables is the current supported method for all new installations. See Field Action Bulletin (FAB) 239464 (Dual hosted Sun StorageTek 3320 JBOD in Single-Bus configurations may experience parity errors) for details.

■ For additional configuration information, please see the "SE 3320 Release Notes," doc # 816-7290, at http://docs.sun.com



■ For questions concerning support of specific configurations of the SF 2900, please contact product marketing directly.

■ The SE 3320 JBOD with the V440/Netra 440's shared on-board SCSI is supported. That is, the V440's on-board SCSI can be used for connecting the SE 3320 JBOD as cluster shared storage.


The support matrix for the SE 3320 JBOD with Sun Cluster 3 is listed below:

TABLE 7-23 Sun Cluster 3 and SE3320 JBOD Support Matrix for SPARC

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B) (c), 10m (3831B) (c). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Netra 1120/1125, Netra 1400/1405; Sun Enterprise 220R, 250, 420R, 450: x2222, 4422A/4422A-2, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 20, Netra 1280, Netra 1290; Sun Fire 280R, V440, V480/V490, V880/V890, V1280, E2900, 4800, 6800, 12K/15K, E20K/E25K: x2222, 4422A/4422A-2, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 240 AC/DC; Sun Fire V210 (a), V240, V250, V440: onboard SCSI port, x2222, 4422A/4422A-2, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 440: onboard SCSI port, 6758A, X4422A/4422A-2, SG-PCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra T2000: SGXPCI2SCSILM320-Z
Netra T5220: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Netra T5440: SG-XPCIE2SCSIU320Z
Sun Fire V125: Onboard SCSI, X4422A-2, SGXPCI2SCSILM320-Z, SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445: SGXPCI1SCSI-LM320-Z, SGXPCI2SCSI-LM320-Z, SGXPCIE2SCSIU320Z (b), (x)4422A-2
Sun Fire T1000: SG-(X)PCIE2SCSIU320Z
Sun Fire T2000: SG-(X)PCI1SCSI-LM320, SG-(X)PCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M3000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M4000/M5000: SG-XPCI2SCSI-LM320-Z, SG-XPCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M8000/M9000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5120/T5220: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5140/T5240: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440: SG-XPCIE2SCSIU320Z
External I/O Expansion Unit for Sun SPARC Enterprise M4000, M5000, M8000 and M9000 Servers: SG-(X)PCI2SCSILM320-Z

(a) The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
(b) This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
(c) Effective July 14, 2008, no longer supported for SE3320 JBOD Sun Cluster configurations. See the FAB discussion above.


TABLE 7-24 Sun Cluster 3 and SE3320 JBOD Support Matrix for x64

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Sun Fire V40z: SG-XPCI1SCSI-LM320, SG-XPCI1SCSI-LM320-Z, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2: SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2: SG-XPCIE2SCSIU320Z
Sun Fire X4100: SG-XPCI1SCSI-LM320
Sun Fire X4100 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4140: SG-XPCIE2SCSIU320Z
Sun Fire X4200: SG-XPCI1SCSI-LM320
Sun Fire X4200 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4240: SG-XPCIE2SCSIU320Z
Sun Fire X4250: SG-XPCIE2SCSIU320Z
Sun Fire X4440: SG-XPCIE2SCSIU320Z
Sun Fire X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Fire X4540: SG-XPCIE2SCSIU320Z
Sun Fire X4600: SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2: SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2: SG-XPCI2SCSI-LM320-Z
Sun Netra X4250: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Netra X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z


Sun StorEdge 3320 RAID Array
This section describes the configuration rules for using the Sun StorEdge 3320 RAID (an SE 3320 with either one or two RAID controllers) as shared storage.

SE 3320 RAID Configuration Details
■ Both AC and DC power supplies are supported.

■ The SE 3320 RAID version (a 3320 with either single or dual RAID controllers) must be mirrored against another storage array in Sun Cluster configurations.

■ Connecting a maximum of 1 additional expansion 3320 JBOD unit to an existing 3320 RAID device in a cluster configuration IS supported. This brings the expansion JBOD under the control of the RAID controller, enabling the cluster to see both the 3320 RAID device and the expansion JBOD as one device.

■ There is a SCSI cable length (length of cables to both hosts) limitation of 25 m per SCSI loop with the SE 3320 RAID.

■ For additional configuration information, please see the "SE 3320 Release Notes," doc # 816-7292, at http://docs.sun.com

■ The SE 3320 RAID with the V440/Netra 440's shared on-board SCSI is supported and requires minimum patch release 113722-06.

■ Logical Volumes are NOT supported. For more information, please see bug ID 4881785.


■ The latest supported firmware is 4.21.

TABLE 7-25 Sun Cluster 3 and SE3320 RAID Support Matrix for SPARC

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Netra 1120/1125, Netra 1400/1405, Netra 20, Netra 1280, Netra 1290; Sun Enterprise 220R, 250, 420R, 450; Sun Fire 280R, V440, V480/V490, V880/V890, V1280, E2900, 4800, 6800, 12K/15K, E20K/E25K: 6758A, x2222, 4422, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 240; Sun Fire V210 (a), V240, V250: onboard SCSI port, x2222, 4422, 6758A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra 440: 6758A, X4422A, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Netra T2000: SGXPCI2SCSILM320-Z
Netra T5220: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Netra T5440: SG-XPCIE2SCSIU320Z
Sun Fire V125: Onboard SCSI, X4422A-2, SGXPCI2SCSILM320-Z, SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445: SGXPCI1SCSI-LM320-Z, SGXPCI2SCSI-LM320-Z, SGXPCIE2SCSIU320Z (b), (x)4422A-2
Sun Fire T1000: SG-(X)PCIE2SCSIU320Z
Sun Fire T2000: SG-(X)PCI1SCSI-LM320, SG-(X)PCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M3000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M4000/M5000: SG-XPCI2SCSI-LM320-Z, SG-XPCI1SCSI-LM320-Z, SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M8000/M9000: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5120/T5220: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5140/T5240: SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440: SG-XPCIE2SCSIU320Z
External I/O Expansion Unit for Sun SPARC Enterprise M4000, M5000, M8000 and M9000 Servers: SG-(X)PCI2SCSILM320-Z

(a) The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
(b) This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.


TABLE 7-26 Sun Cluster 3 and SE3320 RAID Support Matrix for x64

Cable (all entries): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B). Maximum node connectivity (all entries): 2.

Server: Host Adapter
Sun Fire V40z: SG-XPCI1SCSI-LM320, SG-XPCI1SCSI-LM320-Z, SG-XPCI2SCSI-LM320, SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2: SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2: SG-XPCIE2SCSIU320Z
Sun Fire X4100: SG-XPCI1SCSI-LM320
Sun Fire X4100 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4140: SG-XPCIE2SCSIU320Z
Sun Fire X4200: SG-XPCI1SCSI-LM320
Sun Fire X4200 M2: SG-XPCIE2SCSI-LM320
Sun Fire X4240: SG-XPCIE2SCSIU320Z
Sun Fire X4250: SG-XPCIE2SCSIU320Z
Sun Fire X4440: SG-XPCIE2SCSIU320Z
Sun Fire X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Fire X4540: SG-XPCIE2SCSIU320Z
Sun Fire X4600: SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2: SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2: SG-XPCI2SCSI-LM320-Z
Sun Netra X4250: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z
Sun Netra X4450: SG-XPCIE2SCSIU320Z, SGXPCI2SCSILM320-Z


FIGURE 7-11 Direct-Attached SE 3320 RAID Configuration
[Figure: Node 1 and Node 2 each connect through HA 1 and HA 2 to SE 3320 #1 (data) and SE 3320 #2 (mirror).]

FIGURE 7-12 Direct-Attached SE 3320 RAID with Attached JBODs (for additional storage)
[Figure: the same two-node layout, with an expansion JBOD attached to each SE 3320 RAID array through its B1 port for additional data and mirror capacity. Note: the B1 port on the 3320s represents the single-bus connection port.]


CHAPTER 8

SAS Storage Support

This chapter covers Sun Cluster supported SAS storage devices.

Sun StorageTek 2530 RAID Array

ST 2530 Configuration Rules:
■ Sun Cluster supports both Simplex (with one controller) and Duplex (with two controllers) configurations.

■ For a Simplex configuration, the ST 2530 array requires volume manager software such as SVM or VxVM to mirror data across two arrays.

■ For a Duplex configuration, the ST 2530 array can be supported with properly configured dual controllers, multipathing software, and hardware RAID, and without volume manager software.

Node Connectivity Limits
■ A maximum of 3 nodes can be connected to any one LUN.

Hubs and Switches
■ SAS expanders are not supported as of May 2008.


RAID Requirements
■ ST 2530 arrays are supported without software mirroring when properly configured with dual controllers, multipathing, and hardware RAID providing in-array data redundancy.

■ A single 2530 array is supported when properly configured with dual controllers, multipathing, and hardware RAID providing in-array data redundancy.

Multipathing
■ Sun StorEdge Traffic Manager (MPxIO) is required in a Duplex configuration (ST 2530 with two controllers). Solaris MPT patch 125081-14 or later is required to configure Sun Cluster. A hedged enablement sketch follows.

ST 2530 Volume Manager Support
■ There are no Sun Cluster specific requirements. Please refer to the ST 2530 product documentation regarding Volume Manager support.

Software, Firmware, and Patches
■ CAM 6.0.1 Build 10 is the minimum requirement for Sun Cluster.

■ x64: Starting with Solaris 10 8/07.

■ Please see the ST 2530 release notes for other requirements.

Sharing ST 2530 Arrays
■ LUN masking will enable sharing across multiple platforms. Please refer to the base product documentation for further details.

ST 2530 Support Matrix and Exceptions
To determine whether your configuration is supported:

1. First check Table 5-6, SAS Storage for SPARC Servers, on page 55, or Table 5-7, SAS Storage for x64 Servers, on page 56 to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list below:

■ SG-XPCI8SAS-E-Z


■ SG-XPCIE8SAS-E-Z

3. Check Table 8-1 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters as indicated by the "Server Search" under the Interop Tool "Searches" tab, https://interop.central.sun.com/interop/interop

Sun Storage J4200 and J4400 JBOD Arrays

J4200/J4400 Configuration Rules:
■ Sun Cluster supports both SAS and SATA HDDs.

■ The J4200 and J4400 products require installed HDDs to be all SAS or all SATA. Mixing is not permitted.

■ When configuring SATA HDDs in J4200/J4400 shared storage:

■ Each SAS I/O Module (SIM) is required to have only a single host connection. Thus, all J4200 or J4400 shared storage configurations using SATA HDDs must have dual SIMs.

■ SCSI-reservation-based fencing and quorum-device support must be disabled. See Software Quorum in the Sun Cluster 3.2 1/09 documentation for more information.

■ A single J4200 or J4400 with SATA HDDs is supported, but due to the single-host-connection-per-SIM requirement it exhibits single points of failure.

■ A single J4200 or J4400 with SAS HDDs is supported when configured with dual SIMs, MPxIO, and proper data redundancy, but provides less availability.

■ Also see the J4200/J4400 Release Notes and SAS Multipathing Guide for additional information.

TABLE 8-1 ST 2530 Array/Server Combinations with Limited HBA Support

Server               Host Adapter
Sun Netra T2000      SG-XPCI8SAS-E-Z
Sun Netra X4200 M2   SG-XPCIE8SAS-E-Z (a)

(a) Not NEBS tested.


Node Connectivity Limits
■ A LUN can only be shared by two cluster nodes.

RAID Requirements
■ It is recommended to mirror shared data in a J4200 or J4400 with another array.

■ When configured with dual SIMs and MPxIO, shared data can be mirrored within a single J4200 or J4400 with SAS HDDs, but with less availability.

■ When a J4200 or J4400 array is configured with a single SIM, shared data must be mirrored to another array.

Multipathing
■ Sun Cluster support with SAS multipathing is enabled and qualified when using SAS HDDs.

Volume Manager Support
■ There are no Sun Cluster specific requirements. Please see the base product documentation regarding volume manager support.

Software, Firmware, and Patches
■ SAS HDDs: Sun Cluster support starts with Solaris 10 5/08 (update 5). (J4200/J4400 product support starts with Solaris 10 8/07 (update 4).)

■ SATA HDDs: Sun Cluster support starts with Solaris 10 10/08 (update 6) and Sun Cluster 3.2 1/09 (update 2), plus patches. The Software Quorum feature of Sun Cluster 3.2 1/09 is required. Refer to the Sun Cluster 3.2 1/09 documentation for details (a hedged sketch of the related settings follows this list).

Sharing J4200/J4400 JBOD Arrays
■ A J4200 or J4400 cannot be shared with another cluster or with non-cluster nodes.

J4200/J4400 Support Matrix and Exceptions
To determine whether your configuration is supported:


1. First check Table 5-6, SAS Storage for SPARC Servers, on page 55, or Table 5-7, SAS Storage for x64 Servers, on page 56 to determine whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list below:

■ SG-XPCI8SAS-E-Z

■ SG-XPCIE8SAS-E-Z

3. Check Table 8-2 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination with host adapters as indicated by the "Server Search" under the Interop Tool "Searches" tab, https://interop.central.sun.com/interop/interop

Sun Storage J4400 JBOD Array
See "Sun Storage J4200 and J4400 JBOD Arrays" on page 169.

TABLE 8-2 SS J4200/J4400 Array/Server Combinations with Limited HBA Support

Server               Host Adapter
None at this time


CHAPTER 9

Ethernet Storage Support

This chapter covers Sun Cluster supported Ethernet-connected shared storage devices.

Sun StorageTek 2510 RAID Array

ST 2510 Configuration Rules:
■ Sun Cluster supports the Duplex (ST 2510 with 2x controllers) configuration.

■ For Duplex configuration, a single ST 2510 array can be supported with properly configured dual controllers, multipathing, hardware RAID and without volume manager software.

■ The ST 2510 can only be configured on the same subnet as that of the cluster nodes due to bug 6614299 (a minimal initiator sketch follows).
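As a minimal sketch of that subnet restriction in practice, the following Solaris 10 iscsiadm commands point a node's iSCSI initiator at an ST 2510 portal assumed (hypothetically) to be 192.168.10.50, on the same subnet as the cluster nodes:

    # Register the array's iSCSI portal and enable SendTargets discovery.
    iscsiadm add discovery-address 192.168.10.50:3260
    iscsiadm modify discovery --sendtargets enable

    # Create device nodes for the discovered LUNs.
    devfsadm -i iscsi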

Node Connectivity Limits
■ A maximum of 4 nodes can be connected to any one LUN.

Hubs and Switches
■ See the subnet restriction in the Configuration Rules section above.

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 173

Page 186: sun cluster 3 configuration guide 412383

SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

RAID Requirements
■ ST 2510 arrays are supported without software mirroring when properly configured with dual controllers, multipathing, and hardware RAID providing in-array data redundancy.
■ A single 2510 array is supported when properly configured with dual controllers, multipathing, and hardware RAID providing in-array data redundancy.

Multipathing
■ For Duplex configuration, the option to use Sun StorEdge Traffic Manager (MPxIO) is available. If MPxIO is not used, data must be mirrored to another array or to another volume within the ST 2510.

ST 2510 Volume Manager Support
■ There are no Sun Cluster specific requirements.
■ Please see the ST 2510 product documentation regarding Volume Manager support.

Software, Firmware, and Patches
■ SPARC server requirements:
■ SC 3.1 8/05 (update 4) + patches, SC 3.2 + patches and later

■ Solaris 10 5/09 (update 7) + patches and later

■ CAM 6.2.0 (FW 6.70.54.11) and later

■ x64 server requirements:

■ SC 3.1 8/05 (update 4) + patches, SC 3.2 + patches and later

■ Solaris 10 8/07 (update 4) + patches, and later

■ CAM 6.0.1 and later

■ Please see the ST 2510 Release Notes for ST 2510 requirements.

Sharing ST 2510 Arrays
■ LUN masking will enable sharing across multiple platforms. See product documentation for details.


ST 2510 Support Matrix and Exceptions:
The StorageTek 2510 product team does not maintain a list of qualified servers. Per the StorageTek 2500 Just The Facts, SunWIN Token #500199: "The Sun StorageTek 2510 iSCSI Array is supported with any ethernet enabled device running in a supported O/S environment."

Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster node, with any Ethernet interface supported by that server, provided the requirements for Solaris release, patches, etc. are met.

Sun StorageTek 5000 NAS Appliance

ST 5000 NAS Configuration Rules:
■ This information covers the following models:

■ Sun StorageTek 5210 NAS Appliance

■ Sun StorageTek 5220 NAS Appliance

■ Sun StorageTek 5310 NAS Appliance

■ Sun StorageTek 5320 NAS Appliance

■ Sun StorageTek 5320 NAS Cluster Appliance

■ Directories created on these Network Attached Storage (NAS) devices can be exported to cluster nodes, mounted on all cluster nodes, and be available for general use by highly available cluster applications.

■ Support includes fencing of failed cluster nodes from NAS directories and the release of NFS locks during failover.

■ There is no fencing support for NFS-exported file systems when used in a non-global zone, including nodes of a zone cluster.

■ Fencing support of NAS devices is provided in global zones.

■ Device configuration is fairly straightforward, with the creation and exporting of NAS directories being done as in a non-clustered set-up, with some special considerations for setting up the directory access list (see the sketch after this list):

■ Do not enable general access for cluster nodes or use host groups to grant access to directories for the entire cluster, as these two actions will hinder the fencing of failed cluster nodes.

■ Do specify access for each directory and each cluster node explicitly instead.


■ When adding trusted admin access for the cluster, make sure the trusted admin access entry comes before any general admin access entries.

■ It is also a good practice to set the NAS fencing module to load automatically when the NAS device boots. If the NAS device is rebooted, and the fencing module is not set to automatically load, failed cluster nodes will not be able to be fenced. Please see the Sun Cluster System Administration Guide for details on setting the NAS fencing module to load at boot time.

■ iSCSI LUNs may only be used as quorum devices.

■ An iSCSI LUN quorum device must be on the same subnet as that of the cluster nodes due to bug 6614299.
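On the cluster side, registering the NAS device and one exported directory might look like the following minimal sketch; the device name nas1, the admin user ID, and the directory path are hypothetical, and the device type string passed to -t should be verified against your Sun Cluster release:

    # Register the NAS appliance with the cluster (names are hypothetical;
    # confirm the -t device type for your Sun Cluster release).
    clnasdevice add -t sun -p userid=admin nas1

    # Register an exported directory so failed nodes can be fenced from it
    # (/vol/oradata is a hypothetical export).
    clnasdevice add-dir -d /vol/oradata nas1

    # Verify the configuration.
    clnasdevice show -v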

Node Connectivity Limits
■ An ST 5000 NAS iSCSI LUN quorum device only supports 2-node clusters.

Hubs and Switches
■ See the subnet restriction in the Configuration Rules section above.

RAID Requirements
■ N/A

Multipathing
■ N/A

ST 5000 NAS Volume Manager Support
■ N/A

Software, Firmware, and Patches
■ Starts with SC 3.2 2/08 (update 1).

■ ST 5000 NAS iSCSI LUN quorum device support starts with Solaris 10 6/06 (update 2).

■ Starts with NAS firmware version 4.21.

■ Please see the ST 5000 NAS Release Notes for other ST 5000 NAS requirements.


Sharing ST 5000 NAS Arrays
■ N/A

ST 5000 NAS Support Matrix and Exceptions:
The ST 5000 NAS product team does not maintain a list of qualified servers. Per the ST 5320 NAS WWWW, SunWIN Token #472566: "A client is any computer on the network that requests file services from the StorageTek 5000 NAS Appliance. The list of clients above represents client environments that have been tested. The list is not all inclusive and additional client OSes are scheduled for testing. In general, if a client implementation follows the NFS version 2 or 3 protocol or the CIFS specifications, it is supported with the StorageTek 5000 NAS Appliance."

Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster node, provided the requirements for Solaris release, patches, etc. are met.

Sun StorageTek 5210 NAS Appliance

Sun StorageTek 5210 NAS Appliance Configuration Rules:
■ Please see Section "Sun StorageTek 5000 NAS Appliance" on page 9-175.

Sun StorageTek 5220 NAS Appliance

Sun StorageTek 5220 NAS Appliance Configuration Rules:
■ Please see Section "Sun StorageTek 5000 NAS Appliance" on page 9-175.


Sun StorageTek 5310 NAS Appliance

Sun StorageTek 5310 NAS Appliance Configuration Rules:
■ Please see Section "Sun StorageTek 5000 NAS Appliance" on page 9-175.

Sun StorageTek 5320 NAS Appliance

Sun StorageTek 5320 NAS Appliance Configuration Rules:
■ Please see Section "Sun StorageTek 5000 NAS Appliance" on page 9-175.

Sun StorageTek 5320 NAS Cluster Appliance

Sun StorageTek 5320 NAS Cluster Appliance Configuration Rules:
■ Please see Section "Sun StorageTek 5000 NAS Appliance" on page 9-175.


Sun Storage 7000 Unified Storage System

SS 7000 Configuration Rules:
■ This information covers the following models:

■ Sun Storage 7110

■ Sun Storage 7210

■ Sun Storage 7310 single- and dual-controller configurations

■ Sun Storage 7410 single- and dual-controller configurations

■ The SS 7000 can only be used as an iSCSI block device. File-level protocols, e.g., NFS, are not supported except with Oracle RAC.

■ Oracle RAC is supported with the SS 7000 over NFS. See Section "Oracle Real Application Cluster (OPS/RAC)" on page 11-245 for details.

■ SS 7000 iSCSI LUNs can only be configured on the same subnet as that of the cluster nodes due to bug 6614299.

■ Starting with Sun Storage 7000 Software Update 2009.Q3 (see the quorum sketch after this list):

■ SS 7000 iSCSI LUNs can be configured as “scsi2” or “scsi3” quorum devices.

■ SS 7000 iSCSI LUNs can be configured with fencing enabled.

■ SS 7000 with releases prior to Software Update 2009.Q3:

■ SS 7000 iSCSI LUNs must be configured to use Software Quorum - "scsi2" or "scsi3" quorum devices are not supported.

■ SS 7000 iSCSI LUNs must be configured with fencing disabled.
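As a minimal sketch of the two quorum options above (the DID device d12 is hypothetical, and the commands assume the Sun Cluster 3.2 CLI):

    # Make a newly discovered SS 7000 iSCSI LUN known to the cluster
    # device framework.
    cldevice populate

    # With Software Update 2009.Q3 or later: add the LUN as an ordinary
    # quorum device with fencing left enabled.
    clquorum add d12

    # With earlier SS 7000 releases: disable fencing on the device first,
    # so Sun Cluster 3.2 1/09 applies the software quorum protocol.
    cldevice set -p default_fencing=nofencing d12
    clquorum add d12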

Node Connectivity Limits
■ A maximum of 8 nodes can be connected to any one LUN.

Hubs and Switches
■ See the subnet restriction in the Configuration Rules section above.

RAID Requirements
■ There are no Sun Cluster specific requirements.


Multipathing
■ There are no Sun Cluster specific requirements.

SS 7000 Volume Manager Support
■ There are no Sun Cluster specific requirements.

■ Please see the SS 7000 product documentation regarding Volume Manager support.

Software, Firmware, and Patches
■ SS 7000 support starts with Solaris 10 10/08 (update 6).

■ iSCSI LUNs used as "scsi2" or "scsi3" quorum devices with fencing enabled: support starts with Sun Cluster 3.1 8/05 (update 4).

■ iSCSI LUNs used with Software Quorum and fencing disabled: support starts with Sun Cluster 3.2 1/09 (update 2).

■ See Section "Oracle Real Application Cluster (OPS/RAC)" on page 11-245 when using RAC with NFS.

■ Please see the SS 7000 documents for SS 7000 requirements.

Sharing SS 7000 Arrays
■ TBD

SS 7000 Support Matrix and Exceptions:
The Sun Storage 7000 Unified Storage System product team does not maintain a list of qualified servers. The "Sun Storage 7000 Family What Works With What," SunWIN Token #555895, 1/27/09 revision, states in the Client/Operating System Support section: "A client is any computer on the network that requests file- or block-level services from the Storage 7000 Unified Storage System. ... In general, if a client implementation follows the protocol specifications, it is supported with the Storage 7000 System."

Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster node, with any Ethernet interface supported by that server, provided the requirements for Solaris release, patches, etc. are met.


Sun Storage 7110 Unified Storage System

SS 7110 Configuration Rules:
■ Please see "Sun Storage 7000 Unified Storage System" on page 179.

Sun Storage 7210 Unified Storage System

SS 7210 Configuration Rules:
■ Please see "Sun Storage 7000 Unified Storage System" on page 179.

Sun Storage 7310 Unified Storage System

SS 7310 Configuration Rules:
■ Includes both single- and dual-controller configurations.

■ Please see “Sun Storage 7000 Unified Storage System” on page 179.

Sun Storage 7410 Unified Storage System

SS 7410 Configuration Rules:
■ Includes both single- and dual-controller configurations.

■ Please see “Sun Storage 7000 Unified Storage System” on page 179.


CHAPTER 10

Network Configuration

Cluster Interconnect
The cluster interconnect is the network fabric, private to the cluster, for communication between the cluster nodes. This fabric is used for cluster-private communication as well as cluster file system data transfer among the nodes. The fabric consists of transport paths between all nodes of the cluster.

The following are general cluster-interconnect configuration guidelines. Please refer to the technology discussions in this chapter for technology-specific guidelines.

■ Each transport path must connect all the nodes in the cluster.
■ All private transport paths in the cluster interconnect network fabric must use the same technology and operate at the same speed. Technology in this discussion, for example, is Ethernet (both fiber and UTP) vs InfiniBand.
■ There can be a maximum of six transport paths.
■ It is recommended that at least two transport paths terminate on separate network adaptors on each node in the cluster.
■ A single transport path is supported, although it is a single point of failure.
■ A single multi-port NIC may support more than one transport path, although it could be a single point of failure. Please note that NICs on both sides of a transport path are not required to have the same number of ports.
■ It is recommended that the anticipated data communication traffic between the nodes be taken into consideration while sizing the capacity of the cluster interconnect.
■ Interconnects can be of the following two types: point-to-point and junction-based.
■ Public and private networks can share a single NIC with multiple ports.


Point-to-Point Interconnect
For 2-node clusters, a point-to-point connection between the nodes forms a complete interconnect.

FIGURE 10-1 Two Point-to-Point Interconnects in a Two-Node cluster

Junction-Based Interconnect
For clusters with more than two nodes, a switch is necessary to form an interconnect. Note that this option can be used for a two-node cluster as well. Using VLANs for private interconnect traffic is supported. A minimal command sketch of both interconnect types follows.
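The following is a minimal sketch of both interconnect types using the Sun Cluster 3.2 clinterconnect command; the node names, adapter names (e1000g1, e1000g2), and switch name (switch1) are hypothetical:

    # Point-to-point (two-node cluster): register the adapter endpoints,
    # then join them directly with a cable.
    clinterconnect add node1:e1000g1
    clinterconnect add node2:e1000g1
    clinterconnect add node1:e1000g1,node2:e1000g1

    # Junction-based: register the switch, then cable each node's
    # adapter to it.
    clinterconnect add switch1
    clinterconnect add node1:e1000g2
    clinterconnect add node1:e1000g2,switch1

    # Check the resulting transport paths.
    clinterconnect status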



FIGURE 10-2 Two Junction-Based Interconnects in an N-Node Cluster (N <=8)

Private Interconnect Technology Support
Private interconnects must operate at the same speed (e.g., a cluster having one interconnect path of gigabit speed and the other path of Fast Ethernet speed is not supported). The transport paths must all use the same technology, e.g., a cluster having one Ethernet transport path and one IB transport path is not supported. The following types of private interconnects are supported with Sun Cluster 3:

Ethernet
■ There can be a maximum of 6 independent Ethernet interconnects within a cluster.
■ All Ethernet ports within an interconnect path must operate at the same speed.
■ VLAN Support

■ Sun Cluster supports the use of private interconnect networks over switch-based virtual local area networks (VLAN). In a switch-based VLAN environment, Sun Cluster enables multiple clusters and non-clustered systems to share Ethernet switches in two different configurations.

■ The implementation of switch-based VLAN environments is vendor-specific. Since each switch manufacturer implements VLAN differently, the following guidelines address Sun Cluster requirements regarding how VLANs should be configured for use with cluster interconnects (a tagged-VLAN adapter sketch follows this list).



■ You must understand your capacity needs before you set up a VLAN configuration. To do this, you must know the minimum bandwidth necessary for your interconnect and application traffic.

■ Interconnect traffic must be placed in the highest priority queue.

■ All ports must be equally serviced, similar to a round robin or first in first outmodel.

■ You must verify that you have properly configured your VLANs to prevent path timeouts.

■ Linking of VLAN switches together is supported. For minimum quality of service requirements for your Sun Cluster configuration, please see the Sun Cluster 3 Release Notes Supplement.

■ VLAN configurations are supported in campus cluster configurations with the same restrictions as "normal" Sun Cluster configurations.

■ Transport paths may share a switch by using VLANs.

■ Jumbo Frames Support

■ Sun Cluster 3.1 and all updates prior to 3.1 9/04 (update 3) are supported and require the following patches:

117950-07 (or later): SC3.1: Core Patch for Solaris 8.

117949-07 (or later): SC3.1: Core Patch for Solaris 9.

■ Sun Cluster 3.1 9/04 (update 3) and later are supported.

■ Agents support:

- Solaris 8 on Sun Cluster supports only Oracle RAC.

- Solaris 9 and later on Sun Cluster supports all Sun Cluster agents.

- When using Scalable Services and jumbo frames on your public network, it is required that the Maximum Transfer Unit (MTU) of the private network is the same size or larger than the MTU of your public network.

■ Solaris support:

- Solaris 8 requires patch 111883-23 (or later): SunOS 5.8: Sun GigaSwift Ethernet 1.0 driver patch.

- Solaris 9 requires patch 112817-16 (or later): SunOS 5.9: Sun GigaSwift Ethernet 1.0 driver patch.

- Solaris 10 does not have specific patch requirements for this feature.
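As a sketch of the tagged-VLAN adapter naming used on Solaris: a tagged VLAN interface is addressed through the driver instance number (VLAN ID x 1000) + instance, so VLAN 12 on ce instance 0 appears as ce12000. The adapter and VLAN ID here are hypothetical:

    # Plumb a tagged VLAN interface for VLAN 12 on ce instance 0
    # (12 * 1000 + 0 -> ce12000).
    ifconfig ce12000 plumb

    # The tagged interface can then be named as a transport adapter:
    clinterconnect add node1:ce12000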

PCI/SCI
SCI is supported with maximum 4-node clusters.
■ An SCI interconnect consists of a pair of cable connections.


■ A 2-node cluster may deploy point-to-point SCI transport paths.

■ Each point-to-point SCI path uses two SCI cables for a total of 4 cables.

■ Each junction-based SCI path requires two SCI cables to connect to its respective SCI switch, e.g., a 2-node cluster using SCI switches will have a total of 8 cables.

■ A maximum of 4 SCI cards per node is supported. Note that DR is supported on 4-SCI configurations but requires patches 117124-05 (Solaris 9) and 111335-26 (Solaris 8).

■ Configuring more than 2 SCI transport paths requires Sun Cluster 3.1 U1 or later.

Sun Fire Link
■ Supports up to 4-node Sun Cluster configurations.

■ Only DLPI mode is currently supported.

InfiniBand
The Sun Dual Port 4X IB Host Channel Adapter is supported with maximum 4-node clusters.

■ Sun Cluster 3.1 update 4 (or later).

■ Solaris 10 update 1 (or later).

■ Solaris Patch Requirements:

■ 118852-07 (or later) SunOS 5.10: patch kernel/misc/sparcv9/ibcm

■ All cluster configurations require one Sun IB Switch 9P per transport path. IB does not support a point-to-point interconnect.

■ Each IB transport path requires one IB cable from an HCA port to the switch, e.g., a two-node cluster using IB will use a total of 4 cables.

■ A maximum of 2 IB transport paths per node is supported. Using two IB HCA cards is recommended for best availability; however, using both ports of a single HCA is supported but may reduce availability. Note that some servers only support a single IB HCA card.


The PCI network interfaces that can be used to set up the cluster interconnect are listed in Table 10-1.

TABLE 10-1 Cluster Interconnects: PCI Network Interfaces for SPARC Servers

Network Interface Cards (table columns): Onboard Ethernet/Gigabit Ports; X1027 PCI-E Dual 10GigE Fiber Low Profile (e); X1032 SunSwift PCI; X1033 Fast-Ethernet PCI; X1034 Quad-Fast Ethernet PCI (f); X1074 SCI PCI; X1141 Gigabit Ethernet PCI; X1150/X3150 Gigabit Ethernet PCI; X1151/X3151 Gigabit Ethernet PCI; X1233A/X1233A-Z InfiniBand HCA PCI; X1236A-Z InfiniBand HCA PCIe; X2222A Combo Dual Fast Ethernet-Dual SCSI PCI; X4150A/X4151A Gigabit Ethernet PCI; X4150A-2/X4151A-2 Gigabit Ethernet PCI; X4422A/X4422A-2 Combo Dual Gigabit Ethernet-Dual SCSI PCI; X4444A Quad-Gigabit Ethernet card (i); X4445A Quad-Gigabit Ethernet card (i); X4447A-Z x8 PCI-E Quad Gigabit Ethernet (e, j); X5544A/X5544A-4 10 Gigabit Ethernet PCI; X7280A-2 Gigabit Ethernet UTP PCI-E (d, j); X7281A-2 Gigabit Ethernet MMF PCI-E (j); X7285 Sun PCI-X Dual GigE UTP Low Profile; X7286 Sun PCI-X Single GigE MMF Low Profile; Sun Fire Cluster Link (Wildcat) (k)

Servers (table rows): Sun Netra T1 AC200/DC200; Sun Netra t1120/1125; Sun Netra t1400/1405; Sun Netra 20; Sun Netra 120; Sun Netra 210; Sun Netra 240 AC/DC; Sun Netra 440; Sun Netra 1280; Sun Netra 1290; Sun Netra CP3010; Sun Netra CP3060; Sun Netra CP3260 (c); Sun Netra T2000 (d); Sun Netra T5220; Sun Netra T5440; Sun Enterprise 220R; Sun Enterprise 250; Sun Enterprise 420; Sun Enterprise 450; Sun Enterprise 3x00; Sun Enterprise 4x00; Sun Enterprise 5x00; Sun Enterprise 6x00; Sun Enterprise 10K; Sun Fire T1000; Sun Fire T2000 (d); Sun Fire V120; Sun Fire V125; Sun Fire V210 (a); Sun Fire V215 (j); Sun Fire V240; Sun Fire V245 (j); Sun Fire V250; Sun Fire 280R; Sun Fire V440; Sun Fire V445 (j); Sun Fire V480; Sun Fire V490; Sun Fire V880; Sun Fire V890; Sun Fire V1280; Sun Fire E2900; Sun Fire 3800; Sun Fire 4800/6800 (k); Sun Fire 4810 (k); Sun Fire E4900/E6900; Sun Fire 12K/15K (b, g, k); Sun Fire E20K/E25K (b); Sun SPARC Enterprise M3000; Sun SPARC Enterprise M4000/M5000; Sun SPARC Enterprise M8000/M9000; Sun SPARC Enterprise T5120/T5220 (h); Sun SPARC Enterprise T5140/T5240; Sun SPARC Enterprise T5440; External I/O Expansion Unit for Sun SPARC Enterprise M4000, M5000, M8000, M9000, T5120, T5140, T5220 & T5240

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/card combinations.]

a SF V210 onboard gigabit port support requires patch #110648-28
b Do not install PCI SCI cards into hsPCI+ PCI slot 1. For more information see bug 6178223.
c Base and Extended Fabrics, and Sun Netra CP3200 ARTM-FC-Z (XCP32X0-RTM-FC-Z)
d Two-node clusters installed with Solaris 10 11/06 (or later) and KU 118833-30 (or later) can configure e1000g cluster interconnects using back-to-back cabling; otherwise Ethernet switches are required. See Info Doc 88928 for more information.
e Refer to Info Doc ID 89736 for details
f Includes support for the new LW8-QFE card on the SF 1280, Netra 1280 and E2900
g This support requires patch #110900-08 for Solaris 8, and patches #112838-06 and #114272-02 for Solaris 9. A maximum of 4 nodes is supported with the X1074A.
h Support in SC 3.2 U1 or later, as CR 6599044 (P2/S2) was tested and integrated in SC 3.2 U1
i The 1280/2900 series boxes do not support the X4444A card due to a short PCI slot; however, the X4445A is supported in the 1280/2900.
j The network interface is not supported with Solaris 9, as Solaris 9 does not support PCIe
k Sun Fire Cluster Link is only supported on the SF 6800 and 12K/15K. Only DLPI mode is supported.

TABLE 10-2 Cluster Interconnects: PCI Network Interfaces for x64 Servers

Network Interface Cards (table columns): Onboard Ethernet/GigE Ports; X1027 PCI-E Dual 10GigE Fiber Low Profile (a); X1233A/X1233A-Z InfiniBand HCA PCI; X1235A Sun Dual Port 4x IB HCA PCI-X; X1236A-Z Sun Dual Port 4x IB HCA PCI-E; X1333A-4 Sun Dual Port 4x IB HCA PCI-X; X2222A Combo Dual Fast Ethernet-Dual SCSI PCI; X4150A/X4150A-2 Sun GigaSwift UTP PCI; X4151A/X4151A-2 Sun GigaSwift MMF PCI; X4422A/X4422A-2 Sun StorEdge Dual GigE/Dual SCSI PCI (b); X4444A Sun Quad GigaSwift PCI UTP; X4445A Sun Quad GigaSwift PCI-X UTP; X4446A-Z Sun x4 PCI-E Quad GigE UTP; X4447A-Z Sun x8 PCI-E Quad GigE UTP; X5544A/X5544A-4 Sun 10GigE PCI/PCI-X (d); X7280A-2 Sun PCI-E Dual GigE UTP; X7281A-2 Sun PCI-E Dual GigE MMF; X7285A Sun PCI-X Dual GigE UTP Low Profile; X7286A Sun PCI-X Single GigE MMF Low Profile; X9271A Intel Single GigE (e); X9272A Intel Dual GigE (a); X9273A Intel Quad GigE (a)

Servers (table rows): Sun Fire V20z; Sun Fire V40z (c); Sun Fire X2100 M2; Sun Fire X2200 M2; Sun Fire X4100; Sun Fire X4100 M2; Sun Fire X4140; Sun Fire X4150; Sun Fire X4170; Sun Fire X4200; Sun Fire X4200 M2; Sun Fire X4240; Sun Fire X4250; Sun Fire X4270; Sun Fire X4275; Sun Fire X4440; Sun Fire X4450; Sun Fire X4540; Sun Fire X4600; Sun Fire X4600 M2; Sun Netra X4200 M2; Sun Netra X4250; Sun Netra X4450

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/card combinations.]

a Refer to Info Doc ID 89736 for details
b Requires the Sun GigaSwift Ethernet 1.0 driver for x86 Solaris 9, available at http://www.sun.com/software/download/products/40f7115e.html
c Do not install the X4422A in both V40z PCI slots 2 and 3 (see CR 6196936)
d Support starting with Solaris 10 6/06
e Support starting with Solaris 10 3/05 HW1

The SBus and cPCI network interfaces that can be used to set up the cluster interconnect are listed in Table 10-3.

TABLE 10-3 Cluster Interconnects: SBus and cPCI Network Interfaces for SPARC Servers

Network Interface Cards (table columns): Onboard Ethernet/Gigabit Ports; X1018 SunSwift SBus; X1049 Quad-Fast Ethernet SBus; X1059 Fast-Ethernet SBus; X1140 Gigabit Ethernet SBus; X1232 SunSwift cPCI; X1234 Quad-Fast Ethernet cPCI; X1261 Gigabit Ethernet cPCI

Servers (table rows): Sun Enterprise 3x00; Sun Enterprise 4x00; Sun Enterprise 5x00; Sun Enterprise 6x00; Sun Enterprise 10K; Sun Fire 3800; Sun Fire 4800, 4810, 6800

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/card combinations.]


TABLE 10-4 Cluster Interconnects: PCI-E ExpressModule Network Interfaces for SPARC Servers

Network Interface ExpressModules (table columns): SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule; SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule; X1028A-Z Dual 10GbE XFP PCIe ExpressModule (a); X1288A-Z 4x Dual 10Gb/s IB HCA PCIe ExpressModule; X7282A-Z PCI-Express Dual GbE ExpressModule UTP; X7283A-Z PCI-Express Dual GbE ExpressModule MMF; X7284A-Z x4 PCIe Quad GbE ExpressModule; X7287A-Z Quad GbE UTP x8 PCIe ExpressModule (a)

Servers (table rows): Sun Blade T6300; Sun Blade T6320; Sun Blade T6340; USBRDT-5240 Uniboard for E4800, E4900, E6800, E6900, E12K, E15K, E20K and E25K

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/module combinations.]

a Requires patch 125670-02 or later. Refer to InfoDoc 89736 for details.


TABLE 10-5 Cluster Interconnects: PCI-E ExpressModule Network Interfaces for x64 Servers

Network Interface ExpressModules (table columns): SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule; SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule; X1028A-Z 4x Dual 10GbE XFP ExpressModule (a); X1288A-Z Dual 10Gb/s IB HCA PCIe ExpressModule; X7282A-Z PCI-Express Dual GbE ExpressModule UTP; X7283A-Z PCI-Express Dual GbE ExpressModule MMF; X7284A-Z x4 PCIe Quad GbE ExpressModule; X7287A-Z Quad GbE UTP x8 ExpressModule

Servers (table rows): Sun Blade X6220; Sun Blade X6240; Sun Blade X6250; Sun Blade X6270; Sun Blade X6440; Sun Blade X6450

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/module combinations.]

a Requires patch 125671-02 or later. Refer to InfoDoc 89736 for details.


TABLE 10-6 Cluster Interconnect: Network Express Module (NEM) Network Interfaces for SPARC Servers

Network Interface NEMs (table columns): X4212A SB6000 14-Port Multi-Fabric NEM; X4236A SB6000 24-Port Multi-Fabric NEM; X4250A SB6000 10-Port GbE NEM; X4731A SB6048 12-Port GbE NEM

Servers (table rows): Sun Blade T6300; Sun Blade T6320 (a); Sun Blade T6340 (b)

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/NEM combinations.]

a Requires the X4822A XAUI Pass-Through Fabric Expansion Module for 10GbE operation
b Requires the X1029A Dual 10GbE Fabric Expansion Module for 10GbE operation


TABLE 10-7 Cluster Interconnect: Network Express Module (NEM) Network Interfaces for x64 Servers

Network Interface NEMs (table columns): X4212A SB6000 14-Port Multi-Fabric NEM; X4236A SB6000 24-Port Multi-Fabric NEM; X4250A SB6000 10-Port GbE NEM; X4731A SB6048 12-Port GbE NEM

Servers (table rows): Sun Blade X6220; Sun Blade X6240; Sun Blade X6250 (a); Sun Blade X6270; Sun Blade X6440; Sun Blade X6450 (a)

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/NEM combinations.]

a Requires the X1029A Dual 10GbE Fabric Expansion Module for 10GbE operation

TABLE 10-8 Cluster Interconnect: XAUI Network Interfaces for SPARC Servers

The SESX7XA1Z XAUI network interface card is supported on: Sun Netra T5220; Sun Netra T5440; Sun SPARC Enterprise T5120; Sun SPARC Enterprise T5140; Sun SPARC Enterprise T5220; Sun SPARC Enterprise T5240; Sun SPARC Enterprise T5440.

The cables supported with each type of cluster interconnect are listed in Table 10-9.

TABLE 10-9 Cables for Cluster Interconnect

Fast Ethernet: Null Ethernet cable (for point-to-point only), part 3837A; customer-supplied (for junction-based or point-to-point)
Gigabit Ethernet (Copper): customer-supplied (for junction-based or point-to-point)
Gigabit Ethernet (Fiber): 2m fiber-optic cable, 973A; 5m fiber-optic cable, 9715A; 15m fiber-optic cable, 978A; customer-supplied (for junction-based or point-to-point)
10 Gigabit Ethernet (Fiber): 2m fibre-optic cable, X9732A; 5m fibre-optic cable, X9733A; 15m fibre-optic cable, X9734A; 25m fibre-optic cable, X9736A; customer-supplied (for junction-based or point-to-point)
PCI SCI: 2m PCI SCI cable, 3901A; 5m PCI SCI cable, 3902A; 7.5m PCI SCI cable, 3903A
InfiniBand: 2m IB cable, 9280A; 5m IB cable, 9281A


The switches supported with each type of cluster interconnect are listed in Table 10-10.

TABLE 10-10 Switches for Cluster Interconnect

Fast Ethernet: customer-supplied (N/A)
Gigabit Ethernet: customer-supplied (N/A)
10 Gigabit Ethernet: customer-supplied (N/A)
PCI SCI: 4-port SCI Switch, 3895A
Sun Fire Link: Sun Fire Link Switch
InfiniBand: Sun IB Switch 9P, 3152A; Voltaire ISR 9024 with Gridvision 5.1 (by Solaris Ready Partner)

Public Network
Clients connect to the cluster nodes through public network interfaces. It is required that all nodes in the cluster be independently connected on the same IP subnets.

Sun Cluster 3.0 uses NAFO as a public network interface, while later Sun Cluster 3 releases use IPMP as a public network interface. A minimal IPMP sketch follows.

Note – The Sun X1018 and X1059 cards do not support IPMP; thus, they are not supported as a public network interface with Sun Cluster 3 releases after 3.0.

For ATM networks, only LANE mode is supported.
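The following is a minimal sketch of a single-adapter IPMP group for a node's public network; the adapter bge0, hostname clusternode1-pub, and group name sc_ipmp0 are hypothetical. On Solaris 10, the group is brought up at boot from the /etc/hostname.bge0 file, whose one line places the node's public address in the IPMP group (link-based IPMP, no probe address):

    clusternode1-pub netmask + broadcast + group sc_ipmp0 up

A second adapter configured into the same group would provide the public-network redundancy that a single adapter cannot.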


The following tables only indicate that at least one card is supported per server or domain, as applicable. Please ensure the targeted configuration meets your customer's requirements.

Public network PCI interfaces supported with Sun Cluster 3 for SPARC servers are listed in Table 10-11.

TABLE 10-11 Public Network: PCI Network Interfaces for SPARC Servers

Network Interface Cards (table columns): Onboard Ethernet/Gigabit Ports; X1027 PCI-E Dual 10GigE Fiber Low Profile (b); X1032 SunSwift PCI; X1033 Fast-Ethernet PCI; X1034 Quad-Fast Ethernet PCI (c); X1141 Gigabit Ethernet PCI; X1150/X3150 Gigabit Ethernet PCI; X1151/X3151 Gigabit Ethernet PCI; X1157 Sun ATM 155/MMF 5.0 PCI; X1159 Sun ATM 622/MMF 5.0 PCI; X2222A Combo Dual Fast Ethernet-Dual SCSI PCI; X4150A/X4151A Gigabit Ethernet PCI; X4150A-2/X4151A-2 Gigabit Ethernet PCI; X4422A/X4422A-2 Combo Dual Gigabit Ethernet-Dual SCSI PCI; X4444A Quad-Gigabit Ethernet PCI (c); X4445A Quad-Gigabit Ethernet PCI (c); X4447A-Z x8 PCI-E Quad Gigabit Ethernet (b, d); X5544A/X5544A-4 10 Gigabit Ethernet PCI; X7280A-2 Gigabit Ethernet UTP PCI-E (d); X7281A-2 Gigabit Ethernet MMF PCI-E (d); X7285 Sun PCI-X Dual GigE UTP Low Profile; X7286 Sun PCI-X Single GigE MMF Low Profile

Servers (table rows): Sun Netra T1 AC200/DC200; Sun Netra t1120/1125; Sun Netra t1400/1405; Sun Netra 20; Sun Netra 120; Sun Netra 210; Sun Netra 240 AC/DC; Sun Netra 440; Sun Netra 1280; Sun Netra 1290; Sun Netra CP3010; Sun Netra CP3060; Sun Netra CP3260 (a); Sun Netra T2000; Sun Netra T5220; Sun Netra T5440; Sun Enterprise 220R; Sun Enterprise 250; Sun Enterprise 420; Sun Enterprise 450; Sun Enterprise 3x00; Sun Enterprise 4x00; Sun Enterprise 5x00; Sun Enterprise 6x00; Sun Enterprise 10K; Sun Fire T1000; Sun Fire T2000; Sun Fire V120; Sun Fire V125; Sun Fire V210; Sun Fire V215 (d); Sun Fire V240; Sun Fire V245 (d); Sun Fire V250; Sun Fire 280R; Sun Fire V440; Sun Fire V445 (d); Sun Fire V480; Sun Fire V490; Sun Fire V880; Sun Fire V890; Sun Fire V1280; Sun Fire E2900; Sun Fire 3800; Sun Fire 4800/6800; Sun Fire 4810; Sun Fire E4900/E6900; Sun Fire 12K/15K; Sun Fire E20K/E25K; Sun SPARC Enterprise M3000; Sun SPARC Enterprise M4000/M5000; Sun SPARC Enterprise M8000/M9000; Sun SPARC Enterprise T5120/T5220; Sun SPARC Enterprise T5140/T5240; Sun SPARC Enterprise T5440; External I/O Expansion Unit for Sun SPARC Enterprise M4000, M5000, M8000, M9000, T5120, T5140, T5220 & T5240

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/card combinations.]

a Base and Extended Fabrics, and Sun Netra CP3200 ARTM-FC (XCP32X0-RTM-FC-Z)
b Refer to Info Doc ID 89736 for details
c Includes support for the Sun LW8-QFE card on the SF 1280, Netra 1280 and E2900
d The network interface is not supported with Solaris 9, as Solaris 9 does not support PCIe


Public network PCI interfaces supported with Sun Cluster 3 for x64 servers are listed in Table 10-12.

TABLE 10-12 Public Network: PCI Network Interfaces for x64 Servers

Network Interface Cards (table columns): Onboard Ethernet/GigE Ports; X1027 PCI-E Dual 10GigE Fiber Low Profile (a); X2222A Combo Dual Fast Ethernet-Dual SCSI PCI; X4150A/X4150A-2 GigaSwift UTP PCI; X4151A/X4151A-2 GigaSwift MMF PCI; X4422A/X4422A-2 StorEdge Dual GigE/Dual SCSI PCI (b); X4444A Quad GigaSwift PCI UTP; X4445A Quad GigaSwift PCI-X UTP; X4446A-Z x4 PCI-E Quad GigE UTP; X4447A-Z x8 PCI-E Quad GigE UTP; X5544A/X5544A-4 10GigE PCI/PCI-X (d); X7280A-2 PCI-E Dual GigE UTP; X7281A-2 PCI-E Dual GigE MMF; X7285A PCI-X Dual GigE UTP Low Profile; X7286A PCI-X Single GigE MMF Low Profile; X9271A Intel Single GigE (e); X9272A Intel Dual GigE (a); X9273A Intel Quad GigE (a)

Servers (table rows): Sun Fire V20z; Sun Fire V40z (c); Sun Fire X2100 M2; Sun Fire X2200 M2; Sun Fire X4100; Sun Fire X4100 M2; Sun Fire X4140; Sun Fire X4150; Sun Fire X4170; Sun Fire X4200; Sun Fire X4200 M2; Sun Fire X4240; Sun Fire X4250; Sun Fire X4270; Sun Fire X4275; Sun Fire X4440; Sun Fire X4450; Sun Fire X4540; Sun Fire X4600; Sun Fire X4600 M2; Sun Netra X4200 M2; Sun Netra X4250; Sun Netra X4450

[The per-cell support marks of this table did not survive extraction; consult the original table for the exact server/card combinations.]

a Refer to Info Doc ID 89736 for details
b Requires the Sun GigaSwift Ethernet 1.0 driver for x86 Solaris 9, available at http://www.sun.com/software/download/products/40f7115e.html
c Do not install the X4422A in both V40z PCI slots 2 and 3 (see CR 6196936)
d Support starting with Solaris 10 6/06
e Support starting with Solaris 10 3/05 HW1


Public network SBus and cPCI interfaces supported with Sun Cluster 3 are listed in Table 10-13.

TABLE 10-13 Public Network: SBus and cPCI Network Interfaces for SPARC Servers

Network interface card columns, in order:
1. Onboard Ethernet/Gigabit ports
2. X1018 SunSwift SBus (a)
3. X1049 Quad-FastEthernet SBus
4. X1059 Fast-Ethernet SBus
5. X1140 Gigabit Ethernet SBus
6. X1147 SunATM 155/MMF 5.0 SBus
7. X1149 SunATM 622/MMF 5.0 SBus
8. X1232 SunSwift cPCI
9. X1234 Quad-FastEthernet cPCI
10. X1261 Gigabit Ethernet cPCI

Support matrix (one bullet per supported card, reading across the columns listed above):

Sun Enterprise 3x00 • • • • • • •
Sun Enterprise 4x00 • • • • • • •
Sun Enterprise 5x00 • • • • • • •
Sun Enterprise 6x00 • • • • • • •
Sun Enterprise 10K • • • • • •
Sun Fire 3800 • • •
Sun Fire 4800, 4810, 6800 • • •

Footnotes:
a Sun Cluster 3.0 support only

TABLE 10-14 Public Network: PCI-E ExpressModule Network Interfaces for SPARC Servers

Network interface ExpressModule columns, in order:
1. SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule
2. SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule
3. X1028A-Z Dual 10GbE XFP ExpressModule (a)
4. X7282A-Z PCI-Express Dual GbE ExpressModule UTP
5. X7283A-Z PCI-Express Dual GbE ExpressModule MMF
6. X7284A-Z x4 PCIe Quad GbE ExpressModule
7. X7287A-Z Quad GbE UTP x8 PCIe ExpressModule (a)

Support matrix (one bullet per supported module, reading across the columns listed above):

Sun Blade T6300 • • • • • • •
Sun Blade T6320 • • • • • • •
Sun Blade T6340 • • • • • • •
USBRDT-5240 Uniboard for E4800, E4900, E6800, E6900, E12K, E15K, E20K and E25K • • • • •

Footnotes:
a Requires patch 125670-02 or later. Refer to InfoDoc 89736 for details.

TABLE 10-15 Public Network: PCI-E ExpressModule Network Interfaces for x64 Servers

Network interface ExpressModule columns, in order:
1. SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule
2. SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule
3. X1028A-Z Dual 10GbE XFP ExpressModule (a)
4. X7282A-Z PCI-Express Dual GbE ExpressModule UTP
5. X7283A-Z PCI-Express Dual GbE ExpressModule MMF
6. X7284A-Z x4 PCIe Quad GbE ExpressModule
7. X7287A-Z Quad GbE UTP x8 ExpressModule (a)

Support matrix (one bullet per supported module, reading across the columns listed above):

Sun Blade X6220 • • • • • • •
Sun Blade X6240 • • • • • • •
Sun Blade X6250 • • • • • • •
Sun Blade X6270 • • • • • •
Sun Blade X6440 • • • • • • •
Sun Blade X6450 • • • • • • •

Footnotes:
a Requires patch 125671-02 or later. Refer to InfoDoc 89736 for details.

TABLE 10-16 Public Network: Network Express Module (NEM) Network Interfaces for SPARC Servers

Network interface NEM columns, in order:
1. X4212A SB6000 14-Port Multi-Fabric NEM
2. X4236A SB6000 24-Port Multi-Fabric NEM
3. X4250A SB6000 10-Port GbE NEM
4. X4731A SB6048 12-Port GbE NEM

Support matrix (one bullet per supported NEM, reading across the columns listed above):

Sun Blade T6300 • • •
Sun Blade T6320 •(a) • •
Sun Blade T6340 • •(b) • •

Footnotes:
a Requires X4822A XAUI Pass-Through Fabric Expansion Module for 10GbE operation
b Requires X1029A Dual 10GbE Fabric Expansion Module for 10GbE operation

TABLE 10-17 Public Network: Network Express Module (NEM) Network Interfaces for x64 Servers

Network interface NEM columns, in order:
1. X4212A SB6000 14-Port Multi-Fabric NEM
2. X4236A SB6000 24-Port Multi-Fabric NEM
3. X4250A SB6000 10-Port GbE NEM
4. X4731A SB6048 12-Port GbE NEM

Support matrix (one bullet per supported NEM, reading across the columns listed above):

Sun Blade X6220 • • •
Sun Blade X6240 • •
Sun Blade X6250 •(a) • •
Sun Blade X6270 • •
Sun Blade X6440 • •
Sun Blade X6450 •(a) • •

Footnotes:
a Requires X1029A Dual 10GbE Fabric Expansion Module for 10GbE operation

Network Adapter Failover (NAFO)

Sun Cluster 3.0 provides a feature called Network Adapter Fail Over (NAFO) for high availability of public network interfaces. Please note that NAFO is supported ONLY on Sun Cluster 3.0; IPMP takes the place of NAFO in Sun Cluster 3.1. NAFO detects the failure of a network adapter and automatically starts using a spare, unused network adapter on the same server (if one exists and is configured for this purpose). Configuration rules for NAFO are listed below; a configuration sketch follows the list.

■ It is required to set up NAFO for each public network interface.
■ It is recommended to configure redundant network adapters for every public network interface.
■ Network adapters of different speeds that are part of the same NAFO group are now supported. For example, a Quad Gigabit Ethernet controller and a Sun Fast Ethernet controller can now be part of the same NAFO group.
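As a minimal sketch of these rules on a Sun Cluster 3.0 node, the following creates a NAFO group with a primary and a spare adapter; the group and adapter names (nafo0, qfe0, qfe1) are examples only:

    # pnmset -c nafo0 -o create qfe0 qfe1     (create NAFO group nafo0 with two adapters)
    # pnmstat -l                              (verify group status and the active adapter)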

TABLE 10-18 Public Network: XAUI Network Interfaces for SPARC Servers

Network interface card column: SESX7XA1Z

Sun Netra T5220 •
Sun Netra T5440 •
Sun SPARC Enterprise T5120/T5220
Sun SPARC Enterprise T5140/T5240
Sun SPARC Enterprise T5440 •

IPMP Support

IPMP, Sun's network multipathing implementation for the Solaris Operating System, is easy to use and enables a server to have multiple network ports connected to the same subnet. Solaris IPMP software provides resilience from network adapter failure by detecting the failure or repair of a network adapter and switching the network address to and from the alternative adapter. Moreover, when more than one network adapter is functional, Solaris IPMP increases data throughput by spreading outbound packets across adapters.

Solaris IPMP provides a solution for most failover scenarios while requiring minimal system administrator intervention. With Solaris IPMP there is no degradation in system or network performance when IPMP functions are not invoked, and failover functions are accomplished in a short time frame. Public Network Management (PNM) with Network Adapter Fail Over (NAFO), supported in Sun Cluster 3.0, is officially end of life. Starting with Sun Cluster 3.1, Solaris IPMP is the replacement technology for ensuring public network availability on SunPlex systems.

■ It is recommended to configure redundant network adapters for every public network interface (a configuration sketch follows this list).
■ The Sun X1018 and X1059 cards do not support IPMP; thus, they are not supported with Sun Cluster 3.1 as a public network interface.
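A minimal two-adapter IPMP group can be configured through the interface hostname files. The sketch below assumes ce adapters, a group name of sc_ipmp0, and hosts-file entries node1, node1-test1 and node1-test2 for the data and test addresses; all names are examples:

    /etc/hostname.ce0:
        node1 netmask + broadcast + group sc_ipmp0 up \
        addif node1-test1 deprecated -failover netmask + broadcast + up

    /etc/hostname.ce1:
        node1-test2 netmask + broadcast + group sc_ipmp0 deprecated -failover standby up

After the interfaces are plumbed with these settings, in.mpathd probes the test addresses and fails the data address over between ce0 and ce1 as needed.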

Public Network Link Aggregation

Link aggregation is supported by Sun Cluster for public networking. Link aggregations must be put into IPMP groups for use by Sun Cluster.

There are two options for implementing link aggregation with Sun Cluster:

■ Sun Trunking 1.3.
■ The link aggregation software included with Solaris 10 1/06 (update 1) and later. See dladm(1M).

The Ethernet NIC and the Solaris release dictate which option can be used.

Sun Cluster supports Sun Trunking 1.3 with Solaris 8, 9 and 10.

Solaris link aggregation is supported with Solaris 10 1/06 and later; Solaris 10 1/06 is the first Solaris release providing this feature.

The Ethernet NIC must be supported by the server. Refer to the Public Network support tables earlier in this chapter to determine Sun Cluster support.

Then consult the Solaris link aggregation and Sun Trunking 1.3 hardware support information for configuration requirements:

■ Solaris link aggregation: “Compatibility/Patches” section on http://systems-tsc/twiki/bin/view/Netprod/Dladm
■ Sun Trunking 1.3: “Sun Trunking Platform Support Matrix” link on http://www.sun.com/products/networking/ethernet/suntrunking/
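For the Solaris 10 1/06 option, a minimal sketch of creating an aggregation with dladm(1M) and placing it in an IPMP group, as Sun Cluster requires, is shown below; the device names, address and group name are examples:

    # dladm create-aggr -d e1000g0 -d e1000g1 1    (aggregate two GbE devices as key 1)
    # ifconfig aggr1 plumb
    # ifconfig aggr1 192.168.10.21 netmask + broadcast + group sc_ipmp0 up
    # dladm show-aggr                              (verify the aggregation state)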

Public Network VLAN Tagging

■ IEEE 802.1Q VLAN tagging is supported with Sun Cluster.
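With VLAN-capable drivers such as ce or bge, a tagged VLAN is addressed by encoding the VLAN ID into the interface instance number (VLAN ID x 1000 + device instance). A sketch for VLAN 123 on ce0 follows; the address and IPMP group name are examples:

    # ifconfig ce123000 plumb 192.168.123.11 netmask + broadcast + group sc_ipmp0 up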

Global Networking

Sun Cluster 3 provides global networking between the clients and the cluster nodes through the use of the following features:

■ Global Interface (GIF): A global interface is a single network interface for incoming requests from all the clients. The responses are sent out directly by the individual nodes processing the requests. In case the node hosting the global interface fails, the interface is failed over to a backup node.
■ Cluster Interconnect: The cluster interconnect is used for request/data transfer between the cluster nodes, thus providing global connectivity to all the cluster nodes from any one node.
■ It is strongly recommended to configure redundant network adapters in the GIF's NAFO/IPMP group (a sketch of creating a global interface follows this list).
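A global interface is implemented as a SharedAddress resource in a failover resource group. A minimal sketch with the Sun Cluster 3.1 CLI follows; the group, node and hostname names are examples:

    # scrgadm -a -g sa-rg -h phys-node1,phys-node2   (create the failover resource group)
    # scrgadm -a -S -g sa-rg -l web-lh               (add a SharedAddress resource for hostname web-lh)
    # scswitch -Z -g sa-rg                           (bring the resource group online)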

Jumbo Frames Support

■ Jumbo frames are supported with Sun Cluster. Refer to the jumbo frames discussion in the Cluster Interconnect section, page 186, for requirements.


CHAPTER 11

Software Configuration

Typically, each node in a Sun Cluster will have the Solaris Operating Environment, Sun Cluster 3, volume management software, and applications along with their agents and fault monitors running on it.

Solaris Releases

All nodes in the cluster are required to run the same version (including the update release) of the operating system; a quick check is sketched below.
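As a minimal sanity check, verify that each node reports an identical release string; the node names are examples:

    # for node in phys-node1 phys-node2 ; do ssh $node 'head -1 /etc/release' ; done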

The Solaris releases supported with Sun Cluster 3 are listed below.

TABLE 11-1 Solaris Releases for Sun Cluster 3.1 SPARC

Sun Cluster release columns, in order: Sun Cluster 3.1 5/03 (FCS); 3.1 10/03 (update 1); 3.1 4/04 (update 2); 3.1 9/04 (update 3); 3.1 8/05 (update 4).

Supported Solaris releases (one bullet per supporting Sun Cluster release, in column order):

Solaris 8 2/02 (update 7) • • • • •
Solaris 8 HW 12/02 (PSR 1) • • • • •
Solaris 8 HW 5/03 (PSR 2) • • • • •
Solaris 8 HW 7/03 (PSR 3) • • • • •
Solaris 8 HW 2/04 (PSR 4) • • • • •
Solaris 9 (FCS) • • • • •
Solaris 9 9/02 (update 1) • • • • •
Solaris 9 12/02 (update 2) • • • • •
Solaris 9 4/03 (update 3) • • • • •
Solaris 9 8/03 (update 4) • • • • •
Solaris 9 12/03 (update 5) • • • • •
Solaris 9 4/04 (update 6) • • • • •
Solaris 9 9/04 (update 7) • • • • •
Solaris 9 9/05 (update 8) • • • • •
Solaris 9 9/05 HW (update 9) •
Solaris 10 (FCS) •
Solaris 10 3/05 HW1 •
Solaris 10 3/05 HW2 •
Solaris 10 1/06 (update 1) •
Solaris 10 6/06 (update 2) •
Solaris 10 11/06 (update 3) •
Solaris 10 8/07 (update 4) •
Solaris 10 5/08 (update 5) •
Solaris 10 10/08 (update 6) •
Solaris 10 5/09 (update 7) •

TABLE 11-2 Solaris Releases for Sun Cluster 3.2 SPARC

Sun Cluster release columns, in order: Sun Cluster 3.2 (FCS); 3.2 2/08 (update 1); 3.2 1/09 (update 2).

Supported Solaris releases (one bullet per supporting Sun Cluster release, in column order):

Solaris 9 9/05 (update 8) • • •
Solaris 9 9/05 HW (update 9) • • •
Solaris 10 11/06 (update 3) • •
Solaris 10 8/07 (update 4) • •
Solaris 10 5/08 (update 5) • • •
Solaris 10 10/08 (update 6) • • •
Solaris 10 5/09 (update 7) •

TABLE 11-3 Solaris Releases for Sun Cluster 3.2 x64

Sun Cluster release columns, in order: Sun Cluster 3.2 (FCS); 3.2 2/08 (update 1); 3.2 1/09 (update 2).

Supported Solaris releases (one bullet per supporting Sun Cluster release, in column order):

Solaris 10 11/06 (update 3) • •
Solaris 10 8/07 (update 4) • •
Solaris 10 5/08 (update 5) • • •
Solaris 10 10/08 (update 6) • • •
Solaris 10 5/09 (update 7) •


Application Services

An application service is an application along with an agent that makes the application highly available and/or scalable in Sun Cluster. Application services can be of two types: failover and scalable. Sun Microsystems has developed agents and fault monitors for a core set of applications; these application services are discussed in the following sections. Sun Microsystems has also made available an application service development toolkit for developing custom agents and fault monitors for other applications. Unless otherwise noted, all application services are supported with all hardware components (servers, storage, network interfaces, etc.) stated as supported in previous chapters. Unless otherwise noted, all services are 32-bit application services. For more information on application services, please see the Sun Cluster Data Services Planning and Administration Guide at http://docs.sun.com/

All the Sun Cluster 3.1 agents are supported in the Sun Cluster 3.2 release. If you upgrade Sun Cluster 3.1 software to Sun Cluster 3.2 software, we recommend that you upgrade all agents to Sun Cluster 3.2 to utilize any new features and bug fixes in the agent software. If you upgrade the application software, you must apply the latest agent patches to make the new version of the application highly available on Sun Cluster. Please check the application support matrix to make sure the application version is supported with Sun Cluster.



All Sun Cluster 3.2 u1 agents are supported on the SC 3.2 core. After installing the SC 3.2 core platform, please download the latest agent packages (e.g. SC 3.2 u1) or apply the latest agent patches. Agents are continuously enhanced to support the latest application versions; the latest agent updates or agent patches contain fixes to support the newer application versions.

Failover Services

A failover service has only one instance of the application running in the cluster at a time. In case of application failure, an attempt is made to restart the application on the same node. If unsuccessful, the application is restarted on one of the surviving nodes, depending on the service configuration. This process is called failover; a configuration sketch follows.
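As a minimal sketch using the Sun Cluster 3.2 CLI, the following creates a failover resource group containing a logical hostname and an HAStoragePlus resource; all group, resource, hostname and mount point names are examples:

    # clresourcetype register SUNW.HAStoragePlus
    # clresourcegroup create app-rg
    # clreslogicalhostname create -g app-rg -h app-lh app-lh-rs
    # clresource create -g app-rg -t SUNW.HAStoragePlus \
          -p FileSystemMountPoints=/global/appdata app-hasp-rs
    # clresourcegroup online -M app-rg    (bring the group to the managed, online state)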

The table below lists the failover services supported with Sun Cluster 3.1.

TABLE 11-4 Failover Services for Sun Cluster 3.1 SPARC (all entries are for SC version 3.1; format: Application — application versions; Solaris releases; comments)

Agfa IMPAX — 4.5-5.x; Solaris 9; requires patch 118983-01 or later.
Apache Proxy Server — all versions shipped with Solaris; Solaris 8, 9, 10.
Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; Solaris 8, 9, 10; supported in failover zones (using the container agent).
Apache Web Server — all versions shipped with Solaris; Solaris 8, 9, 10.
BEA WebLogic Server — 7.0, 8.1, 9.0; Solaris 8, 9, 10.
DHCP — N/A; Solaris 8, 9, 10; requires patch 116389-09 or later.
DNS — Solaris 8, 9, 10.
HADB (JES) — all versions supported by JES Application Server EE are supported (4.4, 4.5); Solaris 8, 9, 10.
IBM WebSphere MQ — 5.3, 6.0; Solaris 8, 9, 10; agent supported in a failover zone (using the container agent) on S10; requires patch 116392-11 or 116428-10 (refer to Info Doc 83129 for which patch to use).
JES Application Server (previously known as SunOne Application Server) — all versions till JES 5 are supported (8.1EE); Solaris 8, 9, 10.
JES Directory Server — this agent is owned and supported by the Directory Server product group; please contact the Directory Server product group for details.
JES Messaging Server (previously known as iPlanet Messaging Server, ims) — this agent is owned and supported by the Messaging Server product group; please contact the Messaging Server product group for details.
JES Web Proxy Server (previously known as SunOne Proxy Server) — all versions till JES 5 are supported; Solaris 8, 9, 10.
JES Web Server (previously known as SunOne Web Server) — all versions till JES 5 are supported (up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release); Solaris 8, 9, 10.
MySQL — 3.23.54a-4.0.23, 4.1.6-4.1.22, 5.0.15-5.0.45, 5.0.85; Solaris 8, 9, 10; supported in failover zones (using the container agent).
N1 Grid Engine — 5.3 on Solaris 8, 9 (requires patch 118689-02 or later); 6.0, 6.1 on Solaris 8, 9, 10.
N1 Grid Service Provisioning System — 4.1, 5.0, 5.0u1, 5.1, 5.2, 5.2.1-5.2.4; Solaris 8, 9, 10; supported in failover zones (using the container agent).
Netbackup — this agent is owned and supported by Veritas/Symantec; please contact Veritas/Symantec for details.
NFS — V3; Solaris 8, 9, 10; not supported in a container.
Oracle Application Server — 9.0.2-9.0.3 (9iAS) on Solaris 8, 9; 9.0.4-10.1.2 (10g AS) on Solaris 8, 9, 10; requires patch 118328-03 or later (Solaris 10 requires patch 118328-04 or later).
Oracle Server — 8.1.6 32/64-bit, 8.1.7 32/64-bit, 9i 32/64-bit on Solaris 8, 9 (note that Oracle 8.1.x has been desupported by Oracle; however, when the customer has continuing support for Oracle 8.1.x from Oracle, Sun will continue supporting the Sun Cluster HA Oracle agent with it); 9i R2 32/64-bit and 10g R1 and R2 64-bit on Solaris 8, 9, 10 (both Standard and Enterprise Editions are supported); 11g on Solaris 9, 10 (both Standard and Enterprise Editions are supported).
PostgreSQL — 7.3.x, 8.0.x, 8.1.x, 8.2.x, 8.3.x; Solaris 8, 9, 10; supported in failover zones (using the container agent).
Samba — 2.2.2, 2.2.7, 2.2.8, 2.2.8a on Solaris 8, 9; 2.2.2 (without Winbind), 2.2.7a with patch 114684-01 (without Winbind), 2.2.8a with patch 114684-02 (without Winbind), and up to 3.0.14a on Solaris 9, 10 (requires 116390-06 for SUNWscsmb 3.1.0 ARCH=SPARC, 116727-04 for SUNWscsmb 3.1.1 ARCH=SPARC, and 116726-03 for SUNWscsmb 3.1.0 ARCH=ALL); 3.0.23d-3.0.27 on Solaris 8, 9, 10.
SAP — 4.0, 4.5, 4.6, 6.10, 6.20, 6.30, 6.40, 7.0, NW 2004 (SR1, SR2, SR3); Solaris 8, 9, 10. The intermediate releases of the SAP application (for example 4.6C, 4.6D, etc.) are all supported. The Sun Cluster Resource Types (RTs) for making the traditional SAP components (Central Instance and App Server Instances) highly available are SUNW.sap_ci_v2 and SUNW.sap_as_v2; the agent part number for the traditional SAP components (CI and AS) is CLAIS-XXG-9999. The RTs for making WebAS, SCS, Enq and Replica highly available are SUNW.sapwebas, SUNW.sapscs, SUNW.sapenq and SUNW.saprepl; the agent part number is CLAIS-XAI-9999. The RTs for making SAP J2EE highly available are SUNW.sapscs, SUNW.sapenq, SUNW.saprepl and SUNW.sap_j2ee; the agent part numbers are CLAIS-XAI-9999 and CLAIS-XAE-9999. The SAP J2EE agent is not supported on S10. In Sun Cluster 3.2 the SAP J2EE functionality is available in the SUNW.sapwebas RT; there is no separate GDS resource needed to make SAP J2EE highly available, and one single part number, CLAIS-XAI-9999, will make ABAP, J2EE or ABAP+J2EE highly available (refer to the SC 3.2 section of this config guide for details). RTs SUNW.sapwebas, SUNW.sap_j2ee and SUNW.sap_as_v2 can be configured in a Multi Master configuration; refer to the admin guides for details. NetWeaver 2004s is based on SAP kernel 7.0; NetWeaver 2004 is based on SAP kernel 6.40. Refer to the following document for details on SAP agents: http://galileo.sfbay/agent_support_matrix/SAP-Config-Guide/
SAP LiveCache — 7.4, 7.5, 7.6; Solaris 8, 9, 10; the RTs for making LiveCache and Xserver highly available are SUNW.sap_livecache and SUNW.sap_xserver; part number CLAIS-XXL-9999.
SAP MaxDB — 7.4, 7.5, 7.6, 7.7; Solaris 8, 9, 10; the RTs for making MaxDB highly available are SUNW.sapdb and SUNW.sap_xserver; part number CLAIS-XAA-9999.
Siebel — 7.0, 7.5, 7.7 on Solaris 8; 7.7, 7.8 on Solaris 9; 7.8.2 on Solaris 9, 10; apply the latest Sun Cluster 3.1 Siebel agent patch.
Solaris Containers (a.k.a. zones) — brand types native and lx; Solaris 10; requires SC 3.1 08/05; Solaris 8 zones support added with patch 120590-06.
Sun Java Server Message Queue (previously known as JES MQ Server and SunOne MQ Server) — all versions till JES 5 are supported (3.5, 3.6, 4.0, 4.1); Solaris 8, 9, 10.
Sun StorEdge Availability Suite — 3.1 on Solaris 8, 9; 3.2.1 on Solaris 9 (Sun Cluster 3.1u4 requires at least Solaris 9u9 and patches 116466-09, 116467-09 and 116468-13; HA-ZFS not supported with AVS); 4.0 on Solaris 10 (Sun Cluster 3.1u4 requires at least Solaris 10u3 and patch 123246-02; HA-ZFS not supported with AVS).
SWIFTAlliance Access — 5.0 on Solaris 8, 9 (requires patch 118050-03 or later); 5.5 on Solaris 9 (requires patch 118050-03 or later); 5.9 on Solaris 9 (requires patch 118050-05 or later); 6.0 on Solaris 10 (requires S10 01/06 or 11/06 and patch 118050-05 or later).
SWIFTAlliance Gateway — 5.0 on Solaris 9 (requires patch 118984-04 or later); 6.0 on Solaris 10 (requires S10 01/06 or 11/06 and patch 118984-04 or later); 6.1 is supported on all S10 versions supported by Swift and Sun Cluster.
Sybase ASE — 12.0-12.5.1, 12.5.2 and 12.5.3 on Solaris 8, 9; 12.5.2, 12.5.3, 15.0, 15.0.1 and 15.0.2 on Solaris 10; supported in HA mode only, both asymmetric and symmetric (the Companion Server feature is not supported); the latest Sybase agent patches are required for running the supported configurations. Note: there are two Sybase agents, one sold by Sun and another sold by Sybase; this table refers to the agent sold by Sun.
WebSphere Message Broker — 5.0, 6.0; requires patch 116728-04 or later.

TABLE 11-5 Failover Services for Sun Cluster 3.1 x64 (all entries are for SC version 3.1; format: Application — application versions; Solaris releases; comments)

Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; Solaris 9, 10; supported in failover zones (using the container agent).
BEA WebLogic Server — 7.0, 8.1; Solaris 9, 10.
DHCP — N/A; Solaris 9, 10; requires patch 117639-03 or later.
DNS — Solaris 9, 10.
HADB (JES) — all versions supported by JES Application Server EE are supported; Solaris 9, 10.
JES Application Server (previously known as SunOne Application Server) — all versions till JES 5 are supported (up to 8.1EE); Solaris 9, 10.
JES Web Proxy Server (previously known as SunOne Proxy Server) — all versions till JES 5 are supported; Solaris 9, 10.
JES Web Server (previously known as SunOne Web Server) — all versions till JES 5 are supported (up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release); Solaris 9, 10.
MySQL — 3.23.54a-4.0.23, 4.1.6-4.1.22, 5.0.15-5.0.45, 5.0.85; Solaris 9, 10; supported in failover zones (using the container agent).
N1 Grid Engine — 5.3 (requires patch 118689-02 or later); 6.0, 6.1; Solaris 9, 10.
N1 Grid Service Provisioning System — 4.1, 5.0, 5.0u1, 5.1, 5.2, 5.2.1-5.2.4; Solaris 9, 10; supported in failover zones (using the container agent).
NFS — V3; Solaris 9, 10.
Oracle Server — 10g R1 32-bit, 10g R2 32/64-bit; Solaris 10; both Standard and Enterprise Editions are supported with Sun Cluster 3.1u4.
PostgreSQL — 7.3.x, 8.0.x, 8.1.x, 8.2.x, 8.3.x; Solaris 9, 10; supported in failover zones (using the container agent).
Samba — 2.2.2 to 3.0.27; Solaris 9, 10; requires patch 116726-05 or later.
Solaris Containers (a.k.a. zones) — brand types native and lx; Solaris 10; requires SC 3.1 08/05; lx zones support added with patch 120590-05.
Sun Java Server Message Queue (previously known as JES MQ Server and SunOne MQ Server) — all versions till JES 5 are supported (3.5, 3.6, 4.0, 4.1); Solaris 9, 10; always apply the latest agent patch.
Sun StorEdge Availability Suite — 4.0; Solaris 10; Sun Cluster 3.1u4 requires at least Solaris 10u3 and patch 123247-02; HA-ZFS not supported with AVS.

The tables below list the failover services supported with Sun Cluster 3.2:

TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC (all entries are for SC version 3.2; format: Application — application versions; Solaris releases; comments)

Agfa IMPAX — 4.5-5.x, 6.3; Solaris 9, 10; agent not supported in non-global zones; Solaris 10 version support is for Agfa IMPAX 6.3 only.
Apache Proxy Server — all 2.2.x versions and all versions of Apache shipped with Solaris; Solaris 9, 10; agent supported in global zones and zone nodes (SC 3.2 support of zones); agent not supported in failover zones. Important note: for Apache versions 2.2.x, the agent supports only the standard HTTP server; Apache-SSL and mod_ssl are not supported.
Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; Solaris 9, 10; agent supported in global zones, failover zones (using the container agent) and zone nodes (SC 3.2 support of zones).
Apache Web Server — all 2.2.x versions and all versions of Apache shipped with Solaris; Solaris 9, 10; agent supported in global zones, zone nodes (SC 3.2 support of zones) and Zone Clusters (a.k.a. cluster brand zones); agent not supported in failover zones. Important note: for Apache versions 2.2.x, the agent supports only the standard HTTP server; Apache-SSL and mod_ssl are not supported.
BEA WebLogic Server — 7.0, 8.1, 9.0, 9.2, 10.0, 10.2; Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones; please see the Release Notes, which document an issue discovered during the qualification of WLS in non-global zones; apply the latest patch or upgrade the agent to SC 3.2 u1 or later.
DHCP — N/A; Solaris 9, 10; agent not supported in non-global zones.
DNS — Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones.
HADB (JES) — all versions supported by JES Application Server EE are supported (4.4, 4.5); Solaris 9, 10; agent not supported in non-global zones.
IBM WebSphere MQ — 5.3, 6.0, 7.0; Solaris 9, 10; supported in global zones, failover zones (using the container agent) and zone nodes.
Informix — V9.4, 10, 11 and 11.5; Solaris 10; agent available for download at http://www.sun.com/download under the Systems Administration category, Clustering sub-category.
JES Application Server (previously known as SunOne Application Server) — all versions till JES 5 U1, 9.1, 9.1 UR2, GlassFish V2 UR2; Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones.
JES Directory Server — 5.2.x; this agent is owned and supported by the Directory Server product group; please contact the Directory Server product group (Ludovic Poitou, Regis Marco); for more info: http://blogs.sfbay.sun.com/Ludo/date/20061106
JES Messaging Server (previously known as iPlanet Messaging Server, ims) — 6.3; this agent is owned and supported by the Messaging Server product group; please contact the Messaging Server product group (Durga Tirunagari); for more info, mail [email protected]
JES Web Proxy Server (previously known as SunOne Proxy Server) — all versions till JES 5 are supported (up to 4.0); Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones.
JES Web Server (previously known as SunOne Web Server) — all versions up to and including JES 5 U1 are supported; all releases up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release; Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones.
Kerberos — version shipped with Solaris; Solaris 10; agent supported in global zones and zone nodes; agent not supported in failover zones.
MySQL — 3.23.54a-4.0.23, 4.1.6-4.1.22, 5.0.15-5.0.85, 5.1.x; Solaris 9, 10; agent supported in global zones, failover zones (using the container agent), zone nodes and Zone Clusters (a.k.a. cluster brand zones); MySQL versions 5.0.x and 5.1.x require patches 126031-04 (S9) and 126032-04 (S10).
N1 Grid Engine — 6.0, 6.1; Solaris 9, 10; agent not supported in non-global zones.
N1 Grid Service Provisioning System — 4.1, 5.0, 5.0u1, 5.1, 5.2, 5.2.1-5.2.4; Solaris 9, 10; agent supported in global zones, failover zones (using the container agent) and zone nodes.
Netbackup — this agent is owned and supported by Veritas/Symantec; please contact Veritas/Symantec for details.
NFS — V3 on Solaris 9, 10; V4 on Solaris 10; agent not supported in non-global zones.
Oracle Application Server — 9.0.2-9.0.3 (9iAS) on Solaris 9; 9.0.4-10.1.3.1 on Solaris 9, 10 (note: 9.0.2-9.0.3 = 9iAS, 9.0.4 = 10g AS); agent supported in global zones and zone nodes; agent not supported in failover zones; apply the latest agent patch.
Oracle E-Business Suite — 11.5.8, 11.5.9, 11.5.10-11.5.10cu2, 12.0; Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones; apply the latest agent patch.
Oracle Server — 8.1.6 32/64-bit, 8.1.7 32/64-bit, 9i 32/64-bit on Solaris 9 (note that Oracle 8.1.x has been desupported by Oracle; however, when a customer has continuing support for Oracle 8.1.x from Oracle, Sun will continue supporting the Sun Cluster HA Oracle agent with it); 9i R2 32/64-bit, 10g R1 and R2 64-bit, 11g on Solaris 9, 10 (both Standard and Enterprise Editions are supported; supported in non-global zones); 10.2.0.4 on Solaris 10 (the HA Oracle agent is supported in Solaris Container (a.k.a. Zone) Clusters; support starts with Solaris 10 5/09 and SC 3.2 1/09; UFS and standalone QFS 5.0 may be used with or without SVM; ASM is not supported with HA Oracle; NAS is not supported with Zone Clusters; other features that are currently supported with HA Oracle are supported).
PostgreSQL — 7.3.x, 8.0.x, 8.1.x, 8.2.x, 8.3.x; Solaris 9, 10; agent supported in global zones, failover zones (using the container agent) and zone nodes. The PostgreSQL agent in SC 3.2 u2 supports Write Ahead Log (WAL) shipping functionality; get this functionality by installing the SC 3.2 u2 agent, upgrading to the SC 3.2 u2 agent, or applying the latest agent patch. Feature info: this project enhances the PostgreSQL agent to support log shipping functionality as a replacement for shared storage, thus eliminating the need for shared storage in a cluster when using PostgreSQL databases; this feature provides support for PostgreSQL database replication between two different clusters or between two different PostgreSQL failover resources within one cluster.
Samba — 2.2.2 to 3.0.27; Solaris 9, 10; agent supported in global zones, failover zones (using the container agent) and zone nodes.
SAP — 4.0, 4.5, 4.6, 6.10, 6.20, 6.30, 6.40, 7.0, 7.1, NW 2004 (SR1, SR2, SR3), NW 2004s (SR1, SR2, SR3); Solaris 9, 10. The intermediate releases of the SAP application (for example 4.6C, 4.6D, etc.) are all supported. The Sun Cluster Resource Types for making the traditional SAP components (Central Instance and Application Server Instances) highly available are SUNW.sap_ci_v2 and SUNW.sap_as_v2; the agent part number is CLAIS-XXG-9999. The RTs for making WebAS, SCS, Enq and Replica highly available are SUNW.sapwebas, SUNW.sapscs, SUNW.sapenq and SUNW.saprepl; the agent part number is CLAIS-XAI-9999 (refer to the admin guides for details on configuring ABAP, ABAP+J2EE and J2EE). RTs SUNW.sapwebas and SUNW.sap_as_v2 can be configured in a Multi Master configuration; refer to the admin guides for details. Agent supported in global zones and zone nodes; agent not supported in failover zones. NetWeaver 2004s is based on SAP kernel 7.00; NetWeaver 2004 is based on SAP kernel 6.40. Refer to the following document for details on SAP agents: http://galileo.sfbay/agent_support_matrix/SAP-Config-Guide. SAP Exchange Server (XI, another name PI) is an ABAP+Java application based on SAP NetWeaver; SAP Enterprise Portal is a Java-only application based on SAP NetWeaver; these components of SAP can be made highly available using the “SC Agent for SAP Enqueue Server” (CLAIS-XAI-9999), which includes agents for the web application server, message server, enqueue server and enqueue replication server. Apply patch 126062-06 to make SAP 7.1 highly available on SC 3.2 GA, or use the SAP WebAS agent (SUNW.sapenq, SUNW.saprepl, SUNW.sapscs, SUNW.sapwebas) from the SC 3.2 1/09 (u2) release. Please refer to the Release Notes before configuring the SAP resources.
SAP LiveCache — 7.4, 7.5, 7.6; Solaris 9, 10; the RTs for making LiveCache and Xserver highly available are SUNW.sap_livecache and SUNW.sap_xserver; part number CLAIS-XXL-9999; agent supported in global zones and zone nodes; agent not supported in failover zones; LiveCache version 7.6.03.09 is required for S10 SPARC.
SAP MaxDB — 7.4, 7.5, 7.6, 7.7; Solaris 9, 10; the RTs for making MaxDB highly available are SUNW.sapdb and SUNW.sap_xserver; part number CLAIS-XAA-9999; agent supported in global zones and zone nodes; agent not supported in failover zones; MaxDB version 7.6.03.09 is required for S10 SPARC.
Siebel — 7.0, 7.5, 7.7 on Solaris 9; 7.7, 7.8; 7.8.2 on Solaris 9, 10; 8.0; agent not supported in non-global zones; the agent for Siebel 8.0 requires SC 3.2 u1 or patches to the SC 3.2 Siebel agent: 126064-02 (Solaris 9), 126065-02 (Solaris 10).
Solaris Containers (a.k.a. Zones) — brand types native, lx, solaris8 and solaris9; Solaris 10; this agent now supports lx, solaris8 and solaris9 brand containers in addition to native Solaris 10 containers; the container agent requires at least patch 126020-01 or the SC 3.2 u1 agent to support lx and solaris8 brand containers, and patch 126020-03 to support solaris9 brand containers.
Sun Java Server Message Queue (previously known as JES MQ Server and SunOne MQ Server) — all versions till JES 5 are supported (3.5, 3.6, 4.0, 4.1, 4.2, 4.3); Solaris 9, 10; agent supported in global zones and whole root zones (SC support for non-global zones); agent not supported in sparse root zones or failover zones.
Sun StorEdge Availability Suite — 3.2.1 on Solaris 9 (requires Solaris 9u9 and patches 116466-09, 116467-09 and 116468-13); 4.0 on Solaris 10 (requires Solaris 10u3 and patch 123246-02); HA-ZFS not supported with AVS.
SWIFTAlliance Access — 5.9, 6.0, 6.2; Solaris 9, 10; the SC 3.2 SWIFTAlliance Access agent patch 126085-01 or later is required for Solaris 9; Solaris 10 agents are available for download from http://www.sun.com/download; SWIFTAlliance Access 6.0 is supported on all S10 versions supported by Swift and by Sun Cluster (6.0 is not supported on Solaris 9); SWIFTAlliance Access 6.2 is supported on Solaris 10 8/07 or later on the SPARC platform with patch 126086-01.
SWIFTAlliance Gateway — 5.0, 6.0, 6.1; Solaris 9, 10; S10 agents are available for download from http://www.sun.com/download; SWIFTAlliance Gateway 6.0 and 6.1 are supported on all S10 versions supported by Swift and Sun Cluster (6.0 and 6.1 are not supported on Solaris 9).
Sybase ASE — 12.0-12.5.1, 12.5.2, 12.5.3 on Solaris 9; 12.5.2, 12.5.3, 15.0, 15.0.1, 15.0.2 on Solaris 10; supported in HA mode only, both asymmetric and symmetric (the Companion Server feature is not supported); agent supported in global zones and zone nodes; agent not supported in failover zones. Note: there are two Sybase agents, one sold by Sun and another sold by Sybase; this table refers to the agent sold by Sun.
WebSphere Message Broker — 5.0, 6.0; Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones.

TABLE 11-7 Failover Services for Sun Cluster 3.2 x64 (all entries are for SC version 3.2 on Solaris 10; format: Application — application versions; comments)

Apache Proxy Server — all 2.2.x versions and all versions of Apache shipped with Solaris; agent supported in global zones and zone nodes (SC 3.2 support of zones); agent not supported in failover zones. Important note: for Apache versions 2.2.x, the agent supports only the standard HTTP server; Apache-SSL and mod_ssl are not supported.
Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; agent supported in global zones, failover zones (using the container agent) and zone nodes.
Apache Web Server — all 2.2.x versions and all versions of Apache shipped with Solaris; agent supported in global zones, zone nodes and Zone Clusters (a.k.a. cluster brand zones); agent not supported in failover zones; for Apache versions 2.2.x, the agent supports only the standard HTTP server (Apache-SSL and mod_ssl are not supported).
BEA WebLogic Server — 7.0, 8.1, 9.0, 9.2, 10.0, 10.2; agent supported in global zones and zone nodes; agent not supported in failover zones; please see the Release Notes, which document an issue discovered during the qualification of WLS in non-global zones; apply the latest agent patch or upgrade the agent to SC 3.2 u1.
DHCP — N/A; agent not supported in non-global zones.
DNS — agent supported in global zones and zone nodes; agent not supported in failover zones.
HADB (JES) — all versions supported by JES Application Server EE are supported (4.4, 4.5); agent not supported in zones.
IBM WebSphere MQ — 6.0, 7.0; agent supported in global zones, failover zones (using the container agent) and zone nodes.
Informix — V9.4, 10, 11, 11.5; agent available for download from http://www.sun.com/download under the Systems Administration category, Clustering sub-category.
JES Application Server (previously known as SunOne Application Server) — all versions till JES 5 U1, 9.1, 9.1 UR2, GlassFish V2 UR2; agent supported in global zones and zone nodes; agent not supported in failover zones.
JES Web Proxy Server (previously known as SunOne Proxy Server) — all versions till JES 5 are supported (up to 4.0); agent not supported in non-global zones.
JES Web Server (previously known as SunOne Web Server) — all versions up to and including JES 5 U1 are supported; all releases up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release; agent supported in global zones and zone nodes; agent not supported in failover zones.
Kerberos — version shipped with Solaris; agent supported in global zones and zone nodes; agent not supported in failover zones.
MySQL — 3.23.54a-4.0.23, 4.1.6-4.1.22, 5.0.15-5.0.85, 5.1.x; agent supported in global zones, failover zones (using the container agent), zone nodes and Zone Clusters (a.k.a. cluster brand zones); MySQL versions 5.0.x and 5.1.x require patch 126033-05.
N1 Grid Engine — 6.0, 6.1; agent not supported in non-global zones.
N1 Grid Service Provisioning System — 4.1, 5.0, 5.0u1, 5.1, 5.2, 5.2.1-5.2.4; agent supported in global zones, failover zones (using the container agent) and zone nodes.
NFS — V3, V4; agent not supported in zones.
Oracle Application Server — V10.1.2, 10.1.3.1; agent supported in global zones and zone nodes; agent not supported in failover zones.
Oracle Server — 10g R1 32-bit, 10g R2 32/64-bit (both Standard and Enterprise Editions are supported with Solaris 10u3; agent supported in non-global zones); 10.2.0.4 (the HA Oracle agent is supported in Solaris Container (a.k.a. Zone) Clusters; support starts with Solaris 10 5/09 and SC 3.2 1/09; UFS and standalone QFS 5.0 may be used with or without SVM; ASM is not supported with HA Oracle; NAS is not supported with Zone Clusters; other features that are currently supported with HA Oracle are supported).
PostgreSQL — 7.3.x, 8.0.x, 8.1.x, 8.2.x, 8.3.x; agent supported in global zones, failover zones (using the container agent) and zone nodes. The PostgreSQL agent in SC 3.2 u2 supports Write Ahead Log (WAL) shipping functionality; get this functionality by installing the SC 3.2 u2 agent, upgrading to the SC 3.2 u2 agent, or applying the latest agent patch. Feature info: this project enhances the PostgreSQL agent to support log shipping functionality as a replacement for shared storage, thus eliminating the need for shared storage in a cluster when using PostgreSQL databases; this feature provides support for PostgreSQL database replication between two different clusters or between two different PostgreSQL failover resources within one cluster.
SAP — NetWeaver 2004s (SR1, SR2, SR3), Web Application Server 7.0, SAP 7.1; agent supported in global zones and zone nodes; agent not supported in failover zones; apply the latest agent patch; NetWeaver 2004s is based on SAP kernel 7.00; refer to http://galileo.sfbay/agent_support_matrix/SAP-Config-Guide/ for details on SAP agents; see SPARC Table 11-6 for details; apply patch 126063-07 to make SAP 7.1 highly available on SC 3.2, or use the SAP WebAS agent (SUNW.sapenq, SUNW.saprepl, SUNW.sapscs, SUNW.sapwebas) from SC 3.2 u2.
SAP LiveCache — 7.6; agent supported in global zones and zone nodes; agent not supported in failover zones; requires SAP LiveCache version 7.6.01.09 for S10 x86.
SAP MaxDB — 7.6, 7.7; agent supported in global zones and zone nodes; agent not supported in failover zones; requires SAP MaxDB version 7.6.01.09 for S10 x86.
Samba — 2.2.2 to 3.0.27; agent supported in global zones, failover zones (using the container agent) and zone nodes.
Solaris Containers (a.k.a. Zones) — brand types native, lx, solaris8 and solaris9; this agent now supports lx, solaris8 and solaris9 brand containers in addition to native Solaris 10 containers; the container agent requires at least patch 126021-01 or the SC 3.2 u1 agent to support lx and solaris8 brand containers, and at least patch 126021-03 to support solaris9 brand containers.
Sun Java Server Message Queue (previously known as JES MQ Server and SunOne MQ Server) — all versions till JES 5 are supported (3.5, 3.6, 4.0, 4.1, 4.2, 4.3); agent supported in global zones and whole root non-global zone nodes; agent not supported in sparse root non-global zones or failover zones.
Sun StorEdge Availability Suite — 4.0; requires at least Solaris 10u3 and patch 123247-02; HA-ZFS not supported with AVS.
Sybase ASE — 15.0, 15.0.1 and 15.0.2; agent supported in global zones and zone nodes; agent not supported in failover zones; agent available for download from http://www.sun.com/download
WebSphere Message Broker — 6.0; agent supported in global zones and zone nodes; agent not supported in failover zones; apply the latest agent patch.

Scalable Services

A scalable service has one or more instances of an application running in the cluster simultaneously. A global interface provides the view of a single logical service to the outside world, and application requests are distributed to the running instances based on the load-balancing policy. If a node on which an application instance is running fails, an attempt is made to restart the application on the same node; if unsuccessful, the application is restarted on a surviving node or the load is redistributed among the surviving nodes, depending on the service configuration. If the node hosting the global interface (GIF) fails, the global interface is failed over to a surviving node, depending on the service configuration. A configuration sketch follows.
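As a minimal sketch using the Sun Cluster 3.2 CLI, the following runs a scalable Apache service on two nodes behind the shared address created earlier; all group, resource and path names are examples, and property values such as Port_list depend on the actual deployment:

    # clresourcetype register SUNW.apache
    # clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 apache-rg
    # clresource create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin \
          -p Port_list=80/tcp -p Scalable=True -p Resource_dependencies=web-lh apache-rs
    # clresourcegroup online -M apache-rg    (start the instances on both nodes)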

This section does not include information about Oracle Real Application Cluster (RAC); please refer to “Oracle Real Application Cluster (OPS/RAC)” on page 245.

The following tables contain the scalable services supported with Sun Cluster 3.1.

TABLE 11-8 Supported Scalable Services with Sun Cluster 3.1 SPARC (all entries are for SC version 3.1)

Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; Solaris 8, 9, 10; supported in failover zones (using the container agent).
Apache Web Server — all versions shipped with Solaris; Solaris 8, 9, 10.
JES Web Server (previously known as SunOne Web Server) — all versions till JES 5 are supported (up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release); Solaris 8, 9, 10.

TABLE 11-9 Supported Scalable Services with Sun Cluster 3.1 x64 (all entries are for SC version 3.1)

Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; Solaris 9, 10; supported in failover zones (using the container agent).
Apache Web Server — all versions shipped with Solaris; Solaris 9, 10.
JES Web Server (previously known as SunOne Web Server) — all versions till JES 5 are supported (up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release); Solaris 9, 10.

The following tables contain the scalable services supported with Sun Cluster 3.2.

TABLE 11-10 Supported Scalable Services with Sun Cluster 3.2 SPARC (all entries are for SC version 3.2)

Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; Solaris 9, 10; agent supported in global zones, failover zones (using the container agent) and zone nodes (SC 3.2 support of zones).
Apache Web Server — all 2.2.x versions and all versions of Apache shipped with Solaris; Solaris 9, 10; agent supported in global zones, zone nodes (SC 3.2 support of zones) and Zone Clusters (a.k.a. cluster brand zones); agent not supported in failover zones. Important note: for Apache versions 2.2.x, the agent supports only the standard HTTP server; Apache-SSL and mod_ssl are not supported.
JES Web Server (previously known as SunOne Web Server) — all versions up to and including JES 5 U1 are supported; all releases up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release; Solaris 9, 10; agent supported in global zones and zone nodes; agent not supported in failover zones.


TABLE 11-11 Supported Scalable Services with Sun Cluster 3.2 x64 (all entries are for SC version 3.2 on Solaris 10)

Apache Tomcat — 3.3, 4.0, 4.1, 5.0, 5.5, 6.0; agent supported in global zones, failover zones (using the container agent) and zone nodes (SC 3.2 support of zones).
Apache Web Server — all 2.2.x versions and all versions of Apache shipped with Solaris; agent supported in global zones, zone nodes (SC 3.2 support of zones) and Zone Clusters (a.k.a. cluster brand zones); agent not supported in failover zones. Important note: for Apache versions 2.2.x, the agent supports only the standard HTTP server; Apache-SSL and mod_ssl are not supported.
JES Web Server (previously known as SunOne Web Server) — all versions up to and including JES 5 U1 are supported; all releases up to and including 7.0, 7.0 U1, 7.0 U2 and all future updates of the 7.0 release; agent supported only in zone nodes (SC 3.2 support of zones).

Oracle Real Application Cluster (OPS/RAC)

Oracle Real Application Cluster is supported with Sun Cluster 3. The configuration rules around OPS/RAC support are the following.

Oracle Real Application Cluster Topology Support

N*N (Scalable) topology is no longer required for support of OPS/RAC with Sun Cluster 3. For a configuration to be supported, the nodes in a cluster running OPS/RAC must be connected to the same shared storage arrays. This allows a subset of the total nodes in a cluster to run OPS/RAC, as long as the OPS/RAC nodes are connected to the same shared storage devices.

RSM is supported with RAC and Sun Cluster 3. This functionality requires Sun Cluster 3.0 5/02, Oracle 9i RAC 9.2.0.3 and Solaris 8 or 9. This support is limited to SCI-PCI cards and switches, and applies to all servers that support SCI-PCI.


TABLE 11-12 Oracle RAC Support with Sun Cluster 3.1 for SPARC

Columns, in order: Version; Maximum Nodes (b); Solaris; H/W RAID; Veritas CVM (e); Sun Cluster GFS (f); Shared QFS (g) (not supported with CVM); SVM for Sun Cluster (Oban) (g)(i); NAS; Fast Ethernet; Gigabit Ethernet; 10Gigabit Ethernet; SCI-PCI with RSM (l); InfiniBand (m).

Support matrix (one bullet per supported column, reading across the columns listed above):

8.1.7 32-bit/64-bit, OPFS 32-bit (a) — 4 nodes; Solaris 8, 9; • • • •(k) • • •
9i RAC/RACG R1 32/64-bit — 4 nodes; Solaris 8, 9; • • • •(k) • • •
9i RAC/RACG R2 32/64-bit — 8 nodes (c); Solaris 8, 9, 10 (d); • • • •(h) •(j) • • • •; RAC 9.2.0.3 and above
10g R1 RAC 10.1.0.3 and above — 8 nodes; Solaris 8, 9, 10; • • • • • • • • • • •
10g R2 RAC — 8 nodes; Solaris 8, 9, 10; • •(e) • • • • • • • •
11g RAC — 8 nodes; Solaris 9, 10; • • • • • • • • • •

Footnotes:
a Supported in active-passive mode only
b Please refer to the respective storage section for the number of nodes supported
c Requires Oracle 9.2.0.3 and above plus patch 2854962; please refer to the respective storage section for the number of nodes supported
d Requires Sun Cluster 3.1 8/05
e Requires Veritas CVM 3.2 or later
f Supported with binary and log files only
g Using shared QFS and SVM for Sun Cluster (Oban) together is only supported on Solaris 10
h Requires RAC 9.2.0.5 and Oracle patch 3566420
i SVM for Sun Cluster (Oban) on Solaris 10 requires the following patches: 120809-01, 120807-01, 118822-21, 120537-04
j Requires RAC 9.2.0.5 and Oracle patch 3366258
k SE 5210/5310 and ST 5220/5320 support only up to two nodes
l SCI-PCI supports a maximum of 4 nodes
m InfiniBand support starts with Solaris 10

TABLE 11-13 Oracle RAC Support with Sun Cluster 3.2 for SPARC

Columns, in order: Version; Maximum Nodes (c); Solaris; H/W RAID; Veritas CVM (g); Sun Cluster GFS (h); Shared QFS (i) (not supported with CVM); SVM for Sun Cluster (Oban) (k); NAS; Fast Ethernet; Gigabit Ethernet; 10GB Ethernet; SCI-PCI with RSM (o); InfiniBand (p).

Support matrix (one bullet per supported column, reading across the columns listed above):

9i RAC/RACG R1 32/64-bit — 4 nodes; Solaris 9; • • • •(m) • • •
9i RAC/RACG R2 32/64-bit — 8 nodes (d); Solaris 9, 10; • • • •(j) •(l) • • •; RAC 9.2.0.3 and above
10g R1 RAC 10.1.0.3 and above — 8 nodes; Solaris 9, 10; • • • • • • • • • •
10g R2 RAC — 8 nodes; Solaris 9, 10; • • • • • • • • •
10g RAC 10.2.0.3 (a) — 16 nodes (e); Solaris 10 (f); CVM 4.6.2 and above; • • • •
10g R2 RAC 10.2.0.4 (b) — 4 nodes; Solaris 10 (f); •(n) • • •
10g R2 RAC 10.2.0.4 (b) — 8 nodes; Solaris 10 (f); • • • • • • • • •
10g R2 RAC 10.2.0.4 (b) — 16 nodes (e); Solaris 10 (f); CVM 4.6.2 and above; • • • •
11g RAC — 4 nodes; Solaris 10 (f); • • • •
11g RAC — 8 nodes; Solaris 9, 10; • • • • • • • • •
11g R1 RAC (b) — 4 nodes; Solaris 10 (f); • • • •
11g R1 RAC (b) — 8 nodes; Solaris 10 (f); • • • • • • • • •
11g R1 RAC (b) — 16 nodes (e); Solaris 10 (f); • •; CVM 4.6.2 and above; • • • •
11g RAC 11.1.0.7 (b) — 4 nodes; Solaris 10 (f); •(n) • • •

Footnotes:
a Requires SC 3.2 2/08 (u1) and above
b Requires SC 3.2 1/09 (u2) and above
c Please refer to the respective storage section for the number of nodes supported
d Requires Oracle 9.2.0.3 and above plus patch 2854962; please refer to the respective storage section for the number of nodes supported
e ASM is supported
f Requires Solaris 10 10/08 (u6) and above
g Requires CVM 4.0 and above
h Supported with binary and log files only
i Using shared QFS and SVM for Sun Cluster (Oban) together is only supported on Solaris 10
j Requires RAC 9.2.0.5 and Oracle patch 3566420
k SVM for Sun Cluster (Oban) on Solaris 10 requires the following patches: 120809-01, 120807-01, 118822-21, 120537-04
l Requires RAC 9.2.0.5 and Oracle patch 3366258
m SE 5210/5310 and ST 5220/5320 support only
n Adds support for the Sun Storage 7000 series: 1) when RAC is installed in a global zone, you can also use NFS for Clusterware OCR and Voting disks; 2) when RAC is installed in a zone cluster, you must use iSCSI LUNs as OCR and Voting devices; 3) if you use iSCSI LUNs for Clusterware OCR and Voting disks, either in the global zone or in a zone cluster, configure the corresponding DID devices with fencing disabled
o Maximum of 4 nodes with PCI-SCI
p InfiniBand support starts with Solaris 10

TABLE 11-14 Oracle RAC Support with Sun Cluster 3.1 and Sun Cluster 3.2 for x64

Columns, left to right: Version; Maximum Nodes (a); Solaris; H/W RAID; Veritas CVM (c); Sun Cluster GFS (d); Shared QFS; SVM for Sun Cluster (Oban) (e); NAS; Fast Ethernet; Gigabit Ethernet; 10Gb Ethernet; InfiniBand. Bullets (•) mark supported options in column order.

10g R2 RAC 64-bit (10.2.0.1 and above) | 8(b) | 10 | • • • • • • • • •

a Please refer to the respective storage section for the number of nodes supported
b Greater than 4 nodes requires SC 3.2 2/08 (u1) and above
c Veritas CVM not supported for x64
d Supported with binary and log files only
e Up to four nodes are supported with SVM; larger numbers of nodes require hardware RAID


Restriction on Applications Running in Sun Cluster

Sun Cluster supports running multiple data services in the same cluster. There is no limit on the number of applications per node or on the kind of applications that run on a node. However, other factors, such as application performance and adverse interaction between different applications, may constrain the configuration of multiple applications on the same node.

Data Configuration

The application data can be configured on the shared storage in Sun Cluster in one of the following structures:

■ "Raw Devices" on page 250
■ "Raw Volumes / Meta Devices" on page 250
■ "File System" on page 253

Raw Devices

Since every shared storage disk is a global device, all the disk partitions, and any raw data laid out on them, are globally accessible. No software other than the Solaris Operating Environment and Sun Cluster 3 is required to configure data on raw devices.
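For illustration, the DID device layer that implements this global access can be inspected from any node. A minimal sketch, using the Sun Cluster 3 scdidadm(1M) command and a hypothetical DID instance number:

    # List every DID instance and the physical device paths it maps to
    # on each node of the cluster.
    scdidadm -L

    # The corresponding raw partition is then reachable from any cluster
    # node through the shared namespace (d4 is a hypothetical instance):
    ls -lL /dev/did/rdsk/d4s0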

Raw Volumes / Meta Devices

If raw volumes / meta devices are used for data storage, a volume manager needs to run on each node of the cluster.

Veritas Cluster Volume Manager (CVM) and Solaris Volume Manager for Sun Cluster (Oban) are supported only with Oracle RAC/OPS clusters.
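As a hedged sketch of what configuring such shared volumes can look like with SVM for Sun Cluster (the disk set, node, and DID device names here are hypothetical):

    # Create a multi-owner (Oban) disk set mastered by both RAC nodes.
    metaset -s racset -M -a -h node1 node2

    # Add a shared DID device to the set, then build a volume on it
    # that the database can use as a raw volume / meta device.
    metaset -s racset -a /dev/did/rdsk/d4
    metainit -s racset d10 1 1 /dev/did/rdsk/d4s0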


Sun Cluster 3 supports the use of volume managers as listed below:

TABLE 11-15 Sun Cluster 3.1 Supported Volume Managers

Solstice DiskSuite
  4.2.1 (Solaris 8 only)

Solaris Volume Manager (SVM)
  Solaris 9; Solaris 10 (c). Please see Table 11-1, "Solaris Releases for Sun Cluster 3.1 SPARC," on page 219 for details.

Solaris Volume Manager for SC (Oban) (a)
  Solaris 9 9/04 or later with patch 116669-03. Please see Table 11-1, "Solaris Releases for Sun Cluster 3.1 SPARC," on page 219 for details.

Veritas Volume Manager (VxVM) (b), including support for the cluster functionality formerly known as CVM
  3.2 (Solaris 8 only)
  3.5 (Solaris 8, 9)
  4.0 MP2 on Sun Cluster 3.1 u2 and earlier; on Solaris 8 requires Sun Cluster patch 117949, on Solaris 9 requires Sun Cluster patch 117950
  4.1 (Solaris 8, 9, and 10); requires VxVM 4.1 patch 117080-02 (d)

a For SVM Sun Cluster functionality you will need to order Sun Cluster Advanced Edition for Oracle RAC.
b The FMR feature of VxVM is supported only with Sun Cluster 3.1 08/05 on Solaris 9 and 10 with Veritas Storage Foundation Suite 4.1.
c SVM (Oban) on Sun Cluster 3.1 with Solaris 10 requires the following minimum level of Solaris patches: 120809-01, 120807-01, 118822-21, 120537-04.
d Veritas Volume Manager delivered as part of Veritas Storage Foundation 4.0 and 4.1 is also supported.


Either the VxVM volume manager or Solstice DiskSuite (SDS) can be used for shared storage within a cluster configuration. Using VxVM for shared storage and SDS for mirroring the root disk is also a supported configuration.

TABLE 11-16 Sun Cluster 3.2 Supported Volume Managers

Solaris Volume Manager (SVM)
  Platform/Version: SPARC and x64
  Solaris: SVM support tracks Solaris support. Please see Table 11-2, "Solaris Releases for Sun Cluster 3.2 SPARC," on page 221 and Table 11-3, "Solaris Releases for Sun Cluster 3.2 x64," on page 221 for details.
  Notes: please see the respective SC Release Notes for patch and other requirements.

Solaris Volume Manager for SC (Oban)
  Platform/Version: SPARC and x64
  Solaris: SVM for SC support tracks Solaris support. Please see Table 11-2 and Table 11-3 on page 221 for details.
  Notes: please see the respective SC Release Notes for patch and other requirements.

Veritas Volume Manager (VxVM) including CVM support
  SPARC: 4.1 (SC 3.2) on S9u8, S9u9, S10u3, or S10u4, each plus required patches as listed with SunSolve; 4.1_mp2 patch 117080-07
  SPARC: 5.0 (SC 3.2) on the same Solaris releases; 5.0_mp1 patches 122058-09 and 124361-05
  SPARC: 5.0 MP3 RP1 (SC 3.2u2)

Veritas Volume Manager (VxVM) only
  x64: 4.1 (SC 3.2) on S10u3 or S10u4 plus required patches as listed with SunSolve; 4.1_mp1 patch 120586-04
  x64: 5.0 (SC 3.2u1); patch 128060-02
  x64: 5.0 MP3 RP1 (SC 3.2u2)


File System

If the application data is laid out on a file system, the cluster file system enables the file system data to be available to all the nodes in the cluster. Sun Cluster 3 supports a cluster file system on top of UFS or VxFS laid out on a Veritas volume or an SDS meta device. File system logging is required in Sun Cluster 3.
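For example, a cluster file system is typically mounted through an /etc/vfstab entry carrying the global (and logging) mount options on every node; a sketch with hypothetical device and mount point names:

    # /etc/vfstab entry (a single line) replicated on every cluster node:
    # device to mount         device to fsck           mount point  FS  pass boot options
    /dev/md/webds/dsk/d100    /dev/md/webds/rdsk/d100  /global/web  ufs 2    yes  global,logging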

TABLE 11-17 Veritas File System Support Matrix with Sun Cluster 3.1

Veritas File System (VxFS)
  3.4 on Solaris 8; not supported with Sun Cluster 3.1u3
  3.5 on Solaris 8, 9
  4.0 MP2 (a, b) on Solaris 8, 9; 4.0 MP2 on Sun Cluster 3.1 u2 and earlier requires Sun Cluster patch 117949 on Solaris 8 and Sun Cluster patch 117950 on Solaris 9
  4.1 (b) on Solaris 8, 9, 10; requires VxFS 4.1 patch 119300-01 (Solaris 8), 119301-01 (Solaris 9), or 119302-01 (Solaris 10) (fix for bug 6227073)

a Requires patch 120107-01
b VxFS 4.0 and 4.1 delivered as part of Veritas Storage Foundation Suite is supported.

TABLE 11-18 Veritas File System Support Matrix with Sun Cluster 3.2

SPARC
  4.1 on S9u8, S9u9, S10u3, or S10u4, each plus required patches as listed with SunSolve; requires patches 119301-04 (S9) and 119302-04 (S10)
  5.0 on the same Solaris releases; requires patches 123201-02 (S9) and 123202-02 (S10)

x64
  5.0 on S10 plus required patches as listed with SunSolve; starting with SC 3.2u1; requires patch 125847-01


TABLE 11-19 Sun StorEdge QFS (SPARC) Support Matrix with Sun Cluster 3.1

Columns: QFS Version | Solaris Version | Sun Cluster Version | Volume Manager Support | HA-SAM Tape Library Support | FFS with HA StoragePlus Only.

4.1 (HA) QFS Standalone | 8 update 5, 9 update 3 | 3.1 u1 (a, b, c) | SVM and Veritas VxVM 3.5 and above | N/A | Yes
4.2 (HA) QFS Standalone | 8 update 7, 9 update 3 and later | 3.1 u2 (a, b, c) | SVM and Veritas VxVM 3.5 and above | N/A | Yes
4.2 (Shared) QFS | 8 update 7, 9 update 3 and later | 3.1 u2 (c, d) | No VM support | N/A | N/A
4.3 (HA) QFS Standalone | 8 update 7, 9 update 3 and later, Solaris 10 | 3.1 u3 (a, b, c) | SVM and Veritas VxVM 4.0 and above | N/A | Yes
4.3 (Shared) QFS | 8 update 7, 9 update 3 and later, Solaris 10 | 3.1 u3 (c, d) | No VM support | N/A | N/A
4.4 (HA) QFS Standalone | 9 update 3 and later, Solaris 10 | 3.1 u3 (a, b, e) | SVM and Veritas VxVM 4.0 and above | N/A | Yes
4.4 (Shared) QFS | 9 update 3 and later, Solaris 10 | 3.1 u3 (d, e, f) | SVM/Oban (with Solaris 10 only, no S9 support) | N/A | N/A
4.5 (HA) QFS Standalone | 9 update 3 and later, Solaris 10 u1 | 3.1 u4 (a, b, g, h) | SVM and Veritas VxVM 4.0 and above | N/A | Yes
4.5 (Shared) QFS | 9 update 3 and later, Solaris 10 u1 | 3.1 u4 (d, f, g, h) | SVM/Oban (with Solaris 10 only, no S9 support) | N/A | N/A
4.6 (HA) QFS Standalone | 9 update 3 and later, Solaris 10 u3 | 3.1 u4 (a, b, g) | SVM and Veritas VxVM 4.1 and above | N/A | Yes
4.6 (Shared) SAM-QFS | 9 update 3 and later, Solaris 10 u3 | 3.1 u4 (d, f, g, i, j) | SVM/Oban (with Solaris 10 only, no S9 support) | L700 (k), SL500 FC (k) | Refer to j

a Supported with use of the HA-NFS Agent
b Supported with use of the HA-Oracle Agent
c Supports Oracle 9i only
d Supported with use of the RAC Agent(s)
e Supports Oracle 9i and 10gR1 only
f Supported with SVM Cluster Functionality (Oban)
g Supports Oracle 9i, 10gR1, and 10gR2
h Supported for SC 3.2 with QFS 4.5 + QFS 05 patch (Build 4.5.42)
i Supported for COTC (clients outside the cluster; no mixed architecture):
  a. Qualified by SAM-QFS QA only
  b. No QFS ms-type file systems; only ma-type file systems are supported
  c. No software volume managers supported
  d. There are storage prerequisites for this configuration; see the QFS documentation for details on prerequisites and configuration examples
  e. This configuration has been qualified for use with a 16-node configuration (2 cluster nodes / 14 client nodes)
  f. See the above matrix for OS support
  g. See the above matrix for Sun Cluster support
  h. Requires the SUNWqfsr and SUNWqfsu packages
  See the QFS documentation: http://docs.sun.com/source/819-7935-10/chapter6.html#94364
j Supported for HA-SAM:
  a. Qualified by SAM-QFS QA only
  b. No software volume managers
  c. Only active-passive configurations are supported
  d. Oracle RMAN not supported
  e. No other data service is supported with this configuration
  f. Requires the SUNWsamfsr and SUNWsamfsu packages
  See the HA-SAM documentation for configuration prerequisites, configuration examples, and more information: http://docs.sun.com/source/819-7931-10/chap08.html#19295
k Requires an ACSLS server running the ACSLS 7.x software
  a. Qualified by SAM-QFS QA only

COTC: COTC is currently at release 1.0. Clients outside the cluster are used when user applications require access to the data stored on cluster file system(s); the cluster device fencing is lowered so that COTC clients can access data stored on attached storage that is managed by the cluster. For this configuration, user applications must run outside the cluster, and no other data service may be used inside the cluster for application access outside the cluster. This configuration requires that a logical hostname be used for shared QFS metadata traffic between the shared QFS metadata server and the metadata clients outside the cluster, which requires extra setup in the SC resource group (see the QFS documentation for configuration examples). It is highly recommended that a dedicated network be used for communications between the cluster nodes and the nodes outside the cluster. The storage topology must be direct FC-attached storage and can be any hardware RAID supported in this configuration guide. This is shared QFS with no SAM functionality. The cluster nodes provide automated failover of the MDS. The currently qualified node configuration is 2-4 nodes inside the cluster and up to 16 nodes outside the cluster. If your requirement differs from the above, a Get-To-Yes must be filed for supportability. See the QFS documentation: http://docs.sun.com/source/819-7935-10/chapter6.html#94364

HA-SAM: HA-SAM is currently at release 1.0. HA-SAM provides the features of SAM (Storage Archive Management): archiving, staging, releaser, and recycler. Each of these must run on the current metadata server. HA-SAM automated failover is done with the SUNW.qfs agent; the metadata server in an HA-SAM configuration has been qualified with no data services other than SUNW.qfs and SUNW.hasam. This configuration is supported with a maximum of 2 cluster nodes and requires shared QFS file system(s). As a requirement for this configuration, one PXFS file system must be used for the SAM catalog. Currently this configuration has only been qualified to run in an active-passive configuration. No other data service is supported in conjunction with this configuration. If your requirement differs from the above, a Get-To-Yes must be filed for supportability. See the HA-SAM documentation: http://docs.sun.com/source/819-7931-10/chap08.html#19295

SAM-QFS Packages Notes:

a) File System Manager: SUNWfsmgrr, SUNWfsmgru

b) File system configurations without HA-SAM: SUNWqfsr, SUNWqfsu

c) HA-SAM configurations: SUNWsamfsr, SUNWsamfsu
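As a rough sketch of the shared QFS setup that the QFS documentation above describes in full (the family set name, DID devices, and equipment ordinals here are hypothetical), an ma-type shared file system is declared in /etc/opt/SUNWsamfs/mcf with separate metadata (mm) and data (mr) devices:

    # /etc/opt/SUNWsamfs/mcf (kept identical on the metadata server and clients)
    #
    # Equipment          Eq  Eq   Family    Device Additional
    # Identifier         Ord Type Set       State  Parameters
    sharefs1             10  ma   sharefs1  on     shared
    /dev/did/dsk/d5s0    11  mm   sharefs1  on
    /dev/did/dsk/d6s0    12  mr   sharefs1  on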

TABLE 11-20 Sun StorEdge QFS (SPARC) Support Matrix with Sun Cluster 3.2

Columns: QFS Version | Solaris Version | Sun Cluster Version | Volume Manager Support | HA-SAM Tape Library Support | FFS with HA StoragePlus Only.

4.5 (HA) QFS Standalone | 9 update 3 and later, Solaris 10 u1 | 3.2 (a, b, c, d) | SVM and Veritas VxVM 4.0 and above | N/A | Yes
4.5 (Shared) QFS | 9 update 3 and later, Solaris 10 u1 | 3.2 (c, d, e, f) | SVM/Oban (with Solaris 10 only, no S9 support) | N/A | N/A
4.6 (HA) QFS Standalone | 9 update 3 and later, Solaris 10 u3 | 3.2 (a, b, c) | SVM and Veritas VxVM 4.1 and above | N/A | Yes
4.6 (Shared) SAM-QFS | 9 update 3 and later, Solaris 10 u3 | 3.2 (c, d, e, g, h) | SVM/Oban (with Solaris 10 only, no S9 support) | L700 (i), SL500 FC (i) | Refer to h

a Supported with use of the HA-NFS Agent
b Supported with use of the HA-Oracle Agent
c Supports Oracle 9i, 10gR1, and 10gR2
d Supported for SC 3.2 with QFS 4.5 + QFS 05 patch (Build 4.5.42)
e Supported with use of the RAC Agent(s)
f Supported with SVM Cluster Functionality (Oban)
g Supported for COTC (clients outside the cluster; no mixed architecture):
  a. Qualified by SAM-QFS QA only
  b. No QFS ms-type file systems; only ma-type file systems are supported
  c. No software volume managers supported
  d. There are storage prerequisites for this configuration; see the QFS documentation for details on prerequisites and configuration examples
  e. This configuration has been qualified for use with a 16-node configuration (2 cluster nodes / 14 client nodes)
  f. See the above matrix for OS support
  g. See the above matrix for Sun Cluster support
  h. Requires the SUNWqfsr and SUNWqfsu packages
  See the QFS documentation: http://docs.sun.com/source/819-7935-10/chapter6.html#94364
h Supported for HA-SAM:
  a. Qualified by SAM-QFS QA only
  b. No software volume managers
  c. Only active-passive configurations are supported
  d. Oracle RMAN not supported
  e. No other data service is supported with this configuration
  f. Requires the SUNWsamfsr and SUNWsamfsu packages
  See the HA-SAM documentation for configuration prerequisites, configuration examples, and more information: http://docs.sun.com/source/819-7931-10/chap08.html#19295
i Requires an ACSLS server running the ACSLS 7.x software
  a. Qualified by SAM-QFS QA only

TABLE 11-21 Sun StorEdge QFS (x64) Support Matrix with both Sun Cluster 3.1 and 3.2

Columns: QFS Version | Solaris Version | Sun Cluster Version | Volume Manager Support | HA-SAM Tape Library Support | FFS with HA StoragePlus Only.

4.5 (HA) QFS Standalone | 9 update 3 and later, Solaris 10 FCS - u1 | 3.1 u4/3.2 (a, b, c) | SVM/VxVM 4.0 and above | N/A | Yes
4.5 (Shared) QFS | 9 update 3 and later, Solaris 10 FCS - u1 | 3.1 u4/3.2 (c, d, e, f) | SVM/Oban (with Solaris 10 only, no S9 support) | N/A | N/A
4.6 (HA) Standalone QFS | 9 update 3 and later, Solaris 10 FCS - u3 | 3.1 u4/3.2 (a, b, c) | SVM/VxVM 4.1 and above | N/A | Yes
4.6 (Shared) SAM-QFS | 9 update 3 and later, Solaris 10 FCS - u3 | 3.1 u4/3.2 (c, d, e, g, h) | SVM/Oban (with Solaris 10 only, no S9 support) | L700 (i), SL500 FC (i) | Refer to h

a Supported with use of the HA-NFS Agent
b Supported with use of the HA-Oracle Agent
c Supports Oracle 10gR2
d Supported with use of the RAC Agent(s)
e Supported with SVM Cluster Functionality (Oban)
f Supported for SC 3.2 with QFS 4.5 + QFS 05 patch (Build 4.5.42)
g Supported for COTC (clients outside the cluster; no mixed architecture):
  a. Qualified by SAM-QFS QA only
  b. No QFS ms-type file systems; only ma-type file systems are supported
  c. No software volume managers supported
  d. There are storage prerequisites for this configuration; see the QFS documentation for details on prerequisites and configuration examples
  e. This configuration has been qualified for use with a 16-node configuration (2 cluster nodes / 14 client nodes)
  f. See the above matrix for OS support
  g. See the above matrix for Sun Cluster support
  h. Requires the SUNWqfsr and SUNWqfsu packages
  See the QFS documentation: http://docs.sun.com/source/819-7935-10/chapter6.html#94364
h Supported for HA-SAM:
  a. Qualified by SAM-QFS QA only
  b. No software volume managers
  c. Only active-passive configurations are supported
  d. Oracle RMAN not supported
  e. No other data service is supported with this configuration
  f. Requires the SUNWsamfsr and SUNWsamfsu packages
  See the HA-SAM documentation for configuration prerequisites, configuration examples, and more information: http://docs.sun.com/source/819-7931-10/chap08.html#19295
i Requires an ACSLS server running the ACSLS 7.x software
  a. Qualified by SAM-QFS QA only

RAID in Sun Cluster 3

All RAID features provided by the volume managers, or by storage devices with hardware RAID capabilities, are supported with Sun Cluster 3, with the following exception: the RAID5 functionality of SDS/SVM is not supported. The configuration rules for RAID in Sun Cluster 3 are the following:

■ Sun Cluster requires that access to data be highly available on each node sharing storage: for example, software mirroring where there are two independent paths to mirrored data, or a supported multi-pathing storage configuration with multiple paths to highly available hardware RAID5 data.

■ Mirroring or RAID5 is required to ensure high availability of data.

■ There is no architectural limit imposed on the number of mirrors in a Sun Cluster.

■ Sun Cluster recommends that mirroring be done across the same type of storage device, as in the sketch below.
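A minimal sketch of such a software mirror with SVM, assuming a hypothetical disk set named oradata and hypothetical DID devices d10 and d20 that live on two different storage arrays, giving two independent paths to the mirrored data:

    # Build one submirror on each array, combine them into a two-way
    # mirror, and attach the second half (metattach syncs it online).
    metainit -s oradata d11 1 1 /dev/did/rdsk/d10s0
    metainit -s oradata d12 1 1 /dev/did/rdsk/d20s0
    metainit -s oradata d10 -m d11
    metattach -s oradata d10 d12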


Support for Virtualized OS Environments

This section discusses supported virtualized OS environments with Solaris Cluster.

Support for Logical Domains (LDoms) I/O Domains

Logical Domains are available on most recent SPARC based servers from Sun. As of the LDoms 1.0.1 release (October 2007), Solaris Cluster is supported only in LDoms I/O domains. LDoms I/O domains have direct access to the hardware and are not dependent upon other domains to get access to the physical resources they need. Refer to the LDoms documentation on how to configure such I/O domains.

The following considerations apply to deployment of Solaris Cluster in LDoms I/O domains:

■ The minimum required LDoms version is 1.0.1, the minimum Solaris version is Solaris 10 11/06 (u3) with required patches, and the minimum Solaris Cluster version is SC 3.2.

■ Additional guest domains can be created on the system where Solaris Cluster is running in the I/O domain. Such guest domains can use resources such as virtual disks and virtual networks exported by the I/O domains. This usage of LDoms I/O domains is supported with Solaris Cluster.

Note that use of the I/O domain to provide device services to other domains can introduce additional load on the I/O domain. Capacity planning of the I/O domain must take such usage into account.

■ All applications which are certified with Solaris Cluster are supported in LDoms I/O domains by Solaris Cluster. Please check with your ISV for any restrictions on specific applications.

■ Unless explicitly noted, if an LDoms-capable server is qualified with Solaris Cluster, then it is qualified to run Solaris Cluster in the I/O domain.

Servers with LDoms (I/O) support:

■ Sun Blade T6320

■ Sun SPARC Enterprise T5120 or T5220

■ Sun SPARC Enterprise T5140 or T5240 (LDoms version 1.0.2)


FIGURE 11-1 Solaris Cluster in I/O domains with non-clustered guest domains

Support for Logical Domains (LDoms) 1.0.3 Guest Domains as Virtual Nodes (Sun Cluster 3.2 02/08 / Solaris 05/08 and above)

In addition to LDoms I/O domains, LDoms 1.0.3 guest domains can also be configured as Sun Cluster nodes as of July 2008. A guest domain is viewed no differently than a physical node. All Sun Cluster topologies are supported using LDoms 1.0.3 guest domains.

Please note that using LDoms 1.0.3 guest domains as Sun Cluster nodes in conjunction with LDoms I/O domains that provide device services to other domains can introduce additional load on the I/O domains. As such, performance and capacity planning should be considered for the I/O domains.

Sun Cluster data services which are currently certified are also supported with LDoms 1.0.3 guest domain clusters, with the following exception:

■ Oracle RAC configurations.

The following are some rules and guidelines for using LDoms 1.0.3 guest domains with Sun Cluster:

260 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only

Page 273: sun cluster 3 configuration guide 412383

SOFTWARE CONFIGURATION

■ Use the mode=sc option for all virtual switch devices that connect the virtual network devices used as the cluster interconnect (see the sketch after this list).

■ Map only the full SCSI disks into the guest domains for shared storage.

■ The nodes of a cluster can consist of any combination of physical machines, LDoms I/O domains, and LDoms guest domains.

■ If a physical machine is configured with LDoms, install Sun Cluster software only in I/O domains or guest domains on that machine.

■ Network isolation: guest domains that are located on the same physical machine but are configured in different clusters must be network-isolated from each other using one of the following methods:

■ Configure the clusters to use different network interfaces in the I/O domain for the private network.

■ Use different network addresses for each of the clusters.
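A hedged sketch of the first two rules above (the domain, switch, NIC, and volume names are hypothetical; ldm is run in the control domain, and the mode=sc option is the one named in the first rule):

    # Create a virtual switch for the private interconnect with mode=sc,
    # backed by a dedicated physical NIC in the control domain.
    ldm add-vsw mode=sc net-dev=e1000g2 priv-vsw0 primary

    # Give the guest-domain cluster node a virtual network device on it.
    ldm add-vnet priv-vnet0 priv-vsw0 clnode1

    # Export a full SCSI disk (slice 2 spans the whole disk) as shared
    # storage through an existing virtual disk service (primary-vds0).
    ldm add-vdsdev /dev/rdsk/c2t5d0s2 shared-disk0@primary-vds0
    ldm add-vdisk shared-disk0 shared-disk0@primary-vds0 clnode1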

For the complete and detailed list of rules and guidelines please refer to

http://wikis.sun.com/display/SunCluster/Sun+Cluster+3.2+2-08+Release+Notes#SunCluster3.22-08ReleaseNotes-ldomsguidelines

For the list of supported Sun Cluster patches please refer to

http://wikis.sun.com/display/SunCluster/Sun+Cluster+3.2+2-08+Release+Notes#SunCluster3.22-08ReleaseNotes-ldomssw

Servers with LDoms (Guest) support:

■ Netra CP3060 Blade

■ Netra T2000 Server

■ Netra T5220 Server

■ Netra T5440 Server

■ Sun Blade T6300 Server Module

■ Sun Blade T6320 Server Module

■ Sun Blade T6340 Server Module

■ Sun Fire or SPARC Enterprise T1000 Server

■ Sun Fire or SPARC Enterprise T2000 Server

■ Sun SPARC Enterprise T5120 and T5220 Servers

■ Sun SPARC Enterprise T5140 and T5240 Servers

■ Sun SPARC Enterprise T5440 Server

■ USBRDT-5240 Uniboard


Please note that the following cards are not supported as of July’08:

http://docs.sun.com/source/820-4895-10/chapter1.html#d0e995


CHAPTER 12

Managing Sun Cluster 3

Console Access

Console access to each cluster node is required for some maintenance and service procedures, and for monitoring console messages. Sun Cluster 3 does not require any specific type of console access mechanism. Some available options are:

■ Sun serial port A: this may be used with the Sun Cluster Terminal Concentrator (X1312A), a customer-supplied terminal concentrator, an alphanumeric terminal, or serial terminal connection software from another computer, such as tip(1).

■ E10K System Service Processor (SSP) and similar console devices.

■ Sun keyboards and monitors may be used on cluster nodes when supported by the base server platform. However, they may not be used as console devices. The console must be redirected to a serial port or SSP/RSC, as applicable to the server, using the appropriate OBP settings.

Cluster Administration and Monitoring

An administrative console located on a public network from which all cluster nodes are accessible is required for administering Sun Cluster 3. Several tools and options are available for monitoring and administration of Sun Cluster 3; the use of the Sun Management Center, SunPlex Manager, and Cluster Control Panel GUI tools is optional.

■ Command line interface (CLI): all SunPlex system management and monitoring may be performed from the system console, or through telnet or rlogin sessions.


■ Sun Management Center (SunMC): this is the de facto system management tool for all Sun platforms in the enterprise. SunMC enables administrators to carry out in-depth monitoring of the SunPlex system. Sun Cluster 3 requires that the SunMC console layer be run on a Solaris SPARC system. The versions of SunMC supported with the Sun Cluster 3 product are listed below:

■ SunMC 2.1.1
■ SunMC 3.0
■ SunMC 3.5
■ SunMC 3.6
■ SunMC 3.6.1
■ SunMC 4.0

■ SunPlex Manager: this is an easy-to-use system management tool that enables basic SunPlex system management and monitoring, with a focus on installation and configuration. It requires a suitable workstation or PC with a web browser as listed in the table below.

■ Cluster Control Panel (CCP): provides a launch pad for the cconsole, crlogin, and ctelnet GUI tools, which start multiple window connections to a set of specified nodes. The multiple window connections consist of a host window for each of the specified nodes and a common window. The common window's input is directed to each host window, for running the same command on each node simultaneously. This requires a Solaris SPARC system with a graphics console running Solaris 8 (or later) and about 250 KB in /opt. Note that cconsole is designed to work with the Sun Cluster Terminal Concentrator, the Enterprise 10K System Service Processor, the Sun Fire 3800 - 6800 System Controller, and the Sun Fire 12K/15K System Controller (see the sketch after the table below). Cluster Control Panel is also supported on Solaris 9 x86 and Solaris 10 x86.

TABLE 12-1 SunPlex Manager Supported Web Browsers

Solaris: Mozilla 1.4 and above; Netscape 6.2 and above; Firefox 1.0 and above

Windows: Internet Explorer 5.5 and above; Mozilla 1.4 and above; Netscape 6.2 and above; Firefox 1.0 and above
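For example (the cluster name, node names, and terminal concentrator ports here are hypothetical), cconsole reads the /etc/clusters and /etc/serialports files on the administrative console to learn which consoles to open:

    # /etc/clusters: map a cluster name to its member nodes
    planets  node1 node2

    # /etc/serialports: map each node to its console access point
    node1  tc1  5002
    node2  tc1  5003

    # Open one host window per node plus the common input window:
    cconsole planets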


CHAPTER 13

Sun Cluster 3 Ordering Information

Follow the steps given below for ordering a Sun Cluster 3 configuration:

1. Generating Configuration and Quote: the responsibility for generating a valid configuration rests with the sales team. No formal approval of your configuration is required. Use either one of the following two mechanisms for generating a valid cluster configuration:

■ Webdesk: Webdesk is the new online ordering and quoting tool. Not all servers/storage supported in Sun Cluster 3 can currently be validated through Webdesk; in such cases, use the Sun Cluster 3 configuration guide for validating your cluster.

■ Configuration Guide: see "Overview of Order Flow Chart" on page 265.

2. Sun Cluster Order Approval: Sun Cluster orders NO LONGER need to go through a separate order approval process before they can be completed.

■ GETS Process: effective April 1st, 2005, Sun Cluster orders no longer need to go through the "Global Enterprise Tracking System" (GETS) step or the "Sun Customer Order Process Evaluation" (SCOPE) step before being "booked" in the order entry system and released for shipment to the customer or partner, respectively. In addition, Partners/Resellers are no longer required to follow the RSCOPE-Tool process or use the RSCOPE-Tool for Sun Cluster sales orders as of April 1, 2005.

■ The "B-Hold" on Sun Cluster software marketing parts was removed as of April 1, 2005.

■ The GETS and SCOPE information has been removed from the Configuration Guide as of December 13, 2005.

Overview of Order Flow Chart

1. (Required) Order cluster nodes.


2. (Required) Order shared storage.

3. (Required) Order cluster interconnect.

4. (Required) Order public network adapter.

5. (Optional) Order administrative framework.

6. (Required) Order Solaris media and documentation.

7. (Required) Order Sun Cluster 3 software and license.

8. (Optional) Order Sun Cluster 3 Agents software and license.

9. (Required) Order Enterprise Services and training packages from the Sun Cluster section of the Enterprise Services price list.

Order Flow Chart

The configuration rules laid out in the flow chart below are in addition to the configuration rules for the individual components. Under no circumstances does Sun Cluster 3 relax the restrictions imposed by the base components.

1. (Required) Order cluster nodes. Refer to "Generic Server Configuration Rules" on page 15 for rules for configuring a cluster node. The table below lists the minimum number of server components (for example, CPUs) that need to be ordered for one server unit to be used as a cluster node. Some of these components may be bundled with other components (for example, a power supply with a server base). Please calculate the actual number of additional components to be ordered appropriately. The table below guides you through ordering a single server unit; use the same table for ordering as many servers as needed.

Server / Component (quantities given as required / recommended):

Sun Fire T2000/T1000, Netra T2000
  Server Base Package 1 / 1
  CPU Module 1 / 1
  Internal Memory 4 / 4
  Internal Disk 2 / 2
  Power Supplies 1 / 2

Netra T1 AC200/DC200
  Server Base Package 1 / 1
  CPU Module 1 / 1
  Internal Memory 2 / 2
  Internal Disk 1 / 2

Sun Enterprise 220R, 250, 420R, 450; Sun Fire 280R, V480, V880; Netra t 1120/1125, t 1400/1405, 20
  Server Base Package 1 / 1
  CPU Module 2 / 2
  Internal Memory 2 / 2
  Internal Disk 1 / 2
  Power Supply as required / N+1

Sun Enterprise 3x00, 4x00, 5x00, 6x00
  Server Base Package 1 / 1
  CPU modules 2 / 2
  Memory 2 / 2
  CPU/Memory board 1 / 2
  SBus I/O board 1 / 2
  Power/Cooling Module as required / N+1

Sun Enterprise 10K (* the quantities mentioned are per domain)
  Base cabinet 1 / 1
  CPU modules* 2 / 2
  System Board* 1 / 2
  Memory* 2 / 2

Sun Fire 3800 (* the quantities mentioned are per domain)
  Base Package 1 / 1
  CPU/Memory Board Bundle* 1 / 2
  cPCI I/O Assembly* 1 / 2

Sun Fire 4800, 4810, 6800 (* the quantities mentioned are per domain)
  Base Package 1 / 1
  CPU/Memory Board Bundle* 1 / 2
  PCI I/O Assembly* 1 / 2

Sun Fire 12K/15K/E20K/E25K (* the quantities mentioned are per domain)
  Base Package 1 / 1
  CPU/Memory Board* 1 / 2
  PCI I/O Board* 1 / 2


2. (Required) Order shared storage. The tables below give the number of components (for example, cables, GBICs) required to connect each storage unit to a pair of nodes. Some of these components may be bundled with other components (for example, a cable with a storage array); please calculate the actual number of additional components to be ordered appropriately. The tables also give the number of host I/O ports required with a shared storage unit. Some servers have onboard host adapters, and some host adapter cards have multiple ports; calculate the actual number of host adapter cards to be ordered appropriately.

a. Ordering Netra st D130. Refer to "SCSI Storage Support" on page 127 for the configuration rules and the part numbers of the supported components. Order each component in the quantity mentioned in the table below to configure a Netra st D130 unit as shared storage.

   Netra st D130 unit: 1
   Additional Disk: as required
   Host I/O Port: 2
   Cable: 2

b. Ordering Sun StorEdge S1. Refer to "Sun StorEdge S1 Array" on page 134 for the configuration rules and the part numbers of the supported components. Order each component in the quantity mentioned in the table below to configure a Sun StorEdge S1 unit as shared storage.

   Sun StorEdge S1 unit: 1
   Additional Disk: as required
   Host I/O Port: 2
   Cable: 2

c. Ordering Sun StorEdge MultiPack. Refer to "Sun StorEdge S1 Array" on page 134 for the configuration rules and the part numbers of the supported components. Order each component in the quantity mentioned in the table below to configure a MultiPack unit as shared storage.

   Sun StorEdge MultiPack unit: 1
   Additional Disk: as required
   Host I/O Port: 2
   Cable: 2

d. Ordering Sun StorEdge D1000. Refer to "Netra st A1000 Array" on page 128 for the configuration rules and the part numbers of the supported components. Order each component in the quantity mentioned in the table below to configure one D1000 unit as shared storage. To configure a single-bus D1000, order the components in the first row of the table; to configure a split-bus D1000, order the components in the second row.

   Single Bus D1000: Sun StorEdge D1000 unit 1; additional disk as required; Host I/O Port (total for two nodes) 2; Cable (excluding jumper cable) 2
   Split Bus D1000: Sun StorEdge D1000 unit 1; additional disk as required; Host Bus Ports (total for two nodes) 4; Cable 4

e. Ordering Netra st D1000. Refer to "Netra st A1000 Array" on page 128 for the configuration rules and the part numbers of the supported components. Order each component in the quantity mentioned in the table below to configure one Netra st D1000 unit as shared storage. To configure a single-bus Netra st D1000, order the components in the first row of the table; to configure a split-bus Netra st D1000, order the components in the second row.

   Single Bus Netra st D1000: Netra st D1000 unit 1; additional disk as required; Host I/O Port (total for two nodes) 2; Cable (excluding jumper cable) 2
   Split Bus Netra st D1000: Netra st D1000 unit 1; additional disk as required; Host Bus Ports (total for two nodes) 4; Cable 4

f. Ordering Sun StorEdge A3500. Refer to "Sun StorEdge A3500 Array" on page 140 for the configuration rules and the part numbers of the supported components. Order each component in the quantities mentioned in the table below to configure one A3500 controller module as shared storage.

   Sun StorEdge Base Configuration: as required
   Additional Disk: as required
   Controller Module: 1
   Host I/O Port (total for two nodes): 4
   Cable: 4

g. Ordering Sun StorEdge A3500FC. Refer to "Sun StorEdge A3500FC System" on page 63 for the configuration rules and the part numbers of the supported components. Order all the components in the first row of the table below for connecting two hubs to a pair of nodes, and all the components in the second row to configure an A3500FC controller module attached to both hubs.

   Hub-to-node connectivity: Seven slot FC-AL Hub 2; Host I/O Port (total for 2 nodes) 4; Cable 4; GBIC 8
   A3500FC unit: Sun StorEdge Base Configuration as required; Additional Disk as required; Controller Module 1; Cable 2; GBIC 3

h. Ordering Sun StorEdge A5x00. Refer to "Sun StorEdge A5x00 Array" on page 66 for the configuration rules and the part numbers of the supported components. Order each component in the quantity mentioned in the tables below to configure an A5x00 as shared storage.

i. Ordering a direct-attached, full-loop A5x00. Order components in the quantities mentioned in the table below.

   Sun StorEdge A5x00 unit: 1
   Additional disks: as required
   Interface Board: 1
   Host I/O Port: 2
   Cable: 2
   GBIC: 4


ii. Ordering a direct-attached, split-loop A5x00. Order components in the quantities mentioned in the table below:

   Sun StorEdge A5x00 unit: 1
   Additional disk: as required
   Interface Board: 2
   Host I/O Port: 4
   Cable: 4
   GBIC: 8

iii. Ordering a hub-attached full-loop, single-loop A5x00. Order all the components in the first row of the table below for connecting a hub to a pair of nodes, and all the components in the second row to attach as many A5x00 units to the hub as required. Note that a maximum of 4 A5000, 4 A5100, or 3 A5200 units can be attached to a hub.

   Hub-to-node connectivity: Seven slot FC-AL Hub 1; Host I/O Port 2; Cable 2; GBIC 4
   A5x00 unit: Sun StorEdge A5x00 unit 1; Interface Board 1; Cable 1; GBIC 2

Ordering a hub-attached full-loop, dual-loop A5x00. Order all the components in the first row of the table below for connecting two hubs to a pair of nodes, and all the components in the second row to attach as many A5x00 units to both hubs as required. Note that a maximum of 4 A5000, 4 A5100, or 3 A5200 units can be attached to the hub pair in this fashion.

   Hub-to-node connectivity: Seven slot FC-AL Hub 2; Host I/O Port 2; Cable 2; GBIC 4
   A5x00 unit: Sun StorEdge A5x00 unit 1; Interface Boards 2; Cable 2; GBIC 4

i. Ordering Sun StorEdge T3. Refer to "Sun StorEdge T3 Array (Single Brick)" on page 74 for the configuration rules and the part numbers of the supported components. Both the T3 for the Workgroup and the T3 for the Enterprise models are supported with Sun Cluster 3.

i. Ordering a hub-attached T3 array. Order all the components in the first row of the table below for connecting two hubs to a pair of nodes, and all the components in the second row to attach a T3 brick to a hub.

   Hub-to-node connectivity: Seven slot FC-AL Hub 2; Host I/O Port 4; Cable 4; GBIC 8
   T3 brick: T3 brick 2; Cable 2; GBIC 2


ii. Ordering a switch-attached T3 array. Order all the components in the first row of the table below for connecting two switches to a pair of nodes, and all the components in the second row to attach a T3 brick to a switch.

   Switch-to-node connectivity: FC Switch 2; Host I/O Port 4; Cable 4; GBIC 8
   T3 brick: T3 brick 2; Cable 2; GBIC 2

j. Ordering Sun StorEdge 3910/3960. Refer to "Sun StorEdge 3910/3960 System" on page 90 for the configuration rules and the part numbers of the supported components. Order all the components in the table below to attach an SE39x0 system to the cluster.

   Sun StorEdge 39x0: 1
   Additional Components: as required
   Host I/O Port: 2
   Cable: 2

3. (Required) Order the cluster interconnect. Refer to "Cluster Interconnect" on page 183 for configuration rules and the part numbers of the supported components. Order the components from the table below (quantities given as minimum / maximum):

   Point-to-point (N = 2 nodes): Host Network Port 4 / 12; Cable 2 / 6
   Junction-based (N = 2 - 8 nodes): Host Network Port 2 x N / 6 x N; Cable (customer supplied for Fast Ethernet) 2 x N / 6 x N; Switch 2 / 6


4. (Required) Order public network interfaces. Refer to "Public Network" on page 202 for configuration rules and the part numbers of the supported network adapters. Order as many network adapters as required.

5. (Optional) Order the administrative framework.

a. Order an administrative workstation (a Sun Ultra 5 or better), as per the table below.

   Administrative workstation (Sun Ultra 5 or better): see the Workstation section of the CS Price Book; quantity 1

b. Order the Terminal Concentrator bundle: terminal concentrator, rack-mounting bracket (if required), and serial cables to connect to the cluster nodes, as per the table below. Note: in the table below it is assumed that N is the total number of nodes in the cluster.

   Terminal Concentrator Kit: terminal concentrator and 3 - 5 meter serial cables (for 2 cluster nodes and the administrative workstation): X1312A, quantity 1
   Rack mounting bracket (for Enterprise 5x00 and 6x00 only): X1311A, quantity N
   5-meter serial cable: X3836A, quantity N-2
   Power cord for Terminal Concentrator: X311L or equivalent, quantity 1

6. (Required) Order the Solaris media. Solaris licenses are included with a new Sun server.

7. (Required) Order Sun Cluster 3 software and license.


a. Order Sun Cluster 3 base software. Starting with the 7/01 release, a generic part number is available for Sun Cluster 3; this part number will always point to the latest update release. Order the Sun Cluster 3 license:

   Sun Cluster 3.1 Base CD - latest: CLUZS-999-99M9
   Sun Cluster 3.1 Agents CD - latest: CLA9S-999-99M9

   Sun Cluster 3.2 Base CD - latest: CLUZS-999-99M9 or SOLZ9-10GC9A7M


b. Order Sun Cluster 3 server licenses:

One server entitlement is required per physical system in the cluster. If a system has multiple domains, with some or all domains participating in the same or different clusters, only one server entitlement is needed for that system.

Purchase of a support contract requires the purchase of the license. Support contracts are available through the normal channels.

TABLE 13-1 Sun Cluster 3.1 base software, License Only

Description Part#

Sun Cluster server license for Netra t 1120/1125 CLNIS-310-B929

Sun Cluster server license for Netra t 1400/1405 CLNIS-31X-A929

Sun Cluster server license for Netra 20 CLNIS-310-D929

Sun Cluster server license for Netra 210 CLNIS-310-A929

Sun Cluster server license for Netra 240 CLNIS-310-I929

Sun Cluster server license for Netra 440 CLNIS-310-H929

Sun Cluster server license for Netra t 1280 CLNIS-310-E929

Sun Cluster server license for ATCA CP3010 SPARC Blade CLUIS-310-AA29

Sun Cluster server license for Netra CP3060 CLNIS-310-J929

Sun Cluster server license for Netra CP3260 CLNIS-310-O929

Sun Cluster server license for Netra T2000 CLNIS-310-F929

Sun Cluster server license for Netra T5440 CLNIS-310-L929

Sun Cluster server license for Netra T5220 CLNIS-310-K929

Sun Cluster server license for Netra X4200 CLNIS-310-G929

Sun Cluster server license for Netra X42500 CLNIS-310-M929

Sun Cluster server license for Netra X4450 CLNIS-310-N929

Sun Cluster server license for Sun Blade X6220 CLUII-310-M929

Sun Cluster server license for Sun Blade X6240 CLUII-310-R929

Sun Cluster server license for Sun Blade X6250 CLUII-310-N929

Sun Cluster server license for Sun Blade X6270 CLUII-310-W929

Sun Cluster server license for Sun Blade X6440 and X6450 CLUII-310-S929

Sun Cluster server license for Sun Blade 84xx Server Module CLUII-310-E929

Sun Cluster server license for Sun Blade T6300 CLUII-310-J929

Sun Cluster server license for Sun Blade T6320 CLUIS-310-AC29


Sun Cluster server license for Sun Blade T6340 CLUIS-310-AF29

Sun Cluster server license for Sun Fire V20z CLUII-310-G929

Sun Cluster server license for Sun Fire V40z CLUII-310-F929

Sun Cluster server license for Sun Fire X2100 M2 CLUII-310-I929

Sun Cluster server license for Sun Fire X2200 M2 CLUII-310-H929

Sun Cluster server license for Sun Fire X4100 and X4200 CLUII-310-C929

Sun Cluster server license for Sun Fire X4140 CLUII-310-P929

Sun Cluster server license for Sun Fire X4150 CLUII-310-K929

Sun Cluster server license for Sun Fire X4170 CLUII-310-X929

Sun Cluster server license for Sun Fire X4270 and X4275 CLUII-310-Y929

Sun Cluster server license for Sun Fire X4440 CLUII-310-Q929

Sun Cluster server license for Sun Fire X4450 CLUII-310-L929

Sun Cluster server license for Sun Fire X4540 CLUII-310-U929

Sun Cluster server license for Sun Fire X4600 CLUII-310-D929

Sun Cluster server license for E220R CLUIS-31X-B929

Sun Cluster server license for E250 CLUIS-31X-C929

Sun Cluster server license for E420R CLUIS-310-A929

Sun Cluster server license for E450 CLUIS-310-B929

Sun Cluster server license for E3500 CLUIS-31X-D929

Sun Cluster server license for E4500 or E5500 CLUIS-31X-E929

Sun Cluster server license for E6500 CLUIS-31X-F929

Sun Cluster server license for E10000 CLUIS-31X-A929

Sun Cluster server license for Sun Fire T1000 CLUII-310-A929

Sun Cluster server license for Sun Fire T2000 CLUII-310-B929

Sun Cluster server license for Sun Fire V120 CLUIS-310-H929

Sun Cluster server license for Sun Fire V210 CLUIS-310-J929

Sun Cluster server license for Sun Fire V215/V245 CLUIS-310-T929

Sun Cluster server license for Sun Fire V240 CLUIS-310-K929

Sun Cluster server license for Sun Fire V250 CLUIS-310-O929

Sun Cluster server license for Sun Fire 280R CLUIS-310-Q929


Sun Cluster server license for Sun Fire V440 CLUIS-310-P929

Sun Cluster server license for Sun Fire V445 CLEIS-310-S929

Sun Cluster server license for Sun Fire V480 / V490 CLUIS-310-L929

Sun Cluster server license for Sun Fire V880 / V890 CLUIS-310-N929

Sun Cluster server license for Sun Fire V1280 CLUIS-310-I929

Sun Cluster server license for Sun Fire E2900 CLUIS-310-E929

Sun Cluster server license for Sun Fire 3800 CLUIS-31X-G929

Sun Cluster server license for Sun Fire 4800/4810 CLEIS-310-R929

Sun Cluster server license for Sun Fire E4900 CLUIS-310-F929

Sun Cluster server license for Sun Fire 6800 CLUIS-310-M929

Sun Cluster server license for Sun Fire E6900 CLUIS-310-G929

Sun Cluster server license for Sun Fire E12K/E20K CLUIS-310-C929

Sun Cluster server license for Sun Fire E15K/E25K CLUIS-310-D929

Sun Cluster server license for Sun Enterprise M3000 CLUIS-310-AE29

Sun Cluster server license for Sun Enterprise M4000 CLUIS-310-U929

Sun Cluster server license for Sun Enterprise M5000 CLUIS-310-V929

Sun Cluster server license for Sun Enterprise M8000 CLUIS-310-W929

Sun Cluster server license for Sun Enterprise M9000-32 CLUIS-310-X929

Sun Cluster server license for Sun Enterprise M9000-64 CLUIS-310-Y929

Sun Cluster server license for Sun SPARC Enterprise T5120/T5220 CLUIS-310-AB29

Sun Cluster server license for Sun SPARC Enterprise T5140/T5240 CLUIS-310-AD29

Sun Cluster server license for Sun SPARC Enterprise T5440 CLUIS-310-AG29

TABLE 13-2 Sun Cluster 3.2 base software, License Only

Description Part#

Sun Cluster server license for Netra t 1120/1125 CLNIS-320-B929

Sun Cluster server license for Netra 20 CLNIS-320-D929

Sun Cluster server license for Netra 210 CLNIS-320-A929

Sun Cluster server license for Netra 240 CLNIS-320-I929


Sun Cluster server license for Netra 440 CLNIS-320-H929

Sun Cluster server license for Netra t 1280 CLNIS-320-E929

Sun Cluster server license for Netra 1290 CLNIS-320-C929

Sun Cluster server license for ATCA CP3010 SPARC Blade CLUIS-320-AA29

Sun Cluster server license for Netra CP3060 CLNIS-320-J929

Sun Cluster server license for Netra CP3260 CLNIS-320-O929

Sun Cluster server license for Netra T2000 CLNIS-320-F929

Sun Cluster server license for Netra T5220 CLNIS-320-K929

Sun Cluster server license for Netra T5440 CLNIS-320-L929

Sun Cluster server license for Netra X4200 CLNIS-320-G929

Sun Cluster server license for Netra X4250 CLNIS-320-M929

Sun Cluster server license for Netra X4450 CLNIS-320-N929

Sun Cluster server license for Sun Blade X6220 CLUII-320-M929

Sun Cluster server license for Sun Blade X6240 CLUII-320-R929

Sun Cluster server license for Sun Blade X6250 CLUII-320-N929

Sun Cluster server license for Sun Blade X6270 CLUII-320-W929

Sun Cluster server license for Sun Blade X6440 and X6450 CLUII-320-S929

Sun Cluster server license for Sun Blade 84xx Server Module CLUII-320-E929

Sun Cluster server license for Sun Blade T6300 CLUII-320-J929

Sun Cluster server license for Sun Blade T6320 CLUIS-320-AC29

Sun Cluster server license for Sun Blade T6340 CLUIS-320-AF29

Sun Cluster server license for Sun Fire V20z CLUII-320-G929

Sun Cluster server license for Sun Fire V40z CLUII-320-F929

Sun Cluster server license for Sun Fire X2100 M2 CLUII-320-I929

Sun Cluster server license for Sun Fire X2200 M2 CLUII-320-H929

Sun Cluster server license for Sun Fire X4100 and X4200/X4200 M2 CLUII-320-C929

Sun Cluster server license for Sun Fire X4140 CLUII-320-P929

Sun Cluster server license for Sun Fire X4150 CLUII-320-K929

Sun Cluster server license for Sun Fire X4170 CLUII-320-X929

Sun Cluster server license for Sun Fire X4270 and X4275 CLUII-320-Y929


Sun Cluster server license for Sun Fire X4440 CLUII-320-Q929

Sun Cluster server license for Sun Fire X4450 CLUII-320-L929

Sun Cluster server license for Sun Fire X4540 CLUII-320-U929

Sun Cluster server license for Sun Fire X4600 CLUII-320-D929

Sun Cluster server license for E420R CLUIS-320-A929

Sun Cluster server license for E450 CLUIS-320-B929

Sun Cluster server license for Sun Fire T1000 CLUII-320-A929

Sun Cluster server license for Sun Fire T2000 CLUII-320-B929

Sun Cluster server license for Sun Fire V120 CLUIS-320-H929

Sun Cluster server license for Sun Fire V210 CLUIS-320-J929

Sun Cluster server license for Sun Fire V215/V245 CLUIS-320-T929

Sun Cluster server license for Sun Fire V240 CLUIS-320-K929

Sun Cluster server license for Sun Fire V250 CLUIS-320-O929

Sun Cluster server license for Sun Fire 280R CLUIS-320-Q929

Sun Cluster server license for Sun Fire V440 CLUIS-320-P929

Sun Cluster server license for Sun Fire V445 CLUIS-320-S929

Sun Cluster server license for Sun Fire V480 / V490 CLUIS-320-L929

Sun Cluster server license for Sun Fire V880 / V890 CLUIS-320-N929

Sun Cluster server license for Sun Fire V1280 CLUIS-320-I929

Sun Cluster server license for Sun Fire E2900 CLUIS-320-E929

Sun Cluster server license for Sun Fire 4800/4810 CLUIS-320-R929

Sun Cluster server license for Sun Fire E4900 CLUIS-320-F929

Sun Cluster server license for Sun Fire 6800 CLUIS-320-M929

Sun Cluster server license for Sun Fire E6900 CLUIS-320-G929

Sun Cluster server license for Sun Fire E12K/E20K CLUIS-320-C929

Sun Cluster server license for Sun Fire E15K/E25K CLUIS-320-D929

Sun Cluster server license for Sun Enterprise M3000 CLUIS-320-AE29

Sun Cluster server license for Sun Enterprise M4000 CLUIS-320-U929

Sun Cluster server license for Sun Enterprise M5000 CLUIS-320-V929

Sun Cluster server license for Sun Enterprise M8000 CLUIS-320-W929


Sun Cluster server license for Sun Enterprise M9000-32 CLUIS-320-X929

Sun Cluster server license for Sun Enterprise M9000-64 CLUIS-320-Y929

Sun Cluster server license for Sun SPARC Enterprise T5120/T5220 CLUIS-320-AB29

Sun Cluster server license for Sun SPARC Enterprise T5140/T5240 CLUIS-320-AD29

Sun Cluster server license for Sun SPARC Enterprise T5440 CLUIS-320-AG29

c. Order upgrade licenses for the cluster software. Order one per server. Please refer to http://www.sun.com/software/solaris/cluster/faq.jsp#g31 for more details on the various tiers:

TABLE 13-3 Sun Cluster 3.1 and 3.2 Base Software, Upgrade from Previous Revisions Only

Description Part# Quantity

SunPlex upgrade license to upgrade from Tier 1 to Tier 2 CLSIS-LCO-A9U9 1 per server

SunPlex upgrade license to upgrade from Tier 2 to Tier 3 CLSIS-LCO-B9U9

SunPlex upgrade license to upgrade from Tier 3 to Tier 4 CLSIS-LCO-C9U9

SunPlex upgrade license to upgrade from Tier 4 to Tier 5 CLSIS-LCO-D9U9

SunPlex upgrade license to upgrade from Tier 5 to Tier 6 CLSIS-LCO-E9U9

SunPlex upgrade license to upgrade from Tier 6 to Tier 7 CLSIS-LCO-F9U9

SunPlex upgrade license to upgrade from Tier 7 to Tier 8 CLSIS-LCO-G9U9

SunPlex upgrade license to upgrade from Tier 8 to Tier 9 CLSIS-LCO-H9U9

SunPlex upgrade license to upgrade from Tier 9 to Tier 10 CLSIS-LCO-I9U9

SunPlex upgrade license to upgrade from Tier 10 to Tier 11 CLSIS-LCO-J9U9

SunPlex upgrade license to upgrade to same or lower Tier CLSIS-LCO-K9U9

Description Part# Quantity

Sun Cluster 3.1 Agents CD - latest CLA9S-999-99M9 1 Per Cluster

TABLE 13-2 Sun Cluster 3.2 base software, License Only (Continued)

Description Part#


b. Order Sun Cluster 3.1 and 3.2 Agent licenses. Order one license for every agent installed in the cluster.

TABLE 13-4 Agents License

Description Part#

HA Agfa IMPAX CLAIS-XAH-9999

HA Apache Web/Proxy Server CLAIS-XXA-9999

HA Apache Tomcat CLAIS-XXX-9999

HA BEA Weblogic CLAIS-XXK-9999

HA DHCP CLAIS-XXH-9999

HA DNS CLAIS-XDN-9999

HA NFS CLAIS-XXF-9999

HA IBM WebSphere MQ CLAIS-XXQ-9999

HA IBM WebSphere MQ Integrator CLAIS-XXI-9999

HA Kerberos CLAI9-XXA-9999

HA MySQL CLAIS-XXO-9999

HA Oracle CLAIS-XXR-9999

HA Oracle Application Server CLAIS-XAD-9999

HA PostgreSQL CLAIS-XAM-9999

HA Samba CLAIS-XXM-9999

HA SAP Enqueue server CLAIS-XAI-9999

HA SAP J2EE Engine CLAIS-XAE-9999

HA SAP LiveCache CLAIS-XXL-9999

HA SAP/MaxDB Database CLAIS-XAA-9999

HA Siebel CLAIS-XXS-9999

HA Solaris Container CLAIS-XXZ-9999

HA Sun Java System Application Server CLAIS-XXJ-9999

HA Sun Java System Application Server EE CLAIS-XAB-9999

HA Sun Java System Directory Server CLAIS-XXD-9999

HA Sun Java System Message Queue CLAIS-XXT-9999

HA Sun Java System Web Server CLAIS-XXN-9999

HA Sun N1 Grid Engine CLAIS-XAC-9999

HA Sun N1 Service Provisioning System CLAIS-XAF-9999


HA SWIFT Alliance Gateway CLAIS-XAG-9999

HA Sybase CLAIS-XXY-9999

Oracle E-Business Suite CLAIS-XXE-9999

Oracle Parallel Server and Real Application Cluster CLAIS-XXP-9999

Scalable Apache Web/Proxy Server CLAIS-XXC-9999

Scalable Java System Web Server CLAIS-XXW-9999

Scalable SAP CLAIS-XXG-9999

SWIFTAlliance Access CLAIS-XXV-9999

c. Order a VxVM cluster license from the table below. This license must be ordered when OPS/RAC is used with VxVM. Note that the VxVM software package includes the cluster functionality; separate license keys are needed to enable the VxVM base product and the VxVM cluster functionality. The VxVM software package and the license key for the VxVM base product must be acquired separately.

Note that CVM 5.0 uses the same license part number as VxVM 5.0.

Description Part# Quantity

Veritas VxVM 5.0 Cluster Functionality License CLUI9-500-9999 One per OPS/RAC node

9. Sun Cluster Advanced Edition for Oracle RAC

Order one license per node. This includes a license for the following:

■ Oracle RAC Agent
■ Shared QFS Metadata server
■ Shared QFS client
■ SC agent for QFS metadata server
■ Clustered Solaris Volume Manager
■ SC-QFS-SVM

The following is NOT included with Sun Cluster Advanced Edition for Oracle RAC:

■ Sun Cluster Server licenses (these must be purchased separately)

■ Usage of QFS without Sun Cluster

■ Usage of QFS for non-Oracle RAC applications

TABLE 13-5 Sun Cluster Advanced Edition for Oracle RAC

Description Part#

Sun Cluster Advanced Edition for Oracle RAC License for Tier 1 Servers CLAI9-LCA-1999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 2 Servers CLAI9-LCA-2999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 3 Servers CLAI9-LCA-3999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 4 Servers CLAI9-LCA-4999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 5 Servers CLAI9-LCA-5999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 6 Servers CLAI9-LCA-6999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 7 Servers CLAI9-LCA-7999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 8 Servers CLAI9-LCA-8999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 9 Servers CLAI9-LCA-9999

Sun Cluster Advanced Edition for Oracle RAC License for Tier 10 Servers CLAI9-LCA-1099

Sun Cluster Advanced Edition for Oracle RAC License for Tier 11 Servers CLAI9-LCA-1199

10. Sun Cluster Geographic Edition:

One server entitlement is required per physical system in the cluster. If a system has multiple domains, with some or all domains participating in the same or different clusters, only one server entitlement is needed for that system.

Purchase of a support contract requires the purchase of the license. Support contracts are available through the normal channels.

TABLE 13-6 Sun Cluster Geographic Edition 3.1

Description Part#

Sun Cluster Geographic Edition 3.1 License for Tier 1 Servers CLGI9-001-9999

Sun Cluster Geographic Edition 3.1 License for Tier 2 Servers CLGI9-002-9999

Sun Cluster Geographic Edition 3.1 License for Tier 3 Servers CLGI9-003-9999

Sun Cluster Geographic Edition 3.1 License for Tier 4 Servers CLGI9-004-9999

Sun Cluster Geographic Edition 3.1 License for Tier 5 Servers CLGI9-005-9999

Sun Cluster Geographic Edition 3.1 License for Tier 6 Servers CLGI9-006-9999

Sun Cluster Geographic Edition 3.1 License for Tier 7 Servers CLGI9-007-9999

Sun Cluster Geographic Edition 3.1 License for Tier 8 Servers CLGI9-008-9999


Sun Cluster Geographic Edition 3.1 License for Tier 9 Servers CLGI9-009-9999

Sun Cluster Geographic Edition 3.1 License for Tier 10 Servers CLGI9-010-9999

Sun Cluster Geographic Edition 3.1 License for Tier 11 Servers CLGI9-011-9999

TABLE 13-7 Sun Cluster Geographic Edition 3.2

Description Part#

Sun Cluster Geographic Edition 3.2 Tier 1 CLGI9-320-1999

Sun Cluster Geographic Edition 3.2 Tier 2 CLGI9-320-2999

Sun Cluster Geographic Edition 3.2 Tier 3 CLGI9-320-3999

Sun Cluster Geographic Edition 3.2 Tier 4 CLGI9-320-4999

Sun Cluster Geographic Edition 3.2 Tier 5 CLGI9-320-5999

Sun Cluster Geographic Edition 3.2 Tier 6 CLGI9-320-6999

Sun Cluster Geographic Edition 3.2 Tier 7 CLGI9-320-7999

Sun Cluster Geographic Edition 3.2 Tier 8 CLGI9-320-8999

Sun Cluster Geographic Edition 3.2 Tier 9 CLGI9-320-9999

Sun Cluster Geographic Edition 3.2 Tier 10 CLGI9-320-1099

Sun Cluster Geographic Edition 3.2 Tier 11 CLGI9-320-1199

11. (Required) Order Enterprise Services and training packages from the Sun Cluster section of the Enterprise Services price list.

Agents Edist Download Process

The Agents Edist Download Process is a mechanism for the field to procure agents that are announced asynchronously from Sun Cluster 3 update releases. You will need to go through the usual sales process (scope, MCSO, and so on). However, you will need to download the agent binaries and documentation from http://edist.central and deliver them to the customer site for installation, because these agents are not available on the Agents CD.


APPENDIX A

Campus Clusters

Campus clusters are a common means of achieving disaster recovery. Unlike traditional clusters, the nodes of a campus cluster can be several kilometers apart. This enables application services to remain highly available in the event of a disaster such as fire, earthquake, or site destruction due to a terrorist attack. Sun now supports 8-node campus cluster configurations.

This appendix documents all the support-related information for campus clusters using Sun Cluster 3. For a detailed description of campus cluster concepts and configurations, refer to the Sun Cluster Hardware Administration Guide. In general, the support information listed for traditional clusters in the rest of the configuration guide applies to campus cluster configurations as well. This section gives details that are specific to campus cluster configurations, with appropriate pointers to other sections of the configuration guide.

Number of Nodes

8-node campus cluster configurations are supported with Sun Cluster 3.

Campus Cluster Room Configurations

Configurations of two or more rooms are supported.


Applications

All of the application services, including Oracle Parallel Server (OPS) and Real Application Clusters (RAC), mentioned in “Software Configuration” on page 219 are applicable to campus clusters as well.

Guideline for Specs Based Campus Cluster Configurations

The goal of this section is to provide an overview of what a generic Specs Based Campus Cluster configuration consists of. It also summarizes the characteristics that a given distance configuration, proposed and submitted by the field, must comply with in order to be a valid candidate for support.

Overview of a Specs Based Campus Cluster

Basically, a Specs Based Campus Cluster can be considered a distance configuration where IP and SAN extension solutions are deployed to provide separation between the cluster nodes and/or the shared storage devices, including the Quorum Device when applicable.



An example of such a configuration is one where a DWDM network is deployed to extend the SANs required to connect the cluster nodes to the distant shared storage devices, and also to support a distant cluster transport between these nodes.

Note that the solutions deployed for distance in the transport subsystem and for distance in the I/O paths can be either distinct or shared, as in the DWDM example above. This design choice has to be made by the implementers, within the constraints of the requirements described in the other sections of this document, and may depend on the topology of the Specs Based Campus Cluster.

Independently of the level of complexity of the distance implementation (technologies, equipment types, and so on), the base cluster components - nodes, SAN switches, storage devices - must be supported according to the existing SC3.x Configuration Guide. The existing maintenance and service procedures, as documented in the SC3.x Hardware Administration Guide, must also continue to be applicable.

Technical requirements

This section lists the technical requirements that a Specs Based Configuration must comply with:

Latency:

■ Transport Latency

■ The measured latency of each transport, between any pair of nodes in the cluster, must be less than 15 ms one-way.

■ Note that this document doesn’t address the means used to measure the latency. It assumes that this information is obtained by the field, possibly but not exclusively under the terms of a Service Level Agreement (SLA).

■ Data path Latency


■ The measured latency of each path, between nodes and storage devices attached through redundant SANs, must be less than 15 ms.

■ Note that the “path” referred to in the previous rule is defined as whatever resides between a SAN switch the cluster nodes are directly connected to and the corresponding SAN switch the shared storage devices are directly connected to.

■ The same remark as above applies concerning the actual measurement of that latency.

■ General rules and guidelines:

■ The measured network latency should be identical for each redundant private interconnect between two nodes.

■ In case of failures in the distance infrastructure (“cloud”), the latency of the remaining transport(s) or data path(s) must remain below the maximum values (15 ms one-way); see the sketch below.
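The following Python sketch is illustrative only; all measured values, device names and the intermediate-switch latency are hypothetical. It shows how the budgets above combine: each transport and each data path is checked against the 15 ms one-way limit, and the latencies of cascaded ISL levels are summed.

    TRANSPORT_BUDGET_MS = 15.0   # one-way, per transport, between any node pair
    DATA_PATH_BUDGET_MS = 15.0   # per data path, edge SAN switch to edge SAN switch

    def check(label, measured_ms, budget_ms):
        status = "OK" if measured_ms < budget_ms else "EXCEEDS BUDGET"
        print("%-30s %6.2f ms  %s" % (label, measured_ms, status))

    # Redundant private interconnects between a node pair; per the guideline
    # above these should also be (near-)identical to each other.
    check("transport A (node1-node2)", 9.8, TRANSPORT_BUDGET_MS)
    check("transport B (node1-node2)", 9.9, TRANSPORT_BUDGET_MS)

    # A data path with two levels of cascaded ISLs: the per-level latencies,
    # plus the intermediate switch latency, are summed against the same budget.
    isl_levels_ms = [6.0, 7.5]     # hypothetical per-level latencies
    switch_ms = 0.05               # hypothetical intermediate-switch latency
    check("data path (2 ISL levels)", sum(isl_levels_ms) + switch_ms, DATA_PATH_BUDGET_MS)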

Bit Error Rate (BER):

■ The quality of the distance infrastructure for the data paths must be such that the BER is no worse than 10^-10; the short example below puts this figure in perspective.
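As a rough, back-of-the-envelope illustration of what a 10^-10 BER means in practice (the line rates below are hypothetical examples, not requirements):

    # Expected mean time between bit errors at a given line rate and BER.
    BER = 1e-10
    for label, bits_per_s in (("1 Gb/s link", 1e9), ("2 Gb/s link", 2e9)):
        mean_s = 1.0 / (bits_per_s * BER)
        print("%s: about one bit error every %.0f s on average" % (label, mean_s))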

Topology:

The basic requirements and recommendations are common with standard cluster configurations. Below are a few additional considerations:

■ An HDS array is supported as a Quorum Device with Sun Cluster 3.2 using patch release 2 (Solaris 9 SPARC/126105-01, Solaris 10 SPARC/126106-01, Solaris 10 x86/126107-01) and with Sun Cluster 3.1U4 using patches (Solaris 8 SPARC/117950-31, Solaris 9 SPARC/117949-30, Solaris 9 x86/117909-31, Solaris 10 SPARC/120500-15, Solaris 10 x86/120501-15).

■ Transport:

■ Transport redundancy must be implemented and ensured between the cluster nodes. The distance transport must be implemented in such a way that the cluster nodes logically and functionally perceive distinct paths. For example, adding/removing as well as enabling/disabling a transport path shouldn’t affect the other one(s). In other words, from a functional point of view, the distance implementation must be totally transparent, delayed responses apart, to all applicable SC3.x commands related to transports.

■ The same principle must apply during the re-establishment of a previously failed path.

■ I/O:

■ I/O path redundancy must be implemented and ensured between the nodes and the SAN-attached shared storage devices.


■ Independently of the technology employed to implement the SAN extension, and its inner workings, the incoming traffic - seen from the SAN switches’ perspective - must be standard Fibre Channel (i.e., no vendor-specific alteration).

■ The number of cascaded ISLs must stay within the limits defined by the current SAN WWWW rules. In the case where more than one level of cascading is present, the sum of the latencies associated with each level must not exceed the maximum data path latency (15 ms). Note: it may be necessary to take into consideration the latencies of the intermediate SAN switches when calculating the sum.

■ Although there’s no universal rule, implementers must verify that the provision of Buffer Credits in the SAN switches is adequate for the proposed extension solution, to prevent unexpected database disruption (Link Reset).

■ The use of host-based mirroring is advised even in cases where the storage devices already provide hardware RAID protection.

TrueCopy Support

TrueCopy is now supported for shared storage data replication between two sites within a cluster. This offers a configuration alternative for campus clusters in which distance concerns make host-side mirroring impractical. Automatic failover in the case of primary node failure is included, as well as support for SVM, VxVM and raw disk device groups. Careful consideration must be given to TrueCopy configuration parameters, such as fence level, since these have a direct impact on cluster availability and data integrity guarantees.

Some things to consider when investigating a potential TrueCopy cluster configuration:

All TrueCopy fence levels are supported; however, there are specific trade-offs with respect to cluster availability, performance and data integrity which should be considered when deciding upon a setting. The DATA fence level offers the best guarantees of data integrity by providing fully synchronous data updates, but can leave the primary site vulnerable to storage problems at the secondary site. A fence level of NEVER avoids being vulnerable to secondary storage failures, but opens up the possibility of the primary and secondary data copies getting out of sync. Using a fence level of ASYNC can offer increased I/O performance through the use of asynchronous data updates, but of course introduces a potential for data loss should the primary site fail while it is still caching unwritten data.


Two-node clusters still require the use of a quorum device, and even though the replicated TrueCopy devices are made to look like a single DID device, they are not truly shared devices, so they do not meet the needs of a quorum device. A quorum server is generally a viable option.

Nodes at each site must only have direct access to one of the devices in a replica pair; otherwise, volume management software can become confused about the disks which make up replicated device groups. Multiple local nodes at each site can share access to local replicas (providing local failover), but direct access to a single replica must not be shared between sites.

Careful planning of device usage is important, as replica groups must be configured to match a corresponding global device group (including naming) so that the switching of the replication primary can coincide with the importing of the proper device groups.

SRDF Support

SRDF is now supported for shared storage data replication between two sites within a cluster. This offers a configuration alternative for campus clusters in which distance concerns make host-side mirroring impractical. Automatic failover in the case of primary node failure is included, as well as support for SVM, VxVM and raw disk device groups. Careful consideration must be given to SRDF configuration parameters, since these have a direct impact on cluster availability and data integrity guarantees.

When investigating a potential SRDF campus cluster configuration, please consider the following:

■ Synchronous, asynchronous and adaptive copy modes are all supported; however, there are specific trade-offs with respect to cluster availability, performance and data integrity which should be considered when deciding upon a setting. Synchronous mode offers the best guarantees of data integrity by providing fully synchronous data updates, but the primary site could be vulnerable to storage problems at the secondary site. Asynchronous mode can offer increased I/O performance through the use of asynchronous data updates, but data loss could occur if the primary site fails while it is still caching unwritten data.

■ While the use of SRDF static devices is supported, they should be avoided at all costs. SRDF operations required during switchover and failover take several minutes to complete for static devices. Use dynamic devices whenever possible.


■ Two-node clusters still require the use of a quorum device, and even though the replicated SRDF devices are made to look like a single DID device, they are not truly shared devices, so they do not meet the needs of a quorum device. A quorum server is generally a viable option.

■ Nodes at each site must only have direct access to one of the devices in a replica pair; otherwise, volume management software can become confused about the disks which make up replicated device groups. Multiple local nodes at each site can share access to local replicas (providing local failover), but direct access to a single replica must not be shared between sites.

■ Careful planning of device usage is important, as replica groups must be configured to match a corresponding global device group (including naming) so that the switching of the replication primary can coincide with the importing of the proper device groups.

■ Take care to ensure that the correct DID devices are being merged into a single replicated DID device. If the wrong pair of devices is combined, use the “scdidadm -b” command to unmerge them.


APPENDIX B

Sun Cluster Geographic Edition

Introduction

This appendix provides a description of the supported Sun Cluster Geographic Edition (GE) product hardware configurations and infrastructure. The Sun Cluster Configuration Guide / Support Matrix provides the technical specification for individual clusters in Sun Cluster GE configurations. The networking infrastructure required for inter-cluster connections will depend on customer-specific requirements.

Elements of Sun Cluster GE Hardware Configuration

Although Sun Cluster GE can be installed and configured on a single stand-alone Sun Cluster, the product only has utility when it is installed in configurations consisting of several clusters. It is important to distinguish between Sun Cluster GE configurations, which provide automated failover between geographically separated distinct clusters, and Campus Cluster configurations, which provide automatic failover within a geographically spread single cluster.

As a rule of thumb, campus cluster configurations offer protection against localized incidents (for example, a fire within a single room or building) and allow storage to be placed near the point of use, but require synchronous data replication to ensure correct and reliable automatic failover. This imposes stringent limits on distance and link characteristics, usually in the 10-100 km range.

Sun Cluster GE is more appropriate for long-distance configurations (hundreds to thousands of km) where protection against a major (city-wide) disaster is required. It permits the use of asynchronous data replication over standard Internet connections, as part of a company-wide Business Continuity plan.


Campus and GE configurations can be combined in a single Disaster Recovery configuration, to give the best of both worlds; see the Three-site topologies section later in this appendix.

The elements of Sun Cluster GE product configuration are:

■ Sun Cluster installations, with attached data storage. Sun Cluster GE places no additional restrictions on supported cluster configurations, beyond those already imposed by the base Sun Cluster configuration guidelines.

■ Internet connections for inter-cluster management communication and the default heartbeat between the Sun Cluster installations.

■ Connections for data replication (either host-based or storage-based). This may be the same connection as that used for the heartbeat.

■ Optional connections for custom heartbeats if required.

Inter-Cluster Topologies

Inter-cluster relationships in Sun Cluster GE consist of entities called partnerships, which are relationships between two clusters. All Sun Cluster GE inter-cluster communications happen between partner clusters.

A partnership requires an IP connection between the public network interfaces of the partner clusters for inter-cluster management communication and default inter-cluster heartbeats. A single cluster may participate in more than one partnership and requires IP connections with each of its partners. These connections can be established via dedicated corporate network connections, or across the public Internet.

Within a partnership, entities known as protection groups may be configured. A protection group links a Sun Cluster Resource Group with the data-replication resources that it requires, and establishes the data-replication relationship between partner clusters. One partnership may have several protection groups configured, each protection group establishing a different data-replication relationship between the partner clusters.


FIGURE B-1 Example Sun Cluster GE topologies that demonstrate Sun Cluster GE inter-cluster relationships.

The Geneva-Paris-Rome-Berlin topology is an example of a configuration with a centralized DR site. It assumes a central Geneva cluster that forms three separate partnerships with the Paris, Rome and Berlin clusters. The partnerships require two-way Internet connections between the cluster pairs Paris-Geneva, Rome-Geneva and Berlin-Geneva. A protection group is configured on each partnership so that, in normal operation, the Paris, Rome and Berlin primaries replicate data to Geneva as a secondary. Each protection group requires the infrastructure to support a data-replication link between the normal primary cluster and Geneva. Should any of the outlying sites be lost, Geneva can take over as a substitute.

The New York-London topology has two clusters that form a partnership with two protection groups. In normal operation, each cluster is the primary for one of the protection groups and the secondary for the other; this is a symmetrical configuration. The partnership requires a two-way IP connection between the two clusters for inter-cluster management and heartbeats. Data-replication link infrastructure is required between the clusters to support data replication for two protection groups.
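The relationships above can be summarized in a small schematic model. This is only an illustration of the concepts (partnerships pair exactly two clusters; each protection group lives inside one partnership and replicates from a primary to a secondary); it is not the product’s API or command set, and the cluster names simply mirror the Geneva example.

    from collections import namedtuple

    Partnership = namedtuple("Partnership", "cluster_a cluster_b")
    ProtectionGroup = namedtuple("ProtectionGroup", "name partnership primary")

    # Three partnerships, all centered on the Geneva DR cluster.
    partnerships = [Partnership("paris", "geneva"),
                    Partnership("rome", "geneva"),
                    Partnership("berlin", "geneva")]

    # One protection group per partnership; the outlying site is the normal
    # primary and Geneva holds the replicated secondary copy.
    groups = [ProtectionGroup("pg-%s" % p.cluster_a, p, primary=p.cluster_a)
              for p in partnerships]

    for g in groups:
        secondary = (g.partnership.cluster_b
                     if g.primary == g.partnership.cluster_a
                     else g.partnership.cluster_a)
        print("%s: %s (primary) -> %s (secondary)" % (g.name, g.primary, secondary))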

Three-site topologies

It is possible to use a campus cluster for the primary cluster, thus creating a three-site configuration of Primary, Backup and DR sites. This is currently supported using volume manager mirroring within the campus cluster, and AVS replication to the DR site. Other combinations will be supported in the future. It is not possible to create a daisy-chain of Sun Cluster GE pairs, i.e. London -> Paris -> Rome.


Sun Cluster Hardware and Storage Configurations

The configuration of an individual cluster within a Sun Cluster GE partnership is subject to the standard configuration rules for the related Sun Cluster release, as described elsewhere in the Configuration Guide. Sun Cluster GE imposes no additional restrictions on the cluster configuration. Clusters can have any supported size and configuration, including single-node clusters. It is generally not advisable to use a single-node cluster at the primary site, since any local failure will require a switchover; however, this is a supported configuration.

Both sites must have the same platform architecture, SPARC or x64. This is not a requirement of Sun Cluster GE, but rather of most applications. Filesystems and data files (e.g. from an Oracle database) are generally not endian-neutral. Heterogeneous combinations have therefore not been tested.

For use of Sun Cluster GE with third-party storage-based data-replication mechanisms, the cluster hardware configurations required are those supporting the related storage hardware. Partner clusters must be compatibly configured to support data replication between the clusters.

For specific supported software versions, please see the matrices at the end of this section.

Storage configurations

Within one cluster, Sun Cluster GE data replication places some software configuration requirements on the accessibility of device groups and the configuration of data volumes. The software configuration requirements may have implications for the preferred configuration of storage on the cluster.

The clusters in a partnership need not be identical, although cluster software versions and replicated disk configurations must be the same on each side. During an upgrade it is permissible to run with one version of skew between the sites (i.e. Vn at one site and Vn+1 at the other). There is no requirement to run the same Solaris version at both sites, provided this does not impose other constraints (e.g. on AVS versions).

For all supported products, replication can be configured as synchronous or asynchronous. The choice will be determined by the customer’s performance requirements and requirements on acceptable transaction loss and recovery time for disaster recovery.


The use of synchronous replication will guarantee that both clusters in a partnership always have identical copies of data; however, the need to ensure that data has been written to both partners before a write is considered complete means that the data write throughput is effectively limited to that of the inter-cluster link. This will be orders of magnitude slower than the physical disk connection.

The use of asynchronous replication will avoid this performance penalty, but can mean that the data stored on the secondary partner may not always be an up-to-date copy of the primary data. A failure of the primary cluster under such circumstances can result in some data updates not being completed at the remote site.
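A minimal numeric sketch of this trade-off follows; the round-trip time and write size are hypothetical figures chosen only to make the arithmetic visible.

    # Synchronous: a write completes only after the remote copy acknowledges it,
    # so a single write stream is bounded by the inter-cluster round trip.
    rtt_ms = 30.0     # hypothetical inter-cluster round-trip time
    write_kb = 8.0    # hypothetical write size
    ceiling = 1000.0 / rtt_ms
    print("synchronous: at most %.0f writes/s (~%.0f KB/s) per stream"
          % (ceiling, ceiling * write_kb))
    # Asynchronous: bounded by local storage instead, but any updates still
    # buffered when the primary fails never reach the secondary.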

Sun AVS configuration

An example partnership of simple two-node clusters is shown.

Using Sun Cluster GE with AVS requires nothing in the way of specialized hardware. AVS, being a software-based replication system, is largely hardware-agnostic. See the AVS documentation for information on which Sun storage systems are supported.

In terms of network connectivity, AVS, being host-based, depends on the connectivity available between the host systems which make up the Sun Cluster GE partner groups. It will share the same IP link that is used by the heartbeat.

Since the AVS replication software runs on a single host in each cluster, certain scalable and parallel applications cannot be supported with AVS. A specific example is Oracle RAC, which cannot work with AVS. HA-Oracle is fully supported.


Supported versions

AVS 3.2.1 is supported only on Solaris 8 and Solaris 9, SPARC only. AVS 4.0 is supported only on Solaris 10, SPARC and x86.

StorEdge TrueCopy configuration

Use of Sun Cluster GE with TrueCopy data replication requires Sun Cluster configurations with Sun StorEdge 9970/9910 Array or Hitachi Lightning 9900 Series storage that support the TrueCopy command interfaces. Sun Cluster GE places no specific limitations on the connectivity to be used; any TrueCopy configuration which is supported by Sun Cluster can be used.

Hitachi offers TrueCopy planning and installation services (see http://www.hds.com/services/professional-services/plan-design.html), and these are likely to be the best source of configuration planning information for a TrueCopy-based Sun Cluster GE installation.

Support for Hitachi Universal Replicator will be provided in a forthcoming release.

Supported versions

TrueCopy Raid Manager versions 01-18-03/03 or later (SPARC) are supported.

EMC SRDF

Use of Sun Cluster GE with EMC Symmetrix Remote Data Facility (SRDF) data replication requires Sun Cluster configurations with EMC Symmetrix hardware that supports the SRDF Solutions Enabler command interface.

For Oracle considerations, the following guidelines may be useful:

http://www.emc.com/techlib/pdf/H1143.1_SRDFS_A_Oracle9i_10g_ldv.pdf

Supported versions

EMC Solutions Enabler (SymCLI) version 6.0.1 or later is supported on Solaris SPARC and x86. Enginuity firmware version 5671 or later is required.


Inter-Cluster Network Connections

Inter-Cluster Management and Default Heartbeats

IP access is required between Sun Cluster GE partner clusters. The communication between partner clusters for Sun Cluster GE inter-cluster management operations is through a logical hostname IP address. The hostname used corresponds to the name of the cluster, a configuration issue which must be considered at the planning stage. The default inter-cluster heartbeat module also communicates through this address.

Custom Heartbeats

Sun Cluster GE provides interfaces for optional customer-added plug-ins for inter-cluster heartbeats. The communication channel for a custom heartbeat plug-in is defined by its implementation. A custom heartbeat plug-in would allow the use of a communication channel that is different from the default heartbeat connection. In a telecoms environment, for example, there may be other, non-IP, connection paths available.

Data Replication Network

There is no explicit limitation on the distance between Sun Cluster GE partner clusters. Sun Cluster GE partner-cluster configurations require the infrastructure for long-distance data-replication connections to support the protection groups hosted by the partnership. The requirements on the data-replication connection are determined by:

■ The distance between the partner clusters.

■ The amount of data to be replicated, and the pattern of data access.

■ The cost of the network connection.

■ Data-replication configuration parameters.

The type of inter-cluster links used for the data replication will depend on the product chosen. Sun Cluster GE does not place additional limitations on this beyond those required by the data replication product.

It is difficult to fully define the characteristics of the data-replication infrastructure for common reference configurations, since it is unlikely that a “typical” configuration exists. Nevertheless, some information on customer requirements is available from the field, as is information on network requirements for various data replication configurations.


Note, however, that while network throughput (in Mbit/s) is important when dealing with large quantities of data, network latency is of much greater importance as far as write performance is concerned.
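A simple model makes the point. The link figures below are hypothetical, and the model ignores protocol overheads; it only compares the transmission time of a write against the round-trip wait for its acknowledgement.

    def sync_write_ms(write_kb, link_mbit_s, one_way_ms):
        transmit_ms = write_kb * 8.0 / link_mbit_s   # kbit over Mbit/s comes out in ms
        return transmit_ms + 2.0 * one_way_ms        # send the data, wait for the ack

    print("1000 Mbit/s, 20 ms away: %.1f ms per 8 KB write" % sync_write_ms(8, 1000, 20))
    print("10 Mbit/s, 1 ms away:    %.1f ms per 8 KB write" % sync_write_ms(8, 10, 1))

With these (hypothetical) numbers, the fat long-haul link needs about 40 ms per synchronous write while the thin nearby link needs about 8 ms: the round-trip latency, not the bandwidth, dominates.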

Data Replication configuration guidelines

It should be noted that the choice of replication method and parameters is not an “all or nothing” issue. Multiple protection groups can be configured within a partnership, each using a different replication strategy appropriate to its needs.

By way of an example, consider a large Internet sales company. It will have a large database of products, which is updated regularly but probably not continuously. Staff will, from time to time, add new products and remove old ones. Such a database could safely be replicated asynchronously, since even if some updates were lost following a failure, the situation could be recovered relatively easily: staff could re-enter the changes at a later date.

On the other hand, the filesystem which keeps records of customers’ purchases cannot tolerate any data loss, since this could not be recovered by company staff. Data loss here would not only result in financial loss from the lost order data, but could also lead to a loss of customer confidence. The relatively small quantity of data stored would, however, probably permit this filesystem to be replicated synchronously to avoid any risk of data loss following a failure.

Unsupported features

Support for some new features in Solaris requires further testing and/or additional development. Please note the following specific restrictions.

Shared QFS

Shared QFS filesystems embed the names of the host systems in the filesystem metadata. In order to transfer a shared QFS filesystem to a new cluster, this metadata must be rewritten to contain the names of the hosts in the new cluster. SCGE does not perform this rewrite, and so shared QFS filesystems cannot be supported with SCGE. This restriction will be lifted in a forthcoming release.

Oracle ASM

Testing on ASM is ongoing and support is very limited at this time. Please contact the cluster team for the latest status.


ZFS

There are two issues which prevent SCGE from supporting ZFS:

1. Prior to bringing a zpool online on a new cluster, the LUNs used by the zpool must be imported. This is analogous to the import operation carried out by traditional volume managers such as SVM and VxVM. SCGE does not yet issue a zpool import command. This prevents the use of ZFS with storage-based replication mechanisms, where the LUNs are inaccessible while configured as secondaries.

2. More seriously, there is a potential interaction between ZFS and block-based replication systems in general. The ZFS copy-on-write model of file update presumes that the on-disk structure of the filesystem is always internally consistent. For a local filesystem this will be the case, but when a filesystem is replicated to a remote site this consistency can only be guaranteed if the order in which disk blocks are written is the same at the secondary site as at the primary.

All of the supported replication technologies will guarantee this during normal active replication, but if the communications link between the primary and secondary sites is lost, or the secondary site is otherwise unavailable, a backlog of modified blocks will build up at the primary. This backlog will be transmitted once the secondary site is again available; however, most replication products do not maintain write-ordering during this catch-up phase (AVS, TrueCopy and SRDF do not maintain write-ordering in such circumstances; Universal Replicator does). If a failure should occur during this catch-up resynchronization, the destination zpool could be left in an unusable state.

This is not an issue specific to Solaris Cluster and/or Geographic Edition; nevertheless, it must be satisfactorily addressed before SCGE can safely claim support for ZFS in a DR environment. The toy model below illustrates the hazard.
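The following sketch is purely illustrative and is not ZFS code: it models one copy-on-write transaction as two data-block writes followed by a root ("uberblock") write that references them, and shows that an interrupted, unordered catch-up pass can land the root without its data.

    from itertools import permutations

    # One copy-on-write transaction: new data blocks first, then a root that
    # references them. Consistency requires any root on disk to reference only
    # blocks that are also on disk.
    writes = [("data", "B1"), ("data", "B2"), ("root", "B2")]

    def consistent(applied):
        on_disk = {blk for kind, blk in applied if kind == "data"}
        return all(blk in on_disk for kind, blk in applied if kind == "root")

    # Suppose the link fails after only two of the three writes have reached
    # the secondary. In-order replication (the original order) always leaves a
    # consistent prefix; an unordered catch-up pass may not.
    for order in permutations(writes):
        survived = list(order)[:2]
        print(survived, "-> consistent:", consistent(survived))

Every prefix of the original order is consistent, but several of the shuffled orders apply the root before the data block it references, which is exactly the unusable-zpool case described above.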

Solaris Containers (zones)

There are some limitations when using zones in conjunction with AVS. Solaris Cluster supports zones in two ways:

1. By treating a zone as a black box with the HA-Container agent. This model is fully supported by SCGE with all replication mechanisms.

2. By treating a zone as a node, and managing applications inside a zone. In this case the application resource group nodelist will contain entries of the form “<nodename>:<zonename>”, sometimes referred to as a “zone-node”.

SCGE always treats replication resources as global to a site, i.e. the nodelist for such resource groups (RGs) contains only physical hostnames (not zone names). With AVS replication, it is essential that the AVS resource group be online on the same physical node as the application, so that I/O can be intercepted. In order to correctly manage failovers within a local cluster in this case, SCGE must create


affinities between the replication RGs and the application RGs. Solaris Cluster will not permit affinities or dependencies to be created between RGs if one RG has a nodelist of physical nodenames and the other has a nodelist of zone-nodes. This is highlighted in CR 6443496.

Until this issue is addressed, SCGE will be unable to support the use of zone-nodes with AVS replication. The use of zone-nodes with TrueCopy and SRDF is, however, fully supported.

TABLE B-1 Test/support matrix for SC Geographic Edition with various types of data replication and volume managers

Columns, per data replication type - AVS on all storage supported by Sun Cluster†, TrueCopy on StorEdge 99xx series arrays, and SRDF on EMC Symmetrix arrays supported by Sun Cluster - are HW Raid / SVM†† / VxVM.

Odyssey R1: SCGE 3.1 8/05 with SC 3.1u4 (3.1 8/05)*

S8u7 or later, SPARC: AVS: Yes / Yes / Yes (V4.1); TrueCopy: Yes‡‡ / No††† / Yes‡‡; SRDF: No‡‡‡ / No‡‡‡ / No‡‡‡
S8u7 or later, x64: AVS: No‡ / No‡ / No‡; TrueCopy: No‡ / No‡ / No‡; SRDF: No‡‡‡ / No‡‡‡ / No‡‡‡
S9u7 or later, SPARC: AVS: Yes / Yes / Yes (V4.1); TrueCopy: Yes / No††† / Yes (V4.1); SRDF: No‡‡‡ / No‡‡‡ / No‡‡‡
S9u7 or later, x64: AVS: No‡ / No‡ / No‡; TrueCopy: No‡ / No‡ / No‡; SRDF: No‡‡‡ / No‡‡‡ / No‡‡‡
S10, SPARC: AVS: No§ / No§ / No§; TrueCopy: Yes / No††† / Yes; SRDF: No‡‡‡ / No‡‡‡ / No‡‡‡
S10, x64: AVS: No§ / No§ / No§; TrueCopy: No§§ / No††† / No§§; SRDF: No‡‡‡ / No‡‡‡ / No‡‡‡

Odyssey R2 (“Nestor”): SCGE 3.1 2006Q4 with SC 3.1u4 (3.1 8/05)

S8u7 or later, SPARC: AVS: Yes / Yes / Yes (V4.1); TrueCopy: Yes*** / No††† / Yes***; SRDF: No‡‡‡ / No‡‡‡ / No‡‡‡
S8u7 or later, x64: AVS: No‡ / No‡ / No‡; TrueCopy: No‡ / No‡ / No‡; SRDF: No‡,‡‡‡,§§§ / No‡,‡‡‡,§§§ / No‡,‡‡‡,§§§
S9u7 or later, SPARC: AVS: Yes / Yes / Yes (V4.1); TrueCopy: Yes / No††† / Yes (V4.1); SRDF: Yes / No††† / Yes
S9u7 or later, x64: AVS: No‡ / No‡ / No‡; TrueCopy: No‡ / No‡ / No‡; SRDF: No‡ / No‡,§§§ / No‡
S10U2 or later, SPARC: AVS: Yes / Yes / Yes (V4.1); TrueCopy: Yes / No††† / Yes (V4.1); SRDF: Yes / No††† / Yes (V4.1)
S10U2 or later, x64: AVS: Yes / Yes / Yes (V4.1); TrueCopy: No§§ / No††† / No§§; SRDF: No§§§ / No†††,§§§ / No§§§

Odyssey R2.1 (“Athena”): SCGE 3.2 with SC 3.2

S8, SPARC and x64: No**
S9u8 or later, SPARC: AVS: Yes*** / Yes*** / Yes***; TrueCopy: Yes*** / No††† / Yes***; SRDF: Yes / No††† / Yes (V5.0)
S9u8 or later, x64: No‡
S10u3 or later, SPARC: AVS: Yes / Yes / Yes (V5.0); TrueCopy: Yes / No††† / Yes (V4.1); SRDF: Yes / No††† / Yes (V4.1)
S10u3 or later, x64: AVS: Yes / Yes / Yes (V4.1); TrueCopy: Yes / No††† / Yes (V4.1); SRDF: Yes / No††† / Yes (V4.1)

Odyssey R2.2 (“Helen”): SCGE 3.2 2/08 (3.2u1) with SC 3.2 2/08 (3.2u1)

S8, SPARC and x64: No**
S9u8 or later, SPARC: AVS: Yes*** / Yes*** / Yes***; TrueCopy: Yes*** / No††† / Yes***; SRDF: Yes / No††† / Yes (V5.0)
S9u8 or later, x64: No‡
S10u3 or later, SPARC: AVS: Yes / Yes / Yes (V5.0); TrueCopy: Yes / No††† / Yes (V5.0); SRDF: Yes / No††† / Yes (V5.0)
S10u3 or later, x64: AVS: Yes / Yes / Yes (V5.0); TrueCopy: Yes / No††† / Yes (V5.0); SRDF: Yes / No††† / Yes (V5.0)

This matrix shows the supported combinations for each release of Sun Cluster Geographic Edition. Superscript marks refer to explanatory notes below. It is assumed that each Solaris release also has the latest patch releases required by the underlying Sun Cluster installation, unless notes are given to the contrary. The full details of testing can be found at the (internal) URLs listed under Test documents below.

This is a current matrix, including qualifications carried out after a given version was released. The support status of components not specifically referred to here (e.g. UFS, VxFS) should be determined by reference to standard Sun Cluster. Note that references to volume managers below are to single-owner versions (i.e. not CVM or Oban). Multi-owner volume manager support is addressed in the Oracle configuration matrix.

Test documents:
http://haweb.sfbay/dsqa/projects/odyssey/r1/
http://galileo.sfbay/scq/odyssey/athena/
http://galileo.sfbay/scq/odyssey/post_scgeo32_quals/

* When using SCGE 3.1 8/05 with Cacao 1.1 (as shipped in Java ES 4), patch 122783-03 or later must be installed.

† AVS 3.2.1 required for Solaris 8 and 9; AVS 4.0 or later required for Solaris 10.

‡ SCGE x64 support is only available with Solaris 10.

§ AVS was not available for Solaris 10 at this time.

** Solaris 8 is not supported with Sun Cluster 3.2, nor with SCGE 3.2.

†† On Solaris 8, references to SVM should be taken as referring to Solstice DiskSuite (SDS).

‡‡ Tested on Solaris 9, extrapolated to S8.

§§ Not tested.

*** Not tested; extrapolated from testing on a previous release.

††† CRs 6216278 (SVM) and 5070680 (SCGE) must be addressed first. Work is in progress.

‡‡‡ SRDF support was added for SCGE 3.1 2006Q4, for S9 and S10 only.

§§§ SRDF software was not available for Solaris on x86 or x64 platforms for this release.


TABLE B-2 Test and support matrix for SCGE and Oracle RAC, showing tested and supported configurations per release

Columns show the Oracle version and configuration, per volume manager*: 9i RAC (HW Raid / SVM/Oban / VxVM/CVM) and 10g RAC (HW Raid / SVM/Oban / VxVM/CVM). For SCGE 3.2 and later, the second group covers 10g/11g RAC‡‡‡.

R1: SCGE 3.1u4 8/05

S8, AVS 3.2.1, SPARC: No†
S8, TrueCopy, SPARC: 9i: Yes‡ / No‡‡ / No§§; 10g: No§§ / No§§§ / No§§
S9, AVS 3.2.1, SPARC: No†
S9, TrueCopy, SPARC: 9i: Yes‡ / No‡‡ / No§§; 10g: No§§ / No§§§ / No§§
S10, TrueCopy, SPARC: 9i: Yes / No‡‡ / No§§; 10g: No§§ / No§§§ / No§§

R2: SCGE 3.1u4 2006Q4

S8, AVS 3.2.1, SPARC: No§
S8, TrueCopy, SPARC: 9i: Yes‡ / No‡‡ / -; 10g: Yes‡ / No‡‡ / -
S8, SRDF, SPARC: No§§
S9, AVS 3.2.1, SPARC: No†
S9, TrueCopy, SPARC: 9i: Yes‡ / No‡‡ / Yes; 10g: No§§ / No‡‡ / No§§
S9, SRDF, SPARC: 9i: No§§ / No‡‡ / No§§; 10g: No§§ / No‡‡ / No§§
S10, AVS 4.0, SPARC: No†
S10, AVS 4.0, x64: 9i: No† / No† / No†; 10g: No† / No† / No†,****
S10, TrueCopy, SPARC: 9i: Yes / No‡‡ / Yes; 10g: No§§ / No‡‡ / No§§
S10, TrueCopy, x64: 9i: No** / No**,‡‡ / No**; 10g: No§§ / No‡‡ / No****
S10, SRDF, SPARC: 9i: No§§ / No‡‡ / No§§; 10g: No§§ / No‡‡ / No§§
S10, SRDF, x64: 9i: No**,†† / No**,‡‡ / No**,††; 10g: No†† / No††,‡‡ / No††

SCGE 3.2

S9, AVS 3.2.1, SPARC: No†
S9, TrueCopy, SPARC: 9i: Yes‡ / No‡‡ / Yes‡,***,†††; 10g/11g: Yes‡ / No‡‡ / Yes***,†††
S9, SRDF, SPARC: 9i: No§§ / No‡‡ / Yes†††; 10g/11g: Yes‡ / No‡‡ / Yes†††,††††
S10, AVS 4.0, SPARC: No†
S10, AVS 4.0, x64: 9i: No†,** / No†,** / No†,**; 10g/11g: No† / No† / No†
S10, TrueCopy, SPARC: 9i: Yes‡ / No‡‡ / Yes‡,***,†††; 10g/11g: Yes‡ / No‡‡ / Yes‡,***,†††
S10, TrueCopy, x64: 9i: No** / No**,‡‡ / No**; 10g/11g: Yes‡ / No‡‡ / Yes§
S10, SRDF, SPARC: 9i: No§§ / No‡‡ / Yes‡,†††; 10g/11g: Yes‡ / No‡‡ / Yes†††,††††
S10, SRDF, x64: 9i: No**,†† / No**,††,‡‡ / No**,††; 10g/11g: No††,§§ / No††,‡‡ / No§,††

SCGE 3.2U1

S9, AVS 3.2.1, SPARC: No†
S9, TrueCopy, SPARC: 9i: Yes‡ / No‡‡ / Yes; 10g/11g: Yes / No‡‡ / Yes
S9, SRDF, SPARC: 9i: Yes‡ / No‡‡ / Yes; 10g/11g: Yes / No‡‡ / Yes
S10, AVS 4.0, SPARC: No†
S10, AVS 4.0, x64: 9i: No†,** / No†,** / No†,**; 10g/11g: No† / No† / No†
S10, TrueCopy, SPARC: 9i: Yes / No‡‡ / Yes; 10g/11g: Yes / No‡‡ / Yes
S10, TrueCopy, x64: 9i: Yes / No**,‡‡ / No§,**; 10g/11g: Yes / No‡‡ / No§
S10, SRDF, SPARC: 9i: Yes / No‡‡ / Yes; 10g/11g: Yes / No‡‡ / Yes
S10, SRDF, x64: 9i: Yes / No**,‡‡ / No§,**; 10g/11g: Yes / No‡‡ / Yes§

This matrix shows the supported combinations for Oracle RAC and various types of data replication technology, for each release of Solaris Cluster Geographic Edition (SCGE). Superscript marks refer to explanatory notes below. It is assumed that each Solaris release also has the latest patch releases required by the underlying Sun Cluster installation, unless notes are given to the contrary. The full details of testing can be found at the (internal) URLs in the Test documents section above. Cells marked "-" were not legible in the source matrix.

“HW Raid” means that no volume manager was used, “SVM/Oban” means the Sun Cluster Volume Manager, and “VxVM/CVM” means the Veritas Cluster Volume Manager.

This is a current, evolving matrix, including qualifications carried out after a given version was released.

HA-Oracle: note that this table no longer calls out HA-Oracle as a separate entity. SCGE support for HA-Oracle is the same as that provided by the underlying Solaris Cluster release.

* ASM support is limited at present, for technical reasons.

† The use of AVS replication with Oracle RAC is not technically possible.

‡ Extrapolated from tests on a compatible release.

§ CVM is not yet supported on Solaris x86.

** Oracle 9i was not released for Solaris x86.

†† SRDF software was not available with SCGE for Solaris on x86 or x64 platforms for this release.

‡‡ CRs 6216268 (SVM), 6325951 (Oban) and 5032363 (SCGE) must be addressed first.

§§ Not yet tested, by project decision.

*** Requires SCGE TrueCopy patch 126613-01 or later.

††† Limited support, requires special configuration. Obtain prior review/approval of the configuration by the SCGE team before making a commitment.

‡‡‡ 11g support is the same as 10g, presuming corresponding support by the underlying core Sun Cluster.

§§§ CRs 6216268 (SVM) and 5070680 (SCGE) must be addressed first. Work is in progress.

**** VxVM on x64 is not supported by SC3.1u4.

†††† Requires SCGE SRDF patch 126746-01 or later.


APPENDIX C

Third-Party Agents

All the agents mentioned in “Application Services” on page 222 are developed, sold, and supported by the Sun Cluster Business Unit. A variety of agents have been/are being developed by third party organizations - other business units in Sun, and ISVs. These agents are sold and supported by the respective third party organizations. The table below lists the agents which Sun Cluster product marketing is aware of:

The application versions listed in this table may not be up to date. Please contact the person named in the Contact column of the table for the latest information on these agents:

TABLE C-1 Third Party Agents

Application Contact

iPlanet Mail/Messaging Server 5.1 Email: [email protected] Phone: x15213/+1 408 276 5213

IBM DB2 7.2 (EE, EEE) Email: [email protected]

IBM IDS/IIF 9.21, 9.3 (HA Informix) Tom Bauch Email: [email protected] Phone: 972-561-7954

HA SBU 6.1 (Agent is bundled with the SBU product) Dennis Henderson Phone: 1-510-936-2260/x12260 Email: [email protected]

HA-iCS 5.1 Cheryl Alderese Phone: x34240/+1 408 276 4240 Email: [email protected]

Sybase ASE 12.5 (active-passive) NOTE: There are two Sybase agents: one sold by Sun, the other sold by Sybase. This table refers to the agent sold by Sybase.

Rick Linden Email: [email protected]


APPENDIX D

Revision History

11/21/00
■ First draft created.

12/22/00
■ HA LDAP 4.12 + Solaris 8

02/13/01
■ Support for E420R
■ CVM

03/30/01
■ T3 single brick + E3x00-E6x00, E10K
■ A3500FC + E3x00-E6x00, E10K
■ Solaris 8 Update 2
■ Solaris 8 Update 3
■ E3500-E6500/E10000 + A3500FC (using hubs)
■ VxVM 3.1 (including CVM functionality)
■ HA Oracle 8.1.7 32bit
■ VxVM/SDS with SDS root mirror

04/17/2001
■ Added support for Serengeti-12/12i/24 with T3 single brick configs

05/07/2001
■ HA Oracle 8.1.6 64bit
■ Solaris 8 U4
■ SunMC 3.0 support
■ Changed the verbiage for Sun Cluster 3.0 server licensing
■ Sample configs for Serengeti 12/12i/24 cluster

06/12/01
■ T3 single brick + 220/420/250/450
■ Switch + 250/450/220R/420R/4800/4810/6800

7/10/2001
■ VxVM 3.1.1
■ Oracle 9iRAC (OPS, 32bit) + VxVM 3.0.4
■ Oracle Parallel Server 8.1.7 32bit + VxVM 3.1.1


■ OPS/RAC support on Sun Fire 4800/4810/6800 servers

■ Gigabit Ethernet as Public Network Interface.

■ Sun Fire 4800/4810/6800 8 node, mixed cluster, and SVM support

07/23/01
■ SunPlex Manager
■ Solaris Resource Manager 1.2 coexistence
■ HA Sybase Agent
■ HA SAP Agent
■ Sun Fire(TM) 280R server support
■ Sun Fire 3800 server support
■ Netra t1 200
■ Netra t 1400/1405
■ Netra t 1120/1125

08/01/01
■ Fixed the VxVM license in sample configs
■ Solaris 8 7/01
■ HA Informix v9.21
■ T3PP + E220R/E420R/E250/E450

08/21/01
■ SE 99x0 + E450/E3500-6500

08/29/01
■ Changed SVM to SDS
■ Oracle 9iRAC (OPS) 32 bit + VxVM 3.1.1 (using cluster functionality)
■ HA SAP 4.6D 64 bit
■ HA SAP 4.5B 32 bit
■ HA SAP 4.0 32 bit
■ LDAP 4.13
■ Sun StorEdge 4800/4810/6800 + T3PP

9/11/01
■ Purple2 support
■ 280R + Purple1 partner pair
■ 3800 + Purple1 partner pair
■ >2 node 280R configs
■ Added Crystal+ support
■ Netra t 1400/1405 + Netra st D130 + VxVM 3.1.1

9/26/01
■ Clarified statement around E1 expander support
■ Added II/SNDR 3.0 support
■ Netra 1400/1405 + S1
■ Netra AC200/DC200 + S1
■ F15K + Purple2

10/01/01
■ Clarified statement around 2 node OPS/RAC support
■ HA Oracle 8.1.7 64 bit
■ HA Oracle 9i 32 bit
■ Weakened the swap requirement to a recommendation
■ Removed the two node limit for E250/450/220R/420R + T3 single bricks
■ Added a table for maximum cluster nodes

10/16/01
■ Solaris 8 Update 6 support
■ Netra 20 + D1000
■ Netra 20 + S1
■ HA Netbackup 3.4, 3.4.1

10/29/01
■ Sun Fire V880 + D1000/A5200/T3
■ Scalable Broadvision

11/13/01
■ HA Informix v9.21 to be sold and supported by Informix. Contact: Hans Juergen Krueger, [email protected], 1-650-926-1061
■ Oracle 9i RAC 64bit
■ Updated information about webdesk
■ Updated information about SCOPE
■ Cleaned up the placement of some of the storage information


12/04/01
■ Sun Cluster 3.0 U2
■ PCI-SCI + E3500-6500

01/08/02
■ Indy DAS
■ OPFS 8i 32bit
■ Made MPxIO support information more explicit
■ PCI/SCI + E250/450
■ Added Sun Cluster 3.0 12/01

01/29/02
■ Revision history added
■ >2 node support for V880
■ >2 node support for SF3800
■ F15K and 1034A public network interface
■ Netra T1 + Netra st D1000
■ 250/220R/420R + FCI 1063 + SE 99x0 direct attached
■ F4800-6800 + 6799/6727 + SE 99x0 direct attached
■ E10K + FC641063 + SE 99x0 direct attached
■ F15K + 6799/6727 + SE 99x0 direct attached
■ V880 + 1063/6799/6727 + SE 99x0 direct attached
■ V880 + 1063 + Brocade 2800 (F) + SE 99x0
■ F3800 + 6748 + SE 99x0 direct attached
■ E250/450/220R/420R + FCI 1063 + Brocade 2800 (F) + SE 99x0
■ E3500-6500 + FC641063 + Brocade 2800 (F) + SE 99x0
■ E10K + FC641063 + Brocade 2800 (F) + SE 99x0
■ F4800-6800 + 6727/6799 + Brocade 2800 (QL only) + SE 99x0
■ F4800-6800 + FCI 1063 + Brocade 2800 (F) + SE 99x0
■ F15K + 6727/6799 + Brocade 2800 (QL only) + SE 99x0
■ F15K + 1063 + Brocade 2800 (F) + SE 99x0
■ Quorum support on T3PP/SE 99x0/SE39x0
■ F15K + F4800-6800 + SE 99x0 - mixed family config

02/12/02
■ Added a section on campus cluster configurations
■ E3500-6500, 10K (SBus only) + A5x00/T3A/T3B (single brick and partner pair) + 6757A
■ Onboard GBE port for public interface and cluster interconnect for V880
■ Scalable SAP 4.6D 32 bit (same agent as HA-SAP)
■ HA-iDS 5.1
■ HA-iCS 5.1 - The HA-iCS agent will be sold and supported by the iCS group. Contact Cheryl Alderese, [email protected], for details.
■ Updated the part numbers for Sun Cluster user documentation
■ Updated the contact address for the Informix agent
■ Added 5-meter fiber optic cable support to the T3 and A5x00 sections
■ Clarified statement around use of the PCI I/O board for SCI-PCI in E3500-6500

02/28/02
■ TrueCopy support
■ Solaris 8 02/02 support
■ Build F15K and F6800 in the same family
■ A1000 support with E250/450/220R/420R/280R/V880/3500-6500
■ Netra 1400/1405, 1120/1125, 20 + Netra st A1000
■ Campus cluster support for 220R/420R/250/450/280R/V880/3800 + T3A/T3B (single brick and partner pair)

03/15/02
■ Dynamic reconfiguration (DR) support for Sun Fire 3800-6800
■ 1034A as private interconnect with Sun Fire 15K
■ SDS 4.2.1 supported with SE 99x0 arrays
■ Soft Partitioning now supported with SDS 4.2.1
■ SE39x0 + V880, F15K, E3500-6500, E10K
■ Sun Fire 15K + T3A


■ iDS 4.16

03/21/02
■ SE6910/6960 + 250/450/220R/420R/3500-6500/10K/3800/4800-6800/15K
■ 280R + 6799/6727A direct attached
■ 280R + 6799/6727A + Brocade 2802
■ 3800 + 6748A + Brocade 2802
■ MPxIO support with SE 99x0 arrays for all the combinations where 6799/6727/6748 is used. VxVM 3.2 is required for MPxIO support.

04/09/02
■ Sun Fire 12K support
■ DR support with Sun Fire 12K/15K
■ Added Solaris 8 U1 support
■ Relaxed Sun Cluster and Solaris updates support matrix
■ Sybase 12.5 (active-active) - sold and supported by Sybase
■ Oracle 9i RAC Guard 32 bit

04/23/02
■ OPS/RAC support for campus clusters
■ 4 node OPS/RAC with T3WG
■ 8 node support for Sun Fire 12K/15K
■ HA-SAP 6.10
■ F15K + A5200

04/25/02
■ Corrected statement of support for Sybase ASE 12.5

05/07/02
■ Sun StorEdge 9970/9980 support
■ T3 FW 2.1 support
■ 4 node OPS with SE 9960/9910
■ Ivory + SE 9960/9910
■ E3000-6000 support
■ SE 9900 ShadowImage, Graphtrack, LUN Manager
■ E3x00-6x00, 10K server family consolidation
■ 280R and Netra 20 server family consolidation
■ Netra 1120/1125, 1400/1405, Sun Enterprise 220R/420R/250/450 server family consolidation
■ A5100 support with V880
■ Added support for Fabric mode with SE 99x0, Brocade switches and Sun 1Gb HBAs

05/21/02
■ SE 9960/9910 + SVM + MPxIO
■ Campus cluster with SE 9960/9910 with Brocade switches
■ Netra 20 + T3A/B WG + Crystal+ + hubs
■ Single CPU clusters
■ Sun Cluster 3.0 5/02
■ HA Oracle 9i 64 bit
■ Oracle 9i RAC Guard 64 bit
■ Solaris 9 support for DNS, NFS, Apache 1.3.9, iDS 5.1

06/04/02
■ 280R and V880 support in the same family
■ 2222A + 12K/15K for public and private network
■ Jasper + S1 support
■ Jasper + D2 support
■ 4 node OPS with T3PP w/o CVM
■ Indy 1.0+ support
■ SAN 4.0 support
■ Oracle 9i (R1) RAC Guard 64 bit
■ HA-Oracle 9iR2 32/64 bit
■ Oracle 9iR2 RAC 32 bit
■ HA-Apache 2.0
■ Scalable iPlanet Web Server 6.0

06/18/02
■ Clarified HA/scalable app support with N*N topology
■ HA Sybase 12.0 64 bit
■ Oracle 9iR2 RACG 32 bit
■ V480 support with Sun Cluster 3.0
■ PCI SCI support with 220R, 420R


■ E220R/E420R/E250/E450 + 6799/6727A + SE 9910/9960 + Brocade 2800 switch
■ Support of Sun 1Gb 8/16 port switches with 9960/9910
■ 2222A support on Sun Fire 4800, 4810, 6800 for cluster interconnect and public network interface
■ 4 node OPS/RAC support with T3PP with VxVM 3.2 cluster functionality
■ 4 node OPS/RAC support with SE3900 with and w/o VxVM 3.2 cluster functionality
■ Indy 1.5 support for SE3900 series systems
■ Brocade 12000, 3800, and McDATA 6064 support with SE 9960/9910

07/23/02
■ ATM as public network interface
■ Support of E10K and F15K in the same family for SE 9900 series storage systems

08/06/02
■ PCI-SCI with Sun Fire 4800, 6800
■ Heterogeneous node configurations
■ 2222A + S1 on remaining platforms
■ Availability Suite 3.1 with Sun Cluster 3.0 5/02 (or later) + Solaris 8
■ 8 node N+1 configurations

08/20/02
■ Cassini 1261a, 1150a, 1151a support
■ Oracle 9iR2 RAC 64 bit
■ Oracle 9iR2 RACG 64 bit
■ HA-LiveCache 7.4
■ HA-Siebel 7.0

09/10/02
■ 4 node OPS/RAC supported with SE 9970/9980
■ Netra server line VxVM support standardized (identical to all other supported servers with Sun Cluster 3.0)
■ SE A5200 support for V480
■ Support for 2Gb HBAs (6767A, 6768A) and Brocade 3800 switch with SE T3 ES, SE 39x0

10/01/02
■ NWS SAN 4.0 support reflected in storage configuration section
■ Supported private and public interconnects revised (expanded x1150, 1151, 2222 and additional card support)
■ Additional campus cluster features included in campus cluster appendix
■ Network Support section revised to present supported configurations in an easier-to-use matrix fashion

10/15/02
■ Added SE 3310 support
■ Added diskless cluster configuration support
■ Added support for SAP LiveCache 4.6D and Apache 1.3.19

10/29/02
■ Revised the topology support section to reflect the relaxed topology restrictions
■ Added the WDM-based campus cluster configurations section
■ Added the "hot-plug" functionality section to the Campus Cluster section

11/12/02
■ Added Sun Fire V120 support
■ Added Enterprise 10K PCI SCI support (1074a)
■ Added SANtinel and LUSE to the SE 9900 series software support sections
■ Updated Agents and Third-Party Agents section
■ Fixed several typographical errors within several sections

12/03/02
■ Added McData 6064 1Gb switch support for 9910/9960
■ Added SunOne Proxy Server 3.6 support


1/14/03
■ PCI SCI (1074a) support for SF 280R, V480, V880
■ Added McData 6064 2Gb switch support for the 9910/9960/9970/9980
■ Added Netra 120 support
■ Added VLAN support
■ Added A1000 daisy chaining support
■ Added SunOne Web Server 6.1 agent support

1/28/03
■ Added V1280 support
■ Added SDLM support
■ Added non-support statement for multipathing to the local disks of a SF V480/V880
■ Added Sun Fire Link support for 6800
■ Added WDM support for V280, 480, 880
■ Added WDM support for OPS/RAC (removed the RAC/OPS restriction)
■ Added 6768 HBA support for SF 6800/SE 9980
■ Added HA-Siebel 7.5 Sun Cluster 3.0 U3 support
■ Revised SE 3310 sections

2/11/03
■ VLAN phase 2 (switch trunking) enabled
■ Slot 1 DR support added
■ Added 6757 McData 6064 support with 9980/E10K
■ Added HA IBM WebSphere MQ agent support
■ Added HA IBM WebSphere MQ Integrator agent support
■ Added HA Samba agent support
■ Added HA DHCP support
■ Added HA NetBackup 3.4 agent support for Solaris 9
■ Revised A5x00 and SE 3310 storage sections
■ Revised agents, server support, and interconnect support sections

2/25/03
■ Added Sun Netra 1280 support
■ Added Brocade 6400 switch support
■ Added SE 69x0 campus cluster support

3/11/03
■ Added Brocade 12000 switch support
■ Added SF V480 McData 6064 (1 & 2 Gb) support with SE 9970/9980
■ Revised Storage Support, Interconnects and Data Configuration sections

4/1/03
■ Added SE 6120 support
■ Added 4 node Sun Fire Link support
■ Added E450 S1 storage support
■ Single dual-controller, split-bus SE 3310 JBOD configuration support removed
■ Revised Storage Support and Interconnects sections
■ Added SAP 6.20 support
■ Added support for RAC on GFS

4/15/03
■ Added SE 2Gb FC 64 port switch support
■ Expanded Brocade 3800 support to SBus systems with T3s/39x0
■ Expanded SE 9970/9980 support for E420
■ Revised several sections

5/6/03
■ Added SE 6320 support
■ Added Sol 8 12K/15K SCI support
■ Added 12K/15K Sol 9 DR Slot 1 support
■ Added RSM support with RAC
■ Revised interconnect and storage sections

5/20/03
■ Added Sun Cluster 3.1. All sections were "generified" to Sun Cluster 3 (unless otherwise specified)
■ Added SF V210/V240 support
■ Added additional SE 6320 support
■ Added Sol 9 12K/15K SCI support


■ Revised topologies, interconnects, storage, data services, ordering and all other sections affected by Sun Cluster 3.1 updates

6/03/03
■ Added support for SE 3510 RAID
■ Added support for McData 4500 switch
■ Added support for Brocade 3200 switch
■ Added campus cluster support for VLANs
■ Expanded SDLM (HDLM) support for Solaris 9 on Sun Cluster 3.0 only

6/17/03
■ Added support for Sun Cluster 3.1 and V240 (Sol 8 and Sol 9)
■ Added support for Sun Cluster 3.0 and V240 with Sol 9
■ Added support for SE 3510 with V240
■ Revised 3510 RAID switches supported (added Sun 64 port 2Gb switch)
■ Added SF/Netra 1280 memory/CPU DR support
■ Added additional Sun Cluster 3.0/3.1 Samba support
■ Revised Sun Cluster 3.1 Solaris support table and Volume Manager support table
■ Revised storage support section

7/15/03
■ Added support for Brocade 3900 and McData 6140 switches
■ Revised storage/switch support
■ Logical volume unsupported on SE 3510
■ Sun Fire Link (Wildcat) supported in DLPI mode for SF 12K/15K
■ WebSphere MQ and MQ Integrator supported in Sol 9 Sun Cluster 3.x versions

7/29/03
■ Expanded support for the SE 3310 RAID/JBOD with SF 4800-6800, 12K/15K
■ Added 8 node RAC 9.2.0.3 with SE 99x0 support
■ Added 8 node N*N support for SE 99x0 storage
■ Revised the storage node limitations to clearly define SCSI/fibre/99x0 node connectivity limits
■ Revised agents section

8/19/03
■ Added expanded campus cluster support phase 1: additional campus cluster switch support
■ Added SF 3800/Brocade 3900/SE 6x20 support
■ Added Sol 9 support for AVS and Sun Cluster 3.1
■ Revised storage, agents and software sections

9/2/03
■ Added SF V240/SE 6x20 support
■ Added Sol 9 8/03 support for Sun Cluster 3.0 and Sun Cluster 3.1
■ Added NBU 4.5 support for Sun Cluster 3.0 5/02
■ Added "maximum node" columns to all storage arrays
■ Revised VxVM CVM license numbers for Sun Cluster 3.0

9/16/03
■ Added SF V440 support
■ Added SF V250 support
■ Added second source HBA support (JNI)
■ Added new 6767/6768 HBA part numbers
■ Added McData 4300 switch support
■ Revised storage section

9/30/03
■ Added SAN Support section to storage support configuration
■ Added SF V240, McData 4500 support for SE 99x0
■ Revised Storage Support section
■ ShadowImage CCI device support clarified on SE 99x0


10/14/03
■ Added D1000/A1000 to SF V440
■ TrueCopy CCI device support clarified on SE 99x0

10/28/03
■ Sun Cluster 3.1 10/03 announced/added
■ Added SCI 1074a card support to SF V440
■ TrueCopy CCI device support clarified on SE 99x0
■ Added agents for Tomcat, MySQL, Oracle E-Business Suite, SWIFTAlliance; Sun Cluster 3.1 NBU 4.5 support

11/11/03
■ Added MPxIO boot support
■ Removed >1 initiator per channel restriction on SE 3510
■ Expanded Leadville-based HBA support
■ Added BEA 8.1 agent support

12/2/03
■ Added Netra 240 AC/DC
■ Added 64 LUN 6120/6320
■ Added EBS 7.1 support

1/13/04
■ Expanded support for campus cluster storage devices (SE 3510, SE 6x20)
■ Updated support for RAC on a FS
■ Expanded SF 440 storage support
■ Solaris 9 12/03 supported
■ NBU 5.0 supported with Sun Cluster 3.0 U3

2/10/04
■ Sun Fire Enterprise 4900/6900 added
■ TrueCopy campus cluster manual support added
■ Onboard port campus cluster support added
■ SE 3310 JBOD split bus re-enabled
■ SF V440/SE 39x0 support added
■ SE 3310/x2222 HBA support added
■ Netra 240 AC support added
■ NBU 5.0 support added
■ Added SCI promo info
■ Changed Informix contact
■ Cleaned up several tables
■ Fixed PSR info

2/24/04
■ X4422 (Cauldron S) support added
■ X4444 (quad-gigabit) support added
■ Mixed speed NAFO configs supported
■ Netra 240/SE 99x0/JNI support added
■ Modified SE 3510 section
■ Revised Solaris support section

3/9/04
■ Expanded JNI/second source support for 6120/6320
■ Documented hardware RAID 1 support restriction for SF V440 internal disks
■ Revised storage section HBA/storage support for 6120, 6320, 99x0, 69x0
■ Revised supported SAN switch listing (Brocade 3200)

4/6/04
■ Sun Fire Enterprise 2900 support
■ SE 3510 with Sun branded JNI supported
■ SE 3120 JBOD supported
■ SE 3510 RAID restriction removed
■ SE 3310 JBOD with V440 SCSI supported
■ Mirroring between different types of storage arrays supported

4/27/04
■ Sun Fire Enterprise 20/25 support
■ Solaris 9 4/04 support for Sun Cluster 3.0 and 3.1
■ Inclusion of simplified hardware procedures in documentation
■ Revised support for Sun Fire V240 and Netra 240 to include Netra st D1000
■ Revised support for Sun Fire V240 to include X4422A as cluster interconnect


5/11/04
■ Sun Netra 440 DC
■ hsPCI+ for 12K/15K
■ Single SE 3120 JBOD split bus
■ SE 3510 8 array expansion
■ Sun Cluster Open Storage
■ HA-Oracle agent for Oracle 10G on Sun Cluster 3.0

6/1/04
■ Expanded campus cluster support including McData 4500
■ HA-Oracle agent for Oracle 10G on Sun Cluster 3.1
■ SAP DB agent support (SPARC)
■ App Server J2EE support (SPARC)
■ 8 node support for SE 6120/6130
■ x86 support matrix addendum

6/15/04
■ Support for SE 6920

7/13/04
■ Support for SE 3511 RAID
■ Support for SE 320
■ Support for Brocade 3250, 3850 and 24000 switches
■ Support for SE 3310 with V440/Netra 440 onboard SCSI
■ EMC Symmetrix DMX, 8000; EMC CLARiiON CX300, CX400, CX500, CX600 and CX700

8/03/04
■ Support for 3510 and 3511 RAID arrays with eight nodes connected to a LUN
■ Support for Netra 440 with the X4422A (Cauldron S), SG-XPCI1FC-QF2, SG-XPCI2FC-QF2 and X4444A cards

8/17/04
■ Support for Netra 440 AC with X3151A card
■ Support for Sun Fire V40z with SE 3310 RAID and X4422A (Cauldron S) HBA

8/31/04
■ Support for Netra 440 X6799 and X6541

9/14/04
■ Support for Sun StorEdge 9990
■ Support for Sun Fire V490/890
■ Support for X4444A card with Sun Fire 20/25K

10/05/04
■ Support for Sun LW8-QFE card

10/19/04
■ Support for SE 6130
■ Support for 4 card SCI without DR

11/02/04
■ Support for Oracle 10G RAC on Solaris SPARC
■ XMITS PCI I/O boards for Serengeti class systems with Sun Cluster

11/16/04
■ Sun Cluster 3.1 9/04

12/07/04

1/11/05
■ Jumbo frames support

2/01/05
■ 10G RAC with SVM cluster functionality

3/08/05
■ Support for Netra 440 and Jasper 320
■ Support for QLogic 5200 switch

4/05/05
■ Support for public network VLAN tagging
■ Support for Brocade 4100 FC (SPARC)


■ Support for HA Siebel 7.7
■ Support for Sun 4150A/4151A cards

4/19/05
■ Support for HA Sybase 12.5 agent
■ Updates including AC/DC support clarification in 3510/3511 and 3310/3311

5/03/05
■ Support for SE 6920 v3.0.0 (Unity 3.0)
■ Support for Oracle 10G with shared QFS

5/17/05
■ Support for NEC iStorage
■ Miscellaneous updates

6/7/05
■ Support for 3310/3120 JBOD
■ Support for X4444A
■ Support for SG-XPCI2SCSI-LM320 (Jasper 320)
■ Support for Sybase ASE 12.5.1 (SPARC)

7/12/05
■ Support for Sun 5544A card (SPARC)
■ Support for Sun Emulex (Rainbow) cards SG-XPCI1FC-EM2 and SG-XPCI2FC-EM2 (SPARC)
■ Support for Sun Fire V440 onboard HW RAID
■ Support for SE 9990 with HDLM 5.4

7/26/05
■ Support for Sun 4150/4151A card on Solaris x86
■ Support for ShadowImage and TrueCopy with SE 9990
■ Support for Sun Fire V40z onboard HW RAID
■ Support for Sun Fire V40z dual core processors

8/23/05
■ Support for SE 9985 with Sun Cluster
■ Sun Cluster 3.1 8/05 update

9/13/05
■ Support for Jasper 320 with 3310 RAID and V40z
■ Panther processor support
■ HA-Oracle 10G on Solaris 9 x86
■ Miscellaneous updates

9/27/05
■ Support for Brocade 200E and 48000
■ Support for 3310 RAID and V40z with SG-XPCI1SCSI-LM320
■ Panther processor support for E2900, 4900 and 6900
■ Support for Sybase 12.5.2 and 12.5.3

10/11/05
■ Support for AVS 3.2.1
■ Support for SE 3320
■ Panther processor support for E20K and E25K
■ Misc. updates and corrections

11/11/05
■ Galaxy servers
■ Fibre Channel storage for x64
■ Support for X4445A NIC
■ Support for 3320 on x64
■ Support for InfiniBand on x64
■ Support for HA Oracle 10gR1 on x64
■ Corrections on agents
■ Misc. updates and corrections


12/10/05

1/10/06
■ Support for T2000 server

1/24/06
■ Support for 6920 with x64 clusters
■ Updated version support for MySQL and WebSphere MQ agents
■ Support for single dual-port HBA as path to shared storage

2/7/06
■ Updated Oracle E-Business Suite agent support
■ Edited volume manager support information
■ Support for the SE3320 with X4200

2/21/06
■ Support for T1000 server
■ Support for 3511 in campus clusters
■ Added support for Solaris 10 zone failover for MySQL and Apache Tomcat agents
■ Support for SE 6130 with x64 servers
■ Support for four node connectivity with the SE 6920 with x64 servers

4/4/06
■ Oracle RAC 10gR2 for x64
■ 4422A support for Solaris 10 x64
■ Support for McData 4500 and 4700 switches
■ Support for 99x0 with T2000

4/18/06
■ 8 node Oracle RAC support with V40z
■ T2000 support for SCSI storage
■ Support for RoHS NICs
■ Updated storage support for Netra 240
■ Added license part numbers for Sun Cluster Geo Edition
■ Added license part numbers for Sun Cluster Advanced Edition for Oracle RAC

7/11/06
■ StorageTek 6540 Array
■ StorageTek 6140 Array
■ Updates to MySQL agent
■ Sun Blade 8000 Modular System
■ Sun Blade X8400 Server Module
■ Solaris 10 6/06 (Update 2)

10/17/06
■ Support for the Sun Fire X4100 M2, X4200 M2, and X4600 M2 servers

11/21/06
■ Support for the Sun Fire V215, V245, and V445 servers
■ Support for mixed 2Gb/s and 4Gb/s FC cards in SAN attached storage
■ Support for Cisco FC switches

1/09/07
■ Support for the Sun Fire X2100 M2 and X2200 M2 servers

2/06/07
■ Update MySQL agent section
■ Update Samba agent section


■ Support for the Sun Blade X8420 (A4F) Server Module
■ Support of Netra 210 for diskless cluster config
■ Change of config guide ownership from Matt Hamilton to Hamilton Nguyen

3/06/07
■ Update the entire config guide with Sun Cluster 3.2 data
■ Update V210/V240 Server Configuration section
■ Update SE3511 RAID Configuration Rules section
■ Update Private Interconnect Technology Support section
■ Add STK6140 and two additional HBAs to Sun Blade 8000 support matrix
■ Add new Netra X4200 M2 support matrix
■ Add Spec-Based Campus Cluster section
■ Add SAN 4.4.12 note

4/03/07
■ Add SE 9970/9980 and SE 9985/9990 support to X4600 matrix
■ Add note related to InfoDoc #88928 to T2000 section
■ Add Oracle Application Server support to Failover Services for Sun Cluster 3.2 (x64) table
■ Add HA Oracle support to Failover Services for Sun Cluster 3.2 (SPARC and x64) and Failover Services for Sun Cluster 3.1 (x64) tables
■ Add JES Directory Server/JES Messaging Server/NetBackup notes to Failover Services for Sun Cluster 3.1 (SPARC) and Failover Services for Sun Cluster 3.2 (SPARC) tables
■ Add new support of V125
■ Add IB notes/update IB support
■ Add new support of ST2540 (FC)

5/08/07
■ Add Sun Cluster Geographic Edition section
■ Update Spec-Based Campus Cluster section
■ Consolidate various campus cluster entries
■ Update Siebel 7.8.2, SwiftAlliance Access and SwiftAlliance Gateway support in the Sun Cluster 3.1 (SPARC) table
■ Add Cisco 9124, Brocade 5000, QLogic 9100 and 9200 to list of FC switches supported
■ Update 5544A/5544A-4 support with additional servers
■ Add Sun NAS 53xx note
■ Add Minnow firmware note
■ Update QFS and Oracle RAC tables (x64 and SPARC)
■ Add new Sun Blade 8000 P support matrix
■ Add Sun SPARC Enterprise M4000, M5000, M8000 and M9000 support

6/05/07
■ Add Sun Blade T6300 support
■ Add StorageTek 6540 support with x64 servers
■ Add External I/O Expansion Unit for Sun SPARC Enterprise Mx000 servers
■ Add Apache Tomcat 6.0 support
■ Add/update AVS support including AVS 4.0
■ Add SAP support to Failover Services for Sun Cluster 3.2 (x64)
■ Update SAP with agent support in zones in Failover Services for Sun Cluster 3.2 (SPARC)
■ Update SwiftAlliance Access and Gateway sections with Solaris 10 11/06 support


7/10/07
■ Add 802.3ad native link aggregation support with public network
■ Add new support of SE 9990V
■ Update Oracle RAC table (Sun Cluster 3.2 SPARC) with additional storage support
■ Update Sun Cluster Geographic Edition and Oracle table with additional config support
■ Update MySQL with incrementally supported versions
■ Update SAP support (Sun Cluster 3.2 x64)
■ Add note to Diskless Cluster section as related to inclusion of Quorum Server
■ Update Andromeda tables with additional hardware support
■ Update V215, V245, V445 and V490 platforms with additional SE 99xx support
■ Update Netra 440 platform with additional storage support
■ Update T1000 platform with SCSI-based storage support
■ Update Cluster Interconnect and Public Network tables with additional NIC support

8/07/07
■ Add CP3010 SPARC Blade for Netra CT900 ATCA Server support
■ Add Solaris 9 support to V215 and V245 platforms
■ Update Campus Clusters chapter
■ Update TrueCopy Support section
■ Add additional PCI-E ExpressModule network interfaces to Cluster Interconnect and Public Network tables
■ Update Supported SAN Software section with release SAN 4.4.13 note
■ Update Siebel 7.8.2 entry in Failover Services for Sun Cluster 3.1 (SPARC) table with Solaris 10 support

9/14/07
■ Add CP3060 SPARC Blade for Netra CT900 ATCA Server support
■ Update CP3010 SPARC Blade for Netra CT900 ATCA Server with SE3510 support
■ Add Solaris 10 Update 4 support with Sun Cluster 3.2
■ Update Guideline for Spec Based Campus Cluster Configurations section with support of HDS as quorum device
■ Update Cluster Interconnect section of Network Configuration chapter
■ Update link aggregation info in IPMP Support sub-section under Public Network section
■ Add configuration rule to SE 99xx sections on mixing FC HBAs that are and are not MPxIO supported
■ Update JES Messaging Server with version 6.3 and JES Directory Server with version 5.2.x in Failover Services for Sun Cluster 3.2 (SPARC) table
■ Update both SwiftAlliance Access and SwiftAlliance Gateway with version 6.0 in Failover Services for Sun Cluster 3.2 (SPARC) table
■ Update N1 Grid Engine 6.1 in Failover Services for Sun Cluster 3.1 (SPARC and x64) and Sun Cluster 3.2 (SPARC and x64) tables
■ Add Sybase ASE support to Failover Services for Sun Cluster 3.2 (x64) table
■ Update Sybase ASE entry in Failover Services for Sun Cluster 3.2 (SPARC) table with non-global zones support

10/09/07
■ Add Sun SPARC Enterprise T5120 and T5220 platform support
■ Add new support of SE 9985V
■ Update Sun Blade T6300 platform with additional HBA support


■ Update X2100 M2 and X2200 M2 servers with SE3120, SE3310 and SE3320 support
■ Update SE3310, SE3320, SE3510 and SE3511 with Minnow 4.21 firmware
■ Update Netra 1290 with ST6140 and ST6540 support
■ Update/add SAP LiveCache 7.6 and SAP MaxDB 7.6 entries in Failover Services for Sun Cluster 3.2 (SPARC & x64) tables
■ Update MySQL version in Failover Services for Sun Cluster 3.1 (SPARC & x64) and Failover Services for Sun Cluster 3.2 (SPARC & x64) tables

11/06/07
■ Add Sun Blade X6220 and X6250 Server Module support
■ Add Sun Blade T6320 Server Module support
■ Add new section to introduce Support for Virtualized OS Environment (LDOM)
■ Update Solaris Container agent for Sun Cluster 3.1 with native and lx brand support
■ Update Guideline for Spec Based Campus Cluster Configurations with support of HDS as quorum device for Sun Cluster 3.1u4
■ Update SE3120 JBOD Support Matrix with E6900 support
■ Update Sun Blade 8000 Support Matrix with X7287A-Z support
■ Update X4100 M2, X4200 M2, Netra X4200 M2, X4600 and X4600 M2 with X4446A-Z support

12/04/07
■ Add Sun Blade X8440 Server Module support
■ Add Sun Fire X4150 and X4450 server support
■ Update ST2540 with M4000, M8000, M9000 and Sun Blade X84xx support
■ Update SE 99xx with Mx000 support
■ Update SE3320 RAID Support Matrix with Netra 1290 support
■ Update Sun Blade T6300 platform with LDOM support
■ Update Mx000 with DR support
■ Add Cisco 9134 and 9222i to list of FC switches supported
■ Update QFS tables with SAM-QFS (shared) 4.6 support
■ Update Samba with incrementally supported versions for both Solaris Cluster 3.1 and 3.2

01/08/08
■ Add ST2530 (SAS) and SAS HBA support
■ Add Sun Blade 6048 chassis support
■ Update Sun Blade 60xx Support Matrix with InfiniBand interconnect (X1288A-Z) and ST 99xx storage support
■ Update Sun SPARC Enterprise T5120 and T5220 platforms with SCSI storage support
■ Add Sybase version 15.0.1 and 15.0.2 support in Failover Services for Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC and x64) tables
■ Add Brocade DCX to list of SAN switches supported
■ Update Mx000 with additional ST 99xx support
■ Update External I/O Expansion Unit for Sun SPARC Enterprise Mx000 servers with additional NICs

02/05/08
■ Update ST2540 with additional server support
■ Update ST2530 (SAS) with additional server and HBA support
■ Update Sun SPARC Enterprise T5120/T5220 with additional ST6540 array support


■ Update volume manager tables with additional S10U4 support
■ Update Netra X4200 M2 with additional ST2540 RAID array support
■ Update Sun Blade 6000/6048/8000 Support Matrix with additional NIC support

03/04/08
■ Add Sun Blade X8450 Server Module support
■ Add Universal Replicator support with SE 9985V/SE 9990V
■ Add ST2530 support with T5120/T5220
■ Update supported SAN software for Sun Cluster on Solaris 9
■ Update SE 9985V/9990V with x64 support
■ Update Siebel agent with additional version 8.0 support in Failover Services for Sun Cluster 3.2 (SPARC)
■ Update Sun Blade X6220 and X6250 Server Modules with SE 9985V/9990V support
■ Update Sun SPARC Enterprise T5120 and T5220 with SE 99xx support

04/01/08
■ Add ST2510 (iSCSI) support
■ Add Sun SPARC Enterprise T5140 and T5240 support
■ Add Sun Fire X4140 and X4240 server support
■ Add Sun Fire X4440 server support
■ Add Sun StorageTek NAS support for any data services with more than 2 nodes
■ Add support of SRDF in a campus cluster configuration
■ Update Sun Cluster Geographic Edition appendix to reflect SCGE 3.2U1 release
■ Update VxVM (on x64 and SPARC) tables to reflect SC3.2U1 release
■ Update Oracle Server with version 11g support in Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC) tables
■ Update Oracle RAC with version 11g support in Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC) tables
■ Update Oracle Application Server with version 10.1.3.1 support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update Oracle E-Business Suite with version 12.0 support in Sun Cluster 3.2 (SPARC) table
■ Add HA Container (lx and Solaris 8 branded) support to Sun Cluster 3.2 (SPARC and x64) tables
■ Update BEA WebLogic Server with version 9.2 support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update JES Application Server with version 9.1EE support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update CP3060 SPARC Blade for Netra CT900 ATCA Server with additional HBA support
■ Update Sun Blade 8000 and 8000 P with additional SE 99xx support
■ Update Sun Fire X4100 M2/X4200 M2, X4450, X4600, X4600 M2 with additional SE 99xx support
■ Update Netra 440, Netra 1280, SF V440, SF V445, SF V480, SF V490 with additional NIC support
■ Update Sun SPARC Enterprise M5000 with ST2540 support
■ Update the maximum number of cluster nodes (x64) from 4 to 8

05/13/08
■ Add S10U5 support with SC3.2
■ Add Brocade 300, 5100 and 5300 switches
■ Add X7285A and X7286A NIC support


■ Add limited Netra T5220 support
■ Update ST2530 support
■ Update Sun Blade T6320 platform with additional SE 99xx support
■ Update T5120/T5220 with SE3120 support
■ Update Sun Blade 6000/6048 and 8000 Support Matrix with X7284A-Z support
■ Update Sun Blade T6300 and T6320 with X7284A-Z support
■ Update PostgreSQL in Failover Services for Sun Cluster 3.1 and 3.2 (SPARC and x64)
■ Update Solaris Container in Failover Services for Sun Cluster 3.2 (SPARC and x64)
■ Update Oracle E-Business Suite in Failover Services for Sun Cluster 3.1 (SPARC)
■ Update JES Web Server in both Failover and Scalable Services for Sun Cluster 3.1 and 3.2 (SPARC and x64)
■ Update SAP in Failover Services for both Sun Cluster 3.1 and 3.2 (SPARC)
■ Update web browsers supported with SunPlex Manager

06/10/08
■ Add Sun Blade X6450 Server Module support
■ Update Sun SPARC Enterprise T5140 and T5240 with SCSI and SE 99xx storage support
■ Update ST2510 (iSCSI) with number of nodes supported from 2 to 4
■ Update Sun Fire X4150 with SE 9970/9980 storage support
■ Update SwiftAlliance Gateway with additional version 6.1 support in Failover Services for both Sun Cluster 3.1 and 3.2 (SPARC)
■ Update SAP in Failover Services for both Sun Cluster 3.1 and 3.2 (SPARC)
■ Add SG-XPCIE2FCGBE-Q-Z HBA/NIC support for Sun Blade 6000, 6048 and 8000 chassis
■ Add SG-XPCIE2FCGBE-Q-Z HBA/NIC support with Sun Blade T63xx Server Modules

07/08/08
■ Add support of LDOM with guest domain
■ Add SG-XPCIE2FCGBE-E-Z HBA/NIC support with Sun Blade 6000, 6048 and 8000 chassis
■ Add SG-XPCIE2FCGBE-E-Z HBA/NIC support with Sun Blade T63xx Server Modules
■ Update Sun Blade X6450 Server Module with additional SE 99xx support
■ Update Sun Fire X4100/X4200 with SE 99xx support
■ Update Sun Fire X4140/X4240/X4440 with SE 99xx support
■ Update Sun Fire X4150/X4450 with additional SE 99xx support
■ Update Netra T2000 with ST2530 and ST2540 support

08/05/08
■ Update Supported SAN Software section with release SAN 4.4.15 note
■ Update Sun Fire T1000 server with additional SE 99xx support
■ Add new section to provide more details on Sun NAS
■ Update MySQL in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional versions
■ Update Solaris Container in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional versions

09/02/08
■ Add Netra X4250 support
■ Add Netra X4450 support


■ Update Sun Blade T6320 with X4236A support
■ Update MySQL in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional version
■ Update External I/O Expansion Unit for Sun SPARC Enterprise Mx000 servers with additional NIC support

10/14/08
■ Add Sun SPARC Enterprise T5440 server support
■ Add Sun Blade T6340 Server Module support
■ Add Sun Fire X4540 server support
■ Update Sun Blade X6220 and X6250 Server Modules with X4236A NEM 10G support
■ Add 4x 8Gb FC PCIe HBA support
■ Update SE3320 JBOD on discontinuation of dual-hosted single-bus support by base product group
■ Update Sun Cluster 3.1 with Solaris 10U5 support
■ Add Informix Dynamic Server V11 support for Sun Cluster 3.2 (SPARC and x64) Failover Services
■ Update JES MQ Server in Failover Services for Sun Cluster 3.1 and 3.2 (SPARC and x64) with version 4.1
■ Update Agfa IMPAX in Failover Services for Sun Cluster 3.2 (SPARC) with version 6.3

11/11/08
■ Add Sun SPARC Enterprise M3000 server support
■ Add Netra T5440 support
■ Add USBRDT-5240 support
■ Update Netra X4200 M2 support
■ Update Netra T5220 support
■ Update Sun Cluster 3.2 and Sun Cluster 3.2U1 with Solaris 10U6 support
■ Update SCGE/Oracle RAC table with VxVM support involving TrueCopy/S10 x86/SCGE 3.2 and SRDF/S10 x86/SCGE 3.2U1
■ Update SWIFTAlliance Access in Failover Services for Sun Cluster 3.2 (SPARC) with version 6.2
■ Update X2100 M2/X2200 M2, X4100 M2/X4200 M2, X4140, X4150, X4240, X4440, X4450, X4600/X4600 M2 with additional HBA support
■ Update External I/O Expansion Unit with T5120, T5140, T5220 and T5240 support
■ Add Brocade 310 switch support

12/09/08
■ Add new J4200 storage support
■ Add new J4400 storage support
■ Add discussion on MTU relationship between public network and cluster interconnect when using scalable services
■ Add QLogic 5802V switch support
■ Update SAP MaxDB in Failover Services for Sun Cluster 3.1 (SPARC) and 3.2 (SPARC and x64) with version 7.7

01/13/09 (not published)
■ Add Sun Fire X4250
■ Update Sun SPARC Enterprise M3000 with additional NICs
■ Update Netra T5220 with storage and additional NICs
■ Update Netra T5440 with additional NICs
■ Update StorageTek 2510, 2530, 2540 info
■ Update Informix Dynamic Server in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional version support
■ Update SAP WAS in Failover Services for Sun Cluster 3.2 (x64) with additional version support


02/10/09
■ Add Solaris Cluster 3.2 1/09 with associated Solaris and agent updates
■ Add Brocade DCX-4S switch
■ Add SG-XPCIE20FC-NEM-Z
■ Transition of Config Guide production from Hamilton Nguyen to Ray Jang

03/10/09
■ Add Sun Storage 6580/6780
■ Add Sun Netra CP3260
■ Update Sun SPARC Enterprise T5140, T5240, T5440 with X1236A-Z

04/14/09
■ Add Sun Blade X6240
■ Add Sun Blade X6440
■ Update Sun StorageTek 9985V/9990V with M3000, T5440, T6340, and X4200
■ Update Sun StorageTek 9985/9990 with Universal Replicator support
■ Update general Sun StorEdge 9900 TrueCopy and Universal Replicator info
■ Update Sun Storage 6580/6780 server support
■ Update WebLogic Server agent with zone nodes support
■ Update Solaris Cluster 3.2 HA for SAP Web Application Server with SAP 7.1 on S10 SPARC
■ Update Apache agent to support all Apache.org 2.2.x versions

05/12/09
■ Add Solaris 10 5/09 for SC 3.1 and SC 3.2, Solaris 10 10/08 for SC 3.1
■ Add Sun Fire X4170, X4270, X4275
■ Add Sun Blade X6270
■ Update Sun Blade X6240 with Barcelona support
■ Update Sun Storage J4200/J4400 with SATA HDD, T5440, X4240, X4250
■ Update Sun StorEdge 9970/9980 with M3000, T5440, T6340, X6240, X6440
■ Update Sun StorEdge 9985/9990 and Sun StorageTek 9985V/9990V with X6240, X6440

06/09/09
■ Update Sun Storage 6580/6780 with M4000, M5000, M8000, M9000 External I/O Expansion Unit support
■ Update Sun StorEdge 9910/9960 to sync up with SE 9900 WWWW
■ Update Sun StorEdge 9970/9980 and Sun StorageTek 9985V/9990V with X2200 M2
■ Update Sun StorEdge 9985/9990 with T6340, M3000, T5440, X2200 M2

07/21/09
■ Update Sun StorageTek 2510 support with SPARC servers, and consolidate x86 server info
■ Update SVM support to track the bundled Solaris release support
■ Update Sun StorageTek 2530 with Netra X4200 M2 support using the non-NEBS-qualified SG-XPCIE8SAS-E-Z

08/04/09
■ Add new Ethernet Storage Support chapter and relocate ST 2510 section
■ Add Sun Storage 7110, 7210, 7310 and 7410 systems
■ Update Interconnect and Public Net support of X4447A-Z QGE to include Netra X4200 M2


09/01/09
■ Add Sun Blade 6048 for SPARC blades
■ Update Network Configuration chapter, separating ExpressModules and Network Express Modules into separate tables
■ Add Dhole X4822A FEM
■ Update Sun StorEdge 9970/9980 with M4000

10/13/09
■ Add Sun StorageTek 9985V/9990V 16-node N*N RAC support
■ Add Sun Storage 7000 support for RAC over NFS
■ Add Sun Storage 6180
■ Re-add Netra X4450 info (lost since 10/14/08?)
■ Update Apache Web Server agent with Zone Cluster support
■ Update HA Oracle with Zone Cluster support
■ Update Java MQ agent with 4.3 support
■ Update MySQL agent with 5.0.85 and Zone Cluster support
■ Update SS 7000 iSCSI LUN fencing and SCSI-2/SCSI-3 quorum device support with SW 2009.Q3
■ Update ST 3320 JBOD: new single-bus configs are not supported per FAB 239464
■ Update External I/O Expansion Unit support for the SE9900 line
■ Relocate/integrate Sun StorageTek 5000 NAS info into the Ethernet Storage Support chapter


Index

NUMERICS
2510 RAID array 173
2530 RAID array 167
2540 RAID array 81
3120 JBOD array 142
3310 JBOD array 148
3310 RAID array 153
3320 JBOD array 157
3320 RAID array 162
3510 RAID array 83
3511 RAID array 88
3910 system 90
3960 system 90
3rd-party agents 311
3rd-party storage devices 58
5000 NAS Appliance 175
5210 NAS Appliance 177
5220 NAS Appliance 177
5310 NAS Appliance 178
5320 NAS Appliance 178
5320 NAS Cluster Appliance 178
6120 array 92
6130 array 81, 94, 97, 99, 103, 167, 173, 175
6140 array 97
6180 array 99
6320 system 100
6540 array 103
6580 array 105
6780 array 105
6910 system 107
6920 system 109
6960 system 107
7000 Unified Storage System 179
7110 Unified Storage System 181
7210 Unified Storage System 181
7310 Unified Storage System 181
7410 Unified Storage System 181
9910 system 111
9960 system 111
9970 system 115
9980 system 115
9985 system 119
9985V system 122
9990 system 119
9990V system 122


A
A1000 array 137
A3500 array 140
A3500FC system 63
A5x00 array 66
administrative consoles 263
Agfa IMPAX
    Sun Cluster 3.1 223
    Sun Cluster 3.2 230
Apache Proxy Server
    Sun Cluster 3.1 223
    Sun Cluster 3.2 230, 238
Apache Tomcat
    Sun Cluster 3.1 223, 228, 243, 244
    Sun Cluster 3.2 230, 238, 244, 245
Apache Web Server
    Sun Cluster 3.1 223, 238, 243, 244
    Sun Cluster 3.2 230, 238, 244, 245
application services 222

B
backup node capacity 6
BEA WebLogic Application Server
    Sun Cluster 3.1 223, 228
    Sun Cluster 3.2 231, 239
benefits of clustering 1
boot devices 15

C
campus clusters 287
    configurations 287
    maximum nodes 287
    SAN configurations 288
    TrueCopy 291
Cluster Control Panel (CCP) 264
cluster topologies 3
clustered pair topology 4
clusters using different servers 35
command-line tools 263
consoles 263
CPUs, minimum 15

D
D1000 array 138
D2 array 132
data configuration 250
    file system 253
    meta devices 250
    raw devices 250
    raw volumes 250
DB2 311
DHCP
    Sun Cluster 3.1 223, 228
    Sun Cluster 3.2 231, 239
diskless clusters 8
DNS
    Sun Cluster 3.1 223, 229
    Sun Cluster 3.2 231, 239
documentation, Sun Cluster x

E
enterprise continuity 295
Ethernet 185

F
failover services
    Agfa IMPAX
        Sun Cluster 3.1 223
        Sun Cluster 3.2 230
    Apache Proxy Server
        Sun Cluster 3.1 223
        Sun Cluster 3.2 230, 238


    Apache Tomcat
        Sun Cluster 3.1 223, 228
        Sun Cluster 3.2 230, 238
    Apache Web Server
        Sun Cluster 3.1 223, 238
        Sun Cluster 3.2 230, 238
    BEA WebLogic Application Server
        Sun Cluster 3.1 223, 228
        Sun Cluster 3.2 231, 239
    defined 223
    DHCP
        Sun Cluster 3.1 223, 228
        Sun Cluster 3.2 231, 239
    DNS
        Sun Cluster 3.1 223, 229
        Sun Cluster 3.2 231, 239
    HADB
        Sun Cluster 3.1 223, 229
        Sun Cluster 3.2 231, 239
    IBM WebSphere MQ
        Sun Cluster 3.1 223
        Sun Cluster 3.2 231, 239
    Informix
        Sun Cluster 3.2 231, 239
    JES Application Server
        Sun Cluster 3.1 224, 229
        Sun Cluster 3.2 231, 239
    JES Directory Server
        Sun Cluster 3.1 224
        Sun Cluster 3.2 231
    JES Messaging Server
        Sun Cluster 3.1 224
        Sun Cluster 3.2 231
    JES MQ Server
        Sun Cluster 3.1 227, 230
        Sun Cluster 3.2 237, 242
    JES Web Proxy Server
        Sun Cluster 3.1 224, 229
        Sun Cluster 3.2 232, 239
    JES Web Server
        Sun Cluster 3.1 224, 229
        Sun Cluster 3.2 232, 239
    Kerberos
        Sun Cluster 3.2 232, 240
    MySQL
        Sun Cluster 3.1 224, 229
        Sun Cluster 3.2 232, 240
    N1 Grid Engine
        Sun Cluster 3.1 224, 229
        Sun Cluster 3.2 232, 240
    N1 Grid Service Provisioning System
        Sun Cluster 3.1 224, 229
        Sun Cluster 3.2 232, 240
    Netbackup
        Sun Cluster 3.1 224
        Sun Cluster 3.2 232
    NFS
        Sun Cluster 3.1 224, 229
        Sun Cluster 3.2 232, 240
    Oracle Application Server
        Sun Cluster 3.1 225
        Sun Cluster 3.2 232, 240
    Oracle E-Business Suite
        Sun Cluster 3.1 225, 328
        Sun Cluster 3.2 233
    Oracle Server
        Sun Cluster 3.1 225, 229
        Sun Cluster 3.2 233, 240
    PostgreSQL
        Sun Cluster 3.1 225, 229
        Sun Cluster 3.2 234, 241
    Samba
        Sun Cluster 3.1 225, 229
        Sun Cluster 3.2 234, 242
    SAP
        Sun Cluster 3.1 226
        Sun Cluster 3.2 235, 241
    SAP LiveCache
        Sun Cluster 3.1 227
        Sun Cluster 3.2 236, 241
    SAP MaxDB
        Sun Cluster 3.1 227
        Sun Cluster 3.2 236, 242
    Siebel
        Sun Cluster 3.1 227
        Sun Cluster 3.2 237
    Solaris Containers
        Sun Cluster 3.1 227, 230
        Sun Cluster 3.2 237, 242
    Sun Java Server Message Queue
        Sun Cluster 3.1 227, 230
        Sun Cluster 3.2 237, 242
    Sun One Proxy Server
        Sun Cluster 3.1 229
        Sun Cluster 3.2 231, 232, 239
    Sun StorEdge Availability Suite
        Sun Cluster 3.1 228, 230
        Sun Cluster 3.2 237, 242


    SWIFTAlliance Access
        Sun Cluster 3.1 228
        Sun Cluster 3.2 237
    SWIFTAlliance Gateway
        Sun Cluster 3.1 228
        Sun Cluster 3.2 238
    Sybase ASE
        Sun Cluster 3.1 228
        Sun Cluster 3.2 238, 242
    WebSphere Message Broker
        Sun Cluster 3.1 228
        Sun Cluster 3.2 238, 242
    Sun Cluster 3.1 list 223, 228, 230, 238
FC storage
    SAN support 59
    SE 6130 array 81, 94, 97, 99, 103, 167, 173, 175
    Sun Storage 6180 array 99
    Sun Storage 6580 array 105
    Sun Storage 6780 array 105
    Sun StorageTek 2510 RAID array 173
    Sun StorageTek 2540 RAID array 81
    Sun StorageTek 6140 array 97
    Sun StorageTek 6540 array 103
    Sun StorageTek 9985 system 119
    Sun StorageTek 9985V system 122
    Sun StorageTek 9990 system 119
    Sun StorageTek 9990V system 122
    Sun StorEdge 3510 RAID array 83
    Sun StorEdge 3511 RAID array 88
    Sun StorEdge 3910 system 90
    Sun StorEdge 3960 system 90
    Sun StorEdge 6120 array 92
    Sun StorEdge 6130 array 94
    Sun StorEdge 6320 system 100
    Sun StorEdge 6910 system 107
    Sun StorEdge 6920 system 109
    Sun StorEdge 6960 system 107
    Sun StorEdge 9910 system 111
    Sun StorEdge 9960 system 111
    Sun StorEdge 9970 system 115
    Sun StorEdge 9980 system 115
    Sun StorEdge A3500FC system 63
    Sun StorEdge A5x00 array 66
    Sun StorEdge T3 array (partner pair) 78
    Sun StorEdge T3 array (single brick) 74
    supported devices 41
fibre channel storage. See FC storage
file system 253

G
global interface (GIF) 218
global networking 218
Graphtrack 114, 118, 122, 125

H
HA SBU 311
HADB
    Sun Cluster 3.1 223, 229
    Sun Cluster 3.2 231, 239
HA-iCS 311
hardware components, typical 1
heterogeneous servers 35
    generic rules for using 35
    sharing storage 36
heterogeneous storage 39

I
IBM DB2 311
IBM IDS/IIF 311
IBM WebSphere MQ
    Sun Cluster 3.1 223
    Sun Cluster 3.2 231, 239
Informix
    Sun Cluster 3.2 231, 239


interconnect 183
    Ethernet 185
    junction-based 184
    PCI/SCI 186
    point-to-point 184
    Sun Fire Link 187
    technologies supported 185
    VLAN support 185
iPlanet Mail/Messaging Server 311
IPMP 217

J
J4200 JBOD array 169
J4400 JBOD array 169
JES Application Server
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 231, 239
JES Directory Server
    Sun Cluster 3.1 224
    Sun Cluster 3.2 231
JES Messaging Server
    Sun Cluster 3.1 224
    Sun Cluster 3.2 231
JES MQ Server
    Sun Cluster 3.1 227, 230
    Sun Cluster 3.2 237, 242
JES Web Proxy Server
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 239
JES Web Server
    Sun Cluster 3.1 224, 229, 243, 244
    Sun Cluster 3.2 232, 239, 244, 245

K
Kerberos
    Sun Cluster 3.2 232, 240

L
local storage 39
LUN Manager 114, 118, 122, 125
LUSE 113, 117, 121, 125

M
managing clusters 263
meta devices 250
minimum CPUs 15
Multipack 131
multipathing 217
MySQL
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240

N
N*N topology 7
N+1 topology 5
N1 Grid Engine
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240
N1 Grid Service Provisioning System
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240
NAFO 216
NAS storage
    Sun Storage 7000 Unified Storage System 179
    Sun Storage 7110 Unified Storage System 181
    Sun Storage 7210 Unified Storage System 181
    Sun Storage 7310 Unified Storage System 181
    Sun Storage 7410 Unified Storage System 181
    Sun StorageTek 5000 NAS Appliance 175
    Sun StorageTek 5210 NAS Appliance 177
    Sun StorageTek 5220 NAS Appliance 177
    Sun StorageTek 5310 NAS Appliance 178
    Sun StorageTek 5320 NAS Appliance 178


    Sun StorageTek 5320 NAS Cluster Appliance 178
Netbackup
    Sun Cluster 3.1 224
    Sun Cluster 3.2 232
Netra servers
    1280 servers 21
    20 servers 16
    440 servers 20
    t 1120 servers 16
    t 1125 servers 16
    t 1400 servers 16
    t 1405 servers 16
    T1 AC200 servers 16
    T1 DC200 servers 16
Netra storage
    st A1000 array 128
    st D1000 array 129
    st D130 array 127
network adapter failover 216
network configuration 183
    interconnect 183
    public network 202
network interfaces
    interconnect 188, 194, 196, 197, 198, 199, 200, 203, 209, 213, 215
    public network 211, 212, 214, 216, 325
network multipathing 217
NFS
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240
nodes, maximum in cluster 3

O
Oban 251, 252
operating systems 219
    Sun Cluster versions and Solaris versions 219
Oracle Application Server
    Sun Cluster 3.1 225
    Sun Cluster 3.2 232, 240
Oracle E-Business Suite
    Sun Cluster 3.1 225, 328
    Sun Cluster 3.2 233
Oracle Parallel Server, See Oracle RAC
Oracle RAC 245
    Sun Cluster 3.1 246, 247, 249
    topologies 245
Oracle Real Application Cluster, See Oracle RAC
Oracle Server
    Sun Cluster 3.1 225, 229
    Sun Cluster 3.2 233, 240
overview of clustering 1

P
pair+N topology 6
PCI/SCI 186
PostgreSQL
    Sun Cluster 3.1 225, 229
    Sun Cluster 3.2 234, 241
private interconnect, see interconnect 183
public network 202

Q
quorum devices 41

R
RAID 258
raw devices 250
raw volumes 250
recommended rules, definition xi
required rules, definition xi


S
S1 array 134
Samba
    Sun Cluster 3.1 225, 229
    Sun Cluster 3.2 234, 242
SAN support 59
SANtinel 113, 117, 121, 125
SAP
    Sun Cluster 3.1 226
    Sun Cluster 3.2 235, 241
SAP LiveCache
    Sun Cluster 3.1 227
    Sun Cluster 3.2 236, 241
SAP MaxDB
    Sun Cluster 3.1 227
    Sun Cluster 3.2 236, 242
SAS storage
    Sun Storage J4200 JBOD array 169
    Sun Storage J4400 JBOD array 169
    Sun StorageTek 2530 RAID array 167
scalable services
    Apache Tomcat
        Sun Cluster 3.1 243, 244
        Sun Cluster 3.2 244, 245
    Apache Web Server
        Sun Cluster 3.1 243, 244
        Sun Cluster 3.2 244, 245
    defined 243
    JES Web Server
        Sun Cluster 3.1 243, 244
        Sun Cluster 3.2 244, 245
    Sun Cluster 3.1 243, 244, 245
scalable topology 7
SCSI storage 127, 167, 173
    Sun Netra st A1000 array 128
    Sun Netra st D1000 array 129
    Sun Netra st D130 array 127
    Sun StorEdge 3120 JBOD array 142
    Sun StorEdge 3310 JBOD array 148
    Sun StorEdge 3310 RAID array 153
    Sun StorEdge 3320 JBOD array 157
    Sun StorEdge 3320 RAID array 162
    Sun StorEdge A1000 array 137
    Sun StorEdge A3500 array 140
    Sun StorEdge D1000 array 138
    Sun StorEdge D2 array 132
    Sun StorEdge Multipack 131
    Sun StorEdge S1 array 134
    supported devices 49
SDS 251
servers 11
    boot devices 15
    generic configuration 15
ShadowImage 113, 117, 121, 125
shared storage 40
    quorum devices 41
    supported FC devices 41
    supported SCSI devices 49
    third-party devices 58
Siebel
    Sun Cluster 3.1 227
    Sun Cluster 3.2 237
single-node clusters 9
software components, typical 2
Solaris Containers
    Sun Cluster 3.1 227, 230
    Sun Cluster 3.2 237, 242
Solaris Resource Manager 249
Solaris Volume Manager 251, 252
Solaris Volume Manager for Sun Cluster 251, 252
Solstice DiskSuite
    Sun Cluster 3.1 251
star topology 5
storage 39
    FC storage 59
    heterogeneous storage 39
    local storage 39


    SCSI storage 127, 167, 173
    shared storage 40
storage-attached networks 59
Sun Cluster documentation x
Sun Enterprise servers
    10000 servers 18
    3x00-6x00 servers 18
Sun Fire Enterprise servers
    4900 servers 22
    6900 servers 22
Sun Fire Link 187
Sun Fire servers
    12K servers 23, 24, 25
    15K servers 23, 24, 25
    20K servers 23, 24, 25
    25K servers 23, 24, 25
    3800 servers 21
    4800 servers 21
    4810 servers 21
    6800 servers 21
    V1280 servers 21
    V210 servers 16, 17, 18, 19
    V240 servers 16, 17, 18, 19
    V400 servers 21
    V440 servers 20
    V480 servers 21
    V880 servers 21
    V890 servers 21
Sun Java Server Message Queue
    Sun Cluster 3.1 227, 230
    Sun Cluster 3.2 237, 242
Sun Management Center (SunMC) 264
Sun One Proxy Server
    Sun Cluster 3.1 229
    Sun Cluster 3.2 231, 232, 239
Sun Storage storage
    6180 array 99
    6580 array 105
    6780 array 105
    7000 Unified Storage System 179
    7110 Unified Storage System 181
    7210 Unified Storage System 181
    7310 Unified Storage System 181
    7410 Unified Storage System 181
    J4200 JBOD array 169
    J4400 JBOD array 169
Sun StorageTek storage
    2510 RAID array 173
    2530 RAID array 167
    2540 RAID array 81
    5000 NAS Appliance 175
    5210 NAS Appliance 177
    5220 NAS Appliance 177
    5310 NAS Appliance 178
    5320 NAS Appliance 178
    5320 NAS Cluster Appliance 178
    6140 array 97
    6540 array 103
    9985 system 119
    9985V system 122
    9990 system 119
    9990V system 122
Sun StorEdge Availability Suite
    Sun Cluster 3.1 228, 230
    Sun Cluster 3.2 237, 242
Sun StorEdge storage
    3120 JBOD array 142
    3310 JBOD array 148
    3310 RAID array 153
    3320 JBOD array 157
    3320 RAID array 162
    3510 RAID array 83
    3511 RAID array 88
    3910 system 90
    3960 system 90
    6120 array 92
    6130 array 94


    6320 system 100
    6910 system 107
    6920 system 109
    6960 system 107
    9910 system 111
    9960 system 111
    9970 system 115
    9980 system 115
    A1000 array 137
    A3500 array 140
    A3500FC system 63
    A5x00 array 66
    D1000 array 138
    D2 array 132
    Multipack 131
    S1 array 134
    SE 6130 array 81, 94, 97, 99, 103, 167, 173, 175
    T3 array (partner pair) 78
    T3 array (single brick) 74
SunPlex Manager 264
Support for Virtualized OS Environments 259
SVM 251, 252
SWIFTAlliance Access
    Sun Cluster 3.1 228
    Sun Cluster 3.2 237
SWIFTAlliance Gateway
    Sun Cluster 3.1 228
    Sun Cluster 3.2 238
Sybase ASE 311
    Sun Cluster 3.1 228
    Sun Cluster 3.2 238, 242
System 1 109

T
T3 array (partner pair) 78
T3 array (single brick) 74
terminal concentrators 263
third-party agents 311
third-party storage devices 58
topologies 3
    clustered pair topology 4
    defined 3
    diskless clusters 8
    N*N topology 7
    N+1 topology 5
    pair+N topology 6
    scalable topology 7
    single-node clusters 9
    star topology 5
TrueCopy 113, 117, 291

U
user documentation, Sun Cluster x

V
Veritas file system
    Sun Cluster 3.1 253
Veritas Volume Manager
    Sun Cluster 3.1 251, 252
Virtualized OS environments 259
VLANs 185
volume managers
    Sun Cluster 3.1 251, 252

W
wave division multiplexors 295
WebLogic Application Server
    Sun Cluster 3.1 223, 228
    Sun Cluster 3.2 231, 239
WebSphere Message Broker
    Sun Cluster 3.1 228


    Sun Cluster 3.2 238, 242
