

EMC® RecoverPoint
Deploying with the Brocade Splitter

Technical Notes
P/N 300-010-644
REV A06
January 13, 2012

This document describes how to deploy EMC RecoverPoint with the Brocade splitter to deliver a complete data replication solution without host-based agents.

Topics include:

◆ Revision history
◆ Introduction
◆ Brocade splitter concepts
◆ Solution design
◆ Useful Connectrix Control Processor commands
◆ RecoverPoint deployment procedure
◆ RecoverPoint splitter agent life cycle
◆ Software and firmware upgrades
◆ Hardware replacement
◆ Troubleshooting


Revision history

Table 1 shows the revision history for this document.

Table 1 Revision history

Revision Date Description

A06 January 2012
• Added information and procedures for more than one splitter per fabric
• Added that virtual fabrics require an RPQ
• Added Brocade splitter VAAI support
• Improved procedure for upgrading SAS
• Improved procedure for troubleshooting splitter crash
• Added section about removing unused ITLs

A05 September 2011
• Added new Solution Approval procedure
• Added procedure for creating unique Blade Processor IP address
• Corrected procedure for disabling replication and splitters


A04 January 2011
• Renamed document to Brocade Splitter TN
• Merged RecoverPoint 3.2 version with RecoverPoint 3.3 version into one document
• Removed Multi-VI mode and McData switches; Multi-VI customers referred to EMC RecoverPoint Deploying with Connectrix AP-7600B and PB-48K-AP4-18 Technical Notes
• Rewrote, updated, and reorganized Brocade splitter concepts: Brocade Application Platform, Brocade splitter virtual entities, Virtualization and frame redirection, Zoning, Frame redirection bindings, Data path controllers, SCSI LUN reservations
• Rewrote, updated, and reorganized Solution design: Hardware requirements, Firmware requirements, Fabric requirements, Fabric design, Zoning requirements, Network requirements, Host requirements, Scalability limitations, Approved solution qualifier



A03 August 2010
• Port 7777 must be open for log collection
• Added new section that describes required Fabric Design
• Added best practices for switch front-panel ports, and ISL requirements
• Corrected, organized, removed duplications, and added much material to prerequisites for deployment. Topics include: solution qualifier approval, hardware, firmware, host requirements, best practices
• Updated maintenance procedures for updating FOS and SAS firmware
• Added SAN Health diagnostics procedures to deployment, upgrade, migration, and troubleshooting procedures
• Added prerequisites concerning McData switches, Multi-VI mode, and supported Brocade switches
• Added non-disruptive upgrade of FOS only or SAS only



Introduction

Scope

This document is intended primarily for IT professionals who are responsible for deploying RecoverPoint in a Storage Application Services (SAS)-equipped Connectrix B-Series environment, and who are experienced in the areas of SAN management and topology.

Previously, RecoverPoint implementations supported both Multi-VI and Frame Redirection modes. Currently, only Frame Redirection mode is supported for new implementations; existing Multi-VI customers remain supported. This document applies only to Frame Redirection mode. For documentation of Multi-VI mode, refer to EMC RecoverPoint 3.2 and earlier Deploying with Connectrix AP-7600B and PB-48K-AP4-18 Technical Notes.

Related documents

Use the revision of each of the following documents, available in the Documentation Library on http://Powerlink.EMC.com, that matches your installed RecoverPoint version:

◆ EMC Connectrix B Series Fabric OS Release Notes

◆ EMC Connectrix B Series AP-7600B Hardware Reference Manual

◆ EMC Connectrix B Series Fabric OS Administrator’s Guide

◆ EMC Connectrix B Series Fabric OS Command Reference Manual

◆ EMC RecoverPoint and RecoverPoint/SE Release Notes

◆ EMC RecoverPoint Administrator’s Guide

◆ EMC RecoverPoint CLI Reference Guide

◆ EMC RecoverPoint Deployment Manager Product Guide

◆ EMC RecoverPoint Deploying with Connectrix AP-7600B and PB-48K-AP4-18 Technical Notes

Brocade splitter concepts

The Brocade splitter enables intelligent-fabric-based splitting for RecoverPoint replication.


Brocade Application Platform

The Brocade splitter is available with the Brocade Application Platform, which is supported on the EMC Connectrix AP7600B switch. The AP7600B switch comprises:

◆ Control Processor (CP), which runs the Fabric Operating System and handles Layer 2 communications.

◆ Blade Processor (BP), which runs the Storage Application Services (SAS) and manages the virtualization layer. The Blade Processor includes the following hardware components:

• An Ethernet port with an IP address assigned to it. Remote ssh or telnet connection to the Blade Processor allows access to the Blade Processor CLI.

• Two data path controllers (DPCs) that handle virtualized traffic.

• 16 full-function Fibre Channel ports that operate completely independently of the virtualization.

Previously, the Brocade Application Platform was also available on a PB-48K-AP4-18 blade installed in an EMC Connectrix ED48000B director-class switch or in a DCX-4S chassis. Due to Brocade's end-of-life announcement for PB-48K-AP4-18, only the AP7600B is supported for new deployments. Existing PB-48K-AP4-18 customers are supported until June 2016.

Unless stated otherwise, management of the PB-48K-AP4-18 splitter is the same as that of the AP7600B splitter. Both the ED48000B and the DCX-4S have two redundant Control Processors. The PB-48K-AP4-18 is the Blade Processor, which is very similar to the Blade Processor built into the AP7600B.

The RecoverPoint splitter agent is EMC-proprietary software installed on the Blade Processor. RecoverPoint manages the splitter by communicating with the splitter agent via Fibre Channel. The splitter agent, in turn, manages the Storage Application Services (SAS) platform. The SAS platform is responsible for splitting the actual writes and performing the necessary management tasks within the fabric.

Brocade splitter virtual entities

The following virtual entities are required for RecoverPoint replication in the Brocade splitter environment. These virtual entities are created by the Brocade splitter.


◆ Virtual Initiators. Virtual initiators (VIs) communicate with physical storage targets and with RecoverPoint appliances (RPAs) on behalf of physical host initiators. During splitter installation, a pool of pWWNs is created to be used as virtual initiators. When a host initiator is bound to a storage target, a virtual initiator is created from this pool.

◆ Virtual Targets. Virtual targets (VTs) communicate with physical host initiators on behalf of physical storage targets. VTs are dynamically created on demand.

◆ Appliance Virtual Targets. Appliance Virtual Targets (AVTs) are special VTs that are used by RecoverPoint to overcome host reservations. During splitter installation, a pool of 500 pWWNs is created to be used as AVTs. When reservation support is enabled, AVTs are created from that pool.

◆ System Virtual Initiator. The System Virtual Initiator (SVI) is a special virtual initiator, which allows RecoverPoint to manage the Brocade splitter. One SVI is created during the splitter installation.

Virtualization and frame redirection

In a Brocade intelligent fabric, host-to-storage communications are virtualized and rerouted to the intelligent fabric platform. The intelligent platform presents virtual targets to the physical initiators (host systems) and virtual initiators to the physical targets (storage ports). I/Os written by hosts to storage are redirected to virtual targets, and then transferred by the intelligent platform to virtual initiators; the physical storage receives the I/O from the virtual initiator. In other words, the path of host I/Os is host -> virtual target -> virtual initiator -> storage.


Figure 1 Simplified schematic of virtualized host-to-storage communications

Similarly, responses from the storage to the host are redirected to virtual initiators; virtual targets receive the I/O from the virtual initiators, and direct them to the physical hosts.

In addition, all writes issued to replicated LUNs are copied to the RecoverPoint appliance by the intelligent fabric platform. The intelligent fabric appliance thus acts as a splitter.

In Brocade’s frame redirection mode, fabric virtualization (redirecting I/O traffic to virtualized initiators and targets) is achieved by manipulating Fibre Channel addresses or identifiers (FC IDs). When a host bus adapter (host initiator) or a storage port (storage target) logs into the fabric, it is assigned an FC ID. FC IDs are used to route traffic within the fabric.

To reroute communications between host ports and storage ports, initiator-target pairs (host-storage pairs) are assigned a virtual initiator and a virtual target with different FC IDs from those of the physical initiator and target. This process is called binding.

When an initiator-target pair is bound, the name servers on every switch in the fabric substitute the virtual initiator FC ID for the physical host port FC ID; and the virtual target FC ID for the physical storage port FC ID. In this way, host-storage communications are rerouted via the corresponding virtual initiator and virtual target. This method of virtualization is called frame redirection.
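The substitution described above can be pictured with a toy model (an illustration only, not Brocade's implementation; all pWWNs and FC IDs below are invented):

```python
# Toy model of frame redirection. The fabric name server maps each
# pWWN to an FC ID; binding an initiator-target pair makes lookups of
# the physical pWWNs return the virtual entities' FC IDs instead, so
# traffic is rerouted without touching zoning or LUN masking.
def bind(nameserver, host_pwwn, storage_pwwn, vi_fcid, vt_fcid):
    """Return a new name-server table with the pair redirected."""
    redirected = dict(nameserver)
    redirected[host_pwwn] = vi_fcid      # storage now reaches the VI
    redirected[storage_pwwn] = vt_fcid   # host now reaches the VT
    return redirected

# Invented example values.
ns = {
    "10:00:00:00:c9:aa:bb:cc": 0x010100,  # physical host initiator
    "50:06:01:60:dd:ee:ff:00": 0x020200,  # physical storage target
}
ns = bind(ns, "10:00:00:00:c9:aa:bb:cc", "50:06:01:60:dd:ee:ff:00",
          vi_fcid=0x7F0001, vt_fcid=0x7F0002)
```

After the bind, lookups of either physical pWWN resolve to a virtual FC ID, which is why no host-to-storage zoning or LUN-masking changes are needed.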


Zoning

RecoverPoint in the Brocade environment requires the following zoning.

Note: RPA Fibre Channel ports can act as initiators, targets, or both. For more information, refer to the EMC RecoverPoint Zoning and LUN Masking Technical Notes.

Host-to-storage zoning

Even though the WWNs of the virtual initiators and virtual targets are different than those of the corresponding physical initiators and targets, no changes are required in host-to-storage zoning or storage array LUN masking, as all rerouting is handled at the FC ID level. In other words, the storage target still sees the WWN of the physical host; and the physical host still logs into the WWN of the storage port.

RPA-to-storage zone

RPA-to-storage zoning is required for all RecoverPoint installations. For more information, refer to EMC RecoverPoint Zoning and LUN Masking Technical Notes.

RPA target zone

The RPA target zone <splitter_name>_FR_RPA_Target_Zone is created during the splitter installation and contains RPA target ports, all virtual initiators, and the System Virtual Initiator (SVI).

The RPA target zone enables data and management communication between the RPA and the Brocade splitter.

In addition to communicating with the storage targets, the virtual initiators also copy the writes that are written to replicated LUNs to the RecoverPoint appliances.

Table 2 Required zoning

Zone                    Members
Host-to-storage zones   physical host initiator, physical storage target(s)
RPA-to-storage zone     RPA initiator ports, physical storage targets
RPA front-end zone      RPA initiator ports, virtual targets, appliance virtual targets (AVTs)
RPA target zone         virtual initiators, system virtual initiator, RPA target ports


The SVI allows RecoverPoint to manage the Brocade splitter.

RPA front-end zone

The RPA front-end zone, <splitter_name>_RPA_Front_End_Zone, is created during installation and contains all RPA initiator ports and all AVTs. By allowing RPAs access to AVTs, RecoverPoint can overcome SCSI LUN reservations. For more information, refer to “SCSI LUN reservations.”

Frame redirection bindings

Frame redirection bindings are created and deleted by a RecoverPoint administrator using the RecoverPoint Management Application or the command-line interface. When a binding operation is initiated, RecoverPoint contacts the Brocade splitter agent via Fibre Channel and requests that the bindings be created. The splitter agent, in turn, instructs the Brocade fabric application platform to create the bindings. As a result, the records of bindings are maintained in three locations: by RecoverPoint, by the splitter agent and by the Brocade fabric application platform.

The Brocade fabric application platform keeps track of bindings by creating a special config (zoneset) r_e_d_i_r_c__fg. Initially, it contains only one zone red_______base. For every new binding, a new Frame Redirection zone is added called lsan_red_<host initiator WWN>_<storage target WWN>. Each Frame Redirection zone contains four members:

◆ host initiator port WWN

◆ storage target port WWN

◆ virtual initiator port WWN

◆ virtual target port WWN.

These four port WWNs form a host-to-storage binding. Except for troubleshooting, the r_e_d_i_r_c__fg config should not be modified; it should be managed only by RecoverPoint. r_e_d_i_r_c__fg should never be enabled as the effective config, and its zone members should never appear in an effective config.

If using more than one splitter in a fabric, the same r_e_d_i_r_c__fg config is used by all splitters in the fabric. However, an initiator or a target pWWN can be bound to only one splitter. If an initiator or a target is bound through one splitter, and then bound through another splitter, the first binding will be forcibly removed, resulting in corruption of the replica, requiring a full sweep.
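Sticking to the naming pattern quoted above, a Frame Redirection zone record can be sketched as follows (the WWN strings are invented; this is an illustration, not the splitter's code):

```python
def frame_redirection_zone(host_wwn, storage_wwn, vi_wwn, vt_wwn):
    """Build the lsan_red_<host>_<storage> zone name and its four
    members (host initiator, storage target, virtual initiator, and
    virtual target port WWNs), as described in the text."""
    name = f"lsan_red_{host_wwn}_{storage_wwn}"
    members = [host_wwn, storage_wwn, vi_wwn, vt_wwn]
    return name, members

# Invented WWNs for illustration.
name, members = frame_redirection_zone(
    "100000c9aabbcc00", "5006016ddeeff000",
    "200000c9aabbcc01", "7001248200000456")
```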


In versions of RecoverPoint prior to 3.3, it is possible to have a mismatch in binding records. For example, when removing a binding, a stale Frame Redirection zone could remain without a corresponding binding record in RecoverPoint. In that case, the Frame Redirection zone would need to be manually removed.

Starting with RecoverPoint 3.3, the splitter agent enforces consistency between RecoverPoint binding records and the Frame Redirection zones. For example, if a Frame Redirection zone does not have a matching binding record in RecoverPoint, the splitter agent automatically deletes it. Similarly, if a binding record exists in RecoverPoint but the corresponding Frame Redirection zone is missing, the splitter agent automatically creates it.

As discussed in “Virtualization and frame redirection,” binding operations change Fibre Channel address identifiers (FC IDs). Because virtual targets borrow their SCSI personality, including vendor, product, and UID, from the storage targets, the host initiator sees the virtual targets as having the same storage characteristics as the actual storage target. As a result, the FC ID change will appear to the host as if the original path to storage disappeared and another identical path appeared with a different FC ID. While most hosts are expected to handle that momentary path disruption with no impact, this type of change should be performed with caution and one path at a time.

Some hosts, such as HP-UX without agile addressing and AIX without dynamic tracking, include FC IDs in their definition of paths to SAN devices. As a result, they are sensitive to FC ID changes and cannot automatically rediscover the paths after binding operations. It may be necessary to reconfigure the hosts, rescan the SAN, or take other action to have the hosts recognize the new paths to storage. Host downtime may be required.

Data Path Controllers

A Brocade application platform includes two Data Path Controllers (DPCs) that handle all virtualized traffic. Each DPC has its own WWN. When bindings are created, each initiator-target pair is assigned to one of the DPCs. After that, all I/O operations for this initiator-target pair are handled by that DPC.

When working with RecoverPoint release 3.0, only one DPC is utilized. When working with RecoverPoint 3.0 SP1 and later, the splitter agent utilizes both DPCs.

From RecoverPoint 3.0 SP1 to 3.3 SP1 P1, virtual targets created during one binding operation are assigned to alternating DPCs. If only one virtual target is created, the splitter agent randomly selects one of the DPCs. As a result, the number of virtual targets on each DPC will not necessarily be balanced. In addition, deleting virtual targets may cause further imbalance.

Starting with RecoverPoint 3.3 SP2, the splitter agent always creates new virtual targets on the DPC with fewer virtual targets. As a result, the number of virtual targets on each DPC will normally not differ by more than one. However, deleting virtual targets may still cause an imbalance.
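The 3.3 SP2 placement rule amounts to choosing the least-loaded DPC; a minimal sketch (tie-breaking toward the lower-numbered DPC is an assumption, since the document does not specify it):

```python
def place_virtual_target(vt_counts):
    """Return the index of the DPC with the fewest virtual targets.
    vt_counts holds the current per-DPC virtual-target counts,
    e.g. [4, 2] for DPC 0 and DPC 1."""
    return min(range(len(vt_counts)), key=lambda dpc: vt_counts[dpc])
```

Placing each new virtual target this way keeps the per-DPC counts within one of each other, matching the behavior described above.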

Unlike virtual initiator pWWNs, which are created from a predefined pool, virtual targets are created on the fly as needed. The virtual target pWWN is based on the storage pWWN and the DPC number as follows:

Figure 2 Virtual target WWN format

For example, a virtual target created on DPC 0, where the WWN of the corresponding storage target is 0x1234567890123456, and the hash of that WWN is 0xabcd, would be:

0x70012482abcd0456

To determine the number of virtual entities on each DPC, use the 4-bit DPC number in the WWN to determine the DPC number of each virtual target. Then, use the binding scheme to determine the number of virtual initiators for each DPC. Note that the system virtual initiator will always be created on DPC0.
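Since Figure 2 is not reproduced here, the field layout below is inferred solely from the single worked example (a fixed 0x70012482 prefix, a 16-bit hash of the storage WWN, a 4-bit DPC number, then the low 12 bits of the storage pWWN); treat the prefix and field widths as assumptions, and note that the hashing function itself is not given:

```python
def virtual_target_wwn(storage_wwn: int, wwn_hash: int, dpc: int) -> int:
    """Assemble a virtual target pWWN from the storage pWWN, the
    16-bit hash of that pWWN, and the DPC number. Layout is inferred
    from the worked example in the text, not from Figure 2."""
    return ((0x70012482 << 32)
            | ((wwn_hash & 0xFFFF) << 16)
            | ((dpc & 0xF) << 12)
            | (storage_wwn & 0xFFF))

def dpc_of(vt_wwn: int) -> int:
    """Recover the 4-bit DPC number, as used when counting virtual
    entities per DPC."""
    return (vt_wwn >> 12) & 0xF

# Reproduces the worked example: storage WWN 0x1234567890123456,
# hash 0xabcd, DPC 0 -> 0x70012482abcd0456.
example = virtual_target_wwn(0x1234567890123456, 0xABCD, 0)
```

With this layout, counting virtual targets per DPC reduces to tallying dpc_of over the virtual target WWNs.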

Since virtual initiators and virtual targets that reside on different DPCs cannot interact, a single initiator bound to targets on both DPCs will be represented by separate virtual initiators on each DPC. In other words, two virtual initiators may be created for one physical host initiator if the virtual targets that host initiator is accessing are on different DPCs. A physical target, however, is always represented by a single virtual target.

Although the total number of virtual entities may be higher when utilizing two DPCs, the number of virtual entities per DPC is always lower than when using a single DPC. Hence, the possible scalability with two DPCs is always higher (since the limits are per DPC). The number of ITLs used does not change when using two DPCs, but the maximum possible number of ITLs doubles, as does the number of virtual initiators.

SCSI LUN reservations

Some hosts, such as clustered nodes, some operating systems, and some logical volume managers, reserve storage volumes using SCSI reservations. Just like other I/Os, reservation commands (both SCSI-2 and SCSI-3) are forwarded by the splitter to the physical storage targets. In other words, the virtual initiator issues a SCSI reservation on behalf of the physical host initiator.

Normally, reserved LUNs can be accessed only by the initiator that placed the reservations. However, RPAs must be able to send I/Os to LUNs replicated by RecoverPoint, even if they are reserved. Therefore, the RPA must address storage through the same virtual initiator that placed the reservation.

The only way for the RPA to access the storage target using the virtual initiator that placed the reservation is through a virtual target. The RPA cannot, however, use an existing virtual target, since each virtual target can be bound to more than one virtual initiator, while each RPA port accessing the virtual target must be bound to a single virtual initiator. As a result, the RPA uses a special virtual target, called an appliance virtual target (AVT), to bind to a single virtual initiator. The RPA front-end zone allows RPAs to access AVTs, as discussed in “RPA front-end zone.”

Reservation support is specified in the Advanced Policies of each copy in a consistency group. Whenever reservation support is enabled, the splitter agent creates one AVT for each virtual initiator that has access to reserved volumes in that copy. As discussed in “Data Path Controllers,” two virtual initiators may be created for a single physical initiator. As a result, two AVTs, one on each DPC, are created for one physical initiator.

For every physical LUN (volume) that the initiator may reserve, a corresponding virtual LUN is created on the AVT. Because a single initiator may access physical LUNs on more than one storage target, the virtual LUNs on the AVT may represent multiple storage targets. Also, all LUNs on the AVT are created in pass-through mode, so that I/Os sent to the AVT LUNs are not split (which would cause the I/Os to be sent back to the RPA), but rather, are sent directly to the storage target. Only RPA ports can access the LUNs on the AVT.


If the direct path from an RPA to a LUN returns a “Reservation conflict” error, the RPA then checks other potential paths, including the AVTs. Since the RPA cannot determine which host initiator is reserving the device, all existing AVTs are tried until one is found that allows access to the target storage.
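The fallback behavior just described can be sketched like this (ReservationConflict and the read callback are invented names for illustration; the actual RPA logic is not documented here):

```python
class ReservationConflict(Exception):
    """Stand-in for a SCSI 'Reservation conflict' status."""

def read_with_avt_fallback(direct_path, avt_paths, read):
    """Try the direct path first; on a reservation conflict, try
    each AVT in turn until one grants access, as the text describes
    (the RPA cannot know which host holds the reservation)."""
    for path in [direct_path, *avt_paths]:
        try:
            return read(path)
        except ReservationConflict:
            continue
    raise ReservationConflict("no path allowed access to the LUN")
```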

Solution design

Deploying RecoverPoint with Connectrix B-Series application platforms and Brocade splitters requires the following items.

Hardware requirements

At least one EMC Connectrix AP7600B Application Platform (intelligent switch) per fabric (two fabrics per site). Refer to the RecoverPoint release notes for limits on the number of Brocade splitters per RecoverPoint cluster.

Firmware requirements

The firmware requirements are as follows:

◆ The RecoverPoint splitter agent must be compatible with the installed version of RecoverPoint. Unless instructed otherwise, the RecoverPoint and RecoverPoint splitter agent version numbers must be identical.

◆ The SAS version must be compatible with the RecoverPoint splitter agent version. Consult the EMC Support Matrix for compatible versions.

◆ The FOS version must be compatible with the SAS version. Consult the EMC Support Matrix for compatible versions.

Fabric requirements

RecoverPoint requires two redundant fabrics at each site with at least one Brocade splitter per fabric per RecoverPoint cluster.

◆ Previously, RecoverPoint implementations supported both Brocade and McData switches in the fabric. Currently, only Brocade switches are supported in new deployments. McData switches are not supported in the fabric, even if no initiators or targets are connected to them.

Existing customers with McData switches in the fabric should consult EMC Knowledge base article emc244711.


◆ When using Frame Redirection mode, every switch in the fabric must be able to support Frame Redirection, even if no initiators or targets are connected to the switch. The Fabric Operating System (FOS) version of each switch in the fabric should be 6.1.0f or higher.

The following Brocade switches do not support Frame Redirection mode and cannot be included in a fabric replicating with the RecoverPoint Brocade splitter:

Table 3 Brocade switches not supported when using RecoverPoint Brocade splitter

Switch type Switch Model

1 Brocade 1000 switches

2, 6 Brocade 2800 switch

3 Brocade 2100, 2400 switches

4 Brocade 20x0, 2010, 2040, 2050 switches

5 Brocade 22x0, 2210, 2240, 2250 switches

7 Brocade 2000 switch

9 Brocade 3800 switch

10 Brocade 12000 director

12 Brocade 3900 switch

16 Brocade 3200 switch

17 Brocade 3800VL

18 Brocade 3000 switch

21 Brocade 24000 director

22 Brocade 3016 switch

26 Brocade 3850 switch

27 Brocade 3250 switch

33 Brocade 3014 switch


The best practice is that the FOS version of all switches in the fabric should be identical. If you cannot use the same FOS on all switches, consult the EMC Support Matrix for compatible versions.

Fabric design

All I/Os between the bound host initiator and the storage target are routed via the intelligent switch or blade. In consequence:

◆ Disruption of the intelligent switch or blade operation disrupts the host-to-storage path of the bound initiator-target pair. Host-to-storage multipathing is required to survive a splitter failure.

◆ Fibre Channel front-panel ports on intelligent switches and blades do not share resources with the splitter I/O traffic. Connecting devices to the Fibre Channel front-panel ports is supported.

◆ Maintenance on the splitter may disrupt operation of the Fibre Channel front-panel ports and the devices connected to them. The best practice is, therefore, not to connect any devices to the front-panel ports. If the front-panel ports must be used, it is strongly recommended to connect only devices that participate in replication. In order of preference, the following devices may be connected: 1) RPAs, 2) bound host initiators, and 3) bound storage targets.

◆ If replication I/O traffic flows over interswitch links (ISLs), plan for sufficient ISL bandwidth. The minimum requirement for connecting the AP7600B to an existing fabric is six ISLs: three connected to ports 0–7 and three connected to ports 8–15.

◆ An initiator or a target pWWN can be bound to only one splitter.

◆ If you need to implement with a virtual fabric or other advanced fabric feature, submit an RPQ.

Zoning requirements

EMC requires single-initiator pWWN zoning. There should be exactly one host initiator and one or more storage targets per zone. Port zoning and zoning by nWWN are not supported.
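A hypothetical pre-deployment check for this rule might look like the following sketch (the data shape is invented for illustration):

```python
def valid_host_storage_zone(members):
    """Check the single-initiator pWWN zoning rule: exactly one host
    initiator and at least one storage target per zone.

    members is a list of (pwwn, role) pairs, role being 'initiator'
    or 'target'. Port zoning and nWWN zoning are not modeled because
    they are unsupported."""
    initiators = [p for p, role in members if role == "initiator"]
    targets = [p for p, role in members if role == "target"]
    return len(initiators) == 1 and len(targets) >= 1
```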

Default Zone setting

The Default Zone setting of the Brocade fabric determines device access when zoning is not enabled or there is no effective zone configuration. The default zoning mode has two options:

◆ All Access. All devices within the fabric can communicate with all other devices.


◆ No Access. Devices in the fabric cannot access any other device in the fabric.

The default zone mode applies to the entire fabric, regardless of switch model.

However, when a user-specified zone configuration is enabled, it overrides the Default Zone access mode. Since all RecoverPoint clients use user-specified zones, the Default Zone setting is normally irrelevant.

The exception to the above is when using Fabric Operating System (FOS) 6.2.x (supported with RecoverPoint 3.2.x). In that case, the Default Zone setting must be set to “No Access” (rather than “All Access”) when creating frame redirection zones. This setting must be verified before installing the RecoverPoint splitter agent, and is included as a step of the “RecoverPoint deployment procedure.”

Network requirements

Port 7777 (hlr_kbox) must be open between all RecoverPoint appliances and all intelligent Brocade switches and blades in the fabric, otherwise log collection may fail.

Host requirements

◆ Verify that your host configuration is supported. Check the EMC Support Matrix.

◆ Host-to-storage multipathing is required.

CAUTION!
Brocade splitter binding and unbinding causes the FC IDs of affected HBAs to change. HP-UX hosts without agile addressing and AIX hosts without dynamic tracking cannot automatically rediscover the paths to storage if the FC ID has changed. To discover the new paths, it may be necessary to reconfigure the hosts, rescan the SAN, or take other action to have the hosts recognize the new paths. Host downtime may be required.

For more information about FC ID changes with AIX hosts, refer to the following:

◆ EMC Host Connectivity Guide for IBM AIX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)


◆ EMC knowledge base solutions emc91523 and emc115725

◆ Deploying RecoverPoint with AIX Hosts Technical Notes

(powerlink.emc.com, Support > Technical Documentation and Advisories > Software ~P–R~ Documentation > RecoverPoint > Technical Notes/Troubleshooting)

For more information about FC ID changes with HP-UX hosts, refer to the following:

◆ EMC Host Connectivity Guide for HP-UX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)

◆ EMC knowledge base solution emc199817

VAAI support

vStorage API for Array Integration (VAAI) commands speed up certain functions when an ESX server writes to a storage array, by offloading specific functions to array-based hardware. Table 4 shows the Brocade splitter support for VAAI commands.

If a particular VAAI command is listed as unsupported, it must be disabled on all ESX servers. Failure to disable an unsupported VAAI command may lead to data corruption, production data being unavailable to ESX hosts, degraded performance, or switch reboot.

If a VAAI command is listed as rejected, ESX immediately reverts to non-VAAI behavior. Rejecting VAAI commands is supported both by RecoverPoint and ESX.


For more details about VAAI, refer to “VAAI support” in EMC RecoverPoint Replicating VMware Technical Notes.

Scalability limitations

The limitations of Connectrix AP-7600B or PB-48K-AP4-18 resources when deployed with RecoverPoint are published in the release notes for your release of RecoverPoint. Limits are presented for the following parameters:

◆ Host initiator-target nexuses

◆ Total virtual entities, including virtual initiators, virtual targets, AVTs, and the system virtual initiator

◆ Replicated LUNs

◆ Total LUNs

◆ ITLs

It is important to review these limitations when planning the deployment of RecoverPoint with your Connectrix AP-7600B or PB-48K-AP4-18.

Computing ITLs

An ITL (initiator-target-LUN) is a path from an initiator to a target to a LUN. The total number of ITLs is the sum of host ITLs and AVT ITLs, where:

Host ITLs = the number of LUNs to which each host initiator has access, summed over all host initiators

For RecoverPoint 3.0 (including 3.0 service packs) and earlier:

AVT ITLs = RPA initiator ports x Host ITLs (for which LUNs support reservations)

Table 4 Brocade splitter VAAI support

Command                    Version                       VAAI behavior
Hardware-assisted locking  RecoverPoint 3.4 and later    Rejected
                           Prior to RecoverPoint 3.4     Unsupported
Block Zeroing              RecoverPoint 3.4 and later    Rejected
                           Prior to RecoverPoint 3.4     Unsupported
Full Copy                  RecoverPoint 3.4 and later    Rejected
                           Prior to RecoverPoint 3.4     Unsupported
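The rules in Table 4 reduce to a single condition on the RecoverPoint version. The following is a minimal sketch (in Python, with versions represented as tuples; the function name is illustrative) of that lookup:

```python
# Encodes Table 4: every listed VAAI command is "Rejected" by the Brocade
# splitter in RecoverPoint 3.4 and later, and "Unsupported" before 3.4.
VAAI_COMMANDS = ("Hardware-assisted locking", "Block Zeroing", "Full Copy")

def vaai_behavior(command, rp_version):
    """Return the splitter's VAAI behavior for a command and RecoverPoint
    version, where rp_version is a (major, minor) tuple such as (3, 4)."""
    if command not in VAAI_COMMANDS:
        raise ValueError("unknown VAAI command: %s" % command)
    return "Rejected" if rp_version >= (3, 4) else "Unsupported"

# Prior to 3.4 the commands are unsupported and must be disabled on ESX hosts.
assert vaai_behavior("Full Copy", (3, 2)) == "Unsupported"
# From 3.4 on they are rejected, and ESX reverts to non-VAAI behavior.
assert vaai_behavior("Block Zeroing", (3, 4)) == "Rejected"
```

Because a rejected command causes ESX to revert to non-VAAI behavior automatically, only pre-3.4 deployments require the commands to be disabled manually on the ESX hosts.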


For RecoverPoint 3.1 and later:

AVT ITLs = Host ITLs (for which LUNs support reservations)

Example 1: If, when using one DPC, the configuration includes:

◆ 4 RPA initiator ports

◆ 2 host initiators, each with access to the same 1000 LUNs

◆ 700 of the LUNs support reservations

Then the total number of ITLs is as follows:

For RecoverPoint 3.0 (including 3.0 Service Packs) and earlier:

ITLs = [(2 host initiators) x (1000 LUNs)] + [(2 host initiators) x (700 host ITLs for which LUNs support reservations) x (4 RPA initiator ports)]

= 2000 + 5600

= 7600

For RecoverPoint 3.1 and later:

ITLs = [(2 host initiators) x (1000 LUNs)] + [(2 host initiators) x (700 host ITLs for which LUNs support reservations)]

= 2000 + 1400

= 3400

Example 2: If, however, using frame redirection, you are able to use both DPCs, then ITLs are calculated separately for each DPC. Hence, if the configuration in the first example is changed, as follows:

◆ 500 (of the 1000 LUNs in the first example) are exposed to the hosts through DPC 0, and the remaining 500 are exposed through DPC 1

◆ All 500 of the LUNs exposed through DPC 0 support reservations; 200 of the LUNs exposed through DPC 1 support reservations

Then the total number of ITLs is as follows:

For RecoverPoint 3.0 (including 3.0 Service Packs) and earlier:

ITLs = {ITLs for DPC 0} + {ITLs for DPC 1}


= {[(2 host initiators) x (500 LUNs)] + [(1000 host ITLs for which LUNs support reservations) x (4 RPA initiator ports)]} + {[(2 host initiators) x (500 LUNs)] + [(400 host ITLs for which LUNs support reservations) x (4 RPA initiator ports)]}

= {1000 + 4000} + {1000 + 1600}

= 7600

For RecoverPoint 3.1 and later:

ITLs = {ITLs for DPC 0} + {ITLs for DPC 1}

= {[(2 host initiators) x (500 LUNs)] + (1000 host ITLs for which LUNs support reservations)} + {[(2 host initiators) x (500 LUNs)] + (400 host ITLs for which LUNs support reservations)}

= {1000 + 1000} + {1000 + 400}

= 3400
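The ITL arithmetic in the two examples above can be checked with a short helper that encodes both formulas (a sketch; per-DPC inputs are passed in directly):

```python
def total_itls(host_itls, reservation_itls, rpa_initiator_ports, pre_3_1):
    """Total ITLs for one DPC.

    host_itls        -- host initiators x LUNs each can access
    reservation_itls -- host ITLs whose LUNs support reservations
    pre_3_1          -- True for RecoverPoint 3.0 (and its service packs) and
                        earlier, where AVT ITLs are multiplied by the number
                        of RPA initiator ports
    """
    avt_itls = reservation_itls * (rpa_initiator_ports if pre_3_1 else 1)
    return host_itls + avt_itls

# Example 1: one DPC, 2 host initiators, 1000 LUNs, 700 supporting reservations
assert total_itls(2 * 1000, 2 * 700, 4, pre_3_1=True) == 7600
assert total_itls(2 * 1000, 2 * 700, 4, pre_3_1=False) == 3400

# Example 2: ITLs are computed separately for each DPC, then summed
assert total_itls(2 * 500, 2 * 500, 4, True) + total_itls(2 * 500, 2 * 200, 4, True) == 7600
assert total_itls(2 * 500, 2 * 500, 4, False) + total_itls(2 * 500, 2 * 200, 4, False) == 3400
```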

Beginning with RecoverPoint 3.0 SP1, the system displays warnings whenever your system approaches or exceeds scalability limits for ITLs, virtual entities, and total LUNs. Beginning with RecoverPoint 3.2, real-time ITL usage can be monitored in the Management Application GUI, from the Splitters tab of the System Monitoring component.

Removing unused ITLs

The splitter automatically removes unused ITLs. This occurs when a physical target no longer exposes its LUN to a physical initiator. When an initiator-target pair is unbound, all ITLs associated with that initiator-target pair are deleted by the splitter.

Solution approval

All Brocade splitter solutions require EMC approval.

◆ From the EMC Support Wiki, select RecoverPoint > Change Control > RecoverPoint Change Control Process. Download and fill out both the New Install Checklist and the Change Control Request Form.

◆ Create a Brocade SAN Health report to submit with the application for approval. For instructions, refer to “SAN Health Report” on page 60.

◆ Submit the New Install Checklist, the Change Control Request Form, and the SAN Health report by email to the RecoverPoint GL-Change Control list.


Useful Connectrix Control Processor commands

The following commands are available through the Control Processor CLI:

◆ nsshow

Shows items currently in the switch name server. For each target, shows its PWWN, NWWN, and symbolic name. Also shows virtual initiators (“Brocade virtual Initiator-slot#,DPC#”) and virtual targets (“Brocade virtual target-slot#,DPC#”).

◆ switchshow

Shows switch information, including port information (WWN connected to the port, port status).

◆ cfgshow

Shows zoning configuration, including active and passive configuration and the zones included in each.

◆ firmwareshow

Shows the version number for the Fabric Operating System (FOS) installed on the Control Processor and the Storage Application Services (SAS) installed on the Blade Processor.

◆ ipaddrshow

Shows the IP address of the Control Processor module and Blade Processor module.

◆ ipaddrset

Sets the IP address of the Control Processor module and Blade Processor module.

◆ firmwaredownload

Used to install the Fabric Operating System (FOS) on the Control Processor and the Storage Application Services (SAS) on the Blade Processor.

Note: When using this command to upgrade the Storage Application Services (SAS) version, virtualized switch traffic (that is, all traffic between virtual initiators and virtual targets) is disrupted.


◆ supportshow

Outputs a range of diagnostic information about the switch configuration and status. It is generally recommended to direct this output to a file.

◆ diagDisablePost

Disables the Power-On Self-Test (POST) when the switch is rebooted. The mode state is saved in flash memory, and POST remains disabled until it is enabled using diagEnablePost. Diagnostic POST should be disabled before upgrading Fabric Operating System (FOS) or Storage Application Services (SAS) firmware.

Splitter identification

The RecoverPoint splitter agent behaves as a standard splitter in the RecoverPoint Management Application or CLI. The OS type for the splitter is designated as Brocade AP. The splitter name is set during agent installation, as described in “RecoverPoint deployment procedure”.

The format of the splitter name is as follows:

SW_name_SW_IPaddress

where:

SW_name is the name of the Blade Processor that you specified during agent installation; it can be obtained by running hostname on the Blade Processor.

SW_IPaddress is the IP address of the Blade Processor (at time of agent installation); it can be obtained by running ifconfig on the Blade Processor.
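As an illustration, the splitter name can be assembled from those two values; the underscore separator follows the CLI example shown later in this document (sabre_172.16.0.17):

```python
def splitter_name(sw_name, sw_ip):
    """Build the splitter name from the Blade Processor hostname (as
    reported by `hostname`) and its IP address at agent-install time
    (as reported by `ifconfig`)."""
    return "%s_%s" % (sw_name, sw_ip)

assert splitter_name("sabre", "172.16.0.17") == "sabre_172.16.0.17"
```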

To determine the exact version of the splitter agent, run the following command on the Blade Processor:

> cat /thirdparty/recoverpoint/tweak/tweak.params.version

RecoverPoint deployment procedure

This section outlines the procedure for deploying RecoverPoint with the Brocade splitter.

Preparing for installation

To prepare for deploying RecoverPoint in a Brocade splitter environment:


1. Ensure that the host-storage zones exist, and that LUN masking from the targets to the initiators is defined.

2. Verify that every host can access every presented LUN through all available paths.

Installing RecoverPoint appliances

To install RecoverPoint appliances:

1. Physically connect the RPAs to the Connectrix devices.

2. Create the RPA initiator zone on each fabric, and add all RPA initiators and relevant storage targets on the fabric to that zone.

3. Install RecoverPoint on the RPAs. The EMC RecoverPoint Deployment Manager Product Guide provides RPA installation instructions.

Installing FOS and SAS

The Fabric Operating System (FOS) and Storage Application Services (SAS) combination must be supported by the version of RecoverPoint to which you are upgrading. Refer to the EMC Support Matrix for your version of RecoverPoint for supported Storage Application Services (SAS) and Fabric Operating System (FOS) combinations.

1. On the Control Processor, run the command:

> diagDisablePost

Disable Diagnostic Power-On Self-Test (POST) before upgrading either Fabric Operating System (FOS) or Storage Application Services (SAS) firmware. Failing to disable POST may cause the switch to go into an error state when upgrading firmware.

2. Upgrade the Fabric Operating System (FOS) and Storage Application Services (SAS) using the following procedure.

CAUTION! Upgrade the Fabric Operating System (FOS) and Storage Application Services (SAS) on all switches and blades, one fabric at a time, to avoid risking downtime.

a. To upgrade the FOS, on the Control Processor, use the firmwaredownload -c command. During the upgrade, the FOS may be temporarily incompatible with the SAS until the SAS is upgraded. The -c option disables the FOS/SAS compatibility check.


b. To upgrade the SAS, on the Control Processor, use the firmwaredownload command.

Run the firmwaredownloadstatus command several times until the following message appears:

(SAS) The internal firmware image is relocated successfully.

At the conclusion of the firmware upgrade, the Blade Processor reboots automatically.

c. Run the firmwareshow command to verify that the correct FOS and SAS version are installed on both partitions.

3. If you wish to enable Diagnostic Power-On Self-Tests (optional), on the Control Processor, run the command:

> diagEnablePost

4. It is recommended to clear all port statistics. On the Control Processor, run the following command:

> slotstatsclear

Control Processor and Blade Processor IP addresses

The Control Processor and Blade Processor must have unique IP addresses. Use the following procedure to display both IP addresses and assign a unique IP address to the Blade Processor.

Example To display the IP addresses for Control Processor and Blade Processor:

1. Log in to Control Processor as admin user.

2. Run ipaddrshow command:

<switch name>:admin> ipaddrshow

The output is in the following format:

<switch name>
Ethernet IP Address: <Control Processor IP Address>
Ethernet Subnetmask: <Control Processor Subnet Mask>
Fibre Channel IP Address: none
Fibre Channel Subnetmask: none
Gateway IP Address: <Control Processor Gateway>
DHCP: Off
eth0: <Blade Processor IP Address>
eth1: none/none
Gateway: <Blade Processor Gateway>
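When scripting this procedure, the Blade Processor address can be extracted from the ipaddrshow output. The following is a minimal sketch that assumes the field-per-line layout shown above (sample addresses are placeholders):

```python
def blade_processor_ip(ipaddrshow_output):
    """Extract the Blade Processor (eth0) address from ipaddrshow output.

    Assumes one field per line; returns None when no eth0 line is found
    or the field is empty.
    """
    for line in ipaddrshow_output.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "eth0":
            return value.strip() or None
    return None

# Sample output with placeholder addresses.
sample = """switch1
Ethernet IP Address: 192.168.10.20
Ethernet Subnetmask: 255.255.255.0
Gateway IP Address: 192.168.10.1
DHCP: Off
eth0: 192.168.10.21
eth1: none/none
Gateway: 192.168.10.1"""

assert blade_processor_ip(sample) == "192.168.10.21"
```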


The Blade Processor IP address must be assigned to the eth0 network interface; eth1 is not required.

3. Give the Blade Processor a unique IP address and assign it to network interface eth0:

<switch name>:admin> ipaddrset --add <Blade Processor IP Address> -eth0

4. If needed, assign an IP address to the Blade Processor gateway:

<switch name>:admin> ipaddrset --add <Blade Processor Gateway> -gate

5. Confirm the IP configuration:

<switch name>:admin> ipaddrshow

6. Check the connectivity to the Blade Processor:

Use ssh and log in as root.

The EMC Connectrix B Series Fabric OS Command Reference Manual provides detailed information about these, and all other, switch CLI commands.

Installing the splitter agent

After installing Fabric Operating System (FOS) and Storage Application Services (SAS), install the splitter agent. Use the following procedure.

For FOS 6.2.x (supported with RecoverPoint 3.2.x), perform the following first:

If working in frame redirection mode, verify that the default zone setting is “No Access”. To do so, log in to the Control Processor module, and run:

> defzone --show

Note: This setting must be set to the correct value before installing the RecoverPoint splitter agent.

To change the defzone setting to No Access, use the following commands:

> defzone --noaccess
> cfgsave


1. Install the RecoverPoint splitter agent:

a. Download the RecoverPoint splitter agent installation file from Powerlink at:

http://Powerlink.EMC.com

and copy the installation file onto the Blade Processor.

To do so:

1. Log in to the Blade Processor as root.

2. Use ftp to download the RecoverPoint splitter agent binary file to the /tmp directory:

> cd /tmp
> ftp ip_address
ftp> bin
ftp> get agent_binary_file
ftp> bye

b. Run the installation package.

The installation process verifies that the FOS and the SAS are compatible with the splitter agent. If a mismatch is found, the correct versions are displayed on screen and the installation is aborted.

At the Blade Processor, use the following commands:

> cd /tmp
> chmod +x agent_binary_file
> ./agent_binary_file

This extracts the RecoverPoint splitter agent files under /thirdparty/recoverpoint.

c. When prompted, enter the hostname for the splitter agent.

The _IPaddress suffix is automatically appended to the name you enter here. The name with the appended IP address is the name of the splitter as displayed by the RecoverPoint Management Application.

d. In RecoverPoint 3.2.x, you are prompted to enter the admin user password for the Control Processor. This enables the installer to determine the Default Zone setting of the fabric during splitter installation. Although you can skip this prompt (press Enter), it is recommended that you enter the password to allow verification of this setting. The installation will be aborted if an incorrect setting is detected.

2. Create the RPA target zone and RPA front-end zone as follows:

a. On the Blade Processor, run the script:

> /thirdparty/recoverpoint/install/zoning_script.sh

The script does the following:

– Creates the RPA target zone (hostname_FR_RPA_Target_Zone) and the RPA front-end zone (hostname_RPA_Front_End_Zone)

– Adds the system virtual initiator, and all possible virtual initiator WWNs to the RPA target zone

– Adds all appliance virtual targets (AVTs) to the RPA front-end zone

– In RecoverPoint 3.2 and later, adds RPA ports to both zones.

Instead of using the script, you can create those zones manually, using the WWNs list and zone creation examples located in:

/thirdparty/recoverpoint/init_host/scimitar_wwns_list.txt

If upgrading to a RecoverPoint release earlier than 3.2, add RPA ports to the appropriate zone.

– If you are using initiator-target separation, add RPA target ports to the RPA target zone and RPA initiator ports to the RPA initiator zone.

– If you are using ports that can serve as both initiators and targets, add all RPA ports to both the RPA target zone and the RPA initiator zone.

If upgrading to RecoverPoint release 3.2.x and you are using initiator-target separation, remove RPA initiator ports from the RPA target zone and RPA target ports from the RPA initiator zone.

In RecoverPoint 3.2 or later, if multiple RPA clusters are connected to the same fabric, the zoning script adds the RPA ports of all clusters to the created zones. Remove from the created RPA zones any RPA ports that should not be included.


b. Add both zones to the effective configuration, and re-enable the configuration.

3. Reboot the Blade Processor. Log in to the Blade Processor and run the following command:

> reboot

When the Blade Processor comes up, the RecoverPoint splitter agent should be activated. You can verify that it is activated by using the kdrv status command (“Running the RecoverPoint splitter agent” on page 32).

Configuring RecoverPoint and splitters

At this point, the RecoverPoint appliances, the Fabric Operating System (FOS), the Storage Application Services (SAS), and the splitter agents have been installed. The next step is to add splitters and create bindings.

1. Run the rescan_san command with parameter volumes=full to rescan all volumes in the SAN, including those that have been changed. This action may take several minutes to complete.

2. Add all splitters.

3. Use the following instructions to create bindings.

CAUTION! Brocade splitter binding and unbinding causes the FC ID of affected HBAs to change. HP-UX hosts without agile addressing and AIX hosts without dynamic tracking cannot automatically rediscover the paths to storage if the FC ID is changed. To discover the new paths, it may be necessary to reconfigure the hosts, rescan the SAN, or take other action to have the hosts recognize the new paths to storage. Host downtime may be required.

For more information about FC ID changes with AIX hosts, refer to the following:

• EMC Host Connectivity Guide for IBM AIX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)

• EMC knowledge base solutions emc91523 and emc115725


• Deploying RecoverPoint with AIX Hosts Technical Notes

(powerlink.emc.com, Support > Technical Documentation and Advisories > Software ~P–R~ Documentation > RecoverPoint > Technical Notes/Troubleshooting)

For more information about FC ID changes with HP-UX hosts, refer to the following:

• EMC Host Connectivity Guide for HP-UX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)

• EMC knowledge base solution emc199817

CAUTION! Create bindings one fabric at a time to avoid risking downtime. After adding bindings to a fabric, ensure that the hosts still have access to targets.

Both the initiator and the target must remain available on the fabric.

4. Use the bind_host_initiators CLI command to configure host binding to storage targets, as in the following example:

s1-bos> bind_host_initiators site=s1-bos splitter=sabre_172.16.0.17 host_initiators=10000000c92ab3bd target=500611111111 frame_redirect=yes

Initiator binding(s) successfully added.

Alternatively, you can add frame redirection bindings in the RecoverPoint Management Application, by using Brocade bindings, under Splitter Properties.

Whether you use the CLI or the Management Application, this operation creates the virtual initiators and virtual targets that correspond to the host initiators and storage targets involved, enabling the frame redirection mechanism to direct data frames from the initiator to the target through the virtual entities. Each initiator should be bound to all targets that expose protected LUNs to it. You can bind multiple initiators to multiple targets.


A special zone configuration, r_e_d_i_r_c__fg, is created. This zone configuration contains the red_______base zone. The other zones in the zone configuration (whose names start with lsan_red…) contain physical host initiator and physical storage target pWWNs; and the corresponding virtual initiators and virtual targets. Each lsan_red… zone corresponds to a frame redirection binding. Under normal circumstances, r_e_d_i_r_c__fg should be managed only through RecoverPoint and should not be altered manually.

5. After adding bindings to the fabric, ensure that the hosts still have access to targets.

Repeat Step 3 and Step 4 for each splitter.

6. Create consistency groups and configure RecoverPoint replication policies. Refer to the EMC RecoverPoint Administrator’s Guide, chapter “Starting Replication,” sections “Creating new consistency groups” and “Configuring Replication Policies.”

7. Restart replication.

CAUTION! Enabling the consistency groups will trigger a full sweep. If fast first-time initialization is enabled, the last complete and consistent image on the storage LUN will become inconsistent soon after the full sweep starts. However, if fast first-time initialization is disabled, the last complete and consistent image will be preserved as long as sufficient space is available in the replica journal.

8. Enable the consistency groups.

RecoverPoint splitter agent life cycle

This section presents information specific to the RecoverPoint splitter agent, which is installed on the Blade Processor module.

Note: In RecoverPoint 3.1 and later, the RecoverPoint splitter agent runs separate “low-level” and “high-level” processes. In earlier RecoverPoint releases, the agent runs a single process, which can be considered the “low-level” process.


Running the RecoverPoint splitter agent

The following commands are useful when running the RecoverPoint splitter agent on the Blade Processor module in the Brocade splitter environment:

◆ To start the agent’s high-level process (RecoverPoint 3.1 or later), assuming the low-level process is already running:

/thirdparty/recoverpoint/install/kdrv start_high

Note: This command starts the high-level process only. It is not recommended to use the kdrv start command to manually start the low-level process. For an explanation of the high-level and low-level processes, see “When the RecoverPoint splitter agent is stopped or crashes” on page 32.

The splitter agent verifies that it is compatible with the FOS and the SAS. If a mismatch is found, the splitter agent will crash.

◆ To check agent status:

/thirdparty/recoverpoint/install/kdrv status

Note: In RecoverPoint 3.1 and later, this command reports the status of both the low-level and high-level processes.

◆ To stop the agent:

/thirdparty/recoverpoint/install/kdrv stop

When the RecoverPoint splitter agent is stopped or crashes

As noted earlier in this section, the RecoverPoint splitter agent for RecoverPoint 3.1 (and later) runs separate “low-level” and “high-level” processes, whereas the agent for earlier RecoverPoint releases runs a single process, which can be considered the “low-level” process.

Splitter agent stopped in RecoverPoint 3.2 and earlier

If for any reason the low-level process crashes, it cannot simply be reactivated. In this respect, crashing and stopping the RecoverPoint splitter agent are equivalent: in both cases, virtual targets and virtual initiators created by the RecoverPoint splitter agent continue to exist after the agent stops or crashes.


To clear all the virtual targets and virtual initiators, the Blade Processor reboots automatically following a crash, except in the following conditions:

◆ The agent was stopped manually, using the kdrv stop command, to allow maintenance or upgrade of the switch.

◆ The agent detected and aborted a “reboot cycle”, in which the agent repeatedly crashed upon initialization after reboots.

The cause of this problem should be corrected before the switch is manually rebooted.

If the high-level process (RecoverPoint 3.1 and later) crashes, while the low-level process is functioning, the high-level process is automatically reactivated, without affecting the host data path. The high-level process is not reactivated if:

◆ The low-level process was stopped (either manually or by an error), and the switch has not yet rebooted.

◆ The agent detects a “restart cycle”, in which the high-level process repeatedly crashes upon initialization. In this case, the high-level process will remain down, while the low-level process may continue functioning. If the cause of the problem is fixed without a switch reboot, the high-level process can be reactivated manually using the kdrv start_high command.

Splitter agent stopped in RecoverPoint 3.3 and later

If the low-level process crashes, the splitter attempts to recover from the crash by importing the virtual entities as non-disruptively as possible. Recovering from a crash should not impact the I/O path from the hosts unless the platform itself crashes internally. If the attempt to recover non-disruptively fails, the Blade Processor reboots automatically, except in the following conditions:

◆ The agent was stopped manually, using the kdrv stop command, to allow maintenance or upgrade of the switch.

◆ The agent detected and aborted a “reboot cycle”, in which the agent repeatedly crashed upon initialization after reboots.

The cause of this problem should be corrected before the switch is manually rebooted.

If the high-level process crashes, while the low-level process is functioning, the high-level process is automatically reactivated, without affecting the host data path. The high-level process is not reactivated if:


◆ The low-level process was stopped (either manually or by an error), and the switch has not yet rebooted.

◆ The agent detects a “restart cycle”, in which the high-level process repeatedly crashes upon initialization. In this case, the high-level process will remain down, while the low-level process may continue functioning. If the cause of the problem is fixed without a switch reboot, the high-level process can be reactivated manually using the kdrv start_high command.

Upgrading the RecoverPoint splitter agent

The procedure for upgrading a RecoverPoint splitter agent is presented on page 55.

Uninstalling the RecoverPoint splitter agent

Use the procedure that applies to your version of RecoverPoint.

Uninstalling splitter agent in RecoverPoint 3.3 and later

To uninstall the RecoverPoint splitter agent in RecoverPoint 3.3 and later, on the Blade Processor, run the following script:

/thirdparty/recoverpoint/install/uninstall.sh

This script stops the splitter agent if necessary and removes all RecoverPoint files from the Blade Processor. The script also prompts the user about keeping splitter logs and persistent information before uninstalling. Logs and persistent information are kept on the Blade Processor in the directory /thirdparty/backup_logs.

Uninstalling splitter agent in RecoverPoint 3.2 and earlier

To uninstall the RecoverPoint splitter agent in RecoverPoint 3.2 and earlier:

1. Stop the RecoverPoint splitter agent.

2. Delete the /thirdparty/recoverpoint folder from the Blade Processor. Log in to the Blade Processor as root and use the following command:

> rm -rf /thirdparty/recoverpoint/

3. Reboot the Blade Processor. Log in to the Blade Processor and run the following command:

> reboot


Software and firmware upgrades

Before performing any upgrade procedure, change control approval is required. From the EMC Support Wiki, select RecoverPoint > Change Control > RecoverPoint Change Control Process. Download and fill out both the New Install Checklist and the Change Control Request Form.

The best practice is to run the Brocade SAN Health Diagnostics Capture. For instructions, refer to “SAN Health Report” on page 60.

In most cases, upgrading RecoverPoint in the Brocade environment also requires upgrading Fabric Operating System (FOS) and Storage Application Services (SAS).

Table 5 shows the types of Fabric Operating System (FOS) and Storage Application Services (SAS) upgrades as a function of the release of RecoverPoint before and after the upgrade.

In RecoverPoint 3.1 and later, a RecoverPoint upgrade can be done non-disruptively if and only if the upgrade is within the same major version or to the immediately following major version of RecoverPoint. For example, the upgrade from RecoverPoint 3.3.2 to RecoverPoint 3.4.2 may be performed non-disruptively.

If RecoverPoint must be upgraded two or more major versions (for instance, from 3.1 to 3.4), two options are available:

◆ Successive non-disruptive upgrades (3.1 --> 3.2 --> 3.3 --> 3.4); splitters must be upgraded to the appropriate version at each step.

◆ A disruptive upgrade in a single step (3.1 --> 3.4); if you choose this option, it must be approved in advance by change control.
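The choice between the two options can be sketched as a simple path computation (the release list is illustrative; the function encodes only the rule that a single major-version step can be non-disruptive):

```python
# Illustrative list of successive RecoverPoint major versions.
RELEASES = ["3.0", "3.1", "3.2", "3.3", "3.4"]

def upgrade_path(start, target, releases=RELEASES):
    """Return the single-step upgrades needed to move from start to target.

    A path of length 1 can be performed non-disruptively; a longer path
    means either successive non-disruptive upgrades (upgrading splitters at
    each step) or one disruptive upgrade approved by change control.
    """
    i, j = releases.index(start), releases.index(target)
    if j <= i:
        raise ValueError("target must be later than start")
    return [(releases[k], releases[k + 1]) for k in range(i, j)]

assert upgrade_path("3.1", "3.4") == [("3.1", "3.2"), ("3.2", "3.3"), ("3.3", "3.4")]
assert len(upgrade_path("3.3", "3.4")) == 1  # eligible for non-disruptive upgrade
```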

Use Table 5 to determine whether you need to do a disruptive or non-disruptive upgrade.

Table 5 Types of FOS and SAS upgrades

From                                      To                                         Procedure
Multi-VI mode                             Frame Redirection mode                     Migration
RecoverPoint 2.4.x, RecoverPoint 3.0.x    RecoverPoint 3.1.x or any later version    Disruptive


◆ For migration, follow the instructions in “Hardware replacement” on page 57.

◆ For disruptive upgrade, follow the instructions in “Disruptive upgrade”.

◆ For non-disruptive upgrade, follow the instructions in “Non-disruptive upgrade” on page 51.

Disruptive upgrade

A disruptive upgrade will impact the system in the following ways:

◆ Consistency groups will be disabled, replication will stop, and journals will be lost for the duration of the upgrade.

◆ Full sweep will be required after the upgrade.

◆ The last complete pre-upgrade image of each consistency group will be saved on the replica LUNs at least until the full sweep starts. During the full sweep, the last pre-upgrade image may be overwritten, leaving the system without a replica until the full sweep is completed.

◆ Bindings will need to be removed and recreated (one fabric at a time).

◆ Fabric Operating System (FOS) and Storage Application Services (SAS) will need to be upgraded (one fabric at a time) to a version that is supported by the release of RecoverPoint to which you are upgrading.

◆ The splitter agent must be upgraded to the same version as the release of RecoverPoint to which you are upgrading.

◆ Hosts are expected to maintain connectivity through multipathing throughout the upgrade procedure.

Note: If working in multi-VI mode, migrate to frame redirection mode as part of the disruptive upgrade procedure, if possible. If former McData switches are ISLed to the intelligent fabric, you must remain in multi-VI mode. Instructions for migrating to frame redirection mode are included in the disruptive upgrade procedure.

Upgrade preparation Before upgrading RecoverPoint software:

◆ Ensure that a secure shell (SSH) client is installed on your PC.

If your PC runs Microsoft Windows, use an SSH client such as PuTTY. If your PC runs Linux or UNIX, use the SSH client that comes with the operating system.

◆ Go to http://powerlink.emc.com to obtain the RecoverPoint ISO image you plan to use for an upgrade:

• For local upgrades, have the ISO image available on a disc:

– Verify the checksum against the md5sum listed on Powerlink.

Note: If running Microsoft Windows, use the md5sum.exe utility.

– Burn the ISO image onto a disc.

Note: To determine whether the ISO image can fit on a CD or DVD, refer to the ISO image size detailed in the EMC RecoverPoint and RecoverPoint/SE Release Notes.

• For remote upgrades, download the image from a local FTP server. To download the image:

a. Use an SSH client to connect to the RPA’s IP address.

b. If using RecoverPoint 2.4.x, 3.0.x, or 3.1.x, log in as boxmgmt with the password boxmgmt.

The Main menu of the Installation Manager appears.

c. From the Main menu, select Installation > Upgrade Wizard > Download ISO. Then enter the IP address of the FTP server, the port number to connect to on the FTP server (default is 21), the FTP user name, the FTP password, the location of the file on the FTP server, and the name of the file.

The file will begin downloading.
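The checksum verification in the local-upgrade preparation above can be scripted. The following is a minimal sketch, assuming md5sum is available; the file name demo.iso and its contents are placeholders for illustration, and in a real run the expected value must be the md5sum published on Powerlink (here it is the well-known checksum of an empty file, so the demo passes):

```shell
# Compare a downloaded file's md5sum against the value published on Powerlink.
verify_md5() {
  actual=$(md5sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH: got $actual" >&2
    return 1
  fi
}

# Demo with an empty placeholder file; d41d8... is the md5sum of empty input.
: > demo.iso
verify_md5 demo.iso d41d8cd98f00b204e9800998ecf8427e
```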

To save system settings:

1. Ensure all RPAs are working correctly.

In the RPA tab of the RecoverPoint Management Application, ensure that all RPAs that you plan to update are functioning properly. If any error conditions exist, correct them before continuing.

2. Use PuTTY, Plink, or ssh to save the account settings (account ID, license key, activation code):

• Using PuTTY:

a. Connect to the CLI using the Site Management IP address. In the CLI, type the get_system_settings command:

• If you connect from the same site as the RPA, record the Box management IP Address.

• If you are across the WAN from the RPA, record the WAN IP Address.

b. Run PuTTY, using the IP address recorded from the get_system_settings command.

c. Select Session > Logging and activate Log printable output only.

d. Click Browse to specify a location for PuTTY log.

e. At the login prompt, log in as admin.

f. At the command line, type the get_account_settings command.

The account settings are written to the PuTTY session log file.

• Using Plink:

In a console window, type the following command:

plink -ssh <site management IP> -l admin get_account_settings > get_account_settings.txt

where admin is the administrator login (admin by default).

The account settings are written to get_account_settings.txt.

• Using ssh:

At the shell prompt, enter the following command:

ssh site_management_IP -l admin get_account_settings > get_account_settings.txt

The account settings are written to get_account_settings.txt.

3. Use Plink or ssh to save the current settings to a file. You will need the bindings section in this file to recreate the bindings:

a. From a UNIX host:

$ ssh site_management_IP -l admin save_settings -n > save_settings1.txt

b. From a Windows host:

> plink -ssh site_management_IP -l admin -pw password save_settings -n > save_settings1.txt

To verify that the command executed successfully, open save_settings1.txt.

If you type the command from the RecoverPoint prompt, the settings are output directly to the screen.
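As a scripted alternative to opening the file by hand, the check can be automated. This is a sketch only; the helper name check_settings_file and the stand-in file contents are illustrative, not part of the RecoverPoint CLI:

```shell
# Fail loudly if a saved-settings file is missing or empty.
check_settings_file() {
  if [ -s "$1" ]; then
    echo "settings saved: $1"
  else
    echo "ERROR: $1 is missing or empty" >&2
    return 1
  fi
}

# Demo with a stand-in file (a real run checks the file written by save_settings):
echo 'bind_host_initiator ...' > save_settings1.txt
check_settings_file save_settings1.txt
```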

4. Log in to http://rplicense.emc.com, using the Account ID and License Key saved in Step 2 in “Upgrade preparation”. Request a release upgrade.

Disabling replication and splitters

Perform the following steps in the Management Application:

1. Disable all consistency groups.

CAUTION! Clicking Yes stops all replication, deletes journals, and causes a full sweep once the group is enabled again.

2. Detach all replication volumes from splitters.

3. Use the following instructions to remove bindings.

CAUTION! Brocade splitter binding and unbinding causes the FC ID of affected HBAs to change. HP-UX hosts without agile addressing and AIX hosts without dynamic tracking cannot automatically rediscover the paths to storage if the FC ID is changed. To discover the new paths, it may be necessary to reconfigure the hosts, rescan the SAN, or take other action to have the hosts recognize the new paths to storage. Host downtime may be required.

For more information about FC ID changes with AIX hosts, refer to the following:

• EMC Host Connectivity Guide for IBM AIX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)

• EMC knowledge base solutions emc91523 and emc115725

• Deploying RecoverPoint with AIX Hosts Technical Notes

(powerlink.emc.com, Support > Technical Documentation and Advisories > Software ~P–R~ Documentation > RecoverPoint > Technical Notes/Troubleshooting)

For more information about FC ID changes with HP-UX hosts, refer to the following:

• EMC Host Connectivity Guide for HP-UX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)

• EMC Knowledge base solution emc199817.

CAUTION! To avoid risking downtime, do not remove all bindings simultaneously. After removing bindings from a fabric, ensure that the hosts still have access to targets.

Remove bindings for one splitter at a time:

a. In the RecoverPoint Management Application, select the Brocade splitter, and remove the bindings.

b. Ensure that the hosts still have access to targets. Verify I/O capability on all paths.

c. Verify that the frame redirection zones have been removed from the fabric:

> cfgshow | grep lsan

If the switch returns lsan zones, then frame redirection zones still exist and must be removed manually:

At the switch command line or GUI, remove all lsan zones except red_______base from r_e_d_i_r_c__fg zone configuration.

In FOS 6.1.x and earlier, use the following command:

> cfgremove "r_e_d_i_r_c__fg", "lsan_zone_name1[; lsan_zone_name2; lsan_zone_name3…]"
> zonedelete "lsan_zone_name1"
> zonedelete "lsan_zone_name2"
…
> cfgsave

In FOS 6.2.x and later, use the following command:

> zone --rddelete lsan_zone_name1
> zone --rddelete lsan_zone_name2
…
> cfgsave
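When many lsan zones must be removed, the per-zone deletions can be generated by a small dry-run helper. The sketch below only prints the FOS 6.2.x commands for review before pasting them into the switch CLI; nothing in it connects to the switch, and the zone names are placeholders:

```shell
# Dry run: print the FOS 6.2.x commands needed to remove a list of lsan
# zones, ending with cfgsave. Review the output, then paste it into the
# switch command line. This script does not talk to the switch itself.
emit_rddelete_cmds() {
  for zone in "$@"; do
    printf 'zone --rddelete %s\n' "$zone"
  done
  printf 'cfgsave\n'
}

emit_rddelete_cmds lsan_zone_name1 lsan_zone_name2
```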

The bindings were saved in save_settings1.txt, created in Step 3 in “Upgrade preparation”.

Repeat Step 3 for each splitter.

4. Remove all splitters.

Note: At this point, RecoverPoint and switch components are disconnected from each other, and they can be updated simultaneously or consecutively. Continue upgrading RecoverPoint appliances in “Upgrading RecoverPoint appliances” on page 41. Continue the switch component upgrade according to the instructions in “Upgrading FOS and SAS” on page 44.

Upgrading RecoverPoint appliances

To upgrade the RecoverPoint appliances:

1. Use Plink or ssh to again save the settings after consistency groups are disabled and splitters are removed. You will use this file, which is a CLI script, to recreate the RecoverPoint configuration:

a. To type the command from a UNIX host:

$ ssh site_management_IP -l admin save_settings -n > save_settings2.txt

b. To type the command from a Windows host:

C:\ plink -ssh site_management_IP -l admin -pw password save_settings -n > save_settings2.txt

To verify that the command executed successfully, open save_settings2.txt.

If you type the command from the RecoverPoint prompt, the settings are output directly to the screen.

2. Detach RPAs from the RPA cluster.

3. Start an RPA from the ISO image that you prepared in “Upgrade preparation” on page 36.

Wait until the software loads. If using a disc, wait until it ejects.

4. Restart the RPA.

5. Repeat steps 3–4 for every RPA in the cluster.

6. Run the Auto Upgrade wizard on every RPA:

a. Access the RPA by using an SSH client to connect to the RPA’s IP address.

b. At the prompt, login as boxmgmt with the password boxmgmt.

c. When the Installation Manager appears, you are prompted:

– To configure the RPA management IP address (optional).
– To enter the number of sites in the replication environment.
– To enter the number of RPAs at the site.

Note: You must install at least two RPAs at each site, and all sites must have the same number of RPAs.

– Whether you want to enable replication over Fibre Channel.

Note: This prompt only appears for RPA platforms of type: Dell PowerEdge 1950 or 2950 phase 3 running QLogic QLE24xx (Gen3) or Dell R610 running QLE-2564 (Gen4) HBAs.

d. From the Main menu, select Installation > Upgrade wizard > Auto upgrade.

Auto upgrade is available only if you have not yet used the Apply command to apply RPA settings, and you have not yet attached the RPA to the RPA cluster. Both of these will be done automatically by the Auto Upgrade wizard.

e. Type y (yes) when you are queried “Are you sure you want to perform auto upgrade? (y/n)”.

f. Type y (yes) once the machine finishes performing a SAN diagnostics test.

g. Type y (yes) when you are queried “Do you want to apply these settings? (y/n)”.

h. A public/private DSA key pair is generated; the locations of the identification and the public key, and the key fingerprint, are displayed.

You are queried “Did you already format the repository volume on this site? (y/n)”. Since the repository volume must be formatted exactly once at each site:

– Type n (no) for the first RPA at any site.
– Type y (yes) for any subsequent RPAs at that site.

Note: As of 3.1, the repository volume size should be 3 GB.

i. The RPA restarts and is attached to the RPA cluster. The RPA should become fully functional.

Updating license information

Once RPAs are fully functional and communicating with each other, a new installation ID will be generated.

1. Log in to http://rplicense.emc.com.

2. Use the Account ID and License Key saved in Step 2 in “Upgrade preparation” to log in.

3. If the upgrade request was approved (Step 4 in “Upgrade preparation”), provide the new Installation ID to obtain a new Activation Code.

4. To activate the upgraded RecoverPoint license, enter Account ID, License Key, and new Activation Code.

Convert and load settings

The configuration settings saved in Step 1 in “Upgrading RecoverPoint appliances” are converted and loaded back into the RPA. To convert and load the settings:

1. Run the rescan_san command with parameter volumes=full to rescan all volumes in the SAN, including those that have been changed. This action may take several minutes to complete.

2. Run the convert_script command to convert the settings you saved in Step 1 in “Upgrading RecoverPoint appliances” (which do not include splitters and bindings) to the format of the current release. The version= parameter specifies the version of RecoverPoint you are converting from. For example, if you are upgrading from RecoverPoint 2.4, replace version=3.0 in the examples below with version=2.4.

a. To type the command from a UNIX host:

$ ssh site_management_IP -l admin convert_script version=3.0 < save_settings2.txt > converted_settings.txt

When prompted for password, enter admin (after the upgrade, the password is reset to the default).

b. To type the command from a Windows host:

C:\ plink -ssh site_management_IP -l admin -pw admin convert_script version=3.0 < save_settings2.txt > converted_settings.txt

To verify that the command executed successfully, open the file converted_settings.txt.

If you type the command from the RecoverPoint prompt, the settings are output directly to the screen.

3. Load converted_settings.txt into the system.

a. To type the command from a UNIX host:

$ ssh site_management_IP -l admin < converted_settings.txt

When prompted for password, enter admin (after the upgrade, the password is reset to the default).

b. To type the command from a Windows host:

C:\ plink -ssh site_management_IP -l admin -pw admin -m converted_settings.txt

Note: At this point, you cannot proceed with configuring RecoverPoint until the Fabric Operating System (FOS) and Storage Application Services (SAS) upgrade (“Upgrading FOS and SAS” on page 44) has been completed. After completing the Fabric Operating System (FOS) and Storage Application Services (SAS) upgrade and installing the splitter agent, continue at “Configuring RecoverPoint and splitters” on page 49.

Upgrading FOS and SAS

1. Remove the RPA front-end zone (hostname_RPA_Front_End_Zone) and RPA target zone (hostname_FR_RPA_Target_Zone) from the active zoning config. Delete or rename RPA front-end zone and RPA target zone. Renaming may be useful for future reference.

2. Stop the RecoverPoint splitter agent. Run the following command from the Blade Processor:

> /thirdparty/recoverpoint/install/kdrv stop

3. Delete the /thirdparty/recoverpoint folder from the Blade Processor. Log in to the Blade Processor as root and use the following command:

> rm -rf /thirdparty/recoverpoint/

4. Save the switch configuration to an off-switch location using the configupload command.

5. On the Control Processor, run the command:

> diagDisablePost

Disable Diagnostic Power-On Self-Test (POST) before upgrading either Fabric Operating System (FOS) or Storage Application Services (SAS) firmware. Failing to disable POSTs may cause the switch to go into an error state when upgrading firmware.

6. Upgrade the Fabric Operating System (FOS) and Storage Application Services (SAS) using the following procedure.

CAUTION! Upgrade the Fabric Operating System (FOS) and Storage Application Services (SAS) on all switches and blades, one fabric at a time, to avoid risking downtime.

The Fabric Operating System (FOS) and Storage Application Services (SAS) combination must be supported by the release of RecoverPoint to which you are upgrading. Refer to the release notes for your release of RecoverPoint for supported Storage Application Services (SAS) and Fabric Operating System (FOS) combinations.

a. To upgrade the FOS, on the Control Processor, use the firmwaredownload -c command. During the upgrade, the FOS may be temporarily incompatible with the SAS until the SAS is upgraded. The -c option disables the FOS/SAS compatibility check.

b. To upgrade the SAS, on the Control Processor, use the firmwaredownload command.

Run the firmwaredownloadstatus command several times until the following message appears:

(SAS) The internal firmware image is relocated successfully.

At conclusion of the firmware upgrade, the Blade Processor reboots automatically.

c. Run the firmwareshow command to verify that the correct FOS and SAS version are installed on both partitions.

7. If you wish to enable Diagnostic Power-On Self-Tests (optional), on the Control Processor, run the command:

> diagEnablePost

Reinstalling the splitter agent

After upgrading the Fabric Operating System (FOS) and Storage Application Services (SAS), reinstall the splitter agent using the following procedure.

1. If working in frame redirection mode, verify that the default zone setting is “No Access”.

To do so, log into the Control Processor module, and run:

> defzone --show

Note: This setting must be set to the correct value before installing the RecoverPoint splitter agent.

To change the defzone setting to No Access, use the following commands:

> defzone --noaccess
> cfgsave

2. Install the new RecoverPoint splitter agent:

a. Download the RecoverPoint splitter agent installation file from Powerlink at:

http://Powerlink.EMC.com

and copy the installation file onto the Blade Processor.

To do so:

1. Log in to the Blade Processor as root.
2. Use ftp to download the RecoverPoint splitter agent binary file to the /tmp directory:

> cd /tmp
> ftp ip_address
ftp> bin
ftp> get agent_binary_file

ftp> bye

b. Run the installation package.

The installation process verifies that the FOS and the SAS are compatible with the splitter agent. If a mismatch is found, the correct versions are displayed on screen and the installation is aborted.

Note: Do not use the upgrade parameter when re-installing the RecoverPoint splitter agent.

At the Blade Processor, use the following commands:

> cd /tmp
> chmod +x agent_binary_file
> ./agent_binary_file

This extracts the RecoverPoint splitter agent files under /thirdparty/recoverpoint.

c. When prompted, enter the same hostname for the splitter agent as was used before the upgrade.

An underscore and the IP address (_ip_address) are automatically appended to the name you enter here. The name with the appended _ip_address is the name of the splitter as displayed in the RecoverPoint Management Application.

d. In RecoverPoint 3.2.x, you are prompted to enter the user admin password for the Control Processor. This will enable you to determine the Default Zone setting of the fabric during splitter installation. Although you can skip this (press Enter), it is recommended that you enter the password to allow verification of this setting. The installation will be aborted if an incorrect setting is detected.

3. Create the RPA target zone and RPA front-end zone as follows:

a. On the Blade Processor, run the script:

> /thirdparty/recoverpoint/install/zoning_script.sh

The script does the following:

– Creates the RPA target zone (hostname_FR_RPA_Target_Zone) and the RPA front-end zone (hostname_RPA_Front_End_Zone).

– Adds the system virtual initiator, and all possible virtual initiator WWNs to the RPA target zone

– Adds all appliance virtual targets (AVTs) to the RPA front-end zone

– In RecoverPoint 3.2 and later, adds RPA ports to both zones.

Instead of using the script, you can create those zones manually, using the WWNs list and zone creation examples located in:

/thirdparty/recoverpoint/init_host/scimitar_wwns_list.txt

b. Add or remove RPA pWWNs to or from RPA zones as follows.

If upgrading to a RecoverPoint release earlier than 3.2, add RPA ports to the appropriate zone.

– If you are using initiator-target separation, add RPA target ports to the RPA target zone and RPA initiator ports to the RPA initiator zone.

– If you are using ports that can serve as both initiators and targets, add all RPA ports to both the RPA target zone and the RPA initiator zone.

If upgrading to RecoverPoint release 3.2.x and you are using initiator-target separation, remove RPA initiator ports from the RPA target zone and RPA target ports from the RPA initiator zone.

In RecoverPoint 3.2 or later, if multiple RPA clusters are connected to the same fabric, all RPA ports of all clusters will be added to the zoning scripts. Remove any RPA ports from the created RPA zones that should not be included.

c. Add both zones to the effective configuration, and re-enable the configuration.

4. Reboot the Blade Processor. Log in to the Blade Processor and run the following command:

> reboot

When the Blade Processor comes up, the RecoverPoint splitter agent should be activated. You can verify that it is activated by using the kdrv status command (“Running the RecoverPoint splitter agent” on page 32).

Configuring RecoverPoint and splitters

At this point, the RecoverPoint appliances have been upgraded; the settings have been converted and loaded (except for splitters and bindings); the Fabric Operating System (FOS) and Storage Application Services (SAS) have been upgraded; and the splitter agents have been reinstalled. The next step is to add splitters and recreate bindings.

1. Run the rescan_san command with parameter volumes=full to rescan all volumes in the SAN, including those that have been changed. This action may take several minutes to complete.

2. Add all splitters.

3. Use the following instructions to add bindings.

CAUTION! Brocade splitter binding and unbinding causes the FC ID of affected HBAs to change. HP-UX hosts without agile addressing and AIX hosts without dynamic tracking cannot automatically rediscover the paths to storage if the FC ID is changed. To discover the new paths, it may be necessary to reconfigure the hosts, rescan the SAN, or take other action to have the hosts recognize the new paths to storage. Host downtime may be required.

For more information about FC ID changes with AIX hosts, refer to the following:

• EMC Host Connectivity Guide for IBM AIX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)

• EMC knowledge base solutions emc91523 and emc115725

• Deploying RecoverPoint with AIX Hosts Technical Notes

(powerlink.emc.com, Support > Technical Documentation and Advisories > Software ~P–R~ Documentation > RecoverPoint > Technical Notes/Troubleshooting)

For more information about FC ID changes with HP-UX hosts, refer to the following:

• EMC Host Connectivity Guide for HP-UX

(powerlink.emc.com, Support > Technical Documentation and Advisories > Installation/Configuration > Host Connectivity/HBAs > HBAs Installation > Configuration)

• EMC Knowledge base solution emc199817.

CAUTION! To avoid risking downtime, do not load all bindings simultaneously. After adding bindings to a fabric, ensure that the hosts still have access to targets.

When working in frame redirection mode, both the initiator and the target must remain available on the fabric.

4. Load the bindings one splitter at a time, using the following procedure.

save_settings1.txt, which was saved at Step 3 in “Upgrade preparation”, contains all bindings. Locate the bind_host_initiator commands and copy them into separate files, one file per splitter. If the names of the splitters were changed, modify the bind_host_initiator commands to reflect the new splitter names.
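Splitting the saved bindings into per-splitter files can be done with grep. The following is a sketch; the splitter names and the sample settings lines are made up for illustration and do not reflect the exact bind_host_initiator parameter syntax:

```shell
# Create a small stand-in settings file for the demo (a real run uses the
# save_settings1.txt produced during upgrade preparation).
cat > save_settings1.txt <<'EOF'
config_system name=site1
bind_host_initiator splitter=splitterA host=10:00:00:00:c9:aa:bb:01
bind_host_initiator splitter=splitterB host=10:00:00:00:c9:aa:bb:02
bind_host_initiator splitter=splitterA host=10:00:00:00:c9:aa:bb:03
EOF

# Keep only the bind_host_initiator commands that mention splitterA, so they
# can be loaded for that splitter alone.
grep 'bind_host_initiator' save_settings1.txt | grep 'splitterA' \
  > bindings_splitterA.txt
```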

Use the following commands to load the bindings for one splitter at a time:

a. To type the command from a UNIX host:

$ ssh site_management_IP -l admin < bindings_splitter1.txt

b. To type the command from a Windows host:

C:\ plink -ssh site_management_IP -l admin -pw admin -m bindings_splitter1.txt

Continue with Step 5.

5. After adding bindings to the fabric, ensure that the hosts still have access to targets.

Repeat Steps 3 and 4 for each splitter.

6. Re-attach volumes to the Connectrix-based splitters.

7. Restart replication.

CAUTION! Enabling the consistency groups will trigger a full sweep. If fast first-time initialization is enabled, the last complete and consistent image on the storage LUN will become inconsistent soon after the full sweep starts. However, if fast first-time initialization is disabled, the last complete and consistent image will be preserved as long as sufficient space is available in the replica journal.

8. Enable the consistency groups.

Non-disruptive upgrade

Starting with RecoverPoint 3.1, RPAs are backward compatible with the previous version of the RecoverPoint splitter agent. As a result, RPAs can be upgraded first; the upgraded RPAs will not conflict with the pre-upgrade splitter agent, and the upgrade can be performed without disrupting service.

The following are true during a non-disruptive upgrade:

◆ Replication will continue as normal; that is, splitter settings are not lost, resynchronization is not required, and the RecoverPoint journal is preserved.

◆ All splitter and RPA persistent data will be preserved; therefore, there is no need to unbind and bind the host to the target during the upgrade.

◆ During the upgrade, only one host path will be available at a time. Host operation and configuration will not be affected, due to multipath capabilities of the host.

◆ FC IDs are not changed during the upgrade procedure. AIX and HP-UX hosts do not need to be rebooted, and SAN rescanning is not required.

Go to the appropriate procedure:

◆ “Non-disruptive upgrade of RecoverPoint with FOS or SAS upgrade” on page 52

◆ “Non-disruptive upgrade of RecoverPoint without FOS and SAS upgrade” on page 55

Non-disruptive upgrade of RecoverPoint with FOS or SAS upgrade

Use the following procedure to upgrade RecoverPoint and the Fabric Operating System or Storage Application Services. Also use this procedure if updating both the Fabric Operating System and the Storage Application Services.

1. If upgrading RecoverPoint, upgrade RecoverPoint on all RPAs now.

When upgrading to RecoverPoint 3.2 and later, refer to the EMC RecoverPoint Deployment Manager Product Guide, section “Upgrade RPA Software.”

When upgrading to RecoverPoint 3.1.x, refer to the EMC RecoverPoint 3.1 Installation Guide, section “Upgrading from Release 3.1 and higher.”

CAUTION! Perform the following procedure one fabric at a time, to avoid host downtime.

2. Stop the RecoverPoint splitter agent. Run the following command from the Blade Processor:

> /thirdparty/recoverpoint/install/kdrv stop

Even though the splitter agent has been stopped, the host path and the splitter are still functioning.

3. Save the splitter settings to an off-switch location. These settings will be used to restore the splitter configuration.

To save the files:

a. Run:

> /thirdparty/recoverpoint/install/save_settings.sh

This saves all the splitter settings to the /thirdparty/hostname_config.tar.gz backup file.

b. Use ftp to copy this backup file to a remote location:

> cd /thirdparty/
> ftp ip_address
ftp> bin
ftp> put hostname_config.tar.gz

ftp> bye
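Before depending on this backup, it can be worth confirming that the copied tarball is readable. The following is a sketch: the demo creates an empty stand-in archive, where a real run would instead test the hostname_config.tar.gz produced by save_settings.sh after copying it off the switch:

```shell
# Demo stand-in: an empty but valid gzip'd tar archive (GNU tar).
tar -czf hostname_config.tar.gz -T /dev/null

# List the archive contents; a corrupt download or copy fails this check.
if tar -tzf hostname_config.tar.gz > /dev/null 2>&1; then
  echo "backup archive is readable"
else
  echo "backup archive is corrupt" >&2
fi
```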

4. Save the switch configuration to an off-switch location using the configupload command.

5. On the Control Processor, run the command:

> diagDisablePost

Disable Diagnostic Power-On Self-Test (POST) before upgrading either Fabric Operating System (FOS) or Storage Application Services (SAS) firmware. Failing to disable POSTs may cause the switch to go into an error state when upgrading firmware.

CAUTION! Perform Step 6 to Step 11 one fabric at a time to avoid host downtime. This procedure causes hosts to lose connectivity with the storage and makes the splitter unavailable on this fabric. If the hosts have multipath capabilities, they can remain on-line by using the other fabric.

6. Upgrade Fabric Operating System (FOS), Storage Application Services (SAS), or both:

a. To upgrade the FOS, on the Control Processor, use the firmwaredownload -c command. During the upgrade, the FOS may be temporarily incompatible with the SAS until the SAS is upgraded. The -c option disables the FOS/SAS compatibility check.

b. To upgrade the SAS, on the Control Processor, use the firmwaredownload command. Run the firmwaredownloadstatus command several times until the following message appears:

(SAS) The internal firmware image is relocated successfully.

At conclusion of the firmware upgrade, the Blade Processor reboots automatically.

c. Run the firmwareshow command to verify that the correct FOS and SAS version are installed on both partitions.

7. Clean up the previous splitter agent installation:

If the /thirdparty/recoverpoint/ directory exists, the old agent files are still on disk, but the agent is not registered with the new Storage Application Services (SAS).

• To ensure a clean splitter agent installation, stop the old agent and delete the /thirdparty/recoverpoint directory. Run the following commands on the Blade Processor:

> /thirdparty/recoverpoint/install/kdrv stop
> rm -rf /thirdparty/recoverpoint/

8. Install the new RecoverPoint splitter agent:

a. Download the RecoverPoint splitter agent installation file from Powerlink at:

http://Powerlink.EMC.com

and copy the installation file onto the Blade Processor.

To do so:

1. Log in to the Blade Processor as root.
2. Use ftp to download the RecoverPoint splitter agent binary file to the /tmp directory:

> cd /tmp
> ftp ip_address
ftp> bin
ftp> get agent_binary_file
ftp> bye

b. Run the installation package.

The installation process verifies that the FOS and the SAS are compatible with the splitter agent. If a mismatch is found, the correct versions are displayed on screen and the installation is aborted.

Note: Do not use the -u parameter when re-installing the RecoverPoint splitter agent.

At the Blade Processor, use the following commands:

> cd /tmp
> chmod +x agent_binary_file
> ./agent_binary_file

This extracts the RecoverPoint splitter agent files under /thirdparty/recoverpoint.

c. When prompted, enter the same hostname for the splitter agent as was used before the upgrade.

An underscore and the IP address (_ip_address) are automatically appended to the name you enter here. The name with the appended _ip_address is the name of the splitter as displayed in the RecoverPoint Management Application.

d. In RecoverPoint 3.2.x, you are prompted to enter the admin user password for the Control Processor. This enables the installer to determine the Default Zone setting of the fabric during splitter installation. Although you can skip this prompt (press Enter), entering the password is recommended so that the setting can be verified. The installation is aborted if an incorrect setting is detected.

9. Restore to the switch the files that you saved in Step 3, as follows:

a. Copy the hostname_config.tar.gz file to the /thirdparty/ folder on the Blade Processor.

b. On the Blade Processor, run:

/thirdparty/recoverpoint/install/load_settings.sh

The previous configuration of the splitter should now be restored.

10. Reboot the Blade Processor. Log in to the Blade Processor and run the following command:

> reboot

11. After the Blade Processor reboot is completed, verify that hosts have access to storage through both paths and that the splitter is available in RecoverPoint.

12. If you wish to enable Diagnostic Power-On Self-Tests (optional), on the Control Processor, run the command:

> diagEnablePost

13. Repeat from Step 2 for each fabric.

Non-disruptive upgrade of RecoverPoint without FOS and SAS upgrade

Use the following procedure to upgrade RecoverPoint when upgrading Fabric Operating System and Storage Application Services is not required:

1. If upgrading RecoverPoint, upgrade RecoverPoint on all RPAs now.


Refer to the EMC RecoverPoint 3.1 Installation Guide, section “Upgrading from release 3.1 and higher” or EMC RecoverPoint 3.2 Installation Guide, section “Upgrading from release 3.1 and later.”

CAUTION! Perform the following procedure one fabric at a time to avoid host downtime.

2. Stop the RecoverPoint splitter agent. Run the following command from the Blade Processor:

> /thirdparty/recoverpoint/install/kdrv stop

Even though the splitter agent has been stopped, the host path and the splitter are still functioning.

3. Upgrade to the new version of the RecoverPoint splitter agent.

a. Download the RecoverPoint splitter agent installation file from Powerlink at:

http://Powerlink.EMC.com

and copy the installation file onto the Blade Processor.

To do so:

1. Log in to the Blade Processor as root.
2. Use ftp to download the RecoverPoint splitter agent binary file to the /tmp directory:

> cd /tmp
> ftp ip_address
ftp> bin
ftp> get agent_binary_file
ftp> bye

b. Run the installation package.

The installation process verifies that the FOS and the SAS are compatible with the splitter agent. If a mismatch is found, the correct versions are displayed on screen and the installation is aborted.

At the Blade Processor, use the following command:

> cd /tmp
> chmod +x agent_binary_file
> ./agent_binary_file -u


CAUTION! Failure to add the -u parameter results in the loss of all current splitter settings and causes a full sweep.

c. If prompted, enter the same hostname for the splitter as was used before the upgrade.

The splitter's IP address, preceded by an underscore, is automatically appended to the name you enter here. The name with the appended _ip address is the name of the splitter as it is displayed by the RecoverPoint Management Application.

d. In RecoverPoint 3.2.x, you are prompted to enter the admin user password for the Control Processor. This enables the installer to determine the Default Zone setting of the fabric during splitter installation. Although you can skip this prompt (press Enter), entering the password is recommended so that the setting can be verified. The installation is aborted if an incorrect setting is detected.

4. Reboot the Blade Processor.

CAUTION! Rebooting the Blade Processor causes the bound host initiators to lose connectivity with the storage and makes the splitter unavailable. If the hosts have multipath capabilities, they can remain online by using the other fabric.

Log in to the Blade Processor and run the following command:

> reboot

5. Repeat from Step 2 for each fabric.

Hardware replacement

Adding or replacing server node HBA

This procedure describes how to add or replace an HBA or a host at a replica (non-production) side in the Brocade splitter environment. This procedure, if followed carefully, maintains data consistency without triggering a full/volume sweep on RecoverPoint consistency groups.


Prerequisites

Before performing this procedure, note the following:

◆ This procedure applies to RecoverPoint 3.2 and later only.

◆ This procedure only applies to Frame Redirection mode, not to Multi-VI mode.

◆ The relevant splitters should be operational.

◆ Volumes should be attached to the relevant splitter.

Procedure

This procedure can be performed on multiple initiator/target combinations in parallel or one at a time.

1. If replacing an HBA or a host, use the instructions in this step to remove the old HBA or host:

a. Make sure image access is disabled.

b. Remove the zone between the HBA to be replaced and the storage target.

c. Remove the bindings between the HBA to be replaced and the storage target.

d. If you are replacing the entire host, repeat steps b and c for the other HBA(s).

e. Replace the HBA or the host.

2. Physically cable the new HBA(s) to the switch(es). HBA(s) must be logged in to the fabric(s), but must not have access to any replicated LUN. Zoning or LUN masking can be used to ensure that the host cannot access replicated LUNs.

3. Use the add_safe_bindings command from RecoverPoint CLI to add safe binding between the new HBA and the storage target. For example, the following command will add a safe binding between the host initiator 10000000c9668eb6 and the storage target 5006016239a01beb:

> add_safe_bindings site=Brocade_Right splitter=sabre_172.16.12.42 host_initiators=10000000c9668eb6 target=5006016239a01beb

For details about the add_safe_bindings command, refer to the RecoverPoint CLI Reference Guide.

4. Make the necessary changes to zoning and/or LUN masking to expose the LUNs to the host.


5. Wait 5 minutes to allow the splitter agent to automatically rescan the SAN.

6. Run the rescan_san command from the RecoverPoint CLI to refresh the RecoverPoint SAN view.

7. Run the get_safe_bindings_itls command from the RecoverPoint CLI to list the LUNs that are visible through a specific Brocade splitter safe binding.

For example, the following output means that initiator 10000000c9668eb6 can access LUN 1 (a non-RecoverPoint LUN) and LUN 3 (a remote LUN on replication set RSet1) on target 5006016239a01beb.

Site: Brocade_right:
  Splitter: sabre_172.16.12.42:
    Safe bindings:
      10000000c9668eb6 => 5006016239a01beb:
        Visible LUNs:
          1: Group Name: N/A
             UID: N/A
          3: Group Name: [g, remote, RSet 1]
             UID: 60,06,01,60,f5,c3,1d,00,7e,83,f8,0d,14,0e,de,11

8. Ensure that all safe bindings appear correctly in the get_safe_bindings_itls output. If the bindings are not correct, use the CLI command remove_safe_bindings. This command is not available after the safe bindings are approved. For details about the remove_safe_bindings command, refer to the EMC RecoverPoint CLI Reference Guide.

9. Ensure that all replicated LUNs appear in the get_safe_bindings_itls output. Exposing additional replicated LUNs to the host initiator after the safe bindings are approved will cause a volume sweep. If an expected LUN is missing:

a. Verify the zoning between the relevant host initiator and the storage target port.

b. Verify the LUN mapping/masking configuration for the relevant host initiator on the storage array.

c. Repeat rescan_san (see Step 6).


10. Once all replicated LUNs appear in the get_safe_bindings_itls output, use the approve_safe_bindings command from RecoverPoint CLI to complete the process. For example, the following will complete the process for the binding created in Step 3:

> approve_safe_bindings site=Brocade_Right splitter=sabre_172.16.12.42 host_initiators=10000000c9668eb6 target=5006016239a01beb

Once the safe binding is approved, it becomes a regular binding; it no longer appears in the get_safe_bindings_itls output, and the remove_safe_bindings command can no longer be used to remove it.

11. Update the host's SAN view. The LUNs should now be visible from the host.

Troubleshooting

An important part of troubleshooting RecoverPoint issues in the Brocade environment is collecting the correct logs to allow EMC Customer Service to isolate, identify, and analyze the issues. The following sections provide instructions for log collection at customer sites according to the presenting issue. Collect the logs as soon after the event as possible. Three use cases are presented here:

◆ Splitter crashes and switch reboots ................................................. 61
◆ Binding and host connectivity failures ........................................... 62
◆ I/O performance issues .................................................................... 63

In addition, specific instructions are given for the following symptoms and procedures:

◆ Root file system is full ....................................................................... 65
◆ Host initiators cannot write to virtual target ................................. 65
◆ Replacing a faulty intelligent module ............................................. 65
◆ Adding hosts at the remote site ....................................................... 65
◆ Using analysis pack ........................................................................... 65

SAN Health Report

As part of troubleshooting, the best practice is to run the Brocade SAN Health Diagnostics Capture, a tool provided by Brocade that


discovers and analyzes the SAN configuration. The tool detects problems in the following areas:

◆ Zoning

◆ Frame Redirection

◆ Unsupported switches and blades

◆ Unsupported Fabric Operating System (FOS) and Storage Application Services (SAS)

◆ Port errors

◆ Interswitch Link (ISL) connectivity

To run the SAN Health Diagnostics Capture, go to www.brocade.com and select Services & Support > Drivers & Downloads > SAN Health. Click Download SAN Health Diagnostics and follow the on-screen instructions.

After you email the SAN data to Brocade, it can take up to 48 hours to get a SAN Health report.

Splitter crashes and switch reboots

If you experience an unexpected splitter crash or switch reboot, do the following to gather the needed information.

1. On the Blade Processor, check the RecoverPoint splitter logs to determine which processes crashed and when:

• To check the low-level process: /thirdparty/recoverpoint/wd/wd.log

• To check the high-level process: /thirdparty/recoverpoint/wd/wd_high.log
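The two log checks above can be combined into a small helper that shows the tail of each watchdog log. This is a hedged sketch, not a RecoverPoint tool: the `show_wd_tails` helper is hypothetical, and the log paths are the ones named above.

```shell
# Sketch: print the last few lines of both splitter watchdog logs,
# so the crash times of the low- and high-level processes can be compared.
show_wd_tails() {
    dir="${1:-/thirdparty/recoverpoint/wd}"
    for f in wd.log wd_high.log; do
        if [ -f "$dir/$f" ]; then
            echo "== $dir/$f =="
            tail -n 5 "$dir/$f"
        fi
    done
}
```

On the Blade Processor this would be invoked with no argument; a directory argument is accepted only to make the sketch testable elsewhere.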

2. In the RecoverPoint Management Application, go to the System menu, and select Collect System Information.

Use the wizard to collect logs from all RecoverPoint appliances and from all splitters.

3. On the Control Processor of the switch running the splitter agent, run the following command, and when prompted, provide an FTP location to which to upload the logs:

> supportsave

4. On the Control Processor of the edge switches where hosts and targets are connected, run the following command, and when prompted, provide an FTP location to which to upload the logs:

> supportsave

5. Provide a time line and information regarding any changes in

• Fabric

• Ports

• Configuration

prior to the event.

Binding and host connectivity failures

If you experience binding or host connectivity failures, do the following as soon as possible and before you unbind:

1. In the RecoverPoint Management Application, go to the System menu, and select Collect System Information.

Use the wizard to collect logs from all RecoverPoint appliances and from all splitters.

2. On the Control Processor of the switch running the splitter agent, run the following command, and when prompted, provide an FTP location to which to upload the logs:

> supportsave

3. On the Control Processor of the edge switches where hosts and targets are connected, run the following command, and when prompted, provide an FTP location to which to upload the logs:

> supportsave
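When several switches are involved, the supportsave collection in the steps above can be scripted. This is a hedged sketch only: the `collect_supportsave` helper is hypothetical, and both the admin login and the `-n` (non-interactive) flag are assumptions to verify against your FOS release and its configured supportsave parameters.

```shell
# Sketch: run supportsave on a list of switch management addresses.
# SSH_CMD is overridable so the loop can be exercised without real switches.
collect_supportsave() {
    for sw in "$@"; do
        echo "collecting from $sw"
        ${SSH_CMD:-ssh} "admin@$sw" supportsave -n || echo "failed: $sw"
    done
}
```

Each switch still needs its supportsave FTP destination configured in advance for the non-interactive form to work.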

4. If there are McData switches in the fabric, do the following to collect the required logs:

• If using the Brocade Enterprise Fabric Connectivity Manager (EFCM) interface, at the switch Element Manager, select Maintenance tab > Data collection. Select the log files and save them to the local disk.

• If using the Enterprise Fabric Connectivity Manager (EFCM) Basic interface, select Maintenance tab > System files > Data Collection Retrieval, select the log files, and save them to the local machine.

5. Use the emcgrab (for UNIX) or emcreports (for Windows) tools to collect relevant host data.

6. Run Brocade’s SAN Health tool and save the output. For instructions, refer to “SAN Health Report” on page 60.


7. Record host and HBA pWWN details, HBA driver, and firmware.

8. Record storage (target) pWWN details.

9. Provide a topology diagram of the fabric: host nodes, storage nodes, switches, and RecoverPoint appliances.

10. Provide a time line and information regarding any changes in

• Fabric

• Ports

• Configuration

prior to the event.

Stale bindings

In versions of RecoverPoint prior to 3.3, and in some other rare situations, it is possible to have a mismatch in binding records. For example, if RecoverPoint is uninstalled without removing bindings, stale bindings will remain on the fabric.

To identify stale bindings, compare the binding records in RecoverPoint with the bindings on the fabric. To display fabric bindings:

> cfgshow | grep lsan

If a binding exists on the fabric without a corresponding binding record in RecoverPoint, remove the stale binding.

In FOS 6.1.x and earlier, use the following commands:

> cfgremove "r_e_d_i_r_c__cfg", "lsan_zone_name1[; lsan_zone_name2; lsan_zone_name3…];"
> zonedelete "lsan_zone_name1"
> zonedelete "lsan_zone_name2"
…
> cfgsave

In FOS 6.2.x and later, use the following commands:

> zone --rddelete lsan_zone_name1
> zone --rddelete lsan_zone_name2
…
> cfgsave
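The fabric side of the comparison can be scripted by extracting the lsan zone names from captured `cfgshow` output. This is a hedged sketch: the `list_lsan_zones` helper is hypothetical, and the exact `cfgshow` output format should be verified against your switch before relying on the pattern.

```shell
# Sketch: read cfgshow output on stdin and print each distinct lsan zone
# name on its own line, for comparison against RecoverPoint's binding records.
list_lsan_zones() {
    grep -o 'lsan_[A-Za-z0-9_]*' | sort -u
}
```

Typical use would be `cfgshow | list_lsan_zones` on the Control Processor, or piping a saved capture file through the helper.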

I/O performance issues

If you experience issues with I/O performance, do the following to gather the needed information.


1. On the Control Processor of all switches in the fabric, run the following command, and when prompted, provide an FTP location to which to upload the logs:

> supportsave

2. If there are McData switches in the fabric, do the following to collect the required logs:

• If using the Brocade Enterprise Fabric Connectivity Manager (EFCM) interface, at the switch Element Manager, select Maintenance tab > Data collection. Select the log files and save them to the local disk.

• If using the Enterprise Fabric Connectivity Manager (EFCM) Basic interface, select Maintenance tab > System files > Data Collection Retrieval, select the log files, and save them to the local machine.

3. Run Brocade’s SAN Health tool and save the output. For instructions, refer to “SAN Health Report” on page 60.

4. In the RecoverPoint Management Application, go to the System menu, and select Collect System Information.

Use the wizard to collect logs from all RecoverPoint appliances and from all splitters.

5. Use the emcgrab (for UNIX) or emcreports (for Windows) tools to collect relevant host data.

6. From the RecoverPoint command-line interface, run the commands:

> detect_bottlenecks
> export_statistics

and save the output.

7. Contact the vendor of your storage array and request an analysis of storage performance.

8. Record host node HBA pWWN details, driver details, and firmware details.

9. Record storage pWWN details.

10. Provide a topology diagram of the fabric: host nodes, storage nodes, switches, and RecoverPoint appliances.

11. Provide a time line of the events leading up to the degradation of system performance.


Instructions for specific symptoms

Root file system is full

When the root (/) file system on the Blade Processor module is full, various problems may occur, such as an inability to run the RecoverPoint splitter agent.

Beginning with RecoverPoint 3.2, you can monitor disk usage of the Blade Processor module in the Management Application GUI, from the Splitters tab of the System Monitoring component.

Use the df command to check disk usage, and remove files from the file system. The core files under /thirdparty/recoverpoint/log are usually the cause of this problem. Unless they are needed for bug analysis, remove the core files.
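As a sketch of the cleanup just described: list the core files first so you can confirm they are unneeded before deleting anything. The `list_core_files` helper is hypothetical; the default log path is the one named above.

```shell
# Sketch: list core files under the splitter log directory.
list_core_files() {
    dir="${1:-/thirdparty/recoverpoint/log}"
    find "$dir" -type f -name 'core*' 2>/dev/null
}

# Once confirmed unneeded, the listed files could be removed, e.g.:
# list_core_files | xargs rm -f
```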

Host initiators cannot write to virtual target

If host initiators are able to see the virtual target and the LUNs, but fail in writing to the storage, it is most likely a LUN-masking problem in the target storage device.

Check the LUN configuration.

Replacing a faulty intelligent module

For information about replacing a faulty intelligent module, contact EMC Customer Service.

Adding hosts at the remote site

For information about adding hosts on a remote “cold recovery site”, contact EMC Customer Service.

Using analysis pack

The analysis pack is a set of scripts installed as part of the RecoverPoint splitter agent installation package. These scripts enable on-site analysis of splitter logs and extraction of important diagnostic information about the current SAN view and SAN view changes.

To execute the analysis scripts, run the following command on the Blade Processor:

/thirdparty/recoverpoint/install/analysis_pack/log_analysis.sh


The following parameters are valid:

Parameter   Description
v           Runs the view analysis, to display the latest SAN view.
c           Runs the changes analysis, to display the latest changes in the SAN view (added and removed paths).
i           Runs the host ITL analysis, to display a count of ITLs per host.

You must also choose at least one of the following:

Parameter   Description
r           Parses recovered logs (as created after a splitter crash).
l           Parses the latest logs (host.log.00).

To display on-screen syntax help, run the command without parameters:

/thirdparty/recoverpoint/install/analysis_pack/log_analysis.sh

For example, if you want to see the SAN view as the splitter sees it, run the command with the following parameters:

/thirdparty/recoverpoint/install/analysis_pack/log_analysis.sh -vl

The output of this command will contain the following:

◆ Virtual Initiator information — For each virtual initiator:

• Port ID

• Corresponding initiator WWN

• Virtual initiator WWN

• Whether it is in Frame Redirection mode

In this example, there is one Frame Redirection virtual initiator and one system virtual initiator. There is always a system virtual initiator, and its port ID is always 0.

VI information extracted from file: /thirdparty/recoverpoint/log/ram_log/host.log.00

ID  Bound Initiator      VI WWN               FR
==================================================


0   N/A (System VI)      0x60012482c6de1000   -
1   0x5001248206ce4bc8   0x60012482c6de1001   +

◆ SAN view — It is grouped by device GUID, and for each GUID it shows the relevant paths. The port number displayed is the port ID from the virtual initiator list. It also shows the size of the volume. Note that the last device has two paths with the same GUID; both use LUN 1, since VI 1 can access the same volume through two different storage ports.

If all zones are configured correctly, the Host SAN view should match the view of its corresponding Virtual Initiator.

This portion of the output is shown on the following page.

#########################################################################
# Parsing results from file /thirdparty/recoverpoint/log/ram_log/host.log.00
#########################################################################

#########################################################################
# View analysis
#########################################################################

Device information

#   WWN                  LUN  Port  Size (BLK)  Size (MB)  GUID
==========================================================================================

1 0x5001248205ae4bc8 0 1 2097152 1024 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,50,54,32,32,32)

2 0x5001248205ae4bc8 1 1 4194304 2048 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,50,55,32,32,32)

3 0x5001248205ae4bc8 2 1 6291456 3072 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,50,56,32,32,32)

4 0x5001248205ae4bc8 3 1 8388608 4096 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,50,57,32,32,32)

5 0x5001248205ae4bc8 4 1 10485760 5120 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,51,48,32,32,32)

6 0x5001248205ae4bc8 5 1 12582912 6144 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,51,49,32,32,32)

7 0x5001248205ae4bc8 6 1 14680064 7168 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,51,50,32,32,32)

8 0x5001248205ae4bc8 7 1 16777216 8192 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,51,51,32,32,32)


9 0x5001248205ae4bc8 8 1 18874368 9216 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,51,52,32,32,32)

10 0x5001248205ae4bc8 9 1 20971520 10240 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,51,53,32,32,32)

11 0x5001248205ae4bc8 10 1 23068672 11264 SHARK(73,66,77,32,32,32,32,32,50,49,48,53,32,32,32,32,32,32,32,32,32,32,32,32,50,53,48,51,54,32,32,32)

12 0x50060e8014427506 1 1 6291840 3072 LIGHTNING(72,73,84,65,67,72,73,32,82,53,48,49,52,50,55,53,48,48,50,66)

0x50060e8014427526 1 1
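The comma-separated numbers inside the SHARK(...) and LIGHTNING(...) GUIDs above are decimal ASCII byte values, so they decode to the array's vendor/product identification string. The decoder below is an illustration only (a hypothetical `decode_guid_bytes` helper), not part of the analysis pack.

```shell
# Sketch: decode a comma-separated list of decimal ASCII bytes (as shown
# in the GUID column) into its readable string form.
decode_guid_bytes() {
    # stdin: "73,66,77,32,..."   stdout: the decoded ASCII string
    awk -F, '{ s = ""; for (i = 1; i <= NF; i++) s = s sprintf("%c", $i + 0); print s }'
}
```

For example, the leading bytes 73,66,77 of the SHARK GUIDs decode to "IBM".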

Copyright © 2012 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.