Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
MK-92NAS094-02
Product Version
Getting Help
Contents
© 2015 Hitachi, Ltd. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd.

Hitachi, Ltd., reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. This document contains the most current information available at the time of publication. When new or revised information becomes available, this entire document will be updated and distributed to all registered users.

Some of the features described in this document might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Data Systems Corporation at https://portal.hds.com.

Notice: Hitachi, Ltd., products and services can be ordered only under the terms and conditions of the applicable Hitachi Data Systems Corporation agreements. The use of Hitachi, Ltd., products is governed by the terms of your agreements with Hitachi Data Systems Corporation.

Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries.

Archivas, Essential NAS Platform, HiCommand, Hi-Track, ShadowImage, Tagmaserve, Tagmasoft, Tagmasolve, Tagmastore, TrueCopy, Universal Star Network, and Universal Storage Platform are registered trademarks of Hitachi Data Systems Corporation.

AIX, AS/400, DB2, Domino, DS8000, Enterprise Storage Server, ESCON, FICON, FlashCopy, IBM, Lotus, OS/390, RS6000, S/390, System z9, System z10, Tivoli, VM/ESA, z/OS, z9, zSeries, z/VM, and z/VSE are registered trademarks, and DS6000, MVS, and z10 are trademarks, of International Business Machines Corporation.

All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
ii    Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
Contents
Preface ..... vii
    Intended Audience ..... viii
    Product version ..... viii
    Release Notes ..... viii
    Organization of HNAS F manuals ..... viii
    Referenced Documents ..... ix
    Document Conventions ..... x
    Convention for storage capacity values ..... xi
    Getting help ..... xi
    Comments ..... xi

1 Introduction ..... 1-1
    Supported Program Products ..... 1-2

2 Replication Functions ..... 2-1
    Overview ..... 2-2
        ShadowImage ..... 2-2
        TrueCopy ..... 2-2
        Universal Replicator ..... 2-2
    Command Control Interface ..... 2-2
        Overview of CCI Operation in HNAS F ..... 2-3
        About Command Devices ..... 2-3
        Scope Related to CCI ..... 2-3
    How to use ShadowImage ..... 2-6
        Overview of ShadowImage Operation on the HNAS F ..... 2-6
        Scope of ShadowImage Operations on the HNAS F ..... 2-8
            Scope Related to ShadowImage ..... 2-8
            Scope Related to Backup Restore ..... 2-9
            Scope Related to the file snapshot functionality ..... 2-9
        Prerequisites for Using ShadowImage on the HNAS F ..... 2-9
            Hardware Prerequisites ..... 2-9
            Software Prerequisites ..... 2-9
        Notes on Operating the Volume Replication Function ..... 2-10
        Preparing for ShadowImage Operation ..... 2-15
            Preparing for ShadowImage Volume Pair Operation ..... 2-15
            Registering Public Key Used for SSH ..... 2-15
            Configuration Settings of CCI ..... 2-16
            File System Creation on P-VOL of ShadowImage ..... 2-29
        Overview of Using ShadowImage ..... 2-30
            Starting ShadowImage Operations and Creating ShadowImage Pair ..... 2-30
            Splitting a ShadowImage Volume Pair ..... 2-31
            Resynchronizing a ShadowImage Volume Pair ..... 2-36
            Deleting a ShadowImage Volume Pair ..... 2-38
        P-VOL Data Recovery from S-VOL ..... 2-41
            Recovering Data When the P-VOL File System is in Normal Status ..... 2-41
            Recovering Data When the P-VOL File System is Blocked ..... 2-45
            Recovering Data When an Error Occurs on the Device File Used by the P-VOL ..... 2-50
        Data Backup Cooperated with a Tape Device ..... 2-56
        Notes on Failures of External Volumes ..... 2-57
    How to use TrueCopy ..... 2-57
        Overview of TrueCopy Operations in the HNAS F ..... 2-57
        Scope of TrueCopy Function with the HNAS F ..... 2-58
            Scope of TrueCopy Function with the HNAS F ..... 2-58
            Scope Related to Features in Backup Restore ..... 2-59
            Scope Related to the file snapshot functionality ..... 2-59
        Prerequisites ..... 2-59
            Hardware Prerequisites ..... 2-59
            Software Prerequisites ..... 2-59
        Notes on Operations ..... 2-60
        Preparing for TrueCopy Operations ..... 2-63
            Preparing for TrueCopy Volume Pair Operation ..... 2-63
            Registration of Public Key Used for SSH ..... 2-63
            Configuring the CCI Environment ..... 2-63
            Create File System in TrueCopy P-VOL ..... 2-76
        Overview of TrueCopy Operations ..... 2-77
            Volume Replication ..... 2-77
            Disaster Recovery Operations ..... 2-87
        Notes on Failures of External Volumes ..... 2-92
    How to use Universal Replicator on the HNAS F ..... 2-93
        Overview of Universal Replicator Operations in HNAS F system ..... 2-93
        Scope of Universal Replicator Function in the HNAS F system ..... 2-94
            Scope Related to Universal Replicator ..... 2-94
            Scope Related to Features in Backup Restore ..... 2-95
            Scope Related to the file snapshot functionality ..... 2-95
        Prerequisites ..... 2-95
            Hardware Prerequisites ..... 2-95
            Software Prerequisites ..... 2-95
        Notes on Operations ..... 2-96
        Preparing for Universal Replicator Operations ..... 2-99
            Preparing for Universal Replicator Volume Pair Operation ..... 2-99
            Registration of Public Key Used for SSH ..... 2-99
            Configuring the CCI Environment ..... 2-99
            Create File System in Universal Replicator P-VOL ..... 2-112
        Overview of Universal Replicator Operations ..... 2-113
            Volume Replication ..... 2-113
            Disaster Recovery Operations ..... 2-124
        Notes on Failures of External Volumes ..... 2-129
    Operations of the Log Files ..... 2-130
        Operations of the Command Control Interface Log Files ..... 2-130
            Format of the Command Control Interface Log Files ..... 2-130
            Downloading the Command Control Interface Log Files to the Client ..... 2-131
            Notes on the Operations of the Command Control Interface Log Files ..... 2-131
    Commands that HNAS F provides ..... 2-132

3 Volume Management Functions ..... 3-1
    Dynamic Provisioning ..... 3-2
    Dynamic Tiering ..... 3-2
    Universal Volume Manager ..... 3-3
        Troubleshooting for the HNAS F Including External Storage System ..... 3-3
            Stopping and Restarting External Storage System on Purpose ..... 3-3
            Recovery Procedure in Case of Error in External Storage System ..... 3-5
    Volume Migration ..... 3-14
    Volume Shredder ..... 3-14
    Encryption License Key ..... 3-14

4 Resource Management Functions ..... 4-1
    Storage Navigator ..... 4-2
    LUN Manager ..... 4-2
    Configuration File Loader ..... 4-3
    Virtual Partition Manager ..... 4-3

5 Performance Management Functions ..... 5-1
    Performance Monitor ..... 5-2

A Details of ShadowImage Operations on the HNAS F ..... A-1
    Execution of Command Operation on HNAS F ..... A-2
        Creating Pairs ..... A-2
        Splitting Pairs ..... A-3
        Re-synchronizing Pairs ..... A-12
        Restoring Pairs ..... A-17
        Deleting Pairs ..... A-26
    Pairs Recovery from Failures on the HNAS F ..... A-35
        Commands for Pairs Recovery ..... A-35
        Recovery Procedure from Failures ..... A-36

B Details of TrueCopy and Universal Replicator Operations on the HNAS F ..... B-1
    Execution of Command Operation on HNAS F ..... B-2
        Creating Pairs ..... B-2
        Splitting Pairs ..... B-3
        Re-synchronizing Pairs ..... B-10
        Deleting Pairs ..... B-13

C Operation when Failures Occurred on the HNAS F ..... C-1
    Isolation when Failures Occur ..... C-3
    Operation when the Main Site or Storage System Went Down ..... C-4
    Operation when the Cluster of the Main Site Went Down ..... C-16
    Operation when Multiple Failures Occurred in All the Storages on the Main Site ..... C-28
    Operation when Multiple Failures Occurred in a Part of Storages on the Main Site ..... C-43
    Operation when Network Failures Occurred Between the Main Site and the Remote Site Causing Journal Overflow ..... C-58
    Operation when Network Failures Occurred Between the Main Site and the Remote Site Without Causing Journal Overflow ..... C-60
    Operation when Network Failures Occurred in the Main Site ..... C-61
    Operation when Multiple Failures Occurred in Some or All Journals on the Main Site ..... C-72

D System Option Settings ..... D-1
    mode495 ..... D-2

E Acronyms ..... E-1
    Acronyms used in the HNAS F manuals ..... E-2
Index
Preface
This user's guide describes and provides instructions for using Hitachi Unified Storage VM Series products with the Hitachi NAS Platform F.

Please read this document carefully to understand how to use this product, and maintain a copy for reference purposes.

Notice: The use of Hitachi NAS Platform F and all other Hitachi Data Systems products is governed by the terms of your agreement(s) with Hitachi Data Systems.
This manual is not applicable to single-node configurations.
This preface includes the following information:
□ Intended Audience
□ Product version
□ Release Notes
□ Organization of HNAS F manuals
□ Referenced Documents
□ Document Conventions
□ Convention for storage capacity values
□ Getting help
□ Comments
Intended Audience

This document is intended for system administrators, Hitachi Data Systems representatives, and Authorized Service Providers who are involved in installing, configuring, and operating the storage system.
This document assumes the following:
• The user has a background in data processing and understands direct-access storage device (DASD) systems and their basic functions.
• The user is familiar with the Hitachi Unified Storage VM storage system and has read the Hitachi Unified Storage VM Block Module Hardware User Guide.

• The user is familiar with the operating system and web browser software on the system hosting the Storage Navigator software. For details on the applicable operating systems and web browser software, please refer to the Hitachi Storage Navigator User's Guide (User Guide).

• The user has read and understands the Hitachi Command Control Interface (CCI) User's Guide (User Guide).

• The user has read the Installation and Configuration Guide and is familiar with how to back up and restore file system data used in a Hitachi NAS Platform F system.
Product version

This document revision applies to Hitachi NAS Platform F version 5.1.1 or later.
Release Notes

Release notes can be found on the documentation CD or on the Hitachi Data Systems Support Portal: https://extranet.hds.com/http:/aim.hds.com/portal/dt

Release notes contain requirements and more recent product information that may not be fully described in this manual. Be sure to review the release notes before installation.
Organization of HNAS F manuals

HNAS F manuals are organized as shown below:
Hitachi NAS Platform F1000 Series Installation and Configuration Guide, MK-92NAS061
    You must read this manual first to use an HNAS F system. This manual contains the information that you must be aware of before starting HNAS F system operation, as well as the environment settings for an external server.

Hitachi NAS Platform F1000 Series Cluster Getting Started Guide, MK-92NAS076
    This manual explains how to set up an HNAS F system in a cluster configuration. To operate HNAS F on a virtual server, see the Cluster Getting Started Guide for Virtual NAS.

Hitachi NAS Platform F1000 Series Cluster Getting Started Guide for Virtual NAS, MK-92NAS073
    This manual explains how to set up virtual servers for HNAS F systems in a cluster configuration.

Hitachi NAS Platform F1000 Series Cluster Administrator's Guide, MK-92NAS084
    This manual provides procedures for using HNAS F systems in a cluster configuration, as well as GUI references.

Hitachi NAS Platform F1000 Series Cluster Troubleshooting Guide, MK-92NAS066
    This manual provides troubleshooting information for HNAS F systems in a cluster configuration.

Hitachi NAS Platform F1000 Series Single Node Getting Started Guide, MK-92NAS079
    This manual explains how to set up an HNAS F system in a single-node configuration.

Hitachi NAS Platform F1000 Series Single Node Administrator's Guide, MK-92NAS089
    This manual explains the procedures for using HNAS F systems in a single-node configuration, as well as GUI references.

Hitachi NAS Platform F1000 Series Single Node Troubleshooting Guide, MK-92NAS078
    This manual provides troubleshooting information for HNAS F systems in a single-node configuration.

Hitachi NAS Platform F1000 Series CLI Administrator's Guide, MK-92NAS085
    This manual describes the syntax of the commands that can be used for HNAS F systems in a cluster configuration or a single-node configuration.

Hitachi NAS Platform F1000 Series API References, MK-92NAS064
    This manual explains how to use the API for HNAS F systems in a cluster configuration or a single-node configuration.

Hitachi NAS Platform F1000 Series Error Codes, MK-92NAS065
    This manual contains messages for HNAS F systems in a cluster configuration or a single-node configuration.

Hitachi NAS Platform F1000 Series File System Protocols (CIFS/NFS) Administrator's Guide, MK-92NAS086
    This manual describes the points to keep in mind before using the CIFS or NFS service of an HNAS F system, in a cluster configuration or a single-node configuration, from a CIFS or NFS client.
Referenced Documents
Hitachi Unified Storage VM
• Hitachi Encryption License Key User's Guide (User Guide)
• Hitachi Performance Manager User's Guide (User Guide)
• Hitachi ShadowImage User's Guide (User Guide)
• Hitachi Storage Navigator User's Guide (User Guide)
• Hitachi System Operations Using Spreadsheets
• Hitachi TrueCopy User's Guide (User Guide)
• Hitachi Universal Volume Manager User's Guide (User Guide)
• Hitachi Virtual Partition Manager User's Guide (User Guide)
• Hitachi Volume Migration User Guide
• Hitachi Volume Shredder User Guide
• Provisioning Guide
• Provisioning Guide for Open Systems
Document Conventions

The terms "Hitachi Unified Storage VM" and "HUS VM" refer to all models of the Hitachi Unified Storage VM, unless otherwise noted.
This document uses the following typographic conventions:
Typographic Convention: Description

Bold
    Indicates text on a window, other than the window title, including menus, menu options, buttons, fields, and labels. Example: Click OK.

Italic
    Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy source-file target-file
    Note: Angled brackets (< >) are also used to indicate variables.

screen/code
    Indicates text that is displayed on screen or entered by the user. Example: # pairdisplay -g oradb

< > angled brackets
    Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: # pairdisplay -g <group>
    Note: Italic font is also used to indicate variables.

[ ] square brackets
    Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.

{ } braces
    Indicates required or expected values. Example: { a | b } indicates that you must choose either a or b.

| vertical bar
    Indicates that you have a choice between two or more options or arguments. Examples:
    [ a | b ] indicates that you can choose a, b, or nothing.
    { a | b } indicates that you must choose either a or b.
Convention for storage capacity values

Storage capacity values (e.g., drive capacity) are calculated based on the following values:

Capacity Unit    Physical Value              Logical Value
1 KB             1,000 bytes                 1,024 (2^10) bytes
1 MB             1,000 KB or 1,000^2 bytes   1,024 KB or 1,024^2 bytes
1 GB             1,000 MB or 1,000^3 bytes   1,024 MB or 1,024^3 bytes
1 TB             1,000 GB or 1,000^4 bytes   1,024 GB or 1,024^4 bytes
1 PB             1,000 TB or 1,000^5 bytes   1,024 TB or 1,024^5 bytes
1 EB             1,000 PB or 1,000^6 bytes   1,024 PB or 1,024^6 bytes
1 block          -                           512 bytes
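The logical values above are powers of 1,024, while the physical values are powers of 1,000, so the two differ by roughly 7% at the GB scale. As a quick check, the byte counts can be computed with ordinary shell arithmetic:

```shell
# Compare a logical GB (1,024^3 bytes) with a physical GB (1,000^3 bytes).
echo $((1024 * 1024 * 1024))   # logical 1 GB  -> 1073741824
echo $((1000 * 1000 * 1000))   # physical 1 GB -> 1000000000
```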
Getting help

The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://portal.hds.com
Comments

Please send us your comments on this document: [email protected]. Include the document title, number, and revision, and refer to specific section(s) and paragraph(s) whenever possible.
Thank you! (All comments become the property of Hitachi Data Systems.)
1 Introduction
The Hitachi NAS Platform F (HNAS F) is network-attached storage that connects to a storage system via a Fibre Channel interface in order to provide file-sharing services over the NFS or CIFS protocol to clients on the network.

The HNAS F can be used in conjunction with the rich variety of functions provided by the program products on the storage system.

This manual describes the features of, and provides precautions and restrictions for, HNAS F when it is used in conjunction with the program product functions provided by HUS VM enterprise-class storage systems.

For details about the functions and operation of the program products supplied with storage systems, see the appropriate User's Guide.
□ Supported Program Products
Supported Program Products

The file system service functionality provided by HNAS F can be used in conjunction with the program product functions supported by storage systems connected to the system.

This section describes the program products that can be used in conjunction with the HNAS F. For notes and instructions on using these programs in this manner, read from Chapter 2, Replication Functions on page 2-1 of this manual.
Program products for using replication functions
• CCI
• ShadowImage
• TrueCopy
• Universal Replicator
Program products for using volume management functions
• Dynamic Provisioning
• Dynamic Tiering
• Encryption License Key
• Universal Volume Manager
• Volume Migration
• Volume Shredder
Program products for using resource management functions
• Storage Navigator
• LUN Manager
• Virtual LVI
• Configuration File Loader
• Virtual Partition Manager
Program products for using performance management functions
• Performance Monitor
2 Replication Functions
This chapter describes the replication functions in storage systems that can be used in an HNAS F system.
□ Overview
□ Command Control Interface
□ How to use ShadowImage
□ How to use TrueCopy
□ How to use Universal Replicator on the HNAS F
□ Operations of the Log Files
□ Commands that HNAS F provides
Overview

The following program products provide replication functions compatible with the HNAS F:
• ShadowImage
• TrueCopy
• Universal Replicator
ShadowImage

ShadowImage is a program product for backing up and duplicating data in storage systems. Using ShadowImage in conjunction with the HNAS F, you can easily back up and replicate user data. ShadowImage lets you manage copies of logical units (LUs) within the same storage system. Copies of the source LU (secondary volumes, or S-VOLs) can be created in the same storage system without impairing the RAID redundancy of the data on the source LU (the primary volume, or P-VOL).

TrueCopy

TrueCopy is a program product for backing up and replicating data in storage systems. Using TrueCopy in conjunction with the HNAS F, you can create a remote backup or disaster recovery system for user data. TrueCopy lets you manage copies of logical units (LUs) across storage systems connected by a Fibre Channel interface. Copies of the source LU (secondary volumes, or S-VOLs) can be created on a different storage system without impairing the RAID redundancy of the data on the source LU (the primary volume, or P-VOL).

TrueCopy uses synchronous mode to transmit data written from the HNAS F to a different storage system.

Universal Replicator

Similar to TrueCopy, Universal Replicator is a program product for backing up and duplicating data in storage systems. Using Universal Replicator in conjunction with the HNAS F, you can create a remote backup or disaster recovery system for user data. Universal Replicator allows for a reliable replication system by temporarily storing the data to be copied in a logical volume called a journal volume. This minimizes the frequency of interruptions to the copy process between volume pairs caused by restrictions imposed on data transfer from the primary site to the secondary site.
Command Control Interface

This section describes how to use CCI on the HNAS F.
Overview of CCI Operation in HNAS F

CCI operations need to be performed when using ShadowImage, TrueCopy, or Universal Replicator in an HNAS F system. When you use an HNAS F system, CCI is installed with the OS. As such, there is no need to install CCI separately.

If you intend to use ShadowImage, TrueCopy, or Universal Replicator in your HNAS F system, first read the Hitachi Command Control Interface (CCI) User and Reference Guide and make sure that you have a thorough understanding of how to use CCI. Then read the relevant sections in this user's guide.
About Command Devices

To use CCI, a command device needs to be configured in the storage system. Before starting operation, make sure that a command device is configured.

If HNAS F system operation is started without first configuring a command device, configure a command device, and then restart both nodes.

For details about how to configure a command device and check the settings, see the Storage Navigator User's Guide (User Guide).

Do not set an authentication mode for the command devices used by an HNAS F system.
Scope Related to CCI
CCI and failover/failback by the HNAS F system
CCI is not subject to failover or failback by the HNAS F system. On the standby node, prepare the same configuration definition file as was prepared for the main node, with the exception of the ip_address parameter in the HORCM_MON section. If a failure occurs while CCI is running, you can start CCI on the standby node and continue operating ShadowImage, TrueCopy, or Universal Replicator.

If a virtual server fails over, you can continue operation by starting CCI after the failover.
Protect feature
When using ShadowImage, TrueCopy, or Universal Replicator in an HNAS F system, you need to create and resynchronize pairs using CCI, without the HNAS F being aware of the S-VOLs. If the HNAS F recognizes the file system, the Data Retention Utility automatically gives the volumes the "S-VOL Disable" attribute. Therefore, the protect feature of CCI, which prohibits operations on volume pairs not recognized by the system, is unavailable.
Executing Command Control Interface (CCI) commands
To execute CCI commands using the nasroot account, which is used for logging in to a node or virtual server via SSH, use the sudo command as shown below.
Replication Functions 2-3 Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
$ sudo pairdisplay -g VG01
Example 2-1 Example of Executing the CCI 'pairdisplay' Command Using the sudo Command
$ ssh -2 [email protected] sudo pairdisplay -g VG01
Example 2-2 Example of Executing the 'pairdisplay' Command Using a Shell Script Created on the SSH Client
Note: Create a shell script on the OS used by the system administrator. For details about creating a shell script, see the documentation for your OS.
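The note above suggests wrapping the remote invocation in a client-side shell script. A minimal sketch, assuming a POSIX shell on the client; the node address and device group name are placeholders, and the script only prints the command line (dry run) so it can be reviewed before real execution:

```shell
#!/bin/sh
# Sketch: a client-side wrapper for the remote CCI 'pairdisplay' invocation.
# NODE and GROUP are placeholders -- substitute your own node address and
# device group name before use.
NODE="nasroot@node-host"
GROUP="VG01"

# Build the remote command line and print it for review (dry run).
CMD="ssh -2 ${NODE} sudo pairdisplay -g ${GROUP}"
echo "${CMD}"
# To execute for real, run the command instead of echoing it.
```

Replacing the echo with the command itself gives the same behavior as Example 2-2.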
Special file names given from standard input to CCI commands
When passing a special file name from standard input to the raidscan, inqraid, or mkconf.sh commands, specify ls /dev/sdu*u.

$ ls /dev/sdu*u | sudo inqraid -CLI
Example 2-3 Example of Executing the 'inqraid' Command Using the sudo Command
$ ssh -2 [email protected] "ls /dev/sdu*u | sudo inqraid -CLI"
Example 2-4 Executing the 'inqraid' Command Using a Shell Script Created on the SSH Client
/dev/sdu00u to /dev/sduFFu correspond to /dev/enas/lu00 to /dev/enas/luFF of the user LUs.
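The naming correspondence above is mechanical, so it can be expressed as a small helper. A sketch, assuming a POSIX shell; the function name is illustrative:

```shell
#!/bin/sh
# Sketch: convert a CCI special file name (/dev/sduNNu) to the corresponding
# user-LU device path (/dev/enas/luNN). The helper name is illustrative.
sdu_to_enas() {
    # Strip the /dev/sdu prefix and the trailing 'u', keeping the two hex digits.
    hex=$(printf '%s' "$1" | sed -e 's|^/dev/sdu||' -e 's|u$||')
    printf '/dev/enas/lu%s\n' "$hex"
}

sdu_to_enas /dev/sdu00u   # prints /dev/enas/lu00
sdu_to_enas /dev/sduFFu   # prints /dev/enas/luFF
```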
Note on scanning on HNAS F ports using the raidscan command
When you use the raidscan command to scan HNAS F ports, information about virtual server OS disks and the shared LU sometimes appears in the command output. However, because virtual server OS disks and the shared LU cannot be specified as a P-VOL or S-VOL in ShadowImage, TrueCopy, or Universal Replicator, you cannot define them in the HORCM_DEV section of the configuration definition file.
Note on the CCI configuration definition file
Immediately after HNAS F is installed, a temporary node name is used as the host name in the CCI configuration definition file. Before starting CCI, confirm that the node name is correct, and change it manually if it is wrong.
The CCI configuration definition file used by a virtual server must be created and edited on both of the nodes on which the virtual server operates. Make sure that the host name and IP address in the configuration file are appropriate for the node where the file is located.
When the FC channel path connection mode between the node and the storage system is Active-Active, and you use the inqraid or mkconf.sh commands, the ports that are paired by the alternative paths will be displayed inconsistently.
On CCI instance numbers
HNAS F makes the following instance numbers available.
Two instance numbers are allocated to each node, which are made available after the installation of HNAS F.
Table 2-1 Instance numbers allocated by default

  Type                             Instance number   Service value of configuration definition file
  Instance numbers allocated to    16                20331
  each node by default             17                20332
To use three or more instances on each node, or to use instance numbers with virtual servers, you must configure an environment that can handle the additional instance numbers. You can use a maximum of 18 instance numbers (including the two default instance numbers) on each node.
Table 2-2 Additional instance numbers

  Type                             Instance number            Service value of configuration definition file
  Additional instance numbers      Virtual-server-ID + 1000   30000 + instance-number
  (virtual servers)                Virtual-server-ID + 1500
  Additional instance numbers      20 to 499                  30000 + instance-number
  (common)
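The numbering rules in Table 2-2 above (virtual-server instances at virtual-server-ID + 1000 and virtual-server-ID + 1500, and service values at 30000 + instance-number for additional instances) are simple arithmetic, and can be checked with a small helper. A sketch, assuming a POSIX shell; the function names are illustrative:

```shell
#!/bin/sh
# Sketch: derive the additional CCI instance numbers and service values
# described in Table 2-2. (The default node instances 16 and 17 use the
# fixed service values 20331 and 20332 and do not follow this rule.)
vserver_instances() {
    # The two instance numbers dedicated to a virtual server.
    vsid=$1
    echo "$((vsid + 1000)) $((vsid + 1500))"
}
service_value() {
    # Service value of an additional instance in the configuration
    # definition file: 30000 + instance-number.
    echo "$((30000 + $1))"
}

vserver_instances 32    # prints: 1032 1532
service_value 1032      # prints: 31032
service_value 20        # prints: 30020
```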
To use additional instance numbers, use the horcsetconf command to create a CCI configuration definition file.
After configuring a command device, you can execute this command.
If a CCI configuration definition file is no longer required, delete it by using the horcunsetconf command (the configuration definition files for instance numbers 16 and 17 cannot be deleted). If the same CCI configuration definition file might be re-created later, create a backup of it by using the scp command. You can also check the already allocated instance numbers by using the horcconflist command.
Operation examples are described below. Follow these examples to add instance numbers and create a CCI configuration definition file.
To create CCI configuration definition files using instance numbers 16 and 20:

1. Use the ls /dev/sdu*u | sudo mkconf.sh -gg device-group-name -i 16 command to add the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST definitions to the CCI configuration definition file (instance number 16).
2. Use the sudo horcconfedit horcm16.conf command to change the HORCM_CMD definition in the CCI configuration definition file (instance number 16).
3. Delete the unnecessary devices from HORCM_DEV in the CCI configuration definition file (instance number 16).
4. Use the sudo horcsetconf -i 20 command to create a CCI configuration definition file with instance number 20.
5. Use the ls /dev/sdu*u | sudo mkconf.sh -gg device-group-name -i 20 command to add the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST definitions to the CCI configuration definition file (instance number 20).
6. Use the sudo horcconfedit horcm20.conf command to change the HORCM_CMD definition in the CCI configuration definition file (instance number 20).
7. Delete the unnecessary devices from HORCM_DEV in the CCI configuration definition file (instance number 20).
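The seven steps above can be strung together as a single sequence. The sketch below is a dry run: it only prints each command (the manual-edit steps appear as comment lines in the output), and device-group-name is a placeholder:

```shell
#!/bin/sh
# Dry-run sketch of the procedure above for instance numbers 16 and 20.
# Each command is printed, not executed; drop the echoes to run for real.
show_steps() {
    dg=$1
    echo "ls /dev/sdu*u | sudo mkconf.sh -gg $dg -i 16"   # step 1
    echo "sudo horcconfedit horcm16.conf"                 # step 2
    echo "# step 3: delete unneeded HORCM_DEV entries from horcm16.conf"
    echo "sudo horcsetconf -i 20"                         # step 4
    echo "ls /dev/sdu*u | sudo mkconf.sh -gg $dg -i 20"   # step 5
    echo "sudo horcconfedit horcm20.conf"                 # step 6
    echo "# step 7: delete unneeded HORCM_DEV entries from horcm20.conf"
}

show_steps "device-group-name"
```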
How to use ShadowImage

This section describes how to use ShadowImage on the HNAS F.
Overview of ShadowImage Operation on the HNAS F

The HNAS F system enables data consolidation on a storage system and data sharing among heterogeneous platforms over an existing LAN environment, acting as a file system server. By using the ShadowImage function, the HNAS F creates copies of original volumes within the same storage system at hardware speed, without reducing RAID redundancy. From these copied volumes, users can obtain backups of user data without interrupting current transactions, and can also perform additional transactions.
CCI is used to operate ShadowImage in the HNAS F system. Because CCI is installed with the OS on the HNAS F system, there is no need to install CCI separately.
Figure 2-1 Concept of ShadowImage Operation on the HNAS F on page 2-7 shows an overview of ShadowImage operation.
Figure 2-1 Concept of ShadowImage Operation on the HNAS F
The following table shows the main features provided by ShadowImage on the HNAS F.
Table 2-3 Main ShadowImage Features

  Item                                                ShadowImage support
  ShadowImage user interface                          CLI (Backup Restore commands and CCI commands)
  Relationship between P-VOL connected node and       Can be used between different nodes and between
  S-VOL connected node                                virtual servers, provided they are in the same
                                                      disk array
  Cascade configuration (TrueCopy-ShadowImage,        Applicable
  ShadowImage-ShadowImage)
  Duplicating file systems and differential-data      Applicable
  snapshots used by the file snapshot functionality
  Expanding or deleting file systems during           Applicable to the P-VOL. Applicable to the S-VOL
  ShadowImage operation                               only when its status is SMPL or SSUS.
  Starting or ending use of the file snapshot         If you want to continue using the file snapshot
  functionality, or expanding the device that         functionality, you need to change the
  stores the differential data                        configuration definition file.
  Handling of the file system obtained on the         Operation can continue.
  S-VOL after ShadowImage operation ends
  Support for NDMP-compatible backup                  No
  management software
ShadowImage is suitable for data replication when the replicated data is to be used for a purpose different from that of the original data, for cluster failure recovery, or for disaster recovery of a data center utilizing TrueCopy, because it allows for the creation of duplicate file systems and the placement of volumes in a cascade configuration.
Scope of ShadowImage Operations on the HNAS F
Scope Related to ShadowImage
Volume type
Only a user LU can be defined as a P-VOL or S-VOL of ShadowImage on the HNAS F system.
The OS disks and shared LU of a virtual server cannot be specified as a ShadowImage P-VOL or S-VOL.
Platform which can access P-VOL or S-VOL of ShadowImage
A P-VOL or S-VOL of ShadowImage created by the HNAS F system may be accessed by the HNAS F system or by clients connected to the HNAS F through a network. It cannot be accessed by hosts connected through serial ports or Fibre Channel ports.
Filesystem which can be placed on P-VOL of ShadowImage
When the Logical Volume Manager (LVM) on the OS is used, a file system that consists of 129 or more LUs and is not used by the file snapshot functionality cannot be placed on the P-VOL of ShadowImage.

File systems used by the file snapshot functionality that consist of 129 or more LUs each, including the file system and the differential devices, cannot be placed on the P-VOL of ShadowImage.
Scope Related to Backup Restore
Limitations on the functionality of the backup management software
If you resynchronize a ShadowImage pair defined by the volume replication function after performing a backup of the S-VOL by using the backup management software, a full backup will be acquired the next time.
Scope Related to the file snapshot functionality
The file snapshot functionality
When you copy a file system managed by the file snapshot functionality using ShadowImage, you must copy both the LUs that constitute the file system and the LUs that constitute the file system for differential devices. If you copy only the LUs that constitute the file system, you will not be able to connect them to the HNAS F system on the remote site.
The setting of an automatic creation schedule for the file snapshot functionality is not copied. The differential-data snapshot is not mounted at the copy destination.
Prerequisites for Using ShadowImage on the HNAS F
Hardware Prerequisites
You need a workstation or PC to log in to the HNAS F system using secure shell (SSH), in addition to the prerequisite hardware for ShadowImage, CCI, and the HNAS F system described in the following guides:
Manuals related to HNAS F
• Installation and Configuration Guide
Manuals related to storage systems
• Hitachi ShadowImage User's Guide (User Guide)
• Hitachi Command Control Interface (CCI) User and Reference Guide
• Hitachi TrueCopy User and Reference Guide (if TrueCopy is used in a cascaded configuration)
• Universal Replicator User's Guide (User Guide) (if Universal Replicator is used in a cascaded configuration)
Software Prerequisites
To use the HNAS F system, the program products in the HNAS F must be properly installed and have valid licenses.
In addition, to use ShadowImage in a HNAS F system, the following program products must be installed in the storage system to which HNAS F is connected, and their licenses must be valid:
• ShadowImage
• TrueCopy (required for cascaded configurations with TrueCopy)
• Universal Replicator (if Universal Replicator is used in a cascaded configuration)
Notes on Operating the Volume Replication Function
Mounting the NFS client for a file system whose data is backed up online
If you perform an online backup, by volume replication, of a file system accessed by NFS clients, specify NFS version 3 when mounting the file system on the NFS client. If you specify NFS version 2, you must specify the hard option when mounting the file system on the NFS client.
Limitations on ShadowImage operation due to cluster, node, or resource group status
When a cluster is not configured, the cluster is stopped, the nodes are stopped, the resource groups are offline, or the resource groups are split, connecting a device file or creating and mounting a file system is restricted. Due to these restrictions, the following operations performed during ShadowImage operation will also end in an error. You therefore should not operate the cluster, nodes, or resource groups during ShadowImage operation. Should any problems occur with the cluster, nodes, or resource groups, fix them immediately.
• Unmounting and mounting the source file system while a ShadowImage pair volume is being split
• Connecting the target file system to the node
• Unmounting and deleting the target file system before the ShadowImage pair is resynchronized
Notes on a failover occurring during operation on a node

To continue operation at the failover destination when a failover occurs during operation on a node, execute the command with the virtual IP address specified for the remote host. When you connect the S-VOL file system at the copy destination, execute the horcimport or horcvmimport command with the -r option added, specifying a resource group name.
Notes on changing the system configuration during ShadowImage operations
When one of the following system configuration changes is performed, the CCI configuration definition files on the node must be modified:
• Changing the fixed IP addresses of the node
• Expanding or deleting the source file system
• Defining, expanding, or deleting the differential-data storage device of the file snapshot functionality for the source file system
When the host name is specified in the CCI configuration definition file and you change any of the following system configurations, you must change the configuration definition file:
• Editing the /etc/hosts file (when resolving the host name using the /etc/hosts file)
• Changing registration information on the NIS server, or changing a setting for the NIS server (when resolving the host name using NIS)
• Changing registration information on the DNS server, or changing a setting for the DNS server (when resolving the host name using DNS)
• Changing the name of a node or a host
Notes on using external volumes
When using external volumes for P-VOLs or S-VOLs, do not perform the quick restore pairresync operation. For example, if the P-VOL is an internal volume and the S-VOL is an external volume, the quick restore pairresync operation swaps the contents of the P-VOL and S-VOL. Consequently, the P-VOL becomes an external volume and the S-VOL becomes an internal volume.
See the Hitachi ShadowImage User's Guide (User Guide) for further information on the quick restore pairresync operation.
Before performing periodic maintenance on external volumes, you must first delete all ShadowImage pairs.
Notes on splitting ShadowImage pairs by online backup
If a long time elapses between execution of the horcfreeze command and execution of the horcunfreeze command, a timeout might occur on some clients. In addition, if the file snapshot functionality is used on the copy-source file system, a timeout is more likely to occur because the horcfreeze command takes a long time to execute.
Confirming whether access to the file system is suspended

After you execute the horcfreeze command or the horcunfreeze command, you can use the fsctl command to check whether access from clients to the file system is suspended.
Notes on using volume replication with virtual servers
Operations that take place when CCI is not running, such as creating, editing, and deleting the CCI configuration definition file and log file, must be performed on a node. The table below lists the commands that cannot be executed on a virtual server. These commands must be executed on a node.
Table 2-4 Commands that Cannot Be Used on a Virtual Server

  Command name    Description
  horclogremove   Deletes log files and trace files stored in the CCI log directories
                  nasroot/logn (where n is the instance number).
  horcconflist    Displays a list of instance numbers that are in use.
  horcsetconf     Provides a model CCI configuration definition file and configures
                  the CCI log directory for a specific instance number.
  horcunsetconf   Deletes the CCI configuration definition file and CCI log directory
                  for a specific instance number.
  horcconfedit    Assists the user in editing the CCI configuration definition file.
  mkconf.sh       Defines the CCI configuration definition file.
CCI configuration definition files must be created and edited on both of the nodes where the virtual server operates. Make sure that the host name and IP address in the configuration file are appropriate for the node where the file is located. The instance numbers used by CCI on a virtual server are virtual-server-ID + 1000 and virtual-server-ID + 1500, and the numbers 20 to 499. You can use the vnasinfo command to check the virtual server ID.
When you execute the horcmstart.sh command to start CCI on a virtual server, CCI starts on the node where the virtual server is running. If the virtual server fails over, you can continue operation by starting CCI after the failover.
Different administrators are responsible for creating the CCI configuration definition file on the node and for operating CCI on the virtual server. To make it easier for both parties to be notified of pertinent events, we recommend that you incorporate the file system name into the volume group name. (Example: vg_fs01_001)
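The recommended naming convention above can be generated mechanically. A sketch, assuming a POSIX shell; the zero-padded sequence number is an illustrative convention, not a product requirement:

```shell
#!/bin/sh
# Sketch: build a volume group name that embeds the file system name, as
# recommended above (example form: vg_fs01_001). The three-digit zero-padded
# sequence number is an illustrative convention.
vg_name() {
    fs=$1
    seq=$2
    printf 'vg_%s_%03d\n' "$fs" "$seq"
}

vg_name fs01 1    # prints vg_fs01_001
vg_name fs02 12   # prints vg_fs02_012
```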
Table 2-5 Starting CCI Operation on a Virtual Server

  Step   Node   Virtual server   Operation
  1      --     Y                Notify the node administrator of the file system and
                                 volume to be used.
  2      Y      --               Determine the instance number to be used by the
                                 virtual server.
  3      Y      --               Use the mkconf.sh command or some other means to
                                 create and edit CCI configuration definition files on
                                 both nodes. Make sure that the host names and IP
                                 addresses in the files correspond to their respective
                                 nodes. For the IP address of the local node, use the IP
                                 address of the node where you edited the CCI
                                 configuration definition file.
  4      Y      --               Notify the virtual server administrator of the instance
                                 number and volume group name.
  5      --     Y                Start a CCI instance using the instance number
                                 reported by the node administrator, and begin
                                 operation.

  Legend:
  Y: Perform the operation.
  --: Do not perform the operation.
Table 2-6 Procedure after Restarting a Virtual Server

  Step   Node   Virtual server   Operation
  1      --     Y                Start a CCI instance. Ignore any errors that indicate
                                 an instance is already running.
  2      --     Y                Continue operation.

  Legend:
  Y: Perform the operation.
  --: Do not perform the operation.
Table 2-7 Ending CCI Operation on a Virtual Server

  Step   Node   Virtual server   Operation
  1      --     Y                Notify the node administrator that you intend to
                                 terminate CCI.
  2      Y      --               On both nodes, stop the CCI instance that was being
                                 used on the virtual server.
  3      Y      --               From both nodes, delete the configuration definition
                                 files for the CCI instance that was being used on the
                                 virtual server.
  4      Y      --               Delete the virtual server.

  Legend:
  Y: Perform the operation.
  --: Do not perform the operation.
Notes on file systems managed by the file snapshot functionality
The following settings and status of a copy-source file system managed by the file snapshot functionality are copied to the target file system.
• Warning threshold
• Operation threshold
• Overflow prevention
• Status of the differential data storage device
If a ShadowImage pair is split and connected to the node while the differential-data storage device in the copy-source file system does not have sufficient capacity, measures must be taken in the copy-source file system to resolve the capacity shortage.
Before splitting a ShadowImage pair, confirm that the differential-data storage device in the copy-source file system is in the normal status.
Note on file systems that support single-instancing
The single-instancing setting is not copied over to copy-destination file systems. If the single-instancing setting is enabled for a copy-source file system, after splitting a ShadowImage pair and connecting to a node, use the fsedit command to enable the single-instancing setting for the copy-destination file system. Also, set the policy for duplicate file capacity reduction.
Before splitting a ShadowImage pair, make sure that the duplicate file capacity reduction policy has not yet been executed on the copy-source file system. If a ShadowImage pair is split during policy execution, resynchronize the ShadowImage pair and perform its subsequent operations again.
Notes on tiered file systems
If the copy-source file system is a tiered file system, the LUs in all the file systems that make up the tiers must be assigned the same device group name in the configuration definition file.
To connect the copy-destination file system to a node or a virtual server, specify the --tier1 and --tier2 options for the horcimport command or the horcvmimport command if the copied file system is a tiered file system. Otherwise, specify the -d option.
If the tiered file system is already connected to a node or a virtual server, you need to set up a tier policy schedule. For details, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
Notes on pair recovery when using the replication functions
When the HNAS F system creates a file system, the S-VOL Disable attribute of Data Retention Utility is assigned to the LUs of the storage system. This prevents the HNAS F volumes from becoming S-VOLs.
If the copy-destination file system is connected to the node, a pair recovery cannot be performed because the S-VOL Disable attribute is assigned. Before performing a pair recovery, execute the horcexport command to separate the copy-destination file system from the node. Separating the copy-destination file system from the node releases the S-VOL Disable attribute, which makes it possible to perform a pair recovery.
Notes on dynamic recognition in the FC path
HNAS F automatically recognizes LUs connected to FC paths.
A user LU (device file number) is determined when the device file to be used for the copy target of the file system is reserved by using the horcvmdefine command. However, if the OS is rebooted before the device file number is determined, the device file number might change from what it was before the reboot. In the following cases, immediately reserve the device file to be used for the copy target of the file system by using the horcvmdefine command so that the device file number will not change.
• When starting to use the replication functions
• When a file system is deleted and the LU of the deleted file system is to be used as the S-VOL
If the OS is rebooted before the device file to be used for the copy target of the file system is reserved, use the horcdevlist command to find the device file numbers and the LU numbers that constitute the file system.
Note on WORM file systems
If ShadowImage is used to copy a WORM file system, the copied file system cannot be connected to a node or virtual server.
If the file system was encrypted by using the HNAS F functionality
If the local data encryption functionality is being used, the copy-destination file system can be connected to a node only when the copy-source file system and the copy-destination file system are in the same cluster. If the file systems are in different clusters, the copy-destination file system cannot be connected to the node.
Preparing for ShadowImage Operation
Preparing for ShadowImage Volume Pair Operation
Prepare the ShadowImage volume pair. For details about the preparation required on the storage system side, refer to the Hitachi ShadowImage User's Guide (User Guide).
Registering Public Key Used for SSH
Before issuing the commands described in this document, the public key used for SSH needs to be registered in the node or virtual server, and also in the node or virtual server that will use the S-VOL. You can register the public key on the Add Public Key page of the Access Protocol Configuration window.
Configuration Settings of CCI
Logging in to a node via SSH
Using the nasroot account via SSH, log in to both the node to which the ShadowImage P-VOL is connected and the node that will use the S-VOL. (For information about logging in, see the documentation for your SSH communication software.)
Setting the environment of instance numbers to be used
If you use the instance numbers allocated by default, the usage environment is already configured, and no operation is required here.
To use additional instance numbers, configure the environment for the instance numbers by using the horcsetconf command.

$ sudo horcsetconf -i instance-number
Example 2-5 Configuring the usage environment for additional instance numbers
You can check the instance numbers that are already set by using the horcconflist command. Use this command to search for instance numbers that are not in use.

$ sudo horcconflist
instance  node number or virtual server name
      16  node 0(D000000000), node 1(D000000001)
      17  node 0(D000000000), node 1(D000000001)
     499  node 0(D000000000), node 1(D000000001)
Example 2-6 Confirming already set instance numbers
The usage environment of additional instance numbers can be deleted, if necessary, by using the horcunsetconf command. Note that the usage environment of the instance numbers allocated by default cannot be deleted.

$ sudo horcunsetconf -i instance-number
Example 2-7 Deleting the usage environment of additional instance numbers
Configuring the CCI Configuration Definition Files
To control ShadowImage pairs using CCI, you must first define the ShadowImage pairs in the CCI configuration definition file.
When HVFP is installed, a template is provided for CCI configuration definition files.
The template of the CCI configuration definition file is:

/home/nasroot/horcm<instance-number>.conf

For nodes, <instance-number> is 16 or 17. Additional instance numbers are from 20 to 499. Use the CCI horcconfedit and mkconf.sh commands to define the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST sections in the template. Then edit the file to create the CCI configuration definition file.
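The template path follows a fixed pattern, so the file name for any instance number can be derived mechanically. A sketch, assuming a POSIX shell; the helper name is illustrative:

```shell
#!/bin/sh
# Sketch: derive the CCI configuration definition file path for a given
# instance number (/home/nasroot/horcm<instance-number>.conf).
horcm_conf_path() {
    printf '/home/nasroot/horcm%s.conf\n' "$1"
}

horcm_conf_path 16   # prints /home/nasroot/horcm16.conf
horcm_conf_path 20   # prints /home/nasroot/horcm20.conf
```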
You must perform these operations on both nodes. You therefore need to prepare a total of four CCI configuration files, assuming two instances per node.
In a HNAS F, you can define one or two CCI instances per node. To operate only ShadowImage pairs in CCI, you need one instance. To operate cascaded pairs in CCI, you need two instances.
In the following section, we explain how to create a CCI configuration definition file, using the HNAS F system with LUs as an example.
Figure 2-2 Example of Configuration of Pair LU
Note: To enable the two nodes to communicate over a LAN, you must define the network configuration.
Defining configuration definition files by using the CCI mkconf.sh command
Add the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST sections to the CCI configuration definition file template by using the CCI mkconf.sh command.

$ ls /dev/sdu*u | sudo mkconf.sh -gg <device-group-name> -i 16
Example 2-8 Defining configuration definition files (for instance number 16)
Note: Be sure to specify the -gg option for the mkconf.sh command. Specifying the -gg option assigns LU numbers allocated to host groups to pairs when the pairs are defined. When the -gg option is not specified, data is copied to an LU other than the required one, because the pair cannot be defined by using an LU number allocated to the host group. In addition, do not specify the -a option, because the command device path of the HORCM_CMD section is defined by the mkconf.sh command. For details, see the Hitachi Command Control Interface (CCI) User and Reference Guide.
To execute the mkconf.sh command, the file system to be paired must already be created. The LU configuration (number and size) of the file system to be paired needs to be exactly the same for both the P-VOL and the S-VOL.
To create the correct CCI configuration definition file, it is recommended that you create a file system to be temporarily paired before executing the mkconf.sh command. After creating the CCI configuration definition file using the mkconf.sh command, you can continue using the P-VOL file system as connected. Alternatively, you can delete it and create a file system with the same configuration using the same LUs, according to the procedure described in File System Creation on P-VOL of ShadowImage on page 2-29. However, you must delete the temporarily created file system for the S-VOL before creating a pair.
The following explains how to create a CCI configuration definition file on node 0, to which the P-VOL is connected, in the sample configuration below. You must create CCI configuration definition files for the other three nodes by using the same procedure.

$ ls /dev/sdu*u | sudo mkconf.sh -gg VG -i 16
starting HORCM inst 16
HORCM inst 16 starts successfully.
HORCM Shutdown inst 16 !!!
A CONFIG file was successfully completed.
starting HORCM inst 16
HORCM inst 16 starts successfully.
DEVICE_FILE   Group  PairVol  PORT     TARG  LUN  M  SERIAL  LDEV
/dev/sdu00u   VG     VG_000   CL1-A-1  0     0    -  62486   70
/dev/sdu01u   VG     VG_001   CL1-A-1  0     1    -  62486   18
 :
/dev/sdu10u   VG     VG_010   CL1-A-1  0     10   -  62486   10
/dev/sdu11u   VG     VG_011   CL1-A-1  0     11   -  62486   64
/dev/sdu12u   VG     VG_012   CL1-A-1  0     12   -  62486   12
/dev/sdu13u   VG     VG_013   CL1-A-1  0     13   -  62486   66
/dev/sdu14u   VG     VG_014   CL1-A-1  0     14   -  62486   14
/dev/sdu15u   VG     VG_015   CL1-A-1  0     15   -  62486   68
/dev/sdu16u   VG     VG_016   CL1-A-1  0     16   -  62486   16
HORCM Shutdown inst 16 !!!
Please check '/home/nasroot/horcm16.conf', '/home/nasroot/log16/curlog/horcm_*.log', and modify 'ip_address & service'.
Example 2-9 Example of "mkconf.sh" Execution Result (for instance number 16)
Then, use the horcconfedit command to change the HORCM_CMD definition in the CCI configuration definition file to a format that does not depend on device file changes (\\.\CMD-<serial-number>:/dev/sd).

$ sudo horcconfedit horcm16.conf
Example 2-10 Changing the HORCM_CMD definition in the configuration definition file (for instance number 16)
Editing the CCI configuration definition file
The following table shows the values that are specified for the items included in the CCI configuration definition file in the HNAS F system.
Table 2-8 Configuration Definition File Settings (HORCM_MON) and Specified Values in the HNAS F system

  Section name   Item         Specified values in the HNAS F system
  HORCM_MON      ip_address   Fixed IP address of the node
                 service      Specify any of the following:
                              • 20331 (for a node, if the instance number is 16)
                              • 20332 (for a node, if the instance number is 17)
                              • 31032 to 31254 or 31532 to 31754 (for a virtual
                                server; instance-number + 30000)
                              • 30020 to 30499 (common; instance-number + 30000)
Note: A host name can be specified instead of a fixed IP address if the IP address and the corresponding host name are registered in /etc/hosts, on an NIS server, or on a DNS server. You can edit /etc/hosts on the Edit System File page of the Network & System Configuration window. You can set the NIS server or DNS server information on the DNS, NIS, LDAP Setup page of the Network & System Configuration window.
Using the information in Table 2-8 Configuration Definition File Settings (HORCM_MON) and Specified Values in the HNAS F system on page 2-20, change the entries for service and ip_address to appropriate values.
Change the entries poll and timeout to the appropriate values according tohardware requirements.
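The HORCM_MON edit above amounts to rewriting the first data line after the HORCM_MON section header. As a minimal illustrative sketch (not an HNAS F or CCI tool — the function name and layout are assumptions), the edit can be expressed as plain text manipulation:

```python
# Illustrative sketch: update the ip_address and service fields on the
# data line that follows the HORCM_MON section header of a CCI
# configuration definition file held as text.  Not a product utility.
def set_horcm_mon(conf_text, ip_address, service):
    lines = conf_text.splitlines()
    in_mon = False
    for i, line in enumerate(lines):
        stripped = line.strip()
        if stripped.startswith("HORCM_"):
            in_mon = stripped == "HORCM_MON"   # track which section we are in
            continue
        if in_mon and stripped and not stripped.startswith("#"):
            fields = stripped.split()
            fields[0] = ip_address             # ip_address column
            fields[1] = str(service)           # service column
            lines[i] = "\t".join(fields)
            break
    return "\n".join(lines)

conf = """HORCM_MON
#ip_address   service  poll(10ms)  timeout(10ms)
127.0.0.1     52323    1000        3000
"""
print(set_horcm_mon(conf, "123.45.78.51", 20331))
```

The poll and timeout columns are left untouched, matching the instruction to change them only when hardware requirements demand it.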
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
123.45.78.51  20331     1000         3000

HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
# /dev/sdu00u SER = 62486 LDEV = 70 [ FIBRE FCTBL = 3 ]
VG  VG_000  CL1-A-1  0  0
# /dev/sdu01u SER = 62486 LDEV = 18 [ FIBRE FCTBL = 3 ]
VG  VG_001  CL1-A-1  0  1
 :
 :
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG  VG_013  CL1-A-1  0  13
# /dev/sdu14u SER = 62486 LDEV = 14 [ FIBRE FCTBL = 3 ]
VG  VG_014  CL1-A-1  0  14
# /dev/sdu15u SER = 62486 LDEV = 68 [ FIBRE FCTBL = 3 ]
VG  VG_015  CL1-A-1  0  15
# /dev/sdu16u SER = 62486 LDEV = 16 [ FIBRE FCTBL = 3 ]
VG  VG_016  CL1-A-1  0  16

HORCM_INST
#dev_group  ip_address  service
VG  127.0.0.1  52323
Example 2-11 Example of CCI Configuration Definition File (for instance number 16)

Next, in the HORCM_DEV section, remove all unnecessary LU entries (lines) other than those to be managed by CCI.

The LUs that make up a file system, and their LDEV IDs, can be obtained by using the following command:

$ sudo horcdevlist | grep ':filesystem-name'

In Example 2-12 Example of Checking the LUs That Make Up the File System "sample" on page 2-21, LUs 11, 12, and 13, shown in the leftmost column, make up the file system "sample", and their LDEV IDs are 64, 12, and 66, shown in the third column from the left.

$ sudo horcdevlist | grep ':sample$'
11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample

Example 2-12 Example of Checking the LUs That Make Up the File System "sample"
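Reading the LU number and LDEV ID off the horcdevlist output above is a simple column lookup. The following is an illustrative sketch only (the helper name is an assumption, not an HNAS F command), using the output shown in Example 2-12 as sample data:

```python
# Illustrative sketch: given horcdevlist-style output lines, extract the
# LU number (first column) and LDEV ID (third column) of each LU that
# makes up the file system.
sample_output = """\
11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample"""

def lu_ldev_pairs(text):
    pairs = []
    for line in text.splitlines():
        fields = line.split()
        pairs.append((int(fields[0]), int(fields[2])))  # (LU, LDEV)
    return pairs

print(lu_ldev_pairs(sample_output))  # [(11, 64), (12, 12), (13, 66)]
```

These (LU, LDEV) pairs are exactly what you need when pruning the HORCM_DEV section in the next step.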
Next, modify the device file name and the device name that are to be handled by CCI. There are some restrictions on modifying the device file name or device name:

• Volumes on both the P-VOL and S-VOL must be assigned the same device group name and the same device name.

• Volumes making up one file system must be assigned the same device group name. When the file snapshot functionality is utilized, volumes for the file system and the differential-data storage device must be assigned the same device group name.

• For a tiered file system, all of the LUs used to configure the tier (including differential-data storage devices) must be assigned the same device group name.
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
123.45.78.51  20331     1000         3000

HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG  VG_011  CL1-A-1  0  11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG  VG_012  CL1-A-1  0  12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG  VG_013  CL1-A-1  0  13

HORCM_INST
#dev_group  ip_address  service
VG  127.0.0.1  52323

Example 2-13 Example of CCI Configuration Definition File - 2 (for instance number 16)
For one LU, two lines of HORCM_DEV section information are output: the comment line starting with a #, and the definition line directly below it. Use the /dev/sdu**u (where ** is the LU number) and LDEV = ** (where ** is the LDEV number) text output in the comment lines to identify the necessary entries.
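The pruning described above (keep only the comment/definition line pairs whose LU numbers belong to the file system) can be sketched as follows. This is illustrative only, assuming the comment-line format shown in the examples; it is not a product tool:

```python
# Illustrative sketch: keep only the HORCM_DEV entries for the LUs that
# make up the file system (LUs 11-13 in the running example).  Each entry
# is a comment line ("# /dev/sduNNu SER = ... LDEV = ...") followed by a
# definition line; the LU number is taken from the /dev/sduNNu name.
import re

def filter_horcm_dev(dev_lines, wanted_lus):
    kept = []
    for i in range(0, len(dev_lines) - 1, 2):
        comment, definition = dev_lines[i], dev_lines[i + 1]
        m = re.search(r"/dev/sdu(\d+)u", comment)
        if m and int(m.group(1)) in wanted_lus:
            kept.extend([comment, definition])
    return kept

dev_lines = [
    "# /dev/sdu00u SER = 62486 LDEV = 70 [ FIBRE FCTBL = 3 ]",
    "VG VG_000 CL1-A-1 0 0",
    "# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]",
    "VG VG_011 CL1-A-1 0 11",
    "# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]",
    "VG VG_012 CL1-A-1 0 12",
]
print(filter_horcm_dev(dev_lines, {11, 12, 13}))
```

In practice you would apply the same selection by hand in an editor; the point is that the comment line, not the definition line, carries the LU and LDEV numbers you match against.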
Next, as an item to be set in the HORCM_INST section, specify the IP address for the S-VOL side. As preparation for failover, specify the ip_address of the instance for each node.

Table 2-9 Sections and Items in the HORCM_INST Section of the CCI Configuration Definition File and Required Values for the HNAS F system
Section       Item         Values for HNAS F system

HORCM_INST    ip_address   Fixed IP address of the other ShadowImage node

              service      Specify any of the following:
                           • 20331 (in the case of a node, if the instance number is 16)
                           • 20332 (in the case of a node, if the instance number is 17)
                           • 31032 to 31254 or 31532 to 31754 (in the case of a virtual server; instance number + 30000)
                           • 30020 to 30499 (common; instance number + 30000)

Note: A host name can be specified instead of a fixed IP address if the IP address and the corresponding host name are registered in /etc/hosts, on an NIS server, or on a DNS server. You can edit /etc/hosts on the Edit System File page of the Network & System Configuration window. You can set the NIS server or DNS server information on the DNS, NIS, LDAP Setup page of the Network & System Configuration window.
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
123.45.78.51  20331     1000         3000

HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG  VG_011  CL1-A-1  0  11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG  VG_012  CL1-A-1  0  12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG  VG_013  CL1-A-1  0  13

HORCM_INST
#dev_group  ip_address    service
VG  123.45.78.51  20332
VG  123.45.78.52  20332

Example 2-14 Example of CCI Configuration Definition File - 3 (for instance number 16)
Checking the contents of the CCI configuration definition file
By combining the following commands, you can check whether an appropriate LU is specified in the HORCM_DEV section of the CCI configuration definition file.

Start CCI on both the node or virtual server connected to the ShadowImage P-VOL and the node or virtual server that will use the S-VOL.
$ sudo horcsetenv HORCMINST 16    (for instance number 16)
or
$ sudo horcsetenv HORCMINST 17    (for instance number 17)
$ sudo horcsetenv HORCC_MRCF 1

If these settings are made through SSH, log out once and then log in again to validate the changes.

$ sudo horcmstart.sh
Example 2-15 Procedure for Starting CCI
To obtain the LDEV IDs of the LUs specified in the HORCM_DEV section, execute the pairdisplay command on the node or virtual server connected to the ShadowImage P-VOL, or on the node or virtual server connected to the volume that either is or will be used as the ShadowImage S-VOL.

$ sudo pairdisplay -g device-group-name

Example 2-16 How to Check LDEV Number of LUs Specified in the HORCM_DEV Section

You can check the device file numbers and LDEV IDs that make up the file system by executing the horcdevlist command on the node or virtual server connected to the ShadowImage P-VOL. Verify the result against the result shown in Example 2-16 How to Check LDEV Number of LUs Specified in the HORCM_DEV Section on page 2-24.

$ sudo horcdevlist | grep ':sample$'
11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample
Example 2-17 How to Check Device File Numbers and LDEV Numbers of P-VOL
You can also check the device file numbers and the LDEV numbers available for a ShadowImage S-VOL by executing the horcdevlist command on the node or virtual server connected to the ShadowImage S-VOL. Verify the result against the result shown in Example 2-16 How to Check LDEV Number of LUs Specified in the HORCM_DEV Section on page 2-24.

$ sudo horcdevlist | grep ' Free$'
21 62486 74 OPEN-V 3.906GB -- -- - Normal Free
22 62486 22 OPEN-V 3.906GB -- -- - Normal Free
23 62486 76 OPEN-V 3.906GB -- -- - Normal Free

Example 2-18 How to Check Device File Numbers and LDEV Numbers for Unused Device Files Which Can Be S-VOLs
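A quick sanity check at this point is to compare the two horcdevlist outputs: there should be a free device file of matching size for each device file making up the P-VOL file system. This sketch is illustrative only, using the sample outputs above; the one-for-one size comparison is an assumption about how you would pair candidates, not a product rule:

```python
# Illustrative sketch: parse the two horcdevlist outputs and check that
# there are enough free device files (S-VOL candidates) of matching size
# for the device files that make up the P-VOL file system.
pvol_out = """\
11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample"""
free_out = """\
21 62486 74 OPEN-V 3.906GB -- -- - Normal Free
22 62486 22 OPEN-V 3.906GB -- -- - Normal Free
23 62486 76 OPEN-V 3.906GB -- -- - Normal Free"""

def devices(text):
    # (device-file-number, serial, LDEV, size) per output line
    return [(f[0], f[1], f[2], f[4]) for f in
            (line.split() for line in text.splitlines())]

pvol, free = devices(pvol_out), devices(free_out)
assert len(free) >= len(pvol)
# assumed check: candidates match the P-VOL devices in size, one for one
assert [d[3] for d in free] == [d[3] for d in pvol]
print("S-VOL candidates are sufficient")
```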
When specifying the port name in the HORCM_DEV section of the CCI configuration definition file, use the name of the storage system Fibre Channel port connected to the node or virtual server.

After the CCI configuration definition files have been validated, stop CCI on the nodes connected to the ShadowImage P-VOL and S-VOL.

$ sudo horcmshutdown.sh
Example 2-19 Procedure for Stopping CCI
To save the CCI configuration definition files, save the system settings information. For details about saving system settings information, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
An example of the configuration definition file for a tiered system
An example of the configuration definition file for a tiered system is provided below.

Figure 2-3 Example of a Tiered File System

$ sudo horcdevlist | grep ':sample$'
10 62486 10 OPEN-V 3.906GB -- -- - Normal Tier1,File:sample
11 62486 64 OPEN-V 3.906GB -- -- - Normal Tier1,File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal Tier2,File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal Tier2,File:sample

Example 2-20 How To Check the Device File Number and the LU Number of a P-VOL
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
123.45.78.51  20331     1000         3000

HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
# /dev/sdu10u SER = 62486 LDEV = 10 [ FIBRE FCTBL = 3 ]
VG  VG_010  CL1-A-1  0  10
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG  VG_011  CL1-A-1  0  11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG  VG_012  CL1-A-1  0  12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG  VG_013  CL1-A-1  0  13

HORCM_INST
#dev_group  ip_address    service
VG  123.45.80.51  20332
VG  123.45.78.52  20332
Example 2-21 Example of the CCI Configuration Definition File (for P-VOLs)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
123.45.78.52  20332     1000         3000

HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
# /dev/sdu20u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG  VG_010  CL1-A-1  0  20
# /dev/sdu21u SER = 62486 LDEV = 74 [ FIBRE FCTBL = 3 ]
VG  VG_011  CL1-A-1  0  21
# /dev/sdu22u SER = 62486 LDEV = 22 [ FIBRE FCTBL = 3 ]
VG  VG_012  CL1-A-1  0  22
# /dev/sdu23u SER = 62486 LDEV = 76 [ FIBRE FCTBL = 3 ]
VG  VG_013  CL1-A-1  0  23

HORCM_INST
#dev_group  ip_address    service
VG  123.45.78.51  20331
VG  123.45.78.52  20331
Example 2-22 Example of a CCI Configuration Definition File (for S-VOLs)
Cascaded Configuration
A TrueCopy pair and a ShadowImage pair, or multiple ShadowImage pairs, can be cascaded on the HNAS F system. In a cascaded configuration of a TrueCopy pair and a ShadowImage pair, ShadowImage can periodically back up the TrueCopy S-VOL file system (which is copied from the TrueCopy P-VOL) to prepare for disasters in which the TrueCopy S-VOL cannot restore the file system. In a cascaded configuration of multiple ShadowImage pairs, up to 9 S-VOLs can be created for one P-VOL.

Figure 2-4 shows an example of a cascaded configuration of multiple ShadowImage pairs. For a cascaded configuration of a TrueCopy pair file system and a ShadowImage pair file system, see the Hitachi TrueCopy User and Reference Guide.
Figure 2-4 Example of a Cascaded Configuration of ShadowImage Pairs
By preparing the following CCI configuration definition file, cascaded multiple ShadowImage pairs can be managed by CCI.

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
123.45.80.51  20331     1000         3000

HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62490)
\\.\CMD-62490:/dev/sd

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
# /dev/sdu20u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_SI2  VG_020  CL1-A-1  0  20  0
# /dev/sdu20u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_SI2  VG_021  CL1-A-1  0  20  1

HORCM_INST
#dev_group  ip_address     service
VG_SI2  123.45.80.51   20332
VG_SI2  123.45.80.115  20332
Example 2-23 Example of CCI Configuration Definition File on the Cluster (Instance 16)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
123.45.80.51  20332     1000         3000

HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62490)
\\.\CMD-62490:/dev/sd

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
# /dev/sdu30u SER = 62486 LDEV = 30 [ FIBRE FCTBL = 3 ]
VG_SI2  VG_020  CL1-A-1  0  30  0
# /dev/sdu31u SER = 62486 LDEV = 31 [ FIBRE FCTBL = 3 ]
VG_SI2  VG_021  CL1-A-1  0  31  0

HORCM_INST
#dev_group  ip_address     service
VG_SI2  123.45.80.51   20331
VG_SI2  123.45.80.115  20331

Example 2-24 Example of CCI Configuration Definition File of the Cluster (Instance 17)
Setting the CCI User Environmental Variables
The HNAS F system allows you to set the following user environment variables.
• HORCMINST
• HORCC_MRCF
• HORCC_SPLT
• HORCC_RSYN
• HORCC_REST
Modify HORCMINST, HORCC_MRCF, HORCC_SPLT, HORCC_RSYN, and HORCC_REST according to the following procedure so that they match the current system configuration. This must be done on the nodes or virtual servers connected to the ShadowImage P-VOL and to the volume that will be the S-VOL.

1. Set the environment variable for the CCI instance(s).

sudo horcsetenv HORCMINST 16    (for instance number 16)
or
sudo horcsetenv HORCMINST 17    (for instance number 17)

2. Set the environment variable so that the CCI HOMRCF command is used as ShadowImage.

sudo horcsetenv HORCC_MRCF 1

3. Set the environment variables HORCC_SPLT, HORCC_RSYN, and HORCC_REST as needed.

sudo horcsetenv <environment-variable-name> <value>

For example, to set HORCC_SPLT to "NORMAL":

sudo horcsetenv HORCC_SPLT NORMAL
The following table shows the values that can be set for each environment variable and the operation performed when each value is set. If you do not set these environment variables, each operation is performed according to the settings of the storage system. See the Hitachi ShadowImage User's Guide for further information on each operation.

Table 2-10 Values that Can Be Set as CCI User Environment Variables and the Operation When Each Value Is Set
Environment Variable Name   Value    Operation

HORCC_SPLT                  NORMAL   The pair split operation performs as a steady split.
                            QUICK    The pair split operation performs as a quick split.

HORCC_RSYN                  NORMAL   The pairresync operation performs as a normal copy.
                            QUICK    The pairresync operation performs as a quick resync.

HORCC_REST                  NORMAL   The backward pairresync operation performs as a reverse copy.
                            QUICK    The backward pairresync operation performs as a quick restore.
4. If step 1, 2, or 3 was performed through SSH, log out once and log in again to validate these settings.

exit
ssh nasroot@<fixed-IP-address-or-virtual-IP-address>

To confirm the defined variables, type the following command:

$ sudo horcprintenv

The following table shows the settings of the environment variables just after HNAS F is installed.

Table 2-11 Settings of Environment Variables just after the Installation of HNAS F
Environment Variables Setting
HORCMINST 16 (for a node)
HORCC_MRCF No setting.
File System Creation on P-VOL of ShadowImage
Create a file system on the ShadowImage P-VOL through the Create New File System window of File Services Manager, or by using the fscreate command. If no file system is created on the ShadowImage P-VOL, the ShadowImage S-VOL cannot be accessed from the node or virtual server to which the S-VOL is connected, even if the volume pair is successfully created and split.
Overview of Using ShadowImage

This section describes the overview of ShadowImage operations, Command Control Interface (CCI) commands, and the commands provided by HNAS F for using ShadowImage for volume replication.

We describe only the arguments of the CCI commands that are required for the basic ShadowImage operations. For other arguments, see the Hitachi Command Control Interface (CCI) User and Reference Guide. For the commands provided by the HNAS F products, see Commands that HNAS F provides on page 2-132.

For the node operation examples in this section, instance numbers 16 and 17 are used. If you are using other instance numbers, replace 16 and 17 in the examples with those numbers.

This chapter describes the procedure for backing up file systems. An overview of the operations and the corresponding sections is shown in the figure below. When the pair is in the PSUS state, the S-VOL can be accessed from the node or virtual server connected to the S-VOL.
Figure 2-5 Overview of the Operations (and the Related Document Sections)
Starting ShadowImage Operations and Creating ShadowImage Pair
If the ShadowImage S-VOL contains a file system, you need to delete the file system using the fsdelete command in File Services Manager before starting the ShadowImage operation. If the ShadowImage S-VOL has a device that stores the differential data of the file snapshot functionality, you need to release the device using the syncstop command before starting the ShadowImage operation.

To start ShadowImage operations and create a ShadowImage volume pair:
1. Start CCI from the nodes or virtual servers connected to the P-VOL and S-VOL.

sudo horcmstart.sh          (1-instance configuration)
or
sudo horcmstart.sh 16 17    (2-instance configuration)
2. In the node or virtual server to which the S-VOL is connected, reserve a device file to be used as the target file system.

$ sudo horcvmdefine -d device-file-number,...

3. Create a ShadowImage volume pair in the node or virtual server to which the P-VOL is connected.

$ sudo paircreate {-g group-name|-d volume-name} -vl

4. Check whether the ShadowImage volume pair has been created, in the node or virtual server to which the P-VOL is connected.

$ sudo pairvolchk {-g group-name|-d volume-name}

Note: You can also use the pairevtwait command, which waits for the volumes to be paired.

a. pairvolchk : Volstat is P-VOL.[status = COPY] => Creating
b. pairvolchk : Volstat is P-VOL.[status = PAIR] => Created
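If you script pair creation, the pairvolchk result line can be classified mechanically. The following is an illustrative sketch (the helper name is an assumption; it simply applies the COPY/PAIR mapping from items a and b above to sample output text):

```python
# Illustrative sketch: classify a pairvolchk result line such as
# "pairvolchk : Volstat is P-VOL.[status = COPY]" into the states used
# in this procedure (Creating while copying, Created once paired).
import re

def pair_state(pairvolchk_line):
    m = re.search(r"\[status = (\w+)\]", pairvolchk_line)
    status = m.group(1) if m else None
    return {"COPY": "Creating", "PAIR": "Created"}.get(status, "Unknown")

print(pair_state("pairvolchk : Volstat is P-VOL.[status = COPY]"))  # Creating
print(pair_state("pairvolchk : Volstat is P-VOL.[status = PAIR]"))  # Created
```

In production scripts the pairevtwait command mentioned in the note is simpler, since it blocks until the pair reaches the desired state instead of polling.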
Splitting a ShadowImage Volume Pair
The method for splitting a ShadowImage volume pair depends on whether you perform an offline backup or an online backup. For an offline backup, the P-VOL is unmounted to split the pair. For an online backup, instead of unmounting the P-VOL, updates to the file system are temporarily stopped to split the pair.

In the offline backup method, a ShadowImage volume pair is split after the NFS shares and CIFS shares are deleted and access from clients has completely stopped. An I/O error is reported to the application if the user deletes the NFS/CIFS shares while the application is writing data to the P-VOL, or if the application tries to write data to the P-VOL after the user deletes the shares. As a result, the application can determine which of its data is reflected in the ShadowImage volume pair. This method can be used for almost all applications.

With online backup, the ShadowImage volume pair is split without deleting the NFS/CIFS shares. Because an I/O error is not reported to the application that is writing data to the P-VOL, the application cannot determine how much of its data is reflected in the S-VOL. Therefore, this method can be used only for applications that are able to keep track of the progress of writing data, for example by using journal files.
ShadowImage Volume Pair Split by Offline Backup
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not published in the shared file system even though the file system is operated by the file snapshot functionality

1. Stop the program that accesses the P-VOL, in the node or virtual server to which the P-VOL is connected. Then, unmount the NFS shares on the client side.

2. In the node or virtual server to which the P-VOL is connected, delete the NFS/CIFS shares using the nfsdelete command and the cifsdelete command in File Services Manager, and unmount all file systems in the pair using the fsumount command.

3. In the node or virtual server to which the P-VOL is connected, stop operations from the file snapshot functionality to the P-VOL.

$ sudo horcfreeze -f source-file-system-name

4. In the node or virtual server to which the P-VOL is connected, split the ShadowImage volume pair.

$ sudo pairsplit {-g group-name|-d volume-name}
5. In the node or virtual server to which the P-VOL is connected, confirm that the ShadowImage volume pair is split.

$ sudo pairvolchk {-g group-name|-d volume-name}

Note: You can also use the pairevtwait command, which waits for the pair to be split (PSUS).

a. pairvolchk : Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk : Volstat is P-VOL.[status = PSUS] => Split

6. In the node or virtual server to which the P-VOL is connected, permit the stopped operations from the file snapshot functionality to the P-VOL.

$ sudo horcunfreeze -f source-file-system-name

7. In the node or virtual server to which the P-VOL is connected, mount the P-VOL using the fsmount command in File Services Manager and set the NFS/CIFS shares using the nfscreate command and the cifscreate command.

8. In the node or virtual server to which the P-VOL is connected, mount the NFS shares of the P-VOL on the client side. Then, restart the program that accesses the P-VOL.
9. In the node or virtual server to which the S-VOL is connected, connect the target file system to the node or virtual server.

For a non-LVM and non-tiered file system:
sudo horcimport -f target-file-system-name -d device-file-number

For a non-LVM and tiered file system:
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number

For an LVM and non-tiered file system:
sudo horcvmimport -f target-file-system-name -d device-file-number,...

For an LVM and tiered file system:
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...

10. In the node or virtual server to which the S-VOL is connected, mount the S-VOL using the fsmount command. After that, set the NFS/CIFS shares using the nfscreate command and the cifscreate command.

Note: In the file snapshot functionality target file system, mount the differential-data snapshot using the syncmount command if necessary, and set the NFS/CIFS shares using the nfscreate command and the cifscreate command.

11. In the node or virtual server to which the S-VOL is connected, start the program that accesses the S-VOL.
When the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system

1. In the node or virtual server to which the P-VOL is connected, stop the program that accesses the P-VOL, and unmount the NFS shares on the client side.

2. In the node or virtual server to which the P-VOL is connected, unmount the differential-data snapshot using the syncumount command, delete the NFS/CIFS shares using the nfsdelete command and the cifsdelete command in File Services Manager, and unmount all file systems in the pair using the fsumount command.

3. In the node or virtual server to which the P-VOL is connected, stop operations from the file snapshot functionality to the P-VOL.

$ sudo horcfreeze -f source-file-system-name

4. In the node or virtual server to which the P-VOL is connected, split the ShadowImage volume pair.

$ sudo pairsplit {-g group-name|-d volume-name}
5. In the node or virtual server to which the P-VOL is connected, confirm that the ShadowImage volume pair is split.

$ sudo pairvolchk {-g group-name|-d volume-name}

Note: You can also use the pairevtwait command, which waits for the pair to be split (PSUS).

a. pairvolchk : Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk : Volstat is P-VOL.[status = PSUS] => Split

6. In the node or virtual server to which the P-VOL is connected, permit the stopped operations from the file snapshot functionality to the P-VOL.

$ sudo horcunfreeze -f source-file-system-name

7. In the node or virtual server to which the P-VOL is connected, mount the P-VOL using the fsmount command in File Services Manager, set the NFS/CIFS shares using the nfscreate command and the cifscreate command, and then mount the differential-data snapshots using the syncmount command.
8. In the node or virtual server to which the P-VOL is connected, mount the NFS shares of the P-VOL on the client side, and restart the program that accesses the P-VOL.

9. In the node or virtual server to which the S-VOL is connected, connect the target file system to the node or virtual server.

For a non-tiered file system:
sudo horcvmimport -f target-file-system-name -d device-file-number,...

For a tiered file system:
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...

10. From the node or virtual server that is connected to the S-VOL, execute the fsmount command to mount the S-VOL. After that, set the NFS/CIFS shares using the nfscreate command and the cifscreate command.

11. In the node or virtual server to which the S-VOL is connected, mount the differential-data snapshot using the syncmount command, and make the NFS/CIFS shares public.

12. In the node or virtual server to which the S-VOL is connected, start the program that accesses the S-VOL.
ShadowImage Volume Pair Split by Online Backup
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not published in the shared file system even though the file system is operated by the file snapshot functionality

1. In the node or virtual server to which the P-VOL is connected, stop the operation from the file snapshot functionality to the P-VOL, hold access from the client, and write the unreflected data to the disk.

$ sudo horcfreeze -f source-file-system-name

2. In the node or virtual server to which the P-VOL is connected, split the ShadowImage volume pair.

$ sudo pairsplit {-g group-name|-d volume-name}

3. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair is split.

$ sudo pairvolchk {-g group-name|-d volume-name}

Note: You can also use the pairevtwait command, which waits for the pair to be split (PSUS).

a. pairvolchk : Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk : Volstat is P-VOL.[status = PSUS] => Split

4. In the node or virtual server to which the P-VOL is connected, permit write requests to the P-VOL, and permit the stopped access from the file snapshot functionality.

$ sudo horcunfreeze -f source-file-system-name
5. In the node or virtual server to which the S-VOL is connected, connect the target file system to the node or virtual server.

For a non-LVM and non-tiered file system:
sudo horcimport -f target-file-system-name -d device-file-number

For a non-LVM and tiered file system:
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number

For an LVM and non-tiered file system:
sudo horcvmimport -f target-file-system-name -d device-file-number,...

For an LVM and tiered file system:
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...

6. In the node or virtual server to which the S-VOL is connected, mount the S-VOL using the fsmount command. After that, set the NFS/CIFS shares using the nfscreate command and the cifscreate command.

Note: In the file snapshot functionality target file system, mount the differential-data snapshot using the syncmount command if necessary, and set the NFS/CIFS shares using the nfscreate command and the cifscreate command.

7. In the node or virtual server to which the S-VOL is connected, start the program that accesses the S-VOL.
When the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system

1. In the node or virtual server to which the P-VOL is connected, stop the operation from the file snapshot functionality to the P-VOL, stop client access requests, and write the unreflected data to the disk.

$ sudo horcfreeze -f source-file-system-name

2. In the node or virtual server to which the P-VOL is connected, split the ShadowImage volume pair.

$ sudo pairsplit {-g group-name|-d volume-name}

3. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair is split.

$ sudo pairvolchk {-g group-name|-d volume-name}

Note: You can also use the pairevtwait command, which waits for the pair to be split (PSUS).

a. pairvolchk : Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk : Volstat is P-VOL.[status = PSUS] => Split
4. In the node or virtual server to which the P-VOL is connected, permit the stopped client access requests to the P-VOL, and permit the stopped operations from the file snapshot functionality.

$ sudo horcunfreeze -f source-file-system-name

5. In the node or virtual server to which the S-VOL is connected, connect the target file system to the node or virtual server.

For a non-tiered file system:
sudo horcvmimport -f target-file-system-name -d device-file-number

For a tiered file system:
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...

6. In the node or virtual server to which the S-VOL is connected, mount the S-VOL using the fsmount command. After that, set the NFS/CIFS shares using the nfscreate command and the cifscreate command.

7. In the node or virtual server to which the S-VOL is connected, mount the differential-data snapshot using the syncmount command, and make the NFS/CIFS shares public.

8. In the node or virtual server to which the S-VOL is connected, start the program that accesses the S-VOL.
Resynchronizing a ShadowImage Volume Pair
You need to separate the target file system using the horcexport command before resynchronizing the pair. Note, however, that even though the target file system is separated, resynchronizing the volume pair does not perform an initial copy.

Note: In the file snapshot functionality target file system, unmount the differential-data snapshot using the syncumount command before executing the horcexport command.
To resynchronize the ShadowImage volume pair:
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality

1. Stop the program that accesses the S-VOL, in the node or virtual server to which the S-VOL is connected. Then, unmount the NFS shares on the client side.

2. In the node or virtual server to which the S-VOL is connected, use the nfsdelete command and the cifsdelete command to delete the NFS/CIFS shares. Then, use the fsumount command to unmount the file systems.

Note: In the file snapshot functionality target file system, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, and unmount the differential-data snapshot by using the syncumount command.

3. In the node or virtual server to which the S-VOL is connected, separate the file system of the S-VOL by using the horcexport command.

$ sudo horcexport -f target-file-system-name
4. In the node or virtual server to which the P-VOL is connected, resume theShadowImage volume pair.$ sudo pairresync {-g group-name |-d volume-name }
5. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair has been resynchronized.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note: You can also use the pairevtwait command, which waits for the volumes to be paired.
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Recovering
b. pairvolchk : Volstat is P-VOL.[status = PAIR] => Recovered
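The resynchronization sequence above can be sketched as a dry-run shell script. The file system name (fs01) and pair group name (SIgroup) are hypothetical, and the run helper only echoes each command rather than executing it; on a real node you would run the commands directly with sudo.

```shell
#!/bin/sh
# Dry-run sketch of resynchronizing a ShadowImage volume pair.
# Hypothetical names: target file system "fs01", pair group "SIgroup".
# run() echoes each command instead of executing it.
run() { echo "sudo $*"; }

# On the node connected to the S-VOL: separate the target file system.
run horcexport -f fs01

# On the node connected to the P-VOL: resynchronize, then verify the pair.
run pairresync -g SIgroup
run pairvolchk -g SIgroup   # expect [status = PAIR] when resync is done
```

The same sequence works with `-d volume-name` in place of `-g group-name` when operating on a single volume rather than a group.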
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
1. In the node or virtual server to which the S-VOL is connected, stop the program that accesses the S-VOL. Then, unmount the NFS shares on the client side.
2. In the node or virtual server to which the S-VOL is connected, unmount the differential-data snapshot by using the syncumount command.
Note: If NFS/CIFS shares have been created for the differential-data snapshot in addition to it being made public in the shared file system, you must release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
3. In the node or virtual server to which the S-VOL is connected, release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands, and unmount all file systems in the pair by using the fsumount command.
4. In the node or virtual server to which the S-VOL is connected, separate the S-VOL file system by using the horcexport command.
$ sudo horcexport -f target-file-system-name
5. In the node or virtual server to which the P-VOL is connected, resynchronize the ShadowImage volume pair.
$ sudo pairresync {-g group-name | -d volume-name}
6. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair has been resynchronized.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note: You can also use the pairevtwait command, which waits for the volumes to be paired.
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Recovering
b. pairvolchk : Volstat is P-VOL.[status = PAIR] => Recovered
Deleting a ShadowImage Volume Pair
The procedure for finishing ShadowImage operations by deleting a ShadowImage volume pair in the PSUS state differs depending on whether you will continue using the file system in the S-VOL or will discard the file system.
If you delete a ShadowImage volume pair that is in a state other than PSUS, you cannot use the file system because the consistency of the data in the S-VOL as a file system is not guaranteed.
To delete a ShadowImage volume pair in the PSUS status, and continue to use the S-VOL file system afterward:
1. In the node or virtual server to which the P-VOL is connected, delete the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name} -S
2. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair has been deleted.
$ sudo pairvolchk {-g group-name | -d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
3. In the node or virtual server to which the P-VOL is connected and in the node to which the S-VOL is connected, stop CCI.
$ sudo horcmshutdown.sh (1-instance configuration)
or
$ sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note: If you started splitting the ShadowImage volume pair as part of an offline backup but have not yet completed steps 1 through 11 of that task, complete those steps before performing the steps above. For an online backup, you must have completed steps 1 through 7 of splitting the ShadowImage volume pair before performing the steps above (see Splitting a ShadowImage Volume Pair on page 2-31).
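The deletion sequence above, for a pair in the PSUS state whose S-VOL file system you will keep, can be sketched as a dry-run script. The group name "SIgroup" and the CCI instance numbers are hypothetical; the run helper echoes each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of deleting a ShadowImage pair in the PSUS state while
# continuing to use the S-VOL file system (hypothetical group "SIgroup").
run() { echo "sudo $*"; }

run pairsplit -g SIgroup -S     # delete the pair (P-VOL node)
run pairvolchk -g SIgroup       # expect [status = SMPL] when deleted

# Stop CCI on the nodes connected to the P-VOL and the S-VOL. Pass the
# instance numbers (here 16 and 17) only in a 2-instance configuration.
run horcmshutdown.sh 16 17
```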
To delete a ShadowImage volume pair in the PSUS state without continuing to use the S-VOL file system:
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality
1. In the node or virtual server to which the S-VOL is connected, terminate the program that accesses the S-VOL.
2. In the node or virtual server to which the S-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the S-VOL by using the fsumount command.
Note: In a file system that is a target of the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, unmount the differential-data snapshot by using the syncumount command, and release the differential-data storage device by using the syncstop command.
3. In the node or virtual server to which the S-VOL is connected, delete the file system in the S-VOL by using the fsdelete command.
4. In the node or virtual server to which the P-VOL is connected, delete the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name} -S
5. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair has been deleted.
$ sudo pairvolchk {-g group-name | -d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
6. Stop CCI on the nodes or virtual servers connected to the P-VOL and the S-VOL.
$ sudo horcmshutdown.sh (1-instance configuration)
or
$ sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note: If you started splitting the ShadowImage volume pair as part of an offline backup but have not yet completed steps 1 through 11 of that task, complete those steps before performing the steps above. For an online backup, you must have completed steps 1 through 7 of splitting the ShadowImage volume pair before performing the steps above (see Splitting a ShadowImage Volume Pair on page 2-31).
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
1. In the node or virtual server to which the S-VOL is connected, terminate the program that accesses the S-VOL.
2. In the node or virtual server to which the S-VOL is connected, unmount the differential-data snapshot by using the syncumount command.
Note: If NFS/CIFS shares have been created for the differential-data snapshot in addition to it being made public in the shared file system, you must release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
3. In the node or virtual server to which the S-VOL is connected, release the device that stores the differential data by using the syncstop command.
4. In the node or virtual server to which the S-VOL is connected, release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands, and unmount all file systems in the pair by using the fsumount command.
5. In the node or virtual server to which the S-VOL is connected, delete the S-VOL file system by using the fsdelete command.
6. In the node or virtual server to which the P-VOL is connected, delete the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name} -S
7. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair has been deleted.
$ sudo pairvolchk {-g group-name | -d volume-name}
You can also use the pairevtwait command, which waits for the pair to be deleted.
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
8. In the node or virtual server to which the P-VOL is connected and in the node to which the S-VOL is connected, stop CCI.
$ sudo horcmshutdown.sh (1-instance configuration)
or
$ sudo horcmshutdown.sh 16 17 (2-instance configuration)
To delete a ShadowImage volume pair that is in a state other than PSUS:
1. In the node or virtual server to which the P-VOL is connected, delete the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name} -S
2. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage volume pair has been deleted.
$ sudo pairvolchk {-g group-name | -d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
3. In the node or virtual server to which the S-VOL is connected, release the device file used in the target file system.
$ sudo horcvmdelete -d device-file-number,...
4. In the node or virtual server to which the P-VOL is connected and in the node to which the S-VOL is connected, stop CCI.
$ sudo horcmshutdown.sh (1-instance configuration)
or
$ sudo horcmshutdown.sh 16 17 (2-instance configuration)
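The four steps above, for a pair that is not in the PSUS state, can be sketched as a dry-run script. The group name "SIgroup" and the device file number 22 are hypothetical; the run helper echoes each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of deleting a ShadowImage pair that is not in the PSUS
# state (hypothetical group "SIgroup", hypothetical device file number 22).
run() { echo "sudo $*"; }

run pairsplit -g SIgroup -S   # delete the pair on the P-VOL node
run pairvolchk -g SIgroup     # expect [status = SMPL] when deleted
run horcvmdelete -d 22        # release the device file on the S-VOL node
run horcmshutdown.sh          # stop CCI (1-instance configuration)
```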
P-VOL Data Recovery from S-VOL
This section describes how to recover the P-VOL data to its status at the time of the backup, when the ShadowImage pair is in the PSUS status, by using the data that is backed up in the S-VOL. The operations to recover the P-VOL data differ according to the following conditions:
• When the P-VOL file system is in the normal status, see Recovering Data When the P-VOL File System is in Normal Status on page 2-41.
• When the P-VOL file system is blocked, see Recovering Data When the P-VOL File System is Blocked on page 2-45.
• When an error occurs on the device file used by the P-VOL, see Recovering Data When an Error Occurs on the Device File Used by the P-VOL on page 2-50.
Note that if the ShadowImage pair is not in the PSUS status, you cannot recover the P-VOL data because the consistency of the S-VOL data as a file system is not guaranteed.
Recovering Data When the P-VOL File System is in Normal Status
When the P-VOL file system is in the normal status, separate the source file system and then recover the data. The following procedures assume that the P-VOL is in the PSUS status and the S-VOL is in the SSUS status.
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality
1. Unmount the NFS shares of the P-VOL and S-VOL on the client side.
2. In the node or virtual server to which the P-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the P-VOL by using the fsumount command.
Note: In a file system that is a target of the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, and unmount the differential-data snapshot by using the syncumount command.
3. In the node or virtual server to which the P-VOL is connected, separate the file system in the P-VOL by using the horcexport command.
$ sudo horcexport -f source-file-system-name
4. In the node or virtual server to which the S-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the S-VOL by using the fsumount command.
Note: In a file system that is a target of the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, and unmount the differential-data snapshot by using the syncumount command.
5. In the node to which the S-VOL is connected, separate the file system of the S-VOL by using the horcexport command.
$ sudo horcexport -f target-file-system-name
6. In the node or virtual server to which the P-VOL is connected, resynchronize the ShadowImage pair in the reverse direction.
$ sudo pairresync {-g group-name | -d volume-name} -restore
7. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage pair has been resynchronized.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note: You can use the pairevtwait command to check whether the resynchronization is finished. Execute the pairevtwait command; if the result is pairvolchk : Volstat is P-VOL.[status = PAIR], the resynchronization is finished.
8. In the node or virtual server to which the P-VOL is connected, split the ShadowImage pair.
$ sudo pairsplit {-g group-name | -d volume-name}
9. In the node or virtual server to which the P-VOL is connected, connect the source file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f source-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f source-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f source-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f source-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
10. In the node or virtual server to which the P-VOL is connected, mount the P-VOL by using the fsmount command, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
Note: In a file system that is a target of the file snapshot functionality, mount the differential-data snapshot by using the syncmount command if necessary, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
11. In the node to which the S-VOL is connected, connect the copy destination file system to the node.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
12. In the node or virtual server to which the S-VOL is connected, mount the S-VOL by using the fsmount command, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
Note: In a file system that is a target of the file snapshot functionality, mount the differential-data snapshot by using the syncmount command if necessary, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
13. Mount the NFS shares of the P-VOL and S-VOL on the client side.
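The core of the recovery above, the reverse resynchronization, can be sketched as a dry-run script. The file system names "srcfs" and "dstfs", the group name "SIgroup", and device file number 10 are hypothetical; the run helper echoes each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of restoring P-VOL data from the S-VOL by reverse
# resynchronization. Hypothetical names: source file system "srcfs",
# target file system "dstfs", group "SIgroup", device file number 10.
run() { echo "sudo $*"; }

run horcexport -f srcfs              # separate the P-VOL file system
run horcexport -f dstfs              # separate the S-VOL file system
run pairresync -g SIgroup -restore   # copy S-VOL data back to the P-VOL
run pairvolchk -g SIgroup            # expect [status = PAIR] when finished
run pairsplit -g SIgroup             # split the pair again
run horcimport -f srcfs -d 10        # reconnect the source file system
# Then mount the P-VOL (fsmount) and re-create the shares
# (nfscreate/cifscreate) as described in steps 10 through 13.
```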
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
1. Unmount the NFS shares of the P-VOL and S-VOL on the client side.
2. In the node or virtual server to which the P-VOL is connected, unmount the differential-data snapshot by using the syncumount command.
Note: If NFS/CIFS shares have been created for the differential-data snapshot in addition to it being made public in the shared file system, you must release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
3. In the node or virtual server to which the P-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the P-VOL file systems by using the fsumount command.
4. In the node or virtual server to which the P-VOL is connected, separate the P-VOL file system by using the horcexport command.
$ sudo horcexport -f source-file-system-name
5. In the node or virtual server to which the S-VOL is connected, unmount the differential-data snapshot by using the syncumount command.
Note: If NFS/CIFS shares have been created for the differential-data snapshot in addition to it being made public in the shared file system, you must release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
6. In the node or virtual server to which the S-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the S-VOL by using the fsumount command.
7. In the node to which the S-VOL is connected, separate the file system of the S-VOL by using the horcexport command.
$ sudo horcexport -f target-file-system-name
8. In the node or virtual server to which the P-VOL is connected, resynchronize the ShadowImage volume pair in the reverse direction.
$ sudo pairresync {-g group-name | -d volume-name} -restore
9. In the node or virtual server to which the P-VOL is connected, check whether the resynchronization of the ShadowImage volume pair is complete.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note: You can use the pairevtwait command to check whether the resynchronization is finished. Execute the pairevtwait command; if the result is pairvolchk : Volstat is P-VOL.[status = PAIR], the resynchronization is finished.
10. In the node or virtual server to which the P-VOL is connected, split the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name}
11. In the node or virtual server to which the P-VOL is connected, connect the copy source file system to the node or virtual server.
For a non-tiered file system
sudo horcvmimport -f source-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f source-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
12. In the node or virtual server to which the P-VOL is connected, mount the P-VOL by using the fsmount command, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
13. In the node or virtual server to which the P-VOL is connected, mount the differential-data snapshot by using the syncmount command, and make the NFS/CIFS shares public.
14. In the node to which the S-VOL is connected, connect the copy destination file system to the node.
For a non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
15. In the node or virtual server to which the S-VOL is connected, mount the S-VOL by using the fsmount command and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
16. In the node or virtual server to which the S-VOL is connected, mount the differential-data snapshot by using the syncmount command, and make the NFS/CIFS shares public.
17. Mount the NFS shares of the P-VOL and S-VOL on the client side.
Recovering Data When the P-VOL File System is Blocked
When the P-VOL file system is blocked because of a logical failure, first delete the blocked source file system, and then recover the data.
To delete the file system (when the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality):
1. Unmount the NFS shares of the P-VOL and S-VOL on the client side.
2. Ask a service engineer to acquire a dump of the node on which the failed file system exists.
3. Fail back the resource group of the node or virtual server to which a failover occurred in step 2.
4. Delete the NFS/CIFS shares of the P-VOL by using the nfsdelete command and the cifsdelete command, and unmount the P-VOL by using the fsumount command.
Note: In a file system that is a target of the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, unmount the differential-data snapshot by using the syncumount command, and release the differential-data storage device by using the syncstop command.
5. Delete the P-VOL file system by using the fsdelete command.
6. Start CCI on the nodes or virtual servers connected to the P-VOL and the S-VOL.
$ sudo horcmstart.sh (1-instance configuration)
or
$ sudo horcmstart.sh 16 17 (2-instance configuration)
Note: If the failed file system is not mounted after step 3 is complete, perform only the deletion of the CIFS share in step 4 by using the cifsdelete command.
To delete the file system (when the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system):
1. Unmount the NFS shares of the P-VOL and S-VOL on the client side.
2. Acquire a dump of the node on which the failed file system exists.
3. Fail back the resource group of the node or virtual server to which a failover occurred in step 2.
4. Unmount the differential-data snapshot by using the syncumount command.
Note: If NFS/CIFS shares have been created for the differential-data snapshot in addition to it being made public in the shared file system, you must release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
5. Release the device that stores the differential data by using the syncstop command.
6. Delete the NFS/CIFS shares of the P-VOL by using the nfsdelete command and the cifsdelete command, and unmount the P-VOL by using the fsumount command.
Note: In a file system that is a target of the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, unmount the differential-data snapshot by using the syncumount command, and release the differential-data storage device by using the syncstop command.
7. Delete the P-VOL file system by using the fsdelete command.
8. Start CCI on the nodes or virtual servers connected to the P-VOL and the S-VOL.
$ sudo horcmstart.sh (1-instance configuration)
or
$ sudo horcmstart.sh 16 17 (2-instance configuration)
Note: If the failed file system is not mounted after step 3 is complete, perform only the deletion of the CIFS share in step 6 by using the cifsdelete command.
To recover P-VOL data (when the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality):
1. In the node or virtual server to which the S-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the S-VOL by using the fsumount command.
Note: In a file system that is a target of the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, and unmount the differential-data snapshot by using the syncumount command.
2. In the node to which the S-VOL is connected, separate the file system of the S-VOL by using the horcexport command.
$ sudo horcexport -f target-file-system-name
3. In the node or virtual server to which the P-VOL is connected, reserve a device file to be used by the source file system.
$ sudo horcvmdefine -d device-file-number,...
4. In the node or virtual server to which the P-VOL is connected, resynchronize the ShadowImage pair in the reverse direction.
$ sudo pairresync {-g group-name | -d volume-name} -restore
5. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage pair has been resynchronized.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note: You can use the pairevtwait command to check whether the resynchronization is finished. Execute the pairevtwait command; if the result is pairvolchk : Volstat is P-VOL.[status = PAIR], the resynchronization is finished.
6. In the node or virtual server to which the P-VOL is connected, split the ShadowImage pair.
$ sudo pairsplit {-g group-name | -d volume-name}
7. In the node or virtual server to which the P-VOL is connected, connect the source file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f source-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f source-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f source-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f source-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
8. In the node or virtual server to which the P-VOL is connected, mount the P-VOL by using the fsmount command, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
Note: In a file system that is a target of the file snapshot functionality, mount the differential-data snapshot by using the syncmount command if necessary, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
9. In the node to which the S-VOL is connected, connect the copy destination file system to the node.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
10. In the node or virtual server to which the S-VOL is connected, mount the S-VOL by using the fsmount command, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
Note: In a file system that is a target of the file snapshot functionality, mount the differential-data snapshot by using the syncmount command if necessary, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
11. Mount the NFS shares of the P-VOL and S-VOL on the client side.
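The recovery above, where the blocked P-VOL file system has been deleted, differs from normal recovery mainly in the horcvmdefine step that reserves device files for the new source file system. A dry-run sketch follows; the names "srcfs" and "dstfs", group "SIgroup", and device file numbers 10 and 11 are hypothetical, and the run helper echoes each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of recovering a blocked (and deleted) P-VOL file system
# from the S-VOL. Hypothetical names: source "srcfs", target "dstfs",
# group "SIgroup", device file numbers 10 and 11 (LVM configuration).
run() { echo "sudo $*"; }

run horcexport -f dstfs              # separate the S-VOL file system
run horcvmdefine -d 10,11            # reserve device files for the P-VOL
run pairresync -g SIgroup -restore   # reverse resynchronization
run pairvolchk -g SIgroup            # expect [status = PAIR] when finished
run pairsplit -g SIgroup             # split the pair again
run horcvmimport -f srcfs -d 10,11   # reconnect the source file system
```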
To recover P-VOL data (when the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system):
1. In the node or virtual server to which the S-VOL is connected, unmount the differential-data snapshot by using the syncumount command.
Note: If NFS/CIFS shares have been created for the differential-data snapshot in addition to it being made public in the shared file system, you must release the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
2. In the node or virtual server to which the S-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the S-VOL by using the fsumount command.
3. In the node to which the S-VOL is connected, separate the file system of the S-VOL by using the horcexport command.
$ sudo horcexport -f target-file-system-name
4. In the node or virtual server to which the P-VOL is connected, reserve a device file that was used by the copy source file system.
$ sudo horcvmdefine -d device-file-number,...
5. In the node or virtual server to which the P-VOL is connected, resynchronize the ShadowImage volume pair in the reverse direction.
$ sudo pairresync {-g group-name | -d volume-name} -restore
6. In the node or virtual server to which the P-VOL is connected, check whether the resynchronization of the ShadowImage volume pair is complete.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note: You can use the pairevtwait command to check whether the resynchronization is finished. Execute the pairevtwait command; if the result is pairvolchk : Volstat is P-VOL.[status = PAIR], the resynchronization is finished.
7. In the node or virtual server to which the P-VOL is connected, split the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name}
8. In the node or virtual server to which the P-VOL is connected, connect the copy source file system to the node or virtual server.
For a non-tiered file system
sudo horcvmimport -f source-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f source-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
9. In the node or virtual server to which the P-VOL is connected, mount the P-VOL by using the fsmount command, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
10. In the node or virtual server to which the P-VOL is connected, mount the differential-data snapshot by using the syncmount command, and make the NFS/CIFS shares public.
11. In the node to which the S-VOL is connected, connect the copy destination file system to the node.
For a non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
12. In the node or virtual server to which the S-VOL is connected, mount the S-VOL by using the fsmount command and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
13. In the node or virtual server to which the S-VOL is connected, mount the differential-data snapshot by using the syncmount command, and make the NFS/CIFS shares public.
14. Mount the NFS shares of the P-VOL and S-VOL on the client side.
Recovering Data When an Error Occurs on the Device File Used by the P-VOL
When an error occurs on the device file, first delete the ShadowImage pair. After that, recover from the device file failure, recover the data, and then create the ShadowImage pair again.
To apply maintenance mode:
1. Apply maintenance mode by using the lumapctl command. Applying maintenance mode prevents the device file number on the storage side from being automatically allocated to the user LU.
$ sudo lumapctl -t m --on
To delete the ShadowImage pair:
1. In the node or virtual server to which the P-VOL is connected, delete the ShadowImage pair.
$ sudo pairsplit {-g group-name | -d volume-name} -S
2. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage pair has been deleted.
$ sudo pairvolchk {-g group-name | -d volume-name}
pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
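The preamble of the device-file error recovery, entering maintenance mode and deleting the pair, can be sketched as a dry-run script. The group name "SIgroup" is hypothetical; the run helper echoes each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of the device-file error recovery preamble
# (hypothetical pair group "SIgroup").
run() { echo "sudo $*"; }

run lumapctl -t m --on        # apply maintenance mode first
run pairsplit -g SIgroup -S   # delete the ShadowImage pair (P-VOL node)
run pairvolchk -g SIgroup     # expect [status = SMPL] when deleted
```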
To recover from the device file failure and delete the file system (when the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality):
1. Delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the P-VOL by using the fsumount command.
Note: In a file system that is a target of the file snapshot functionality, before deleting the file system, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, unmount the differential-data snapshot by using the syncumount command, and release the device that stores the differential data by using the syncstop command.
2. Delete the file system of the P-VOL by using the fsdelete command.
3. Ask maintenance personnel to recover the failed device file.
4. Perform the following operations in order:
¢ Fail over the resource group.
¢ Stop the node.
¢ Ask a service engineer to restart the OS on the node that was stopped.
¢ Start the node.
¢ Fail back the resource group.
5. Perform step 4 for the other node.
6. Start CCI on the node or virtual server to which the P-VOL is connected and on the node or virtual server to which the S-VOL is connected.
¢ $ sudo horcmstart.sh (1-instance configuration)
¢ $ sudo horcmstart.sh 16 17 (2-instance configuration)
To recover the device file from the failure (when the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system):
1. In the node or virtual server to which the P-VOL is connected, unmount the differential-data snapshot using the syncumount command.
Note:
When the differential-data snapshot is not only made public in the shared file system but NFS/CIFS shares have also been created for it, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
2. In the node or virtual server to which the P-VOL is connected, release the differential-data storage device by using the syncstop command.
3. Delete the NFS/CIFS shares using the nfsdelete command and the cifsdelete command, and unmount the P-VOL using the fsumount command.
4. Delete the file system of the P-VOL by using the fsdelete command.
5. Ask maintenance personnel to recover the failed device file.
6. Perform the following operations in order:
¢ Fail over the resource group.
¢ Stop the node.
¢ Ask a service engineer to restart the OS on the node that was stopped.
¢ Start the node.
¢ Fail back the resource group.
7. Perform step 6 for the other node.
8. Start CCI in the node or virtual server to which the P-VOL is connected and in the node or virtual server to which the S-VOL is connected.
¢ $ sudo horcmstart.sh (1-instance configuration)
¢ $ sudo horcmstart.sh 16 17 (2-instance configuration)
To restore the data to the P-VOL (when the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality):
1. In the node or virtual server to which the S-VOL is connected, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and unmount the S-VOL by using the fsumount command.
2. In the node or virtual server to which the P-VOL is connected, reserve a device file to be used by the copy source file system.
$ sudo horcvmdefine -d device-file-number,...
3. In the node or virtual server to which the S-VOL is connected, create a ShadowImage pair whose copy direction is opposite to that of the deleted pair.
$ sudo paircreate {-g group-name | -d volume-name} -vl
4. In the node or virtual server to which the S-VOL is connected, check whether the ShadowImage pair has been created.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note:
You can use the pairevtwait command to wait until pair creation finishes. If the result of the check is pairvolchk : Volstat is P-VOL.[status = PAIR], creation is finished.
5. In the node or virtual server to which the S-VOL is connected, suppress the file snapshot functionality operations on the S-VOL.
$ sudo horcfreeze -f source-file-system-name
6. In the node or virtual server to which the S-VOL is connected, split the ShadowImage pair.
$ sudo pairsplit {-g group-name | -d volume-name}
7. In the node or virtual server to which the S-VOL is connected, check whether the ShadowImage pair has been split.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note:
You can use the pairevtwait command to wait until the split finishes. If the result of the check is pairvolchk : Volstat is P-VOL.[status = PSUS], the split is finished.
8. In the node or virtual server to which the P-VOL is connected, connect the source file system to the node or virtual server.
For a non-LVM and non-tiered file system:
sudo horcimport -f source-file-system-name -d device-file-number
For a non-LVM and tiered file system:
sudo horcimport -f source-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system:
sudo horcvmimport -f source-file-system-name -d device-file-number,...
For an LVM and tiered file system:
sudo horcvmimport -f source-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
9. In the node or virtual server to which the S-VOL is connected, delete the ShadowImage pair.
$ sudo pairsplit {-g group-name | -d volume-name} -S
10. In the node or virtual server to which the S-VOL is connected, check whether the ShadowImage pair has been deleted.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note:
You can use the pairevtwait command to wait until the deletion finishes. If the result of the check is pairvolchk : Volstat is P-VOL.[status = SMPL], the deletion is finished.
11. In the node or virtual server to which the S-VOL is connected, cancel the suppression of the file snapshot functionality operations.
$ sudo horcunfreeze -f source-file-system-name
12. In the node or virtual server to which the P-VOL is connected, mount the P-VOL by using the fsmount command, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
Note:
For a file system operated by the file snapshot functionality, mount the differential-data snapshot by using the syncmount command if necessary, and set the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
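A sketch of step 12, using the hypothetical file system name fs01 (command options are elided; see the command reference for fsmount, nfscreate, and cifscreate):

```sh
# Mount the P-VOL file system (assuming the file system name
# is passed as the argument)
$ sudo fsmount fs01
# Re-create the NFS and CIFS shares (options elided)
$ sudo nfscreate ...
$ sudo cifscreate ...
```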
13. In the node or virtual server to which the S-VOL is connected, separate the S-VOL file system by using the horcexport command.
$ sudo horcexport -f target-file-system-name
Note:
For a file system operated by the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, and unmount the differential-data snapshot by using the syncumount command.
To restore the data to the P-VOL (when the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system):
1. In the node or virtual server to which the S-VOL is connected, unmount the differential-data snapshot using the syncumount command.
Note:
When the differential-data snapshot is published in the shared file system and NFS/CIFS shares have been created for the differential-data snapshot, delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete command and the cifsdelete command before unmounting the differential-data snapshot.
2. In the node or virtual server to which the S-VOL is connected, delete the NFS/CIFS shares using the nfsdelete command and the cifsdelete command, and unmount the S-VOL using the fsumount command.
3. In the node or virtual server to which the P-VOL is connected, reserve a device file that was used by the copy source file system.
$ sudo horcvmdefine -d device-file-number,...
4. In the node or virtual server to which the S-VOL is connected, create a ShadowImage volume pair whose copy direction is opposite to the usual one.
$ sudo paircreate {-g group-name | -d volume-name} -vl
5. In the node or virtual server to which the S-VOL is connected, check that the creation of the ShadowImage volume pair has completed.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note:
You can use the pairevtwait command to wait until pair creation finishes. If the result of the check is pairvolchk : Volstat is P-VOL.[status = PAIR], creation is finished.
6. In the node or virtual server to which the S-VOL is connected, suppress the file snapshot functionality operations on the S-VOL.
$ sudo horcfreeze -f source-file-system-name
7. In the node or virtual server to which the S-VOL is connected, split the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name}
8. In the node or virtual server to which the S-VOL is connected, confirm that the ShadowImage volume pair has been split.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note:
You can use the pairevtwait command to wait until the split finishes. If the result of the check is pairvolchk : Volstat is P-VOL.[status = PSUS], the split is finished.
9. In the node or virtual server to which the P-VOL is connected, connect the copy source file system to the node.
For a non-tiered file system:
sudo horcvmimport -f source-file-system-name -d device-file-number,...
For a tiered file system:
sudo horcvmimport -f source-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
10. In the node or virtual server to which the S-VOL is connected, delete the ShadowImage volume pair.
$ sudo pairsplit {-g group-name | -d volume-name} -S
11. In the node or virtual server to which the S-VOL is connected, check whether the ShadowImage volume pair has been deleted.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note:
You can use the pairevtwait command to wait until the deletion finishes. If the result of the check is pairvolchk : Volstat is P-VOL.[status = SMPL], the deletion is finished.
12. In the node or virtual server to which the S-VOL is connected, cancel the suppression of the file snapshot functionality operations on the S-VOL.
$ sudo horcunfreeze -f source-file-system-name
13. In the node or virtual server to which the P-VOL is connected, mount the P-VOL using the fsmount command, and set the NFS/CIFS shares using the nfscreate command and the cifscreate command.
14. In the node or virtual server to which the P-VOL is connected, mount the differential-data snapshot using the syncmount command, and make the NFS/CIFS shares public.
15. In the node or virtual server to which the S-VOL is connected, release the device that stores the differential data using the syncstop command.
16. In the node or virtual server to which the S-VOL is connected, separate the S-VOL file system using the horcexport command.
$ sudo horcexport -f target-file-system-name
To restore the ShadowImage volume pair:
1. In the node or virtual server to which the P-VOL is connected, create a ShadowImage pair.
$ sudo paircreate {-g group-name | -d volume-name} -vl
2. In the node or virtual server to which the P-VOL is connected, check whether the ShadowImage pair has been created.
$ sudo pairvolchk {-g group-name | -d volume-name}
Note:
You can use the pairevtwait command to wait until pair creation finishes. If the result of the check is pairvolchk : Volstat is P-VOL.[status = PAIR], creation is finished.
3. Split the ShadowImage pair.
For details about how to split a ShadowImage volume pair, see Overview of Using ShadowImage on page 2-30.
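For example, with the hypothetical device group name VG, the split can be performed and waited on with standard CCI commands (the -s and -t options of pairevtwait specify the status to wait for and a timeout value; see the CCI reference):

```sh
# Split the ShadowImage pair for device group VG (hypothetical name)
$ sudo pairsplit -g VG
# Wait until the pair reaches PSUS status (timeout value is an example)
$ sudo pairevtwait -g VG -s psus -t 300
```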
To apply normal management mode:
1. Apply normal management mode by using the lumapctl command.
$ sudo lumapctl -t m --off
Data Backup in Cooperation with a Tape Device
This section explains how to back up data from an online volume to tape by using a tape device connected to a node or virtual server via an FC interface. A backup to tape media that uses ShadowImage is a full backup. To make an incremental backup to tape media, use the file snapshot functionality. For the backup procedure that uses the file snapshot functionality, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
Prerequisites before starting operation
• The ShadowImage volume pair has been created and is in the PSUS pair status (see Overview of Using ShadowImage on page 2-30).
• The tape device is connected to the node or virtual server via the FC interface and is in the Ready status (see the Installation and Configuration Guide).
• The operating environment to be used with Backup Restore is set in the backup management software (such as NetBackup). (See the supplemental documentation for Backup Restore provided with the HNAS F system.)
The procedure for backing up data:
1. Resynchronize the pair. This changes the pair status from PSUS to PAIR by updating the S-VOL with the differential data between the P-VOL and the S-VOL. For details about how to resynchronize a ShadowImage volume pair, see Overview of Using ShadowImage on page 2-30.
2. Split the pair by following the procedure for online backup. For the procedure on how to use the online backup method to split a ShadowImage volume pair, see Overview of Using ShadowImage on page 2-30.
3. Copy the updated data from the S-VOL to the tape media.
Note:
When backing up to tape media by using ShadowImage, the backup is an online backup in the sense that the ShadowImage P-VOL continues to accept read/write I/O. However, perform the operation by following the Backup Restore procedure for offline backup, because the offline S-VOL is updated from the data of the online P-VOL and the backup is then made from the S-VOL to the tape media.
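The resynchronization in step 1 can be sketched with the standard CCI pairresync command (the device group name VG and the timeout value are hypothetical; the full procedure is in Overview of Using ShadowImage on page 2-30):

```sh
# Resynchronize the ShadowImage pair: PSUS -> PAIR
$ sudo pairresync -g VG
# Wait until the pair returns to PAIR status (timeout value is an example)
$ sudo pairevtwait -g VG -s pair -t 300
```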
Notes on Failures of External Volumes
If a failure (including a temporary failure, for example a disconnected cable) occurs on an external volume that is currently used by a ShadowImage volume pair, perform the following procedure.
1. If the external volume can no longer be used, delete the ShadowImage volume pair by using the pairsplit -S command.
If the S-VOL is not connected to the node or virtual server, release the device file used by the target file system by using the horcvmdelete command.
2. Check the statuses of all ShadowImage volume pairs that use the failed external volume by using the pairdisplay command.
If the status of a ShadowImage volume pair is PSUE, delete the volume pair by using the pairsplit -S command.
If you delete a ShadowImage volume pair while the S-VOL is not connected to the node or virtual server, release the device file used by the target file system by using the horcvmdelete command.
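As a sketch of this procedure, using the hypothetical device group name VG (horcvmdelete options are elided; see the command reference):

```sh
# Check the statuses of all pairs that use the failed external volume
$ sudo pairdisplay -g VG
# Delete a pair that is in PSUE status
$ sudo pairsplit -g VG -S
# If the S-VOL is not connected to the node or virtual server, also
# release the device files of the target file system (options elided)
$ sudo horcvmdelete ...
```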
How to Use TrueCopy
This section describes how to use TrueCopy with the HNAS F system.
Overview of TrueCopy Operations in the HNAS F
The HNAS F system enables integrated management of the data in the storage system. HNAS F utilizes the LAN environment already in place and enables the data within a storage system to be shared across heterogeneous platforms. HNAS F also enables you to copy and maintain the data stored in the storage system by using the volume replication feature TrueCopy. With TrueCopy, you can copy data by utilizing the file system server (even in a remote center), back up data to guard against a main volume failure, and perform disaster recovery for the main site.
To use TrueCopy in the HNAS F system, use Command Control Interface (CCI). Because CCI is installed automatically with the OS, there is no need to install CCI separately in your HNAS F system.
Figure 2-6 Using TrueCopy in the HNAS F system on page 2-58 illustrates the TrueCopy operation.
Figure 2-6 Using TrueCopy in the HNAS F system
Scope of TrueCopy Function with the HNAS F
Volume type
Only user LUs can be used as TrueCopy P-VOLs and S-VOLs in an HNAS F system.
You cannot specify the OS disks or the shared LU of a virtual server as a TrueCopy P-VOL or S-VOL.
Platforms which can access the TrueCopy P-VOLs or S-VOLs
A TrueCopy P-VOL or S-VOL created by the HNAS F system can be accessed by the HNAS F system or by clients connected to the HNAS F system through a network. Hosts connected via a serial port or Fibre Channel port cannot access them.
File systems which can be allocated in the TrueCopy P-VOL
You cannot allocate a file system that consists of 129 or more LUs and that uses Logical Volume Manager (LVM) on the OS.
You cannot allocate a volume group that consists of 129 or more LUs if it contains a file system managed by the file snapshot functionality and differential-data storage devices.
Writing data to S-VOL when a volume pair is split
When you split a TrueCopy volume pair in the HNAS F system, you need to change the S-VOL to write-enabled. Make sure that you always specify the -rw option instead of the -r option when issuing the CCI pairsplit command.
To protect the data in the split S-VOL from being written to by a client, use the fsmount command with the -r option in File Services Manager when mounting the file system on the S-VOL.
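For example, with the hypothetical device group name VG and file system name fs01:

```sh
# Split the TrueCopy pair with the S-VOL write-enabled (-rw, not -r)
$ sudo pairsplit -g VG -rw
# Mount the S-VOL file system read-only so that clients cannot write to it
# (assuming fsmount takes the file system name as its argument)
$ sudo fsmount -r fs01
```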
Note on the path settings
If you use the HNAS F system with TrueCopy, you must add paths to the ports of both CL1 and CL2 to connect the primary and secondary sites.
Scope Related to Features in Backup Restore
Limitations on the functionality of the backup management software
If you resynchronize a TrueCopy pair defined by the volume replication function after backing up the S-VOL by using the backup management software, a full backup will be acquired the next time.
Scope Related to the file snapshot functionality
The file snapshot functionality
When you copy a file system managed by the file snapshot functionality by using TrueCopy, you must copy both the LUs that constitute the file system and the LUs that constitute the differential-data storage devices. If you copy only the LUs that constitute the file system, you will not be able to connect them to the HNAS F system at the remote site.
The setting of an automatic creation schedule for the file snapshot functionality is not copied. The differential-data snapshot is not mounted at the copy destination.
Prerequisites
Hardware Prerequisites
You need a workstation or PC to log in to the HNAS F system by using Secure Shell (SSH), in addition to the prerequisite hardware for TrueCopy, CCI, and the HNAS F system described in the following guides:
Manuals related to HNAS F
¢ Installation and Configuration Guide
Manuals related to storage systems
¢ Hitachi TrueCopy User and Reference Guide
¢ Hitachi Command Control Interface (CCI) User and Reference Guide
¢ Hitachi ShadowImage User's Guide (User Guide) (when using the ShadowImage cascade function)
Software Prerequisites
To use the HNAS F system, the program products in the HNAS F system must be properly installed and have valid licenses.
Replication Functions 2-59Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
In addition, to use TrueCopy in an HNAS F system, all of the following program products must be installed in the storage system to which the HNAS F system is connected, and their licenses must be valid:
• TrueCopy
• ShadowImage (required when configuring a TrueCopy/ShadowImage cascade connection)
Notes on Operations
Mounting on an NFS client a file system whose data is backed up online
If you perform online backup by volume replication for a file system accessed by an NFS client, you must specify NFS version 3 before mounting the file system on the NFS client. If you specify NFS version 2, you must specify the hard option before mounting the file system on the NFS client.
Limitations on TrueCopy operations due to the status of the cluster, nodes, and resource groups
When a cluster is not configured, when the cluster or nodes are stopped, or when resource groups are offline, connecting a device file and creating or mounting a file system are restricted. Due to these restrictions, the following operations performed during TrueCopy operation will also end in an error. You therefore should not operate on the cluster, nodes, or resource groups during TrueCopy operations. Should any problems occur with the cluster, nodes, or resource groups, fix them immediately.
• Unmounting and mounting the source file system while a TrueCopy volume pair is split.
• Connecting the target file system to the node.
• Unmounting and deleting the target file system before the TrueCopy pair is resynchronized.
Notes on the occurrence of a failover during operation on a node
To continue the operation at the failover destination when a failover occurs during operation on a node, execute the command from the remote host, specifying the virtual IP address. When you connect the S-VOL file system at the copy destination, execute the horcimport or horcvmimport command with the -r option added, specifying a resource group name.
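For example, assuming the file system name fs01, device file number 5, and resource group name rg01 (all hypothetical):

```sh
# Connect the copy-destination S-VOL file system, specifying the
# resource group so the operation can continue after a failover
$ sudo horcimport -f fs01 -d 5 -r rg01
```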
Notes on changing the system configuration during TrueCopy operations
If any of the following are changed during TrueCopy operation, you must change the CCI configuration definition files on the node:
• Changing the fixed IP address of the node
• Expanding or deleting a source file system
• Setting up, expanding, or releasing a differential-data storage device for the file snapshot functionality
When a host name is specified in the CCI configuration definition file and you change any of the following system configurations, you must change the configuration definition file:
• Editing the /etc/hosts file (when resolving the host name by using the /etc/hosts file)
• Changing registration information on the NIS server, or changing a setting for the NIS server (when resolving the host name by using NIS)
• Changing registration information on the DNS server, or changing a setting for the DNS server (when resolving the host name by using DNS)
• Changing the node name or the host name
Notes on using external volumes
Before performing periodic maintenance on external volumes, you must temporarily delete all TrueCopy pairs.
Notes on splitting TrueCopy volume pairs with online backup
If a long time passes between execution of the horcfreeze command and execution of the horcunfreeze command, a timeout might occur on some clients. In addition, if the file snapshot functionality is used on the copy source file system, timeouts are more likely to occur because the horcfreeze command takes a long time to execute.
Confirming whether the access to the file system is suspended
After you execute the horcfreeze command and the horcunfreeze command, you can check whether access from clients to the file system is suspended by using the fsctl command.
Notes on file systems managed by the file snapshot functionality
The following settings and statuses of a copy source file system managed by the file snapshot functionality are copied to the target file system:
• Warning threshold
• Operation threshold
• Overflow prevention
• Status of the differential data storage device
If a TrueCopy pair is split and connected to the node while the differential-data storage device in the copy source file system does not have sufficient capacity, you must take measures in the copy source file system to resolve the lack of capacity.
Before splitting a TrueCopy pair, confirm that the differential-data storage device in the copy source file system is in the normal status.
Note on file systems that support single-instancing
The single-instancing setting is not copied to copy-destination file systems. If the single-instancing setting is enabled for a copy-source file system, after splitting a TrueCopy pair and connecting to a node, use the fsedit command to enable the single-instancing setting for the copy-destination file system. Also, set the policy for duplicate-file capacity reduction.
Before splitting a TrueCopy pair, make sure that the duplicate-file capacity reduction policy is not being executed on the copy source file system. If a TrueCopy pair is split during policy execution, resynchronize the TrueCopy pair and perform the subsequent operations again.
Notes on tiered file systems
If the copy-source file system is a tiered file system, the LUs of all the file systems that make up the tiers must be assigned the same device group name in the configuration definition file.
To connect the copy-source file system to a node or a virtual server, specify the --tier1 and --tier2 options for the horcimport command or the horcvmimport command if the file system is a tiered file system. Otherwise, specify the -d option.
If the tiered file system is already connected to a node or a virtual server, you need to set up a tier policy schedule. For details, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
Notes on dynamic recognition in the FC path
HNAS F automatically recognizes LUs connected to FC paths.
A user LU (device file number) is determined when the device file to be used for the copy target of the file system is reserved by using the horcvmdefine command. However, if the OS is rebooted before the device file number is determined, the device file number might change from the one before the reboot. In the following cases, immediately reserve the device file to be used for the copy target of the file system by using the horcvmdefine command so that the device file number will not change:
• When starting to use the replication functions
• When the file system has been deleted and the LU of the deleted file system is to be used as the S-VOL
If the OS is rebooted before you reserve the device file to be used for the copy target of the file system, find the device file numbers and the LU numbers that constitute the file system by using the horcdevlist command.
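A sketch of the recommended sequence (device file numbers 10 and 11 are examples; the horcvmdefine syntax follows the -d device-file-number,... form shown earlier in this chapter):

```sh
# List the device file numbers and LU numbers that make up the file system
$ sudo horcdevlist
# Immediately reserve the device files for the copy target so that their
# numbers do not change across an OS reboot
$ sudo horcvmdefine -d 10,11
```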
Note on WORM file systems
If TrueCopy is used to copy a WORM file system, the copy destination cannot be connected to a node.
If the file system was encrypted by using the HNAS F functionality
If the local data encryption functionality is being used, the copy-destination file system can be connected to the node only when the copy-source file system and the copy-destination file system are in the same cluster. If the file systems are in different clusters, the copy-destination file system cannot be connected to the node.
Preparing for TrueCopy Operations
Preparing for TrueCopy Volume Pair Operation
Make the preparations necessary for creating TrueCopy volume pairs. For details on the preparations required on the storage system side, see the Hitachi TrueCopy User's Guide (User Guide).
Registration of Public Key Used for SSH
Before issuing the commands described in this document, the public key used for SSH needs to be registered in the node or virtual server to which the TrueCopy P-VOL is connected, and also in the node or virtual server that will use the S-VOL. You can register the public key from the Add Public Key page of the Access Protocol Configuration window.
Configuring the CCI Environment
Logging in to a node via SSH
Using the nasroot account via SSH, log in to both the node to which the TrueCopy P-VOL is connected and the node that will use the S-VOL. (For information about logging in, refer to the appropriate documentation of the SSH communication software.)
Setting the environment of instance numbers to be used
If you use the instance numbers allocated by default, the usage environment is already configured, and no operation is required here.
To use additional instance numbers, configure the environment for the instance numbers to be used by using the horcsetconf command.
$ sudo horcsetconf -i instance-number
Example 2-25 Configuring the usage environment for additional instance numbers
The instance numbers that are already set can be confirmed by using the horcconflist command. Use this command to find instance numbers that are not in use.
$ sudo horcconflist
instance   node number or virtual server name
16         node 0(D000000000), node 1(D000000001)
17         node 0(D000000000), node 1(D000000001)
499        node 0(D000000000), node 1(D000000001)
Example 2-26 Confirming already set instance numbers
The usage environment of additional instance numbers can be deleted, if necessary, by using the horcunsetconf command. Note that the usage environment of the instance numbers allocated by default cannot be deleted.
$ sudo horcunsetconf -i instance-number
Example 2-27 Deleting the usage environment of additional instance numbers
Configuring the CCI Configuration Definition Files
To control a TrueCopy pair by using CCI, you must first define the TrueCopy pair in the CCI configuration definition file.
A CCI configuration definition file template is made available when HNAS F is installed.
The template of the CCI configuration definition file is:
/home/nasroot/horcm<instance-number>.conf
For nodes, <instance-number> is 16 or 17. Additional instance numbers range from 20 to 499.
Next, add the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST sections to the CCI configuration definition file template by using the CCI mkconf.sh and horcconfedit commands. After that, edit the file as necessary.
To complete the CCI configuration definition file:
1. Add the HORCM_DEV section and the HORCM_INST section to a template for the CCI configuration definition file by using the mkconf.sh command in CCI.
2. Edit the file and create the CCI configuration definition file.
You must perform these operations on both nodes at the main site and on both nodes at the remote site. You therefore need to prepare a total of four CCI configuration definition files, assuming one instance per node. For two instances, you will need a total of eight configuration files.
In the HNAS F system, you can define one or two CCI instances per node. To operate only TrueCopy pairs in CCI, you need one instance. To operate pairs in which TrueCopy and ShadowImage are cascaded, you need two instances.
The following section explains how to create a CCI configuration definition file, using an HNAS F system with LUs as an example.
Figure 2-7 Example of Configuration of Pair LU
Defining configuration definition files by using the CCI mkconf.shcommand
Use the CCI mkconf.sh command to define the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST sections in the CCI configuration definition file template.
$ ls /dev/sdu*u | sudo mkconf.sh -gg device-group-name -i 16
Example 2-28 Defining configuration definition files (for instance number 16)
Note:
Be sure to specify the -gg option for the mkconf.sh command. Specifying the -gg option assigns LU numbers allocated to host groups to pairs when the pairs are defined. When the -gg option is not specified, data is copied to an LU other than the required one, because the pair cannot be defined by using an LU number allocated to the host group. In addition, do not specify the -a option, because the command device path of the HORCM_CMD section is defined by the mkconf.sh command. For details, see the Hitachi Command Control Interface (CCI) User and Reference Guide.
To execute the mkconf.sh command, the file system to be paired must be created. The LU configuration (size and number) in the file system to be paired needs to be exactly the same in both the primary site and the secondary site.
To create the correct CCI configuration definition file, it is recommended that you create a file system to be temporarily paired before executing the mkconf.sh command. After creating the CCI configuration definition file using the mkconf.sh command, you can continue using the file system in the primary site. Alternatively, you may delete it and create a file system with the same configuration using the same LU in Create File System in TrueCopy P-VOL on page 2-76. You must delete the temporarily created file system in the secondary site before creating a pair.
Example 2-29 Example of mkconf.sh Execution Result (for Instance Number 16) on page 2-66 shows an example of how to create a template for a CCI configuration definition file. We will explain how to create a CCI configuration definition file in node0 in the primary site in the sample configuration above. You must create CCI configuration definition files for the other nodes using the previously explained procedure.
$ ls /dev/sdu*u | sudo mkconf.sh -gg VG -i 16
starting HORCM inst 16
HORCM inst 16 starts successfully.
HORCM Shutdown inst 16 !!!
A CONFIG file was successfully completed.
starting HORCM inst 16
HORCM inst 16 starts successfully.
DEVICE_FILE Group PairVol PORT TARG LUN M SERIAL LDEV
/dev/sdu00u VG VG_000 CL1-A-1 0 17 - 62486 70
/dev/sdu01u VG VG_001 CL1-A-1 1 18 - 62486 18
 :      :
/dev/sdu10u VG VG_010 CL1-A-1 0 10 - 62486 10
/dev/sdu11u VG VG_011 CL1-A-1 0 11 - 62486 64
/dev/sdu12u VG VG_012 CL1-A-1 0 12 - 62486 12
/dev/sdu13u VG VG_013 CL1-A-1 0 13 - 62486 66
/dev/sdu14u VG VG_014 CL1-A-1 0 14 - 62486 14
/dev/sdu15u VG VG_015 CL1-A-1 0 15 - 62486 68
/dev/sdu16u VG VG_016 CL1-A-1 0 16 - 62486 16
HORCM Shutdown inst 16 !!!
Please check '/home/nasroot/horcm16.conf',
'/home/nasroot/log16/curlog/horcm_*.log', and
modify 'ip_address & service'.
Example 2-29 Example of mkconf.sh Execution Result (for Instance Number 16)
Then, use the horcconfedit command to change the HORCM_CMD definition in the CCI configuration definition file to a format that does not depend on device file changes (\\.\CMD-<serial-number>:/dev/sd).
$ sudo horcconfedit horcm16.conf
Example 2-30 Changing the HORCM_CMD definition in the CCI configuration definition file (for instance number 16)
Editing the CCI configuration definition file
Table 2-12 shows the values that are specified for the items included in the CCI configuration definition file in the HNAS F system.
Table 2-12 Configuration Definition File Settings (HORCM_MON) and Specified Values in HNAS F system
Section Name   Item         Specified Values in HNAS F system
HORCM_MON      ip_address   Fixed IP address or virtual IP address of the local node.
               service      Specify one of the following:
                            • 20331 (in the case of a node, if the instance number is 16)
                            • 20332 (in the case of a node, if the instance number is 17)
                            • 31032 to 31254 or 31532 to 31754 (in the case of a virtual server; instance number + 30000)
                            • 30020 to 30499 (common; instance number + 30000)
Note:
A host name can be specified instead of a fixed IP address if the IP address and the corresponding host name are registered in /etc/hosts, on an NIS server, or on a DNS server. You can edit /etc/hosts on the Edit System File page in the Network & System Configuration window. You can set the NIS server or DNS server information on the DNS, NIS, LDAP Setup page in the Network & System Configuration window.
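The virtual-server and common service ranges in the table follow an "instance number + 30000" rule. The arithmetic can be sketched in the shell; this is illustrative only, and the instance number 32 is a hypothetical example value:

```shell
# Illustrative sketch: derive the service port from a CCI instance number
# using the "instance number + 30000" rule described in Table 2-12.
# The instance number 32 is a hypothetical example value.
instance=32
service=$((instance + 30000))
echo "$service"
# prints: 30032 (within the common range 30020 to 30499)
```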
Using the information in Table 2-12, change the service and ip_address entries.
Change the entries poll and timeout to the appropriate values according to hardware requirements.

HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.78.51 20331 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu00u SER = 62486 LDEV = 70 [ FIBRE FCTBL = 3 ]
VG VG_000 CL1-A-1 0 0
# /dev/sdu01u SER = 62486 LDEV = 18 [ FIBRE FCTBL = 3 ]
VG VG_001 CL1-A-1 0 1
 :
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG VG_013 CL1-A-1 0 13
# /dev/sdu14u SER = 62486 LDEV = 14 [ FIBRE FCTBL = 3 ]
VG VG_014 CL1-A-1 0 14
# /dev/sdu15u SER = 62486 LDEV = 68 [ FIBRE FCTBL = 3 ]
VG VG_015 CL1-A-1 0 15
# /dev/sdu16u SER = 62486 LDEV = 16 [ FIBRE FCTBL = 3 ]
VG VG_016 CL1-A-1 0 16
HORCM_INST
#dev_group ip_address service
VG 127.0.0.1 52323
Example 2-31 Example of CCI Configuration Definition File - 1 (for Instance Number 16)
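As the column headings poll(10ms) and timeout(10ms) indicate, the HORCM_MON poll and timeout values are expressed in 10-millisecond units. A minimal shell sketch of the conversion, using the sample values 1000 and 3000 from the configuration file:

```shell
# The poll and timeout values in HORCM_MON are in 10-ms units, per the
# "(10ms)" column headings. This sketch converts the sample values to
# seconds; it is arithmetic only and does not touch any HORCM file.
poll=1000      # sample value from the configuration file
timeout=3000   # sample value from the configuration file
echo "poll: $((poll * 10 / 1000)) s, timeout: $((timeout * 10 / 1000)) s"
# prints: poll: 10 s, timeout: 30 s
```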
Next, delete the unnecessary LU entries (lines) from the HORCM_DEV section, keeping only the LU entries (lines) that you want to control using CCI.
LUs which constitute a file system and their LDEV numbers can be reviewed using the following command.
$ sudo horcdevlist | grep ':File System Name$'
For example, in Example 2-32 Example of the Command Execution which Lists LUs which Constitute the File System Sample on page 2-68, 11, 12, and 13 are the LUs that constitute the file system sample. 64, 12, and 66 are their LDEV numbers.
$ sudo horcdevlist | grep ':sample$'
11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample
Example 2-32 Example of the Command Execution which Lists LUs which Constitute the File System Sample
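If you want to script this lookup, the device file numbers are in column 1 and the LDEV numbers in column 3 of this output. This is an illustrative sketch only; the printf lines stand in for real output of sudo horcdevlist | grep ':sample$':

```shell
# Illustrative only: pull the device file number (column 1) and the LDEV
# number (column 3) out of horcdevlist-style output. The sample lines
# below stand in for real "sudo horcdevlist | grep ':sample$'" output.
printf '%s\n' \
  '11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample' \
  '12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample' \
  '13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample' |
awk '{print $1, $3}'
# prints:
# 11 64
# 12 12
# 13 66
```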
Change the device group name and the device name of the LU entries (lines) that you want to control using Command Control Interface to appropriate names. When changing the device group name and the device name, observe the following:
• The same device group name and the same device name need to be specified for the volumes to be paired in the primary site and in the secondary site. Specify the device group name and the device name accordingly.
• The same device group name must be specified for the LUs that constitute one file system. If the file snapshot functionality manages the file system, the same device group name must be specified for the LUs that constitute the file system and for the LUs that constitute differential-data storage devices.
• For a tiered file system, all of the LUs used to configure the tier (including differential-data storage devices) must be assigned the same device group name.
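When the device group name needs to change consistently across many pair lines, a stream edit on a copy of the file is one option. This is an illustrative sketch only: the file name horcm16.sample and the group names VG and VG_TC are example values, and editing the real configuration file by hand is equally valid.

```shell
# Illustrative only: rename the device group at the start of HORCM_DEV
# pair lines. "horcm16.sample", "VG", and "VG_TC" are example values;
# the same rename must be applied at both the primary and secondary site.
printf '%s\n' \
  'VG VG_011 CL1-A-1 0 11' \
  'VG VG_012 CL1-A-1 0 12' > horcm16.sample
sed 's/^VG /VG_TC /' horcm16.sample
# prints:
# VG_TC VG_011 CL1-A-1 0 11
# VG_TC VG_012 CL1-A-1 0 12
```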
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.78.51 20331 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG VG_011 CL1-A-1 0 11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG VG_012 CL1-A-1 0 12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG VG_013 CL1-A-1 0 13
HORCM_INST
#dev_group ip_address service
VG 127.0.0.1 52323
Example 2-33 Example of CCI Configuration Definition File - 2 (for Instance Number 16)
Next, in the HORCM_INST section, specify the IP address of the instance to be paired in the secondary site. As preparation for failover, specify the IP addresses of the instances in both nodes.
Table 2-13 Configuration Definition File Settings (HORCM_INST) and Specified Values in HNAS F
Section Name   Item         Specified Values in HNAS F
HORCM_INST     ip_address   Fixed IP address of the other TrueCopy node
               service      Specify one of the following:
                            • 20331 (in the case of a node, if the instance number is 16)
                            • 20332 (in the case of a node, if the instance number is 17)
                            • 31032 to 31254 or 31532 to 31754 (in the case of a virtual server; instance number + 30000)
                            • 30020 to 30499 (common; instance number + 30000)
Note:
A host name can be specified instead of an IP address if the fixed IP address and the corresponding host name are registered in /etc/hosts, on an NIS server, or on a DNS server. Refer to the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide for information on how to register the IP address and the corresponding host name in /etc/hosts, and for information on how to set up the HNAS F system to search the host name using NIS or DNS.
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.78.51 20331 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG VG_011 CL1-A-1 0 11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG VG_012 CL1-A-1 0 12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG VG_013 CL1-A-1 0 13
HORCM_INST
#dev_group ip_address service
VG 123.45.80.51 20331
VG 123.45.80.115 20331
Example 2-34 Example of CCI Configuration Definition File - 3 (for Instance Number 16)
Checking the contents of the CCI configuration definition file
By combining the following commands, you can check whether an appropriate LU is specified in the HORCM_DEV section in the CCI configuration definition file.
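One way to combine such commands: extract the LDEV numbers from the HORCM_DEV comment lines and compare them with the LDEV numbers horcdevlist reports for the file system. This is an illustrative sketch using sample data; on a real node, the first file would come from the configuration file and the second from sudo horcdevlist.

```shell
# Illustrative only: check that the LDEVs named in the HORCM_DEV comment
# lines match the LDEVs of the file system. Sample data stands in for the
# real configuration file and "sudo horcdevlist" output.
cat > horcm_dev.sample <<'EOF'
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
EOF
cat > devlist.sample <<'EOF'
11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample
EOF
# Field 8 of the comment lines and field 3 of the horcdevlist lines both
# hold LDEV numbers; an empty diff means the configuration is consistent.
awk '/LDEV =/ {print $8}' horcm_dev.sample | sort > ldev_conf.txt
awk '{print $3}' devlist.sample | sort > ldev_fs.txt
diff ldev_conf.txt ldev_fs.txt && echo "LDEV lists match"
# prints: LDEV lists match
```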
Start CCI on both the node or virtual server connected to the TrueCopy P-VOL and the node or virtual server that will use the S-VOL.
$ sudo horcsetenv HORCMINST 16 (For instance number 16)
or
$ sudo horcsetenv HORCMINST 17 (For instance number 17)
$ sudo horcunsetenv HORCC_MRCF
When you have logged in using SSH and performed the above setup, confirm the setup by logging out once and logging in again.
$ sudo horcmstart.sh
Example 2-35 Procedure for Starting CCI
By issuing the pairdisplay command in the node or virtual server in which the TrueCopy P-VOL or S-VOL is connected, or in which the S-VOL will be used, you can see the LDEV numbers of the LUs specified in the HORCM_DEV section (see Example 2-35 Procedure for Starting CCI on page 2-70).
$ sudo pairdisplay -g Device-Group-Name
Example 2-36 How to Check LDEV Numbers of LUs Specified in the HORCM_DEV Section
You can check the device file numbers and the LDEV numbers for the device files that constitute a file system by issuing the horcdevlist command in the node or virtual server in which the P-VOL is connected. Compare this with Example 2-36 How to Check LDEV Numbers of LUs Specified in the HORCM_DEV Section on page 2-70.
$ sudo horcdevlist | grep ':sample$'
11 62486 64 OPEN-V 3.906GB -- -- - Normal File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal File:sample
Example 2-37 How to Check Device File Numbers and LDEV Numbers of P-VOL
You can also check the device file numbers and the LDEV numbers for the unused device files that can be S-VOLs by issuing the horcdevlist command in the node or virtual server in which the S-VOL will be used. Compare this with Example 2-36 How to Check LDEV Numbers of LUs Specified in the HORCM_DEV Section on page 2-70.
$ sudo horcdevlist | grep ' Free$'
21 62486 74 OPEN-V 3.906GB -- -- - Normal Free
22 62486 22 OPEN-V 3.906GB -- -- - Normal Free
23 62486 76 OPEN-V 3.906GB -- -- - Normal Free
Example 2-38 How to Check Device File Numbers and LDEV Numbers for Unused Device Files Which Can be S-VOLs
When specifying the port name in the HORCM_DEV section of the CCI configuration definition file, use the name of the storage system Fibre Channel adaptor port that connects to the node.
After checking the CCI configuration definition file, stop CCI in both the node or virtual server in which the TrueCopy P-VOL is connected and the node or virtual server in which the S-VOL is connected.
$ sudo horcmshutdown.sh 16
Example 2-39 Stopping CCI (for Instance Number 16)
To save CCI configuration definition files, save the system settings information. For details on saving system settings information, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
Example of a tiered file system configuration definition file
An example of a configuration definition file for a tiered file system is provided below.
Figure 2-8 Example of a Tiered File System

$ sudo horcdevlist | grep ':sample$'
10 62486 10 OPEN-V 3.906GB -- -- - Normal Tier1,File:sample
11 62486 64 OPEN-V 3.906GB -- -- - Normal Tier1,File:sample
12 62486 12 OPEN-V 3.906GB -- -- - Normal Tier2,File:sample
13 62486 66 OPEN-V 3.906GB -- -- - Normal Tier2,File:sample
Example 2-40 How to Check Device File Numbers and LU Numbers of a P-VOL
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.78.51 20331 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu10u SER = 62486 LDEV = 10 [ FIBRE FCTBL = 3 ]
VG_TC VG_010 CL1-A-1 0 10
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG_TC VG_011 CL1-A-1 0 11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG_TC VG_012 CL1-A-1 0 12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG_TC VG_013 CL1-A-1 0 13
HORCM_INST
#dev_group ip_address service
VG_TC 123.45.80.51 20331
VG_TC 123.45.80.115 20331
Example 2-41 Example of a CCI Configuration Definition File (for P-VOLs)

HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.78.115 20331 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu10u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_TC VG_010 CL1-A-1 0 20
# /dev/sdu11u SER = 62486 LDEV = 74 [ FIBRE FCTBL = 3 ]
VG_TC VG_011 CL1-A-1 0 21
# /dev/sdu12u SER = 62486 LDEV = 22 [ FIBRE FCTBL = 3 ]
VG_TC VG_012 CL1-A-1 0 22
# /dev/sdu13u SER = 62486 LDEV = 76 [ FIBRE FCTBL = 3 ]
VG_TC VG_013 CL1-A-1 0 23
HORCM_INST
#dev_group ip_address service
VG_TC 123.45.80.115 20331
VG_TC 123.45.80.51 20331
Example 2-42 Example of a CCI Configuration Definition File (for S-VOLs)
Cascade Configuration of TrueCopy and ShadowImage
In the HNAS F system, you can use a TrueCopy pair and a ShadowImage pair in a cascade configuration. A cascade configuration lets you prepare for a disaster scenario in which, for example, the file system cannot be recovered from the TrueCopy S-VOL, by periodically backing up the file system copied from the TrueCopy P-VOL to the S-VOL using ShadowImage.
Figure 2-9 Example of Cascade Configuration of TrueCopy and ShadowImage on page 2-74 shows an example of a cascade configuration of TrueCopy and ShadowImage.
Figure 2-9 Example of Cascade Configuration of TrueCopy and ShadowImage
By preparing the CCI configuration definition files as shown in Example 2-43 Example of CCI Configuration Definition File at Main Site (Instance 16) on page 2-75, Example 2-44 Example of CCI Configuration Definition File at Remote Site (Instance 16) on page 2-75, and Example 2-45 Example of CCI Configuration Definition File at Remote Site (Instance 17) on page 2-76, you can operate the cascade configuration of TrueCopy and ShadowImage from CCI.

HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.78.51 20331 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu10u SER = 62486 LDEV = 10 [ FIBRE FCTBL = 3 ]
VG_TC VG_032 CL1-A-1 0 10
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG_TC VG_033 CL1-A-1 0 11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG_TC VG_034 CL1-A-1 0 12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG_TC VG_035 CL1-A-1 0 13
HORCM_INST
#dev_group ip_address service
VG_TC 123.45.80.51 20331
VG_TC 123.45.80.115 20331
Example 2-43 Example of CCI Configuration Definition File at Main Site (Instance 16)
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.80.51 20331 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62490)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu10u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_TC VG_032 CL1-A-1 0 20
# /dev/sdu11u SER = 62486 LDEV = 74 [ FIBRE FCTBL = 3 ]
VG_TC VG_033 CL1-A-1 0 21
# /dev/sdu12u SER = 62486 LDEV = 22 [ FIBRE FCTBL = 3 ]
VG_TC VG_034 CL1-A-1 0 22
# /dev/sdu13u SER = 62486 LDEV = 76 [ FIBRE FCTBL = 3 ]
VG_TC VG_035 CL1-A-1 0 23
# /dev/sdu10u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_SI VG_014 CL1-A-1 0 20
# /dev/sdu11u SER = 62486 LDEV = 74 [ FIBRE FCTBL = 3 ]
VG_SI VG_015 CL1-A-1 0 21
# /dev/sdu12u SER = 62486 LDEV = 22 [ FIBRE FCTBL = 3 ]
VG_SI VG_016 CL1-A-1 0 22
# /dev/sdu13u SER = 62486 LDEV = 76 [ FIBRE FCTBL = 3 ]
VG_SI VG_017 CL1-A-1 0 23
HORCM_INST
#dev_group ip_address service
VG_TC 123.45.78.51 20331
VG_TC 123.45.78.115 20331
VG_SI 123.45.80.51 20332
VG_SI 123.45.80.115 20332
Example 2-44 Example of CCI Configuration Definition File at Remote Site (Instance 16)
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
123.45.80.51 20332 1000 3000
HORCM_CMD
#dev_name dev_name dev_name
#UnitID 0 (Serial# 62490)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
# /dev/sdu30u SER = 62486 LDEV = 30 [ FIBRE FCTBL = 3 ]
VG_SI VG_014 CL1-A-1 0 30
# /dev/sdu31u SER = 62486 LDEV = 84 [ FIBRE FCTBL = 3 ]
VG_SI VG_015 CL1-A-1 0 31
# /dev/sdu32u SER = 62486 LDEV = 41 [ FIBRE FCTBL = 3 ]
VG_SI VG_016 CL1-A-1 0 32
# /dev/sdu33u SER = 62486 LDEV = 86 [ FIBRE FCTBL = 3 ]
VG_SI VG_017 CL1-A-1 0 33
HORCM_INST
#dev_group ip_address service
VG_SI 123.45.80.51 20331
VG_SI 123.45.80.115 20331
Example 2-45 Example of CCI Configuration Definition File at Remote Site (Instance 17)
Setting the CCI User Environment Variables
In the following procedure, the environment variables HORCMINST and HORCC_MRCF are set according to the system configuration. This setup is performed on both the node or virtual server to which the TrueCopy P-VOL is connected and the node or virtual server in which the S-VOL will be used. You must perform these operations in both nodes of the main site and in both nodes of the remote site.
1. Set up the environment variable of the Command Control Interface instance:
sudo horcsetenv HORCMINST 16 (For instance number 16)
or sudo horcsetenv HORCMINST 17 (For instance number 17)
2. Set up the HOMRCF command environment variable of the Command Control Interface as TrueCopy:
sudo horcunsetenv HORCC_MRCF
3. If you have logged in using SSH and set up as explained in steps 1 and 2, confirm the setup by logging out once and then logging back in:
exit
ssh nasroot@fixed-IP-address-or-service-IP-address
Enter the command shown in the following example to check the result of setting up the environment variables:
$ sudo horcprintenv
The following table shows the settings of the environment variables just after HNAS F is installed.
Table 2-14 Settings of Environment Variables just after the Installation of HNAS F
Environment Variables Setting
HORCMINST 16 (for a node)
HORCC_MRCF No setting.
Create File System in TrueCopy P-VOL
Create a file system in the TrueCopy P-VOL using the Create New File System window of File Services Manager or by using the fscreate command. Even if you create and split the TrueCopy volume pair without creating a file system in the TrueCopy P-VOL, you cannot access the TrueCopy S-VOL in the secondary site.
Overview of TrueCopy Operations

This section describes the overview of operations, the CCI commands, and the commands provided by the HNAS F for the TrueCopy operations of volume replication and disaster recovery.
We describe only the arguments of the CCI commands that are required for the basic TrueCopy operations. For other arguments, refer to the Hitachi Command Control Interface User and Reference Guide. For the commands provided by the HNAS F products, see Commands that HNAS F provides on page 2-132.
For node operation examples in this section, instance numbers 16 and 17 are used. If you are using other instance numbers, replace 16 and 17 in the examples with those numbers.
Volume Replication
This section describes the procedure of volume replication. Figure 2-10 Overview of Volume Replication and the Corresponding Sections on page 2-77 shows an overview of volume replication and the corresponding sections. When the pair is in the PSUS state, the S-VOL can be accessed at the remote site.
Figure 2-10 Overview of Volume Replication and the Corresponding Sections
Starting TrueCopy Operations and Creating a TrueCopy Pair
If the TrueCopy S-VOL contains a file system, you must delete the file system using the fsdelete command in File Services Manager before starting TrueCopy operations. If the differential-data storage device of the file snapshot functionality is in the TrueCopy S-VOL, you must release it using the syncstop command before starting the TrueCopy operations.
To start TrueCopy operations and create a TrueCopy volume pair:
1. At the main site and the remote site, start CCI.
sudo horcmstart.sh (1-instance configuration)
or sudo horcmstart.sh 16 17 (2-instance configuration)
2. At the remote site, reserve the device files to be used for the target file system.
sudo horcvmdefine -d device-file-number,...
3. At the main site, create the TrueCopy volume pair.
sudo paircreate {-g group-name|-d volume-name} -f never -vl
4. At the main site, check the completion of TrueCopy volume pair creation.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the volumes to be paired.
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Creating
b. pairvolchk: Volstat is P-VOL.[status = PAIR] => Created
Splitting a TrueCopy Volume Pair
The method for splitting a TrueCopy volume pair depends on whether you perform an offline backup or an online backup. For an offline backup, the P-VOL is unmounted to split the pair. For an online backup, instead of unmounting the P-VOL, updates to the file system are temporarily stopped to split the pair.
During offline backup, a TrueCopy volume pair is split after access from clients is completely stopped by deleting the NFS/CIFS shares. Because an I/O error is reported to the application if the NFS/CIFS shares are deleted while the application is writing data to the P-VOL, or if the application tries to write data to the P-VOL after the NFS/CIFS shares are deleted, the application can determine which data is reflected in the TrueCopy volume pair. Offline backup is therefore applicable to most applications.
During online backup, however, a TrueCopy volume pair is split without deleting the NFS/CIFS shares. Because no I/O error is reported to the application when it writes data to the P-VOL, the application cannot determine up to which point its data is reflected in the S-VOL. For this reason, online backup is applicable only to applications that can identify where data was updated, for example by using a journal file.
To split the TrueCopy volume pair during offline backup:
TrueCopy Volume Pair Split by Offline Backup (when the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not published in the shared file system even though the file system is operated by the file snapshot functionality)
1. At the main site, stop the program that accesses the P-VOL. Then, unmount the NFS shares on the client side.
2. At the main site, delete the NFS/CIFS shares in the P-VOL using the nfsdelete command and the cifsdelete command in File Services Manager. Unmount the P-VOL using the fsumount command.
3. At the main site, prevent the file snapshot functionality from performing operations on the P-VOL:
sudo horcfreeze -f source-file-system-name
4. At the main site, split the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
5. At the main site, verify that the TrueCopy volume pair is split.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split
6. At the main site, enable the operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
7. At the main site, mount the P-VOL using the fsmount command in File Services Manager, and create NFS/CIFS shares using the nfscreate command and the cifscreate command.
8. At the main site, mount the P-VOL NFS shares on the client side and restart the program that accesses the P-VOL.
9. At the remote site, connect the target file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
10. At the remote site, mount the S-VOL using the fsmount command. Create NFS/CIFS shares using the nfscreate command and the cifscreate command.
Note:
When the S-VOL is the file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
11. At the remote site, start the program that accesses the S-VOL.
TrueCopy Volume Pair Split by Offline Backup (when the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system)
1. At the main site, stop the programs that access the P-VOL. Then, unmount the NFS shares on the client side.
2. At the main site, unmount the differential-data snapshot using the syncumount command, delete the NFS/CIFS shares using the nfsdelete command and the cifsdelete command in File Services Manager, and unmount the P-VOL using the fsumount command.
3. At the main site, prevent the file snapshot functionality from performing operations on the P-VOL:
sudo horcfreeze -f source-file-system-name
4. At the main site, split the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
5. At the main site, verify that the TrueCopy volume pair is split.
$ sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split
6. At the main site, enable the operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
7. At the main site, mount the P-VOL using the fsmount command in File Services Manager, and set the NFS/CIFS shares using the nfscreate command and the cifscreate command. Then, mount the differential-data snapshots using the syncmount command.
8. At the main site, mount the P-VOL NFS shares on the client side and restart the programs that access the P-VOL.
9. At the remote site, connect the target file system to the node or virtual server.
For a non-tiered file system
$ sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
$ sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
10. At the remote site, mount the S-VOL using the fsmount command. Then, set the NFS/CIFS shares using the nfscreate command and the cifscreate command.
11. At the remote site, mount the differential-data snapshot using the syncmount command to make it public in the shared file system.
12. At the remote site, start the programs that access the S-VOL.
To split the TrueCopy volume pair during online backup:
TrueCopy Volume Pair Split by Online Backup (when the file systemis not operated by the file snapshot functionality or when thedifferential-data snapshot is not published in the shared file systemeven though the file system is operated by the file snapshotfunctionality)
1. At the main site, prevent the file snapshot functionality from performing operations on the P-VOL, prohibit access from the client, and write the unreflected data to the disk.
sudo horcfreeze -f source-file-system-name
2. At the main site, split the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
3. At the main site, verify that the TrueCopy volume pair has been split.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split
4. At the main site, permit access from the client to the P-VOL and enable the operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
5. At the remote site, connect the target file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
6. At the remote site, mount the S-VOL using the fsmount command.
Create NFS/CIFS shares using the nfscreate command and the cifscreate command.
Note:
When the S-VOL is the file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
7. At the remote site, start the program that accesses the S-VOL.
TrueCopy Volume Pair Split by Online Backup (when the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system)
1. At the main site, prevent the file snapshot functionality from performing operations on the P-VOL, prohibit access from the client, and write the unreflected data to the disk.
sudo horcfreeze -f source-file-system-name
2. At the main site, split the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
3. At the main site, verify that the TrueCopy volume pair has been split.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split
4. At the main site, permit client access to the P-VOL and enable operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
5. At the remote site, connect the target file system to the node or virtual server.
For a non-tiered file system$ sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
6. At the remote site, mount the S-VOL using the fsmount command. Then, create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
7. At the remote site, mount the differential-data snapshot using the syncmount command to make it public in the shared file system.
8. At the remote site, restart the programs that access the S-VOL.
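The freeze window in steps 1 through 4 can be scripted so that horcunfreeze always runs even if the split fails, which keeps clients from being locked out of the P-VOL. A minimal sketch, with the CCI and HNAS F commands mocked for illustration (on a real node these would be the sudo commands shown in the steps above, and the file system and group names below are placeholders):

```shell
#!/bin/sh
# Mocked stand-ins for the real commands; on a real node you would call
# "sudo horcfreeze", "sudo pairsplit", and "sudo horcunfreeze" instead.
horcfreeze()   { echo "freeze $2"; }
horcunfreeze() { echo "unfreeze $2"; }
pairsplit()    { echo "split $2"; }

split_online() {
  fs="$1"; group="$2"
  horcfreeze -f "$fs"
  # Guarantee the file system is unfrozen even if the split fails,
  # so client access is not suspended indefinitely.
  trap 'horcunfreeze -f "$fs"' EXIT
  pairsplit -g "$group" -rw
}

( split_online demo-fs demo-group )
```

The trap ensures the unfreeze happens on any exit path of the subshell, including an error in the split itself.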
Resynchronizing a TrueCopy Volume Pair
You must separate the target file system by using the horcexport command before resynchronizing the volume pair. Note that even though the target file system is separated, resynchronizing the volume pair is not the same as performing an initial copy.
Note: When the S-VOL is a file system managed by the file snapshot functionality, unmount the differential-data snapshots using the syncumount command before using the horcexport command.
To resynchronize the TrueCopy volume pair:
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality
1. At the remote site, stop the program that accesses the S-VOL. Then, unmount the NFS shares on the client side.
2. At the remote site, use the nfsdelete command and the cifsdelete command to delete the NFS/CIFS shares in the S-VOL. Then, use the fsumount command to unmount the S-VOL.
Note: For a file system managed by the file snapshot functionality, delete the NFS/CIFS shares for the differential-data snapshot by using the nfsdelete command and the cifsdelete command, and unmount the differential-data snapshot by using the syncumount command.
3. At the remote site, separate the file system of the S-VOL by using the horcexport command.
sudo horcexport -f target-file-system-name
4. At the main site, resume the TrueCopy volume pair.
sudo pairresync {-g group-name|-d volume-name}
5. At the main site, verify that the TrueCopy volume pair has been resumed.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the volumes to be paired.
a. pairvolchk : Volstat is P-VOL.[status = COPY]=> Resuming
b. pairvolchk : Volstat is P-VOL.[status = PAIR] => Resuming completed
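As the note above mentions, pairevtwait can wait for the pair state; alternatively, the pairvolchk output can be polled until the desired status appears. A hedged sketch with pairvolchk mocked for illustration (on a real node you would run the sudo command shown in step 5):

```shell
#!/bin/sh
# Mocked pairvolchk: pretend the resync has already completed.
pairvolchk() { echo "pairvolchk : Volstat is P-VOL.[status = PAIR]"; }

# Poll until the pair reaches the wanted status, with a bounded retry count.
wait_for_status() {
  group="$1"; want="$2"
  i=0
  while [ "$i" -lt 60 ]; do
    case "$(pairvolchk -g "$group")" in
      *"status = $want"*) echo "reached $want"; return 0 ;;
    esac
    sleep 5
    i=$((i + 1))
  done
  echo "timed out waiting for $want" >&2
  return 1
}

wait_for_status demo-group PAIR
```

The same loop pattern applies to the PSUS and SMPL checks used in the split and delete procedures by changing the second argument.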
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
1. At the remote site, stop the programs that access the S-VOL. Then, unmount the NFS shares on the client side.
2. At the remote site, unmount the differential-data snapshot using the syncumount command.
Note: When NFS/CIFS shares are created in a differential-data snapshot that is made public in the shared file system, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting it.
3. At the remote site, release the NFS/CIFS shares using the nfsdelete and cifsdelete commands, and unmount the file system using the fsumount command.
4. At the remote site, separate the S-VOL file system using the horcexport command.
sudo horcexport -f target-file-system-name
5. At the main site, resume the TrueCopy volume pair.
sudo pairresync {-g group-name|-d volume-name}
6. At the main site, verify that the TrueCopy volume pair has been resumed.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the volumes to be paired.
a. pairvolchk : Volstat is P-VOL.[status = COPY]=> Resuming
b. pairvolchk : Volstat is P-VOL.[status = PAIR] => Resuming completed
Deleting a TrueCopy Volume Pair
The procedure for completing TrueCopy operations by deleting a TrueCopy volume pair in the PSUS state differs depending on whether you will continue using the file system in the S-VOL or discard it.
If you delete a TrueCopy volume pair in a state other than PSUS, you cannot use the file system because the consistency of the data in the S-VOL is not guaranteed.
To delete the TrueCopy volume pair in the PSUS state, and to continue using the S-VOL file system:
1. At the main site, delete the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
2. At the main site, verify that the TrueCopy volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
3. At the main site and the remote site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note: If you have started the task of splitting the TrueCopy volume pair via an offline backup but have not yet completed steps 1 through 11, complete steps 1 through 11 before proceeding with the above steps. When using an online backup, you need to have completed steps 1 through 7 of splitting the TrueCopy volume pair before proceeding with the above steps (see Splitting a TrueCopy Volume Pair on page 2-78).
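Steps 2 and 3 can be combined into a small guard so that CCI is stopped only after pairvolchk reports SMPL. A sketch with the commands mocked for illustration (on a real node these are sudo pairvolchk and sudo horcmshutdown.sh, and the group name is a placeholder):

```shell
#!/bin/sh
# Mocked stand-ins: pretend the pair has already been deleted (SMPL),
# and print a message instead of actually stopping CCI.
pairvolchk()    { echo "pairvolchk : Volstat is P-VOL.[status = SMPL]"; }
horcmshutdown() { echo "CCI stopped (instances: $*)"; }

status=$(pairvolchk -g demo-group)
case "$status" in
  # SMPL means the deletion has completed, so stopping CCI is safe.
  *"status = SMPL"*) horcmshutdown 16 17 ;;
  *) echo "pair not yet deleted; do not stop CCI" >&2; exit 1 ;;
esac
```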
To delete the TrueCopy volume pair in the PSUS state without using the S-VOL file system (when the file system is not operated by the file snapshot functionality or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality):
1. At the remote site, terminate the program which accesses the S-VOL.
2. At the remote site, delete the NFS/CIFS shares using the nfsdelete command and the cifsdelete command, and unmount the S-VOL using the fsumount command.
Note: When the S-VOL is a file system managed by the file snapshot functionality, delete the NFS/CIFS shares under the differential-data snapshots using the nfsdelete command and the cifsdelete command, unmount the differential-data snapshots using the syncumount command, and release the differential-data storage devices using the syncstop command.
3. At the remote site, delete the S-VOL file system using the fsdelete command.
4. At the main site, delete the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
5. At the main site, verify that the TrueCopy volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk: Volstat is P-VOL.[status = SMPL]=> Deleted
6. At the main site and the remote site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note: If you have started the task of splitting the TrueCopy volume pair via an offline backup but have not yet completed steps 1 through 11, complete steps 1 through 11 before proceeding with the above steps. When using an online backup, you need to have completed steps 1 through 7 of splitting the TrueCopy volume pair before proceeding with the above steps (see Splitting a TrueCopy Volume Pair on page 2-78).
To delete the TrueCopy volume pair in the PSUS state without using the S-VOL file system (when the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system):
1. At the remote site, stop the programs that access the S-VOL.
2. At the remote site, unmount the differential-data snapshot using the syncumount command.
Note: When NFS/CIFS shares are created in a differential-data snapshot that is made public in the shared file system, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting it.
3. At the remote site, release the differential-data storage device using the syncstop command.
4. At the remote site, release the NFS/CIFS shares using the nfsdelete and cifsdelete commands, and unmount the file system using the fsumount command.
5. At the remote site, delete the S-VOL file system using the fsdelete command.
6. At the main site, delete the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
7. At the main site, verify that the TrueCopy volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL]=> Deleted
8. At the main site and the remote site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note: If you started the split-pair operation with an offline backup but have not yet connected the target file system to the node at the remote site, finish connecting the target file system to the node, and then start the operations described above. If you started the split-pair operation with an online backup but have not yet connected the target file system to the node at the remote site, likewise finish connecting the target file system to the node, and then start the operations described above (see Splitting a TrueCopy Volume Pair on page 2-78).
To delete the TrueCopy volume pair in a state other than PSUS:
1. At the main site, delete the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
2. At the main site, verify that the TrueCopy volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
3. At the remote site, release the device file used in the target file system.
sudo horcvmdelete -d device-file-number,...
4. At the main site and the remote site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or sudo horcmshutdown.sh 16 17 (2-instance configuration)
Disaster Recovery Operations
This section describes the procedures for disaster recovery operations.
Switching Operations to the Remote Site
This section describes the disaster recovery procedure up to the point when operations are switched to the remote site. Perform these operations at the remote site only.
To switch operations to the remote site (when the file system is not operated by the file snapshot functionality or when the differential-data snapshot is not made public in the shared file system):
Note: The behavior of the takeover may vary depending on the condition of the failed main site.
1. Execute SVOL-Takeover using the horctakeover command.
sudo horctakeover {-g group-name|-d volume-name}
2. Connect the target file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
3. Mount the S-VOL using the fsmount command and create NFS/CIFS shares using the nfscreate command and the cifscreate command.
Recover the file system when mounting the S-VOL.
Note: When the S-VOL is a file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
4. Start the program which accesses the S-VOL.
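For a non-LVM, non-tiered file system, the takeover sequence above can be sketched as a single script. All commands here are mocked for illustration; on a real node they would be the sudo commands shown in the steps, and the group, file system, and device-file names are placeholders:

```shell
#!/bin/sh
# Mocked stand-ins for the CCI and HNAS F commands used in the steps above.
horctakeover() { echo "takeover $2"; }
horcimport()   { echo "import $2 from device $4"; }
fsmount()      { echo "mount $1"; }
nfscreate()    { echo "share $1"; }

group="demo-group"; fs="demo-fs"; dev="5"

horctakeover -g "$group"       # 1. SVOL-Takeover
horcimport -f "$fs" -d "$dev"  # 2. connect the target file system
fsmount "$fs"                  # 3. mount the S-VOL
nfscreate "$fs"                #    and create the NFS share
```

Step 4 (restarting the programs that access the S-VOL) is application-specific and is left out of the sketch.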
To switch operations to the remote site (when the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system):
1. Execute SVOL-Takeover using the horctakeover command.
sudo horctakeover {-g group-name|-d volume-name}
2. Connect the target file system to the node or virtual server.
For a non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
3. Mount the S-VOL using the fsmount command and create NFS/CIFS shares using the nfscreate command and the cifscreate command.
Restore the file system when mounting the S-VOL.
4. At the remote site, mount the differential-data snapshot using the syncmount command to make it public in the shared file system.
5. Start the programs that access the S-VOL.
Transferring the Data Back to the Main Site
Below is the procedure for transferring data back to the main site during disaster recovery.
To transfer the data back to the main site (when the file system is not operated by the file snapshot functionality or when the differential-data snapshot is not made public in the shared file system):
1. At the main site, set the environment variable for the Command Control Interface instance:
sudo horcsetenv HORCMINST 16 (for instance number 16)
or sudo horcsetenv HORCMINST 17 (for instance number 17)
2. At the main site, set the HOMRCF command environment variable of the Command Control Interface so that TrueCopy is used:
sudo horcunsetenv HORCC_MRCF
3. If you logged in using SSH and performed the setup in steps 1 and 2, log out and then log in again to apply the settings.
exit
ssh nasroot@fixed-IP-address-or-service-IP-address
4. At the main site and at the remote site, start CCI.
sudo horcmstart.sh (1-instance configuration)
or sudo horcmstart.sh 16 17 (2-instance configuration)
5. At the main site, unmount the NFS share in the old P-VOL at the client side.
6. At the main site, delete the NFS/CIFS shares in the old P-VOL, unmount the old P-VOL, and separate the file system in the old P-VOL.
Note: When the old P-VOL is a file system managed by the file snapshot functionality, delete the NFS/CIFS shares under the differential-data snapshots and unmount the differential-data snapshots before separating the file system in the old P-VOL.
7. At the remote site, delete the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
8. At the remote site, confirm that the deletion of the TrueCopy volume pair has been completed.
sudo pairvolchk {-g group-name|-d volume-name}
a. If pairvolchk : Volstat is P-VOL.[status = COPY] is displayed, the delete operation is in progress.
b. If pairvolchk: Volstat is P-VOL. [status = SMPL] is displayed,the delete operation has been completed.
9. Transfer the data from the remote site to the main site (create a TrueCopy volume pair).
sudo paircreate {-g group-name|-d volume-name} -f never -vl
10. When the status changes to PAIR, stop operations at the secondary site.
11. At the remote site, delete the NFS/CIFS shares in the old S-VOL using the nfsdelete command and the cifsdelete command, and unmount the old S-VOL using the fsumount command.
12. At the remote site, prevent the file snapshot functionality from performing operations on the old S-VOL.
sudo horcfreeze -f source-file-system-name
13. At the remote site, split the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
14. At the remote site, enable operations from the file snapshot functionality on the old S-VOL.
sudo horcunfreeze -f source-file-system-name
15. At the main site, connect the target file system to the node.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
16. At the remote site, separate the file system in the old S-VOL using the horcexport command.
sudo horcexport -f target-file-system-name
Note: When the old S-VOL is a file system managed by the file snapshot functionality, delete the NFS/CIFS shares under the differential-data snapshots before separating the file system in the old S-VOL.
17. Perform reverse resynchronization of data from the main site to the remote site.
sudo pairresync {-g group-name|-d volume-name} -swaps
18. At the main site, mount the new P-VOL using the fsmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
Note: When the new P-VOL is a file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
19. Mount the NFS share in the P-VOL at the client side.
20. At the main site, resume operations.
To transfer the data back to the main site (when the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system):
1. At the main site, set the environment variable for the CCI instance.
sudo horcsetenv HORCMINST 16 (for instance number 16)
or sudo horcsetenv HORCMINST 17 (for instance number 17)
2. At the main site, set the environment variable for the CCI HOMRCF command so that TrueCopy is used.
sudo horcunsetenv HORCC_MRCF
3. If you logged in using SSH and performed the setup in steps 1 and 2, log out and then log in again to apply the settings.
exit
ssh nasroot@fixed-IP-address-or-service-IP-address
4. Start CCI at the main site and the remote site.
sudo horcmstart.sh (1-instance configuration)
or sudo horcmstart.sh 16 17 (2-instance configuration)
5. At the main site, unmount the NFS share in the old P-VOL at the client side.
6. At the main site, unmount the differential-data snapshot using the syncumount command.
Note: When NFS/CIFS shares are created in a differential-data snapshot that is made public in the shared file system, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting it.
7. At the main site, release the NFS/CIFS shares of the old P-VOL, unmount the old P-VOL, and separate the file system.
sudo horcexport -f source-file-system-name
8. At the remote site, delete the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
9. At the remote site, confirm that the TrueCopy volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk : Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk : Volstat is P-VOL.[status = SMPL] => Deleted
10. Transfer the data from the remote site to the main site (create a TrueCopy volume pair).
sudo paircreate {-g group-name|-d volume-name} -f never -vl
11. When the state changes to PAIR, stop the operation in the secondary site.
12. Execute the syncumount command to unmount the differential-data snapshot from the node or virtual server that is connected to the old S-VOL at the secondary site.
Note:
When NFS/CIFS shares are created in a differential-data snapshot that is made public in the shared file system, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting it.
13. At the remote site, release the NFS/CIFS shares of the old S-VOL, and unmount the old S-VOL using the fsumount command.
14. At the remote site, prevent the file snapshot functionality from performing operations on the S-VOL:
sudo horcfreeze -f source-file-system-name
15. At the remote site, split the TrueCopy volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
16. At the remote site, enable operations from the file snapshot functionality on the S-VOL:
sudo horcunfreeze -f source-file-system-name
17. At the main site, connect the target file system to the node or virtual server.
For a non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
18. At the remote site, separate the file system of the S-VOL by using the horcexport command.
sudo horcexport -f target-file-system-name
19. Perform reverse resynchronization of data from the main site to the remote site.
sudo pairresync {-g group-name|-d volume-name} -swaps
20. At the main site, mount the new P-VOL using the fsmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
21. On the node or virtual server connected to the new P-VOL at the main site, mount the differential-data snapshot using the syncmount command to make it public in the shared file system.
22. Mount the NFS share in the P-VOL at the client side.
23. Resume operations at the main site.
Notes on Failures of External Volumes
If a failure (including a temporary failure, for example, a disconnected cable) occurs on an external volume that is used by a TrueCopy volume pair, perform the following procedure.
1. If the external volume can no longer be used, delete the TrueCopy volume pair by using the pairsplit -S command.
If the S-VOL is not connected to the node or virtual server, release the device file used in the target file system by using the horcvmdelete command.
2. Use the pairdisplay command to check the statuses of all TrueCopy volume pairs that use the failed external volume.
If the status of a TrueCopy volume pair is PSUE, delete the volume pair by using the pairsplit -S command.
If you delete a TrueCopy volume pair when the S-VOL is not connected to the node or virtual server, release the device file used in the target file system by using the horcvmdelete command.
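Step 2 amounts to filtering the pairdisplay output for pairs in the PSUE state. A sketch with pairdisplay mocked to return sample output (the column layout here is simplified and all names are placeholders; on a real node you would run sudo pairdisplay -g group-name and check the actual status column):

```shell
#!/bin/sh
# Mocked pairdisplay with a simplified, hypothetical column layout.
pairdisplay() {
  cat <<'EOF'
Group  PairVol  Status
grp1   vol1     PAIR
grp1   vol2     PSUE
EOF
}

# Skip the header line and print the names of pairs whose status is PSUE.
pairdisplay | awk 'NR > 1 && $3 == "PSUE" { print $2 }'
```

Each volume name printed this way would then be deleted with pairsplit -d volume-name -S, as described in step 2.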
How to use Universal Replicator on the HNAS F
This section describes how to use Universal Replicator with the HNAS F.
Overview of Universal Replicator Operations in the HNAS F system
The HNAS F system enables integrated management of the data in the storage system. HNAS F utilizes the LAN environment already in place and enables the data within a storage system to be shared across heterogeneous platforms. HNAS F also enables you to copy and maintain the data stored in the storage system using the volume replication feature Universal Replicator. With Universal Replicator, you can copy data utilizing the file system server (even in a remote center), back up data to guard against failures in primary data volumes, and perform disaster recovery at the primary site.
To use Universal Replicator in the HNAS F system, use Command Control Interface (CCI). CCI is installed together with the OS on the HNAS F system, so you do not need to install CCI separately.
Figure 2-11 Using Universal Replicator in the HNAS F on page 2-93 illustrates the Universal Replicator operation.
Figure 2-11 Using Universal Replicator in the HNAS F
Scope of Universal Replicator Function in the HNAS F system
Scope Related to Universal Replicator
Volume type
Only the user LUs may be used as Universal Replicator primary data volumes (hereinafter abbreviated as P-VOLs) and secondary data volumes (hereinafter abbreviated as S-VOLs) in an HNAS F system.
You cannot specify the OS disks and shared LU of a virtual server as a Universal Replicator P-VOL or S-VOL.
Data Overflow Watch
If you use the HNAS F system with Universal Replicator, specify a timeout period (data overflow watch) in the range of 0 to 20 seconds as a journal group option. If data overflow watch is set beyond this range, an I/O timeout might occur earlier than the journal data-overflow timeout, and the file system will become blocked. For details about setting data overflow watch, see the Hitachi Universal Replicator User's Guide.
Platforms which can access the Universal Replicator P-VOLs or S-VOLs
Only clients that are connected to a network via an HNAS F system can access the Universal Replicator P-VOLs and S-VOLs created in an HNAS F system. Hosts connected via a serial port or Fibre Channel port cannot access them.
File systems which can be allocated in the Universal Replicator P-VOL
You cannot allocate a file system that consists of 129 or more LUs and that uses Logical Volume Manager (LVM) on the OS.
You cannot allocate a volume group that consists of 129 or more LUs if it contains a file system managed by the file snapshot functionality and differential-data storage devices.
Writing data to S-VOL when a volume pair is split
When you split a Universal Replicator volume pair in the HNAS F system, you need to make the S-VOL write-enabled. Make sure that you always specify the -rw option instead of the -r option when issuing the CCI pairsplit command.
To protect the data in the split S-VOL from being written to by clients, use the fsmount command with the -r option in File Services Manager when mounting the file system in the S-VOL.
Note on path settings
If you use the HNAS F system with Universal Replicator, you must add paths to ports of both CL1 and CL2 to connect the primary and secondary sites.
Scope Related to Features in Backup Restore
Limitations on the functionality of the backup management software
If you resynchronize a Universal Replicator pair defined by the volume replication function after backing up the S-VOL by using the backup management software, a full backup will be acquired the next time.
Scope Related to the file snapshot functionality
The file snapshot functionality
When you copy a file system managed by the file snapshot functionality using Universal Replicator, you must copy both the LUs that constitute the file system and the LUs that constitute the differential-data storage devices. If you copy only the LUs that constitute the file system, you will not be able to connect them to the HNAS F system at the secondary site.
The setting of an automatic creation schedule for the file snapshot functionality is not copied. The differential-data snapshot is not mounted at the copy destination.
Prerequisites
Hardware Prerequisites
You need a workstation or PC to log in to the HNAS F system using Secure Shell (SSH), in addition to the prerequisite hardware for Universal Replicator, CCI, and the HNAS F system described in the following guides:
Manuals related to HNAS F
¢ Installation and Configuration Guide
Manuals related to storage system
¢ Hitachi Universal Replicator User and Reference Guide
¢ Hitachi Command Control Interface (CCI) User and Reference Guide
¢ Hitachi ShadowImage User's Guide (when using the ShadowImage cascade function)
Software Prerequisites
To use the HNAS F system, the program products in the HNAS F must be properly installed and have valid licenses.
To use Universal Replicator on the HNAS F system, all of the following program products must be installed in the storage system to which the HNAS F system is connected, and their licenses must be valid.
• Universal Replicator
• TrueCopy
• ShadowImage (required when configuring a cascade connection of Universal Replicator and ShadowImage)
Notes on Operations
Mounting the NFS client for a file system whose data is backed up online
If you perform an online backup of a file system accessed by NFS clients using volume replication, you must specify NFS version 3 when mounting the file system on the NFS client. If you use NFS version 2, you must specify the hard option when mounting the file system on the NFS client.
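For example, on a Linux NFS client the mount invocation might look like the following; the server name, export path, and mount point are placeholder assumptions, while vers=3 selects NFS version 3 and hard is the standard hard-mount option:

```shell
# Hypothetical client-side mount; names and paths are placeholders.
mount -o vers=3,hard node-name:/mnt/fs01 /mnt/backup
```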
Limitations on Universal Replicator operations due to the status of the cluster, nodes, and resource groups
When a cluster is not configured, or when the cluster is stopped, the nodes are stopped, or the resource groups are offline, connecting a device file and creating or mounting a file system are restricted. Due to these restrictions, the following operations performed as part of a Universal Replicator operation will also end in an error. You therefore should not operate the cluster, nodes, or resource groups during a Universal Replicator operation. Should any problems occur with the cluster, nodes, or resource groups, fix them immediately.
• Unmounting and mounting the source file system while a Universal Replicator volume pair is being split
• Connecting the target file system to the node
• Unmounting and deleting the target file system before the Universal Replicator pair is resynchronized
Notes on a failover occurring during operation on a node
To continue the operation at the failover destination when a failover occurs during operation on a node, specify the virtual IP address as the remote host when executing commands. When you connect the S-VOL file system at the copy destination, execute the horcimport or horcvmimport command with the -r option, specifying a resource group name.
Notes on changing the system configuration during Universal Replicator operations
When changing the following system configurations during Universal Replicator operations, you must change the Command Control Interface configuration definition files on the node and on each node in the cluster where the S-VOL is used:
• Changing a fixed IP address of the node.
• Expanding or deleting a source file system
• Setting up, expanding, or releasing a differential-data storage device for the file snapshot functionality
When the host name is specified in the Command Control Interface configuration definition file and you change the following system configurations, you must change the configuration definition file:
• Editing the /etc/hosts file (when resolving the host name using the /etc/hosts file)
• Changing registration information on NIS server, or changing a setting forNIS server (when resolving the host name using NIS)
• Changing registration information on DNS server, or changing a settingfor DNS server (when resolving the host name using DNS)
• Changing node names (host names)
Notes on using external volumes
Before performing periodic maintenance on external volumes, you must temporarily delete all Universal Replicator pairs.
Notes on splitting a Universal Replicator pair with online backup
If a long time passes between execution of the horcfreeze command and execution of the horcunfreeze command, an access timeout might occur for some clients. If the file snapshot functionality operates on the source file system, the horcfreeze command might take a long time to execute, which increases the possibility of a timeout.
Verifying whether the access to the file system is suspended
After executing the horcfreeze command or the horcunfreeze command, you can use the fsctl command to verify whether access from clients to the file system is suspended.
Notes on file systems managed by the file snapshot functionality
The following settings and status of a copy-source file system managed by the file snapshot functionality are copied to the target file system.
• Warning threshold
• Operation threshold
• Overflow prevention
• Status of the differential-data storage device
If a Universal Replicator pair is split and connected to the node while the differential-data storage device in the copy-source file system does not have
sufficient capacity, you must take measures in the copy-source file system to resolve the capacity shortage.
Before splitting a Universal Replicator pair, confirm that the differential-data storage device in the copy-source file system is in normal status.
Note on file systems that support single-instancing
The single-instancing setting is not copied to copy-destination file systems. If the single-instancing setting is enabled for a copy-source file system, after splitting a Universal Replicator pair and connecting to a node, use the fsedit command to enable the single-instancing setting for the copy-destination file system. Also, set the policy for duplicate-file capacity reduction.
Before splitting a Universal Replicator pair, make sure that the duplicate-file capacity reduction policy is not being executed on the copy-source file system. If a Universal Replicator pair is split during policy execution, resynchronize the Universal Replicator pair and then perform the subsequent operations again.
Notes on tiered file systems
If the copy-source file system is a tiered file system, the LUs in all the file systems that make up the tiers must be assigned the same device group name in the configuration definition file.
To connect the copy-source file system to a node or a virtual server, specify the --tier1 and --tier2 options for the horcimport command or the horcvmimport command if the file system is a tiered file system. Otherwise, specify the -d option.
If the tiered file system is already connected to a node or a virtual server, you need to set up a tier policy schedule. For details, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
Notes on dynamic recognition in the FC path
HNAS F automatically recognizes LUs connected to FC paths.
A user LU (device file number) is determined when the device file to be used for the copy target of the file system is reserved by using the horcvmdefine command. However, if the OS is rebooted without the device file number having been determined, the device file number might differ from the one before the reboot. In the following cases, immediately reserve the device file to be used for the copy target of the file system by using the horcvmdefine command so that the device file number will not change:
• When starting to use the replication functions
• When a file system is deleted and the LU of the deleted file system is to be used as the S-VOL
If the OS is rebooted before you reserve the device file to be used for the copy target of the file system, find the device file numbers and the LU numbers that constitute the file system by using the horcdevlist command.
Note on WORM file systems
If Universal Replicator is used to copy a WORM file system, the copy cannot be connected to a node or virtual server.
If the file system was encrypted by using the HNAS F functionality
If the local data encryption functionality is being used, the copy-destination file system can be connected to the node only when the copy-source file system and the copy-destination file system are in the same cluster. If the file systems are in different clusters, the copy-destination file system cannot be connected to the node.
Preparing for Universal Replicator Operations
Preparing for Universal Replicator Volume Pair Operation
Prepare for creating the Universal Replicator volume pairs. For details about the preparation required on the storage system side, see the Hitachi Universal Replicator User's Guide.
Registration of Public Key Used for SSH
Before issuing the commands described in this document, the public key used for SSH needs to be registered in the node or virtual server to which the Universal Replicator P-VOL is connected, and also in the node or virtual server that will use the S-VOL. You can register the public key on the Add Public Key page of the Access Protocol Configuration window.
Configuring the CCI Environment
Logging in to the target nodes
Using the nasroot account via SSH, log in to both the node to which the Universal Replicator P-VOL is connected and the node that will use the S-VOL. (For information about logging in, see the documentation for your SSH communication software.)
Setting the environment of instance numbers to be used
If you use the instance numbers allocated by default, the usage environment is already configured, and no operation is required here.
If you use additional instance numbers, you must configure the environment for those instance numbers by using the horcsetconf command.
$ sudo horcsetconf -i instance-number
Example 2-46 Configuring the usage environment for additional instance numbers
You can check the instance numbers that are already set by using the horcconflist command. Use this command to find instance numbers that are not in use.
$ sudo horcconflist
instance  node number or virtual server name
      16  node 0(D000000000), node 1(D000000001)
      17  node 0(D000000000), node 1(D000000001)
     499  node 0(D000000000), node 1(D000000001)
Example 2-47 Confirming already set instance numbers
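If you script the selection of an additional instance number, the numbers already in use can be parsed from the horcconflist output shown in Example 2-47. The following Python sketch shows one way to do this; the helper functions are our own illustration, not HNAS F commands, and they assume the output format shown above.

```python
def used_instances(horcconflist_output):
    """Collect instance numbers from horcconflist output.

    Assumes the format shown in Example 2-47: each data line starts
    with the instance number, followed by the node names.
    """
    numbers = []
    for line in horcconflist_output.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            numbers.append(int(fields[0]))
    return numbers

def first_free_instance(used, low=20, high=499):
    """Return the first additional instance number (20 to 499) not in use."""
    for candidate in range(low, high + 1):
        if candidate not in used:
            return candidate
    return None

output = """instance  node number or virtual server name
      16  node 0(D000000000), node 1(D000000001)
      17  node 0(D000000000), node 1(D000000001)
     499  node 0(D000000000), node 1(D000000001)"""
print(first_free_instance(used_instances(output)))  # 20
```

The default instance numbers 16 and 17 fall outside the 20 to 499 range for additional instances, so they never block the search.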
If necessary, you can delete the usage environment of additional instance numbers by using the horcunsetconf command. Note that the usage environment of the instance numbers allocated by default cannot be deleted.
$ sudo horcunsetconf -i instance-number
Example 2-48 Deleting the usage environment of additional instance numbers
Configuring the CCI Configuration Definition Files
To control a Universal Replicator pair using CCI, you must first define the Universal Replicator pair in the CCI configuration definition file.
A Command Control Interface configuration definition file template is made available after the installation of HNAS F.
The template of the Command Control Interface configuration definition file is:
/home/nasroot/horcm<instance-number>.conf
For nodes, <instance-number> is 16 or 17. Additional instance numbers are from 20 to 499.
Use the CCI horcconfedit and mkconf.sh commands to add the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST sections to the template, and then finish editing the CCI configuration definition file.
You must perform these operations on both nodes at the main site and on both nodes at the remote site. You therefore need to prepare a total of four CCI configuration files, assuming one instance per node. For two instances, you will need a total of eight configuration files.
In an HNAS F system, you can define one or two CCI instances per node. To operate only Universal Replicator pairs in CCI, you need one instance. To operate pairs in which Universal Replicator and ShadowImage are cascaded, you need two instances.
In the following section, we explain how to create a CCI configuration definition file using the HNAS F system with LUs as an example (see Figure 2-12 Example of Configuration of Pair LU on page 2-101).
Figure 2-12 Example of Configuration of Pair LU
Defining configuration definition files by using the CCI mkconf.sh command
Use the CCI mkconf.sh command to define the HORCM_MON, HORCM_CMD, HORCM_DEV, and HORCM_INST sections in the CCI configuration definition file template.
$ ls /dev/sdu*u | sudo mkconf.sh -gg device-group-name -i 16
Example 2-49 Defining configuration definition files (for instance number 16)
Note: Be sure to specify the -gg option for the mkconf.sh command. Specifying the -gg option assigns LU numbers allocated to host groups to pairs when the pairs are defined. If the -gg option is not specified, data might be copied to an LU other than the intended one, because the pair cannot be defined by using an LU number allocated to the host group. In addition, do not specify the -a option, because the command device path of the HORCM_CMD section is defined by the mkconf.sh command. For details, see the Hitachi Command Control Interface (CCI) User and Reference Guide.
Before executing the mkconf.sh command, you must create the file system to be paired. The LU configuration (size and number) of the file system to be paired needs to be exactly the same at both the primary site and the secondary site.
To create a correct CCI configuration definition file, we recommend that you create a file system to be temporarily paired before executing the
mkconf.sh command. After creating the CCI configuration definition file by using the mkconf.sh command, you can continue using the file system at the primary site. Alternatively, you can delete it and create a file system with the same configuration using the same LUs, as described in Create File System in Universal Replicator P-VOL on page 2-112. You must delete the temporarily created file system at the secondary site before creating a pair.
Example 2-50 Example of mkconf.sh Execution Result (for Instance Number 16) on page 2-102 shows an example of how to create a template for a CCI configuration definition file. We will explain how to create a CCI configuration definition file in node0 at the primary site in the sample configuration above. You must create CCI configuration definition files for the other nodes by using the same procedure.
$ ls /dev/sdu*u | sudo mkconf.sh -gg VG -i 16
starting HORCM inst 16
HORCM inst 16 starts successfully.
HORCM Shutdown inst 16 !!!
A CONFIG file was successfully completed.
starting HORCM inst 16
HORCM inst 16 starts successfully.
DEVICE_FILE   Group  PairVol  PORT     TARG  LUN  M  SERIAL  LDEV
/dev/sdu00u   VG     VG_000   CL1-A-1  0     17   -  62486   70
/dev/sdu01u   VG     VG_001   CL1-A-1  1     18   -  62486   18
 :
/dev/sdu10u   VG     VG_010   CL1-A-1  0     10   -  62486   10
/dev/sdu11u   VG     VG_011   CL1-A-1  0     11   -  62486   64
/dev/sdu12u   VG     VG_012   CL1-A-1  0     12   -  62486   12
/dev/sdu13u   VG     VG_013   CL1-A-1  0     13   -  62486   66
/dev/sdu14u   VG     VG_014   CL1-A-1  0     14   -  62486   14
/dev/sdu15u   VG     VG_015   CL1-A-1  0     15   -  62486   68
/dev/sdu16u   VG     VG_016   CL1-A-1  0     16   -  62486   16
HORCM Shutdown inst 16 !!!
Please check '/home/nasroot/horcm16.conf', '/home/nasroot/log16/curlog/horcm_*.log', and modify 'ip_address & service'.
#
Example 2-50 Example of mkconf.sh Execution Result (for Instance Number 16)
Then, use the horcconfedit command to change the HORCM_CMD definition in the CCI configuration definition file to a format that does not depend on device file changes (\\.\CMD-<serial-number>:/dev/sd).
$ sudo horcconfedit horcm16.conf
Example 2-51 Changing the HORCM_CMD definition in the configuration definition file (for instance number 16)
Editing the CCI configuration definition file
Table 2-15 shows the values specified for the items included in the CCI configuration definition file in the HNAS F system.
Table 2-15 Configuration Definition File Settings (HORCM_MON) and Specified Values in HNAS F system
Section Name Item Specified Values in HNAS F
HORCM_MON ip_address Fixed IP address of the local node.
service Specify one of the following:
• 20331 (for a node, if the instance number is 16)
• 20332 (for a node, if the instance number is 17)
• 31032 to 31254 or 31532 to 31754 (for a virtual server: instance-number + 30000)
• 30020 to 30499 (common: instance-number + 30000)
Note: A host name can be specified instead of a fixed IP address if the IP address and the corresponding host name are registered in /etc/hosts, on an NIS server, or on a DNS server. You can edit /etc/hosts on the Edit System File page of the Network & System Configuration window. You can set the NIS server or DNS server information on the DNS, NIS, LDAP Setup page of the Network & System Configuration window.
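The service values in Table 2-15 follow a simple rule: the default node instances use the fixed ports 20331 and 20332, while additional instances and virtual-server instances use instance-number + 30000. A minimal Python sketch of that rule follows; the function is a hypothetical helper of our own, not part of HNAS F, and the virtual-server instance numbers are inferred from the port ranges in the table.

```python
def horcm_service_port(instance_number, virtual_server=False):
    """Return the HORCM_MON service port for a CCI instance, per Table 2-15."""
    if not virtual_server:
        # Default instance numbers on a node use fixed ports.
        if instance_number == 16:
            return 20331
        if instance_number == 17:
            return 20332
    # Additional instances (20 to 499) and virtual-server instances:
    # service port = instance number + 30000.
    return instance_number + 30000

print(horcm_service_port(16))   # 20331
print(horcm_service_port(20))   # 30020
```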
Based on Table 2-15, change each ip_address entry in HORCM_MON to an appropriate value.
Change the poll and timeout entries to appropriate values according to your hardware requirements.
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.78.51    20331    1000        3000
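As the column headers poll(10ms) and timeout(10ms) indicate, these values are expressed in 10 ms units. A quick sketch (our own helper, not an HNAS F command) of what the example values mean in seconds:

```python
def horcm_10ms_to_seconds(value):
    """Convert a HORCM_MON poll/timeout value (in 10 ms units) to seconds."""
    return value * 10 / 1000.0

print(horcm_10ms_to_seconds(1000))  # poll 1000 -> 10.0 (seconds)
print(horcm_10ms_to_seconds(3000))  # timeout 3000 -> 30.0 (seconds)
```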
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu00u SER = 62486 LDEV = 70 [ FIBRE FCTBL = 3 ]
VG          VG_000    CL1-A-1  0         0
# /dev/sdu01u SER = 62486 LDEV = 18 [ FIBRE FCTBL = 3 ]
VG          VG_001    CL1-A-1  0         1
 :          :
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG          VG_013    CL1-A-1  0         13
# /dev/sdu14u SER = 62486 LDEV = 14 [ FIBRE FCTBL = 3 ]
VG          VG_014    CL1-A-1  0         14
# /dev/sdu15u SER = 62486 LDEV = 68 [ FIBRE FCTBL = 3 ]
VG          VG_015    CL1-A-1  0         15
# /dev/sdu16u SER = 62486 LDEV = 16 [ FIBRE FCTBL = 3 ]
VG          VG_016    CL1-A-1  0         16
HORCM_INST
#dev_group  ip_address  service
VG          127.0.0.1   52323
Example 2-52 Example of CCI Configuration Definition File - 1 (for Instance Number 16)
Next, in the HORCM_DEV section, delete the unnecessary LU entries (lines), leaving only the LU entries (lines) that you want to control using CCI (see Example 2-52 Example of CCI Configuration Definition File - 1 (for Instance Number 16) on page 2-104).
You can check the LUs that constitute a file system, and their LDEV numbers, by using the following command:
$ sudo horcdevlist | grep ':file-system-name$'
For example, in Example 2-53 Example of the Command Execution which Lists LUs which Constitute the File System Sample on page 2-104, 11, 12, and 13 are the LUs that constitute the file system sample, and 64, 12, and 66 are their LDEV numbers.
$ sudo horcdevlist | grep ':sample$'
11  62486  64  OPEN-V  3.906GB  --  --  -  Normal  File:sample
12  62486  12  OPEN-V  3.906GB  --  --  -  Normal  File:sample
13  62486  66  OPEN-V  3.906GB  --  --  -  Normal  File:sample
Example 2-53 Example of the Command Execution which Lists LUs which Constitute the File System Sample
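When scripting checks like the one above, the device file number and LDEV number can be read from the first and third columns of each horcdevlist line. The following Python sketch is our own illustration and assumes the column layout shown in Example 2-53; verify it against the horcdevlist output on your own system.

```python
def parse_horcdevlist_line(line):
    """Return (device_file_number, ldev_number) from one horcdevlist line.

    Assumes the layout shown in Example 2-53: the first column is the
    device file number, the second the serial number, and the third the
    LDEV number.
    """
    fields = line.split()
    return int(fields[0]), int(fields[2])

line = "11  62486  64  OPEN-V  3.906GB  --  --  -  Normal  File:sample"
print(parse_horcdevlist_line(line))  # (11, 64)
```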
Change the device group name and the device name of the LU entries (lines) that you want to control using Command Control Interface to appropriate names. When changing the device group name and the device name, observe the following:
• The same device group name and the same device name must be specified for the volumes to be paired at the primary site and at the secondary site.
• The same device group name must be specified for the LUs that constitute one file system. If the file snapshot functionality manages the file system, the same device group name must be specified for the LUs that constitute the file system and for the LUs that constitute its differential-data storage devices.
• For a tiered file system, all of the LUs used to configure the tiers (including differential-data storage devices) must be assigned the same device group name.
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.78.51    20331    1000        3000
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG          VG_011    CL1-A-1  0         11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG          VG_012    CL1-A-1  0         12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG          VG_013    CL1-A-1  0         13
HORCM_INST
#dev_group  ip_address  service
VG          127.0.0.1   52323
Example 2-54 Example of CCI Configuration Definition File - 2 (for Instance Number 16)
For one LU, two lines of HORCM_DEV section information are output: the comment line starting with a # and the definition line right below it. Use the /dev/sdu**u (where ** is the LU number) and LDEV = ** (where ** is the LDEV number) text in the comment lines to identify the necessary entries.
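The identification described above can also be scripted. The following Python sketch (our own illustration, not an HNAS F tool) extracts the LU number and LDEV number from a HORCM_DEV comment line, assuming the comment format shown in the examples above:

```python
import re

# Matches comment lines of the form shown above, for example:
# "# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]"
COMMENT = re.compile(r"#\s*/dev/sdu(\d+)u\s+SER\s*=\s*\d+\s+LDEV\s*=\s*(\d+)")

def parse_dev_comment(line):
    """Return (lu_number, ldev_number) from a HORCM_DEV comment line, or None."""
    match = COMMENT.match(line)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_dev_comment("# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]"))  # (11, 64)
```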
Next, in the HORCM_INST section, specify the IP address of the instance to be paired at the secondary site. When you operate CCI on a node, in preparation for failover, specify the IP addresses of the instances on both nodes.
Table 2-16 Configuration Definition File Settings (HORCM_INST) and Specified Values in the HNAS F system
Section Name Item Specified Values in HNAS F system
HORCM_INST ip_address Fixed IP address or virtual IP address of the node at the secondary site of Universal Replicator
service Specify one of the following:
• 20331 (for a node, if the instance number is 16)
• 20332 (if the instance number is 17)
• 31032 to 31254 or 31532 to 31754 (for a virtual server: instance-number + 30000)
• 30020 to 30499 (common: instance-number + 30000)
Note: A host name can be specified instead of a fixed IP address if the IP address and the corresponding host name are registered in /etc/hosts, on an NIS server, or on a DNS server. You can edit /etc/hosts on the Edit System File page of the Network & System Configuration window. You can set the NIS server or DNS server information on the DNS, NIS, LDAP Setup page of the Network & System Configuration window.
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.78.51    20331    1000        3000
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG          VG_011    CL1-A-1  0         11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG          VG_012    CL1-A-1  0         12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG          VG_013    CL1-A-1  0         13
HORCM_INST
#dev_group  ip_address     service
VG          123.45.80.51   20331
VG          123.45.80.115  20331
Example 2-55 Example of CCI Configuration Definition File - 3 (for Instance Number 16)
Checking the contents of the CCI configuration definition file
By combining the following commands, you can check whether an appropriate LU is specified in the HORCM_DEV section of the CCI configuration definition file.
First, start CCI on both the node or virtual server to which the Universal Replicator P-VOL is connected and the node or virtual server that will use the S-VOL.
$ sudo horcsetenv HORCMINST 16   (for instance number 16)
or
$ sudo horcsetenv HORCMINST 17   (for instance number 17)
$ sudo horcunsetenv HORCC_MRCF
If you logged in using SSH and performed the above setup, apply the setup by logging out and then logging back in.
$ sudo horcmstart.sh
Example 2-56 Procedure for Starting CCI
By issuing the pairdisplay command on the node or virtual server to which the Universal Replicator P-VOL is connected, or on which the S-VOL will be used, you can see the LDEV numbers of the LUs specified in the HORCM_DEV section.
$ sudo pairdisplay -g device-group-name
Example 2-57 How to Check LDEV Numbers of LUs Specified in the HORCM_DEV Section
You can check the device file numbers and the LDEV numbers of the device files that constitute a file system by issuing the horcdevlist command on the node or virtual server to which the P-VOL is connected. Compare the result with Example 2-57 How to Check LDEV Numbers of LUs Specified in the HORCM_DEV Section on page 2-106.
$ sudo horcdevlist | grep ':sample$'
11  62486  64  OPEN-V  3.906GB  --  --  -  Normal  File:sample
12  62486  12  OPEN-V  3.906GB  --  --  -  Normal  File:sample
13  62486  66  OPEN-V  3.906GB  --  --  -  Normal  File:sample
Example 2-58 How to Check Device File Numbers and LDEV Numbers of P-VOL
You can also check the device file numbers and the LDEV numbers of unused device files that can become S-VOLs by issuing the horcdevlist command on the node or virtual server that will use the S-VOL. Compare the result with Example 2-57 How to Check LDEV Numbers of LUs Specified in the HORCM_DEV Section on page 2-106.
$ sudo horcdevlist | grep ' Free$'
21  62486  74  OPEN-V  3.906GB  --  --  -  Normal  Free
22  62486  22  OPEN-V  3.906GB  --  --  -  Normal  Free
23  62486  76  OPEN-V  3.906GB  --  --  -  Normal  Free
Example 2-59 How to Check Device File Numbers and LDEV Numbers for Unused Device Files Which Can Be S-VOLs
When specifying port names in the HORCM_DEV section of the CCI configuration definition file, use the name of the storage system Fibre Channel port that connects to the node.
After the CCI configuration definition files have been validated, stop CCI on the nodes or virtual servers connected to the Universal Replicator P-VOL and S-VOL.
$ sudo horcmshutdown.sh 16
Example 2-60 Stopping CCI (for Instance Number 16)
Save the definition information to preserve the configured CCI configuration definition files. For details about saving definition information, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
An example of the configuration definition file for a tiered system
An example of the configuration definition file for a tiered system is provided below.
Figure 2-13 Example of a Tiered File System
$ sudo horcdevlist | grep ':sample$'
10  62486  10  OPEN-V  3.906GB  --  --  -  Normal  Tier1,File:sample
11  62486  64  OPEN-V  3.906GB  --  --  -  Normal  Tier1,File:sample
12  62486  12  OPEN-V  3.906GB  --  --  -  Normal  Tier2,File:sample
13  62486  66  OPEN-V  3.906GB  --  --  -  Normal  Tier2,File:sample
Example 2-61 How To Check the Device File Number and the LU Number of a P-VOL
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.78.51    20331    1000        3000
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu10u SER = 62486 LDEV = 10 [ FIBRE FCTBL = 3 ]
VG_TC       VG_010    CL1-A-1  0         10
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG_TC       VG_011    CL1-A-1  0         11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG_TC       VG_012    CL1-A-1  0         12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG_TC       VG_013    CL1-A-1  0         13
HORCM_INST
#dev_group  ip_address     service
VG_TC       123.45.80.51   20331
VG_TC       123.45.80.115  20331
Example 2-62 Example of the CCI Configuration Definition File (for P-VOLs)
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.78.115   20331    1000        3000
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu10u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_TC       VG_010    CL1-A-1  0         20
# /dev/sdu11u SER = 62486 LDEV = 74 [ FIBRE FCTBL = 3 ]
VG_TC       VG_011    CL1-A-1  0         21
# /dev/sdu12u SER = 62486 LDEV = 22 [ FIBRE FCTBL = 3 ]
VG_TC       VG_012    CL1-A-1  0         22
# /dev/sdu13u SER = 62486 LDEV = 76 [ FIBRE FCTBL = 3 ]
VG_TC       VG_013    CL1-A-1  0         23
HORCM_INST
#dev_group  ip_address     service
VG_TC       123.45.80.115  20331
VG_TC       123.45.80.51   20331
Example 2-63 Example of a CCI Configuration Definition File (for S-VOLs)
Cascade Configuration of Universal Replicator and ShadowImage
In the HNAS F system, you can use a Universal Replicator pair and a ShadowImage pair in a cascade configuration. A cascade configuration lets you prepare for a disaster scenario in which, for example, the file system cannot be recovered from the Universal Replicator S-VOL, by periodically using ShadowImage to back up the file system that is copied from the Universal Replicator P-VOL to the S-VOL.
Figure 2-14 Example of Cascade Configuration of Universal Replicator and ShadowImage on page 2-110 shows an example of a cascade configuration of Universal Replicator and ShadowImage.
Figure 2-14 Example of Cascade Configuration of Universal Replicator and ShadowImage
By preparing the CCI configuration definition files as shown in Example 2-64 Example of CCI Configuration Definition File at Primary Site (Instance 16) on page 2-111, Example 2-65 Example of CCI Configuration Definition File at Secondary Site (Instance 16) on page 2-111, and Example 2-66 Example of CCI Configuration Definition File at Secondary Site (Instance 17) on page 2-112, you can operate the cascade configuration of Universal Replicator and ShadowImage from CCI.
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.78.51    20331    1000        3000
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62486)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu10u SER = 62486 LDEV = 10 [ FIBRE FCTBL = 3 ]
VG_TC       VG_032    CL1-A-1  0         10
# /dev/sdu11u SER = 62486 LDEV = 64 [ FIBRE FCTBL = 3 ]
VG_TC       VG_033    CL1-A-1  0         11
# /dev/sdu12u SER = 62486 LDEV = 12 [ FIBRE FCTBL = 3 ]
VG_TC       VG_034    CL1-A-1  0         12
# /dev/sdu13u SER = 62486 LDEV = 66 [ FIBRE FCTBL = 3 ]
VG_TC       VG_035    CL1-A-1  0         13
HORCM_INST
#dev_group  ip_address     service
VG_TC       123.45.80.51   20331
VG_TC       123.45.80.115  20331
Example 2-64 Example of CCI Configuration Definition File at Primary Site(Instance 16)
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.80.51    20331    1000        3000
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62490)
/dev/sdf
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu10u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_TC       VG_032    CL1-A-1  0         20
# /dev/sdu11u SER = 62486 LDEV = 74 [ FIBRE FCTBL = 3 ]
VG_TC       VG_033    CL1-A-1  0         21
# /dev/sdu12u SER = 62486 LDEV = 22 [ FIBRE FCTBL = 3 ]
VG_TC       VG_034    CL1-A-1  0         22
# /dev/sdu13u SER = 62486 LDEV = 76 [ FIBRE FCTBL = 3 ]
VG_TC       VG_035    CL1-A-1  0         23
# /dev/sdu10u SER = 62486 LDEV = 20 [ FIBRE FCTBL = 3 ]
VG_SI       VG_014    CL1-A-1  0         20
# /dev/sdu11u SER = 62486 LDEV = 74 [ FIBRE FCTBL = 3 ]
VG_SI       VG_015    CL1-A-1  0         21
# /dev/sdu12u SER = 62486 LDEV = 22 [ FIBRE FCTBL = 3 ]
VG_SI       VG_016    CL1-A-1  0         22
# /dev/sdu13u SER = 62486 LDEV = 76 [ FIBRE FCTBL = 3 ]
VG_SI       VG_017    CL1-A-1  0         23
HORCM_INST
#dev_group  ip_address     service
VG_TC       123.45.78.51   20331
VG_TC       123.45.78.115  20331
VG_SI       123.45.80.51   20332
VG_SI       123.45.80.115  20332
Example 2-65 Example of CCI Configuration Definition File at SecondarySite (Instance 16)
HORCM_MON
#ip_address     service  poll(10ms)  timeout(10ms)
123.45.80.51    20332    1000        3000
HORCM_CMD
#dev_name                  dev_name  dev_name
#UnitID 0 (Serial# 62490)
\\.\CMD-62486:/dev/sd
HORCM_DEV
#dev_group  dev_name  port#    TargetID  LU#  MU#
# /dev/sdu30u SER = 62486 LDEV = 30 [ FIBRE FCTBL = 3 ]
VG_SI       VG_014    CL1-A-1  0         30
# /dev/sdu31u SER = 62486 LDEV = 84 [ FIBRE FCTBL = 3 ]
VG_SI       VG_015    CL1-A-1  0         31
# /dev/sdu32u SER = 62486 LDEV = 41 [ FIBRE FCTBL = 3 ]
VG_SI       VG_016    CL1-A-1  0         32
# /dev/sdu33u SER = 62486 LDEV = 86 [ FIBRE FCTBL = 3 ]
VG_SI       VG_017    CL1-A-1  0         33
HORCM_INST
#dev_group  ip_address     service
VG_SI       123.45.80.51   20331
VG_SI       123.45.80.115  20331
Example 2-66 Example of CCI Configuration Definition File at SecondarySite (Instance 17)
Setting the CCI User Environment Variables
In the following procedure, the environment variables HORCMINST and HORCC_MRCF are set according to the system configuration. This setup is performed on both the node or virtual server to which the Universal Replicator P-VOL is connected and the node or virtual server that will use the S-VOL. When you operate CCI on nodes, you must perform these operations on both nodes at the main site and on both nodes at the remote site.
1. Set up the environment variable for the Command Control Interface instance:
sudo horcsetenv HORCMINST 16   (for instance number 16)
or
sudo horcsetenv HORCMINST 17   (for instance number 17)
2. Set up the HOMRCF command environment variable of the Command Control Interface for Universal Replicator:
sudo horcunsetenv HORCC_MRCF
3. If you have logged in using SSH and performed the setup explained in steps 1 and 2, apply the setup by logging out and then logging back in:
exit
ssh nasroot@fixed-IP-address-or-service-IP-address
Enter the command shown in the following example to check the result of setting up the environment variables:
$ sudo horcprintenv
The following table shows the values of the environment variables that are set immediately after installation of HNAS F.
Table 2-17 Values of Environment Variables Immediately After Installation of HNAS F
Environment Variables Value
HORCMINST 16 for node
HORCC_MRCF None
Create File System in Universal Replicator P-VOL
Create a file system in the Universal Replicator P-VOL by using the Create New File System window of File Services Manager or by using the fscreate command.
If you create and split a Universal Replicator volume pair without creating a file system in the Universal Replicator P-VOL, you cannot access the Universal Replicator S-VOL at the secondary site.
Overview of Universal Replicator Operations
This section provides an overview of the operations, the CCI commands, and the commands provided by HNAS F for the Universal Replicator operations of volume replication and disaster recovery.
We describe only the arguments of the CCI commands that are required for the basic Universal Replicator operations. For other arguments, see the Hitachi Command Control Interface User and Reference Guide. For the commands provided by the HNAS F products, see Commands that HNAS F provides on page 2-132.
For the node operation examples in this section, instance numbers 16 and 17 are used. If you are using additional instance numbers, replace 16 and 17 in the examples with those numbers.
Volume Replication
This section describes the procedure for volume replication. Figure 2-15 Overview of Volume Replication on page 2-114 shows an overview of volume replication. When the pair is in the PSUS status, the S-VOL can be accessed at the secondary site.
• For information about how to create volume pairs, see Starting Universal Replicator Operations and Creating a Universal Replicator Pair on page 2-114.
• For information about how to split volume pairs, see Splitting a Universal Replicator Volume Pair on page 2-115.
• For information about how to resynchronize volume pairs, see Resynchronizing a Universal Replicator Volume Pair on page 2-119.
• For information about how to delete volume pairs, see Deleting a Universal Replicator Volume Pair on page 2-121.
Figure 2-15 Overview of Volume Replication
Starting Universal Replicator Operations and Creating a Universal Replicator Pair
If the Universal Replicator S-VOL contains a file system, you must delete the file system by using the fsdelete command in File Services Manager before starting Universal Replicator operations. If a differential-data storage device of the file snapshot functionality is in the Universal Replicator S-VOL, you must release it by using the syncstop command before starting Universal Replicator operations.
To start Universal Replicator operations and create a Universal Replicator volume pair:
1. At the primary site and the secondary site, start CCI.
sudo horcmstart.sh   (1-instance configuration)
or
sudo horcmstart.sh 16 17   (2-instance configuration)
2. At the secondary site, reserve the device files to be used for the target file system.
sudo horcvmdefine -d device-file-number,...
3. At the primary site, create the Universal Replicator volume pair.
sudo paircreate {-g group-name|-d volume-name} -f async -vl -jp master-journal-group-ID -js restore-journal-group-ID
4. At the primary site, check the completion of Universal Replicator volumepair creation.sudo pairvolchk {-g group-name|-d volume-name}Note:
You can also use the pairevtwait command, which waits for thevolumes to be paired.
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Creating
b. pairvolchk: Volstat is P-VOL.[status = PAIR] => Created
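The completion check in step 4 can be automated by polling pairvolchk until the status token reaches PAIR. The following is a minimal sketch, assuming the message format shown above (Volstat is P-VOL.[status = ...]); the group name and polling interval are hypothetical placeholders, and pairevtwait remains the simpler built-in alternative.

```shell
# Extract the status token (COPY, PAIR, PSUS, ...) from a pairvolchk message.
pair_status() {
  sed -n 's/.*\[status = \([A-Za-z-]*\)\].*/\1/p'
}

# Poll until the pair reaches the expected status. Assumes pairvolchk prints
# the message format shown in this guide; group name and interval are
# site-specific placeholders.
wait_for_status() {
  group=$1 expected=$2
  while :; do
    st=$(sudo pairvolchk -g "$group" 2>&1 | pair_status)
    [ "$st" = "$expected" ] && break
    sleep 30
  done
}
# Example (not run here): wait_for_status UR_group PAIR
```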
Splitting a Universal Replicator Volume Pair
The method for splitting a Universal Replicator volume pair depends on whether you perform an offline backup or an online backup. For an offline backup, the P-VOL is unmounted to split the pair. For an online backup, instead of unmounting the P-VOL, updates to the file system are temporarily stopped to split the pair.
During an offline backup, a Universal Replicator volume pair is split after access from clients has been completely stopped by deleting the NFS/CIFS shares. Because an I/O error is reported to the application if the NFS/CIFS shares are deleted while the application is writing data to the P-VOL, or if the application tries to write data to the P-VOL after the NFS/CIFS shares have been deleted, the application can determine which of its data is reflected in the Universal Replicator volume pair. Offline backup is therefore applicable to most applications.
During an online backup, however, a Universal Replicator volume pair is split without deleting the NFS/CIFS shares. Because no I/O error is reported to the application when it writes data to the P-VOL, the application cannot determine up to which point its data is reflected in the S-VOL. For this reason, online backup is applicable only to applications that can identify where data was updated, for example by using a journal file.
To split the Universal Replicator volume pair during offline backup:
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not published in the shared file system even though the file system is operated by the file snapshot functionality
1. At the primary site, stop the program that accesses the P-VOL, and unmount the NFS shares at the client side.
2. At the primary site, delete the NFS/CIFS shares in the P-VOL using the nfsdelete command and the cifsdelete command in File Services Manager. Unmount the P-VOL using the fsumount command.
3. At the primary site, prevent the file snapshot functionality from performing operations on the P-VOL:
sudo horcfreeze -f source-file-system-name
4. At the primary site, split the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
5. At the primary site, verify that the Universal Replicator volume pair has been split.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split completes
6. At the primary site, enable the operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
7. At the primary site, mount the P-VOL using the fsmount command in File Services Manager, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
8. At the primary site, mount the NFS shares of the P-VOL at the client side, and restart the program that accesses the P-VOL.
9. At the secondary site, connect the target file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
10. At the secondary site, mount the S-VOL using the fsmount command. Create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
Note:
When the S-VOL is the file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
11. At the secondary site, start the program that accesses the S-VOL.
When the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system
1. At the primary site, stop the program that accesses the P-VOL, and unmount the NFS shares at the client side.
2. At the primary site, delete the NFS/CIFS shares in the P-VOL using the nfsdelete command and the cifsdelete command in File Services Manager. Unmount the P-VOL using the fsumount command.
3. At the primary site, prevent the file snapshot functionality from performing operations on the P-VOL:
sudo horcfreeze -f source-file-system-name
4. At the primary site, split the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
5. At the primary site, verify that the Universal Replicator volume pair has been split.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split
6. At the primary site, enable the operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
7. At the primary site, mount the P-VOL using the fsmount command in File Services Manager, create the NFS/CIFS shares using the nfscreate command and the cifscreate command, and then mount the differential-data snapshots using the syncmount command.
8. At the primary site, mount the NFS shares of the P-VOL at the client side, and restart the program that accesses the P-VOL.
9. At the secondary site, connect the target file system to the node or virtual server.
For a non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
10. At the secondary site, mount the S-VOL using the fsmount command. Create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
11. At the secondary site, mount the differential-data snapshot using the syncmount command, and make the NFS/CIFS shares public.
12. At the secondary site, start the program that accesses the S-VOL.
To split the Universal Replicator volume pair during online backup:
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not published in the shared file system even though the file system is operated by the file snapshot functionality
1. At the primary site, prevent the file snapshot functionality from performing operations on the P-VOL, stop access from clients, and write the unreflected data to the disk.
sudo horcfreeze -f source-file-system-name
2. At the primary site, split the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
3. At the primary site, verify that the Universal Replicator volume pair has been split.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split
4. At the primary site, permit access from clients to the P-VOL, and enable the operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
5. At the secondary site, connect the target file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
6. At the secondary site, mount the S-VOL using the fsmount command.
Note:
When the S-VOL is the file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
7. At the secondary site, start the program that accesses the S-VOL.
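Compared with the offline procedure, an online-backup split keeps the shares up, so the quiesce window at the primary site shrinks to freeze, split, wait, unfreeze. A hypothetical dry-run sketch of that sequence follows: RUN=echo prints each command instead of executing it, and fs, grp, and the timeout value are placeholders.

```shell
RUN=${RUN:-echo}   # default: print commands only; set RUN= (empty) to execute

# Dry-run sketch of the primary-site sequence for an online-backup split.
# fs and grp are hypothetical placeholder names.
split_online_primary() {
  fs=$1 grp=$2
  $RUN sudo horcfreeze -f "$fs"                     # step 1: flush and quiesce
  $RUN sudo pairsplit -g "$grp" -rw                 # step 2: split the pair
  $RUN sudo pairevtwait -g "$grp" -s psus -t 3600   # step 3: wait for PSUS
  $RUN sudo horcunfreeze -f "$fs"                   # step 4: resume client access
}
```

Keeping the freeze window this short is the reason online backup is usable at all with applications that tolerate a brief pause in writes.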
When the file system is operated by the file snapshot functionality and the differential-data snapshot is published in the shared file system
1. At the primary site, prevent the file snapshot functionality from performing operations on the P-VOL:
sudo horcfreeze -f source-file-system-name
2. At the primary site, split the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
3. At the primary site, verify that the Universal Replicator volume pair has been split.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the pair to be split (PSUS).
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Splitting
b. pairvolchk: Volstat is P-VOL.[status = PSUS] => Split
4. At the primary site, enable the operations from the file snapshot functionality on the P-VOL.
sudo horcunfreeze -f source-file-system-name
5. At the secondary site, connect the target file system to the node or virtual server.
For a non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
6. At the secondary site, mount the S-VOL using the fsmount command. Create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
7. At the secondary site, mount the differential-data snapshot using the syncmount command, and make the NFS/CIFS shares public.
8. At the secondary site, start the program that accesses the S-VOL.
Resynchronizing a Universal Replicator Volume Pair
You must separate the target file system using the horcexport command before resynchronizing the volume pair. However, even though the target file system has been separated, resynchronizing the volume pair is not the same as performing an initial copy.
Note:
When the S-VOL is the file system managed by the file snapshot functionality, unmount the differential-data snapshots before using the horcexport command.
To resynchronize the Universal Replicator volume pair:
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality
1. At the secondary site, terminate the program that accesses the S-VOL, and unmount the NFS shares at the client side.
2. At the secondary site, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and perform unmounting by using the fsumount command.
Note:
For a file snapshot functionality target file system, delete the NFS/CIFS shares for the differential-data snapshots by using the nfsdelete command and the cifsdelete command, and unmount the differential-data snapshots by using the syncumount command.
3. At the secondary site, delete the S-VOL file system by using the horcexport command.
sudo horcexport -f target-file-system-name
4. At the primary site, resume the Universal Replicator volume pair.
sudo pairresync {-g group-name|-d volume-name}
5. At the primary site, verify that the Universal Replicator volume pair has been resumed.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the volumes to be paired.
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Resuming
b. pairvolchk: Volstat is P-VOL.[status = PAIR] => Resuming completes
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
1. At the secondary site, terminate the program that accesses the S-VOL, and unmount the NFS shares at the client side.
2. At the secondary site, unmount the differential-data snapshot using the syncumount command.
Note:
When the differential-data snapshot is not only made public in the shared file system but NFS/CIFS shares have also been created for it, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
3. At the secondary site, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and perform unmounting by using the fsumount command.
4. At the secondary site, delete the file system of the secondary data volume using the horcexport command.
sudo horcexport -f target-file-system-name
5. At the primary site, resume the Universal Replicator volume pair.
sudo pairresync {-g group-name|-d volume-name}
6. At the primary site, verify that the Universal Replicator volume pair has been resumed.
sudo pairvolchk {-g group-name|-d volume-name}
Note:
You can also use the pairevtwait command, which waits for the volumes to be paired.
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Resuming
b. pairvolchk: Volstat is P-VOL.[status = PAIR] => Resuming completes
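The resynchronization steps above reduce to separating the target file system at the secondary site and then resuming the pair from the primary site. A hypothetical dry-run sketch: RUN=echo prints the commands, fs and grp are placeholder names, and the share-deletion arguments are omitted.

```shell
RUN=${RUN:-echo}   # default: print commands only; set RUN= (empty) to execute

# Dry-run sketch of a resynchronization: clean up and separate the target
# file system at the secondary site, then resume the pair from the primary.
resync_pair() {
  fs=$1 grp=$2
  $RUN sudo nfsdelete                               # secondary: drop shares
  $RUN sudo cifsdelete
  $RUN sudo fsumount "$fs"
  $RUN sudo horcexport -f "$fs"                     # secondary: separate the file system
  $RUN sudo pairresync -g "$grp"                    # primary: resume the pair
  $RUN sudo pairevtwait -g "$grp" -s pair -t 3600   # primary: wait for PAIR
}
```

The horcexport call must come before pairresync; resynchronizing while the target file system is still connected is what the introduction to this section warns against.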
Deleting a Universal Replicator Volume Pair
The procedure for completing Universal Replicator operations by deleting the Universal Replicator volume pair in the PSUS status differs depending on whether you will continue using the file system in the S-VOL or destroy it.
When you delete the Universal Replicator volume pair in a status other than PSUS, you cannot use that file system, because the consistency of the data in the S-VOL is not guaranteed.
To delete the Universal Replicator volume pair in the PSUS status, and to continue using the S-VOL file system:
1. At the primary site, delete the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
2. At the primary site, verify that the Universal Replicator volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk: Volstat is P-VOL.[status = SMPL] => Deleted
3. At the primary site and the secondary site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or
sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note:
If you have started the task of splitting the Universal Replicator volume pair via an offline backup but have not yet completed steps 1 through 11, complete steps 1 through 11 before proceeding with the above steps.
When using an online backup, you need to have completed steps 1 through 7 of splitting the Universal Replicator volume pair before proceeding with the above steps (see Splitting a Universal Replicator Volume Pair on page 2-115).
To delete the Universal Replicator volume pair in the PSUS status without using the S-VOL file system:
When the file system is not operated by the file snapshot functionality, or when the differential-data snapshot is not made public in the shared file system even though the file system is operated by the file snapshot functionality
1. At the secondary site, terminate the program which accesses the S-VOL.
2. At the secondary site, delete the NFS/CIFS shares using the nfsdelete command and the cifsdelete command, and unmount the S-VOL using the fsumount command.
Note:
When the S-VOL is the file system managed by the file snapshot functionality, delete the NFS/CIFS shares under the differential-data snapshots using the nfsdelete command and the cifsdelete command, unmount the differential-data snapshots using the syncumount command, and release the differential-data storage devices using the syncstop command.
3. At the secondary site, delete the S-VOL file system using the fsdelete command.
4. At the primary site, delete the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
5. At the primary site, verify that the Universal Replicator volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk: Volstat is P-VOL.[status = SMPL] => Deleted
6. At the primary site and the secondary site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or
sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note:
If you have started the task of splitting the Universal Replicator volume pair via an offline backup but have not yet completed steps 1 through 11, complete steps 1 through 11 before proceeding with the above steps. When using an online backup, you need to have completed steps 1 through 7 of splitting the Universal Replicator volume pair before proceeding with the above steps (see Splitting a Universal Replicator Volume Pair on page 2-115).
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
1. At the secondary site, terminate the program that accesses the S-VOL.
2. At the secondary site, unmount the differential-data snapshot using the syncumount command.
Note:
When the differential-data snapshot is not only made public in the shared file system but NFS/CIFS shares have also been created for it, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
3. At the secondary site, release the device that stores the differential data using the syncstop command.
4. At the secondary site, delete the NFS/CIFS shares by using the nfsdelete command and the cifsdelete command, and perform unmounting by using the fsumount command.
5. At the secondary site, delete the file system of the secondary data volume using the fsdelete command.
6. At the primary site, delete the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
7. At the primary site, verify that the Universal Replicator volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk: Volstat is P-VOL.[status = SMPL] => Deleted
8. At the primary site and the secondary site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or
sudo horcmshutdown.sh 16 17 (2-instance configuration)
Note:
If you have started the task of splitting the Universal Replicator volume pair via an offline backup but have not yet completed steps 1 through 11, complete steps 1 through 11 before proceeding with the above steps. When using an online backup, you need to have completed steps 1 through 7 of splitting the Universal Replicator volume pair before proceeding with the above steps (see Splitting a Universal Replicator Volume Pair on page 2-115).
To delete the Universal Replicator volume pair in a status other than PSUS:
1. At the primary site, delete the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
2. At the primary site, verify that the Universal Replicator volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
pairvolchk: Volstat is P-VOL.[status = COPY] => Deleting
pairvolchk: Volstat is P-VOL.[status = SMPL] => Deleted
3. At the secondary site, release the device file used in the target file system.
sudo horcvmdelete -d device-file-number,...
4. At the primary site and the secondary site, stop CCI.
sudo horcmshutdown.sh (1-instance configuration)
or
sudo horcmshutdown.sh 16 17 (2-instance configuration)
Disaster Recovery Operations
This section describes the procedures for disaster recovery operations.
Switching Operations to the Secondary Site
To recover the data from a disaster:
When the file system is not operated by the file snapshot functionality or when the differential-data snapshot is not made public in the shared file system
Note:
The response of the takeover may differ depending on the condition of the failed primary site.
1. Execute SVOL-Takeover using the horctakeover command.
sudo horctakeover {-g group-name|-d volume-name} -t timeout-period
2. Connect the target file system to the node or virtual server.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
3. Mount the S-VOL using the fsmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
Recover the file system when mounting the S-VOL.
Note:
When the S-VOL is the file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
4. Start the program which accesses the S-VOL.
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
Note:
The response of the takeover may differ depending on the condition of the failed primary site.
1. Execute SVOL-Takeover using the horctakeover command.
sudo horctakeover {-g group-name|-d volume-name} -t timeout-period
2. Connect the target file system to the node or virtual server.
For a non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
3. Mount the S-VOL using the fsmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
Recover the file system when mounting the S-VOL.
4. At the secondary site, mount the differential-data snapshot using the syncmount command, and make the NFS/CIFS shares public.
5. Start the program which accesses the S-VOL.
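The switchover above boils down to three actions at the secondary site: take over the S-VOL, connect the target file system, and bring shares back up. A hypothetical dry-run sketch: RUN=echo prints each command, and grp, fs, the device-file number, and the timeout are placeholders (share-creation arguments omitted).

```shell
RUN=${RUN:-echo}   # default: print commands only; set RUN= (empty) to execute

# Dry-run sketch of switching operations to the secondary site.
# grp, fs, and dev are hypothetical placeholder values.
takeover_secondary() {
  grp=$1 fs=$2 dev=$3
  $RUN sudo horctakeover -g "$grp" -t 3600      # step 1: SVOL-Takeover
  $RUN sudo horcimport -f "$fs" -d "$dev"       # step 2: connect the file system
  $RUN sudo fsmount "$fs"                       # step 3: mount the S-VOL
  $RUN sudo nfscreate                           #         and recreate shares
  $RUN sudo cifscreate
}
```

The import must follow the takeover: until horctakeover completes, the S-VOL is not writable at the secondary site.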
Transferring the Data Back to the Primary Site
To transfer the data back to the primary site:
When the file system is not operated by the file snapshot functionality or when the differential-data snapshot is not made public in the shared file system
1. At the primary site, set up the environment variable of the Command Control Interface instance:
sudo horcsetenv HORCMINST 16 (For instance number 16)
or
sudo horcsetenv HORCMINST 17 (For instance number 17)
2. At the primary site, set up the HOMRCF command environment variable of the Command Control Interface as Universal Replicator:
sudo horcunsetenv HORCC_MRCF
3. After you have logged in using SSH and performed the setup explained in steps 1 and 2, confirm the setup by logging out and logging in again.
exit
ssh nasroot@fixed-IP-address-or-service-IP-address
4. At the primary site and at the secondary site, start CCI.
sudo horcmstart.sh (1-instance configuration)
or
sudo horcmstart.sh 16 17 (2-instance configuration)
5. At the primary site, unmount the NFS shares in the old P-VOL from the client side.
6. At the primary site, delete the NFS/CIFS shares in the old P-VOL, unmount the old P-VOL, and separate the file system in the old P-VOL.
sudo horcexport -f source-file-system-name
Note:
When the old P-VOL is the file system managed by the file snapshot functionality, delete the NFS/CIFS shares under the differential-data snapshots, and unmount the differential-data snapshots, before separating the file system in the old P-VOL.
7. At the secondary site, delete the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
8. At the secondary site, verify that the Universal Replicator volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk: Volstat is P-VOL.[status = SMPL] => Deleted
9. At the secondary site, execute the following command to transfer the data from the secondary site to the primary site (create a Universal Replicator volume pair).
sudo paircreate {-g group-name|-d volume-name} -f async -vl -jp former-restore-journal-group-ID -js former-master-journal-group-ID
10. When the status changes to PAIR, stop operations at the secondary site.
11. At the secondary site, delete the NFS/CIFS shares in the old S-VOL using the nfsdelete command and the cifsdelete command, and unmount the old S-VOL using the fsumount command.
12. At the secondary site, prevent the file snapshot functionality from performing operations on the old S-VOL.
sudo horcfreeze -f source-file-system-name
13. At the secondary site, split the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
14. At the secondary site, enable the operations from the file snapshot functionality on the old S-VOL.
sudo horcunfreeze -f source-file-system-name
15. At the primary site, connect the target file system to the node.
For a non-LVM and non-tiered file system
sudo horcimport -f target-file-system-name -d device-file-number
For a non-LVM and tiered file system
sudo horcimport -f target-file-system-name --tier1 device-file-number --tier2 device-file-number
For an LVM and non-tiered file system
sudo horcvmimport -f target-file-system-name -d device-file-number,...
For an LVM and tiered file system
sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
16. At the secondary site, delete the file system in the old S-VOL using the horcexport command.
sudo horcexport -f target-file-system-name
Note:
When the old S-VOL is the file system managed by the file snapshot functionality, delete the NFS/CIFS shares under the differential-data snapshots, and unmount the differential-data snapshots, before deleting the file system in the old S-VOL.
17. At the primary site, enter the following command to perform a reverse resynchronization of data from the primary site to the secondary site.
sudo pairresync {-g group-name|-d volume-name} -swaps
18. At the primary site, mount the new P-VOL using the fsmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command.
Note:
When the new P-VOL is the file system managed by the file snapshot functionality, mount the differential-data snapshots using the syncmount command, and create the NFS/CIFS shares using the nfscreate command and the cifscreate command, if necessary.
19. Mount the NFS shares of the P-VOL at the client side.
20. At the primary site, resume operations.
When the file system is operated by the file snapshot functionality and the differential-data snapshot is made public in the shared file system
1. At the primary site, set up the environment variable of the Command Control Interface instance:
sudo horcsetenv HORCMINST 16 (For instance number 16)
or
sudo horcsetenv HORCMINST 17 (For instance number 17)
2. At the primary site, set up the HOMRCF command environment variable of the Command Control Interface as Universal Replicator:
sudo horcunsetenv HORCC_MRCF
3. After you have logged in using SSH and performed the setup explained in steps 1 and 2, confirm the setup by logging out and logging in again.
exit
ssh nasroot@fixed-IP-address-or-service-IP-address
4. At the primary site and at the secondary site, start CCI.
sudo horcmstart.sh (1-instance configuration)
or
sudo horcmstart.sh 16 17 (2-instance configuration)
5. At the primary site, unmount the NFS shares in the old P-VOL from the client side.
6. At the primary site, unmount the differential-data snapshot using the syncumount command.
Note:
When the differential-data snapshot is not only made public in the shared file system but NFS/CIFS shares have also been created for it, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
7. At the primary site, delete the NFS/CIFS shares of the old P-VOL, unmount it, and then separate the file system.
sudo horcexport -f source-file-system-name
8. At the secondary site, delete the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -S
9. At the secondary site, verify that the Universal Replicator volume pair has been deleted.
sudo pairvolchk {-g group-name|-d volume-name}
a. pairvolchk: Volstat is P-VOL.[status = COPY] => Deleting
b. pairvolchk: Volstat is P-VOL.[status = SMPL] => Deleted
10. At the secondary site, execute the following command to transfer the data from the secondary site to the primary site (create a Universal Replicator volume pair).
sudo paircreate {-g group-name|-d volume-name} -f async -vl -jp former-restore-journal-group-ID -js former-master-journal-group-ID
11. When the status changes to PAIR, stop operations at the secondary site.
12. Execute the syncumount command to unmount the differential-data snapshot from the node or virtual server connected to the old S-VOL at the secondary site.
Note:
When the differential-data snapshot is not only made public in the shared file system but NFS/CIFS shares have also been created for it, you must delete the NFS/CIFS shares for the differential-data snapshot using the nfsdelete and cifsdelete commands before unmounting the differential-data snapshot.
13. At the secondary site, delete the NFS/CIFS shares in the old S-VOL by using the nfsdelete command and the cifsdelete command, and unmount the old S-VOL by using the fsumount command.
14. At the secondary site, prevent the file snapshot functionality from performing operations on the old S-VOL.
sudo horcfreeze -f source-file-system-name
15. At the secondary site, split the Universal Replicator volume pair.
sudo pairsplit {-g group-name|-d volume-name} -rw
16. At the secondary site, enable operations from the file snapshot functionality on the old S-VOL.
sudo horcunfreeze -f source-file-system-name
17. At the primary site, connect the target file system to the node or virtual server.
For a non-tiered file system:
$ sudo horcvmimport -f target-file-system-name -d device-file-number,...
For a tiered file system:
$ sudo horcvmimport -f target-file-system-name --tier1 device-file-number,... --tier2 device-file-number,...
18. At the secondary site, separate the file system in the old S-VOL by using the horcexport command.
sudo horcexport -f target-file-system-name
19. At the primary site, execute the following command to swap the pair and resynchronize the data from the primary site to the secondary site.
sudo pairresync {-g group-name|-d volume-name} -swaps
20. At the primary site, mount the new P-VOL by using the fsmount command, and create the NFS/CIFS shares by using the nfscreate command and the cifscreate command.
21. From the node or virtual server connected to the new P-VOL at the primary site, execute the syncmount command to mount the differential-data snapshot and make file shares on the file system accessible.
22. Mount the NFS shares of the P-VOL on the client side.
23. Resume operation at the primary site.
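The site-specific pair operations in steps 15 and 19 above can be previewed before they are run for real. The following sketch only prints the commands; the helper name print_pair_swap_sequence and the group name URGRP01 are hypothetical examples, not names from this guide.

```shell
#!/bin/sh
# Illustrative sketch: print, without executing, the pair split (step 15)
# and swap-resync (step 19) commands for a given copy group so that an
# administrator can review them first. The group name is hypothetical.
print_pair_swap_sequence() {
  group="$1"
  # Step 15 (run at the secondary site): split the Universal Replicator pair.
  echo "sudo pairsplit -g ${group} -rw"
  # Step 19 (run at the primary site): swap the P-VOL/S-VOL roles and
  # resynchronize data from the primary site to the secondary site.
  echo "sudo pairresync -g ${group} -swaps"
}

print_pair_swap_sequence URGRP01
```

Printing the commands first is a simple guard against running a site-specific step at the wrong site.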
Notes on Failures of External Volumes
If a failure (including a temporary failure, such as a disconnected cable) occurs on an external volume that is currently used by a Universal Replicator volume pair, perform the following procedure.
1. If the external volume can no longer be used, delete the Universal Replicator volume pair by using the pairsplit -S command.
If the S-VOL is not connected to a node or virtual server, execute the horcvmdelete command to release the device files of the copy-target file system.
2. Check the status of every Universal Replicator volume pair that uses the failed external volume by using the pairdisplay command.
If the status of a Universal Replicator volume pair is PSUE, delete the volume pair by using the pairsplit -S command. When deleting a Universal Replicator volume pair, if the S-VOL is not connected to a node or virtual server, execute the horcvmdelete command to release the device files of the copy-target file system.
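Checking pair statuses in step 2 can be partly automated by filtering pairdisplay output for the PSUE status. The sketch below uses invented sample output; the real column layout of pairdisplay can differ on your system, so treat this as a starting point only.

```shell
#!/bin/sh
# Sketch: list the pair volumes whose status is PSUE in pairdisplay output.
# The sample text below is made up for illustration; real pairdisplay
# output columns may differ, so adjust the awk field number as needed.
sample_output="$(cat <<'EOF'
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence
URGRP ur_vol1(L)   (CL1-A,0,0)12345 100..P-VOL PSUE ASYNC
URGRP ur_vol2(L)   (CL1-A,0,1)12345 101..P-VOL PAIR ASYNC
EOF
)"

# Keep only lines containing PSUE and print the pair volume name (field 2).
psue_pairs=$(printf '%s\n' "$sample_output" | awk '$0 ~ /PSUE/ {print $2}')
echo "$psue_pairs"
```

On a real system, the sample text would be replaced by the actual output of the pairdisplay command.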
Operations of the Log Files
The HNAS F system allows you to manage failure information by viewing, downloading, or deleting log files. The log files contain detailed information about the execution of commands supported by the HNAS F system. If you want to investigate the cause of a failure, look at the log files. If you cannot resolve a failure on your own, download the log files and contact maintenance personnel.
For details about how to manage failure information, see the Cluster Troubleshooting Guide.
Operations of the Command Control Interface Log Files
This section explains the format of the Command Control Interface log files in the HNAS F system and the method of downloading the log files to the SSH client, and provides notes on the operations of the Command Control Interface log files.
Format of the Command Control Interface Log Files
This section explains the format of the Command Control Interface log files, which consist of starting logs, error logs, trace logs, and core files. These log files are listed in Table 2-18. In Table 2-18, the * (asterisk) represents the instance number of CCI; HOST represents the host name of the corresponding node; PID represents the process ID of CCI or the Command Control Interface command; CMD represents the process name (horcmgr for CCI, or a command name for a Command Control Interface command); and TIME represents the creation time of the core file.
Table 2-18 Format of the Command Control Interface Log Files in the HNAS F system

Log File Classification       Log File Name     Format of Log Files
CCI logs under operation      CCI starting log  /home/nasroot/log*/curlog/horcm_HOST.log
                              Command log       /home/nasroot/log*/horcc_HOST.log
                              CCI error log     /home/nasroot/log*/curlog/horcmlog_HOST/horcm.log
                              CCI trace log     /home/nasroot/log*/curlog/horcmlog_HOST/horcm_PID.trc
                              Command trace     /home/nasroot/log*/curlog/horcmlog_HOST/horcc_PID.trc
                              Core file         /var/core/core-PID-CMD-TIME
CCI logs saved automatically  CCI starting log  /home/nasroot/log*/tmplog/horcm_HOST.log
                              Command log       /home/nasroot/log*/horcc_HOST.log
                              CCI error log     /home/nasroot/log*/tmplog/horcmlog_HOST/horcm.log
                              CCI trace log     /home/nasroot/log*/tmplog/horcmlog_HOST/horcm_PID.trc
                              Command trace     /home/nasroot/log*/tmplog/horcmlog_HOST/horcc_PID.trc
                              Core file         /var/core/core-PID-CMD-TIME
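Because core file names follow the core-PID-CMD-TIME pattern shown in Table 2-18, their fields can be recovered with simple text processing. The file name below is made up for illustration; a real name would come from listing /var/core on the node.

```shell
#!/bin/sh
# Sketch: split a core file name of the form core-PID-CMD-TIME into its
# fields. The example name is hypothetical.
corename="core-4321-horcmgr-1430000000"
pid=$(echo "$corename" | cut -d- -f2)     # process ID
cmd=$(echo "$corename" | cut -d- -f3)     # process name
ctime=$(echo "$corename" | cut -d- -f4)   # creation time of the core file
echo "PID=$pid CMD=$cmd TIME=$ctime"
```

Note that a fixed cut position assumes the CMD field itself contains no hyphen; adjust the parsing if your command names can include one.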
Downloading the Command Control Interface Log Files to the Client
If you want to check log files (except core files) to identify the causes of CCI command errors, or if you want to save log files before deleting them, you can download the log files by specifying the Backup log group on the List of RAS Information (Batch-download) page in the Check for Errors window of File Services Manager.
You can download or delete core files output by CCI on the List of RAS Information (List of core files) page in the Check for Errors window of File Services Manager. For details, see the Hitachi NAS Platform F1000 Series Cluster Administrator's Guide.
Note: If you read the Command Control Interface log files on Windows™, make sure that the text viewer you are using can display text that contains the line feed code LF (Line Feed).
Notes on the Operations of the Command Control Interface Log Files
The Command Control Interface log files are output to the OS disk, so if operations repeatedly result in errors, the log files grow and the available space on the OS disk decreases. Periodically check the size of the log files (except the trace files, whose capacity Command Control Interface restricts), and before the log files exceed 1 MB, stop CCI and delete the log files by using the horclogremove command. Check the size of the log files by using the following command:
$ ls -l -R /home/nasroot/log*
(* represents an instance number)
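The 1 MB check described above can be scripted. In the sketch below, a temporary directory with dummy files stands in for /home/nasroot/log* so that the example is self-contained; on a real node you would point find at the actual log directories.

```shell
#!/bin/sh
# Sketch: report log files larger than 1 MB. The directory and file names
# here are stand-ins created only so the example can run anywhere.
logdir=$(mktemp -d)
dd if=/dev/zero of="$logdir/horcm_nodeA.log" bs=1024 count=2048 2>/dev/null  # 2 MB
dd if=/dev/zero of="$logdir/horcm_nodeB.log" bs=1024 count=16   2>/dev/null  # 16 KB

# find -size +1M matches files strictly larger than 1 MiB.
oversized=$(find "$logdir" -type f -size +1M)
echo "$oversized"
rm -rf "$logdir"
```

Files listed by such a check are candidates for deletion with the horclogremove command after CCI is stopped.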
Commands that HNAS F provides
This section lists the commands that are used for replication operations in an HNAS F system and that are available in HNAS F.
• horcdevlist
• horclogremove
• horcprintenv, horcsetenv, horcunsetenv
• horcfreeze
• horcunfreeze
• horcvmdefine
• horcvmdelete
• horcimport, horcvmimport, horcexport
• horcsetconf, horcunsetconf, horcconflist
• cifscreate, cifsdelete
• fscreate, fsdelete, fslist, fsmount, fsumount
• nfscreate, nfsdelete
• fsctl
• lumapctl
• synclist, syncmount, syncstop, syncumount
3 Volume Management Functions
This section describes the program products providing volume management functions that can be used in conjunction with the HNAS F.
□ Dynamic Provisioning
□ Dynamic Tiering
□ Universal Volume Manager
□ Volume Migration
□ Volume Shredder
□ Encryption License Key
Dynamic Provisioning
Dynamic Provisioning is a program product that helps you reduce the cost of deployment and volume management in a storage system. This is achieved by using virtual volumes (V-VOLs).
Before using the Dynamic Provisioning functionality, make sure you understand it by carefully reading the Provisioning Guide.
Using Dynamic Provisioning together with HNAS F allows you to reduce deployment costs by virtualizing volumes via HNAS F. Because this eliminates the need to re-create a file system each time you run out of storage space, there are fewer interruptions to system operation, meaning lower management costs and less downtime.
Note:
¢ To use Dynamic Provisioning to manage the user LUs and the cluster management LU in an HNAS F system, the user LUs must be in a different pool from the cluster management LU.
¢ You cannot use a virtual volume for an OS LU of an HNAS F virtual server.
¢ File systems created as V-VOLs by Dynamic Provisioning on the HNAS F will become blocked if the pool volume overflows. For this reason, prevent the pool volume from overflowing by setting appropriate thresholds for the pool volume, and immediately provide additional drives to increase the pool volume capacity when the thresholds are exceeded.
¢ To monitor the thresholds of pool volumes, set up SNMP trap notification to be sent whenever the thresholds are exceeded.
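The kind of threshold comparison such a monitor performs can be sketched as follows; the function name, capacity figures, and the 70% warning threshold are all hypothetical examples, not values prescribed by this guide.

```shell
#!/bin/sh
# Sketch: decide whether pool usage has crossed a warning threshold, the
# check an SNMP-trap monitor would perform. All numbers are hypothetical.
pool_over_threshold() {
  used_gb="$1"; total_gb="$2"; threshold_pct="$3"
  pct=$(( used_gb * 100 / total_gb ))   # integer percent of the pool in use
  [ "$pct" -ge "$threshold_pct" ]
}

# Example: a pool 750 GB used out of 1000 GB against a 70% threshold.
if pool_over_threshold 750 1000 70; then
  echo "pool usage over threshold: add pool volumes"
fi
```

In practice the used and total figures would come from the storage system's own monitoring interface rather than fixed numbers.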
Dynamic Tiering
Dynamic Tiering is a program product that helps reduce storage costs. You can improve storage cost performance by using Dynamic Tiering to configure volumes with different types of storage drives.
Before using the Dynamic Tiering functionality, make sure you understand it by carefully reading the Provisioning Guide.
When Dynamic Tiering is used with an HNAS F system, data is automatically and optimally placed in storage tiers depending on access frequency. This considerably reduces the burden on administrators of designing systems to increase storage performance. Storage costs can also be reduced by using inexpensive disks while still maintaining storage performance.
Note:
¢ To use Dynamic Tiering to manage the user LUs and the cluster management LU in an HNAS F system, the user LUs must be in a different pool from the cluster management LU.
¢ You cannot use a virtual volume for an OS LU of an HNAS F virtual server.
Universal Volume Manager
Universal Volume Manager is a program product that provides storage virtualization for storage systems. When Universal Volume Manager is used, an external storage system connected to the storage system via a Fibre Channel interface can be treated as part of the storage system.
Before you use Universal Volume Manager, carefully read the Hitachi Universal Volume Manager User's Guide (User Guide), and make sure that you understand the program functions.
You can also use Universal Volume Manager in conjunction with the HNAS F to make an external storage system available to the storage system.
Note: An external volume controlled by Universal Volume Manager can be used as a user LU in the HNAS F, but cannot be used as a system LU.
Note: If you create an HNAS F file system in an external volume controlled by Universal Volume Manager, and the LU of the external storage system becomes unreachable due to an error in the external storage system or another reason, the file system will become blocked. To restore the blocked file system, first resolve the error in the external storage system, and then use Universal Volume Manager to unblock the external volume. The file system will be usable again after you restart the OS on the HNAS F.
Troubleshooting for the HNAS F Including an External Storage System
This section describes the procedure for deliberately stopping the external storage system for maintenance, and the procedure for recovering the external storage system from a failure, for an HNAS F system that includes an external storage system. This section consists of the following subsections:
• Stopping and Restarting External Storage System on Purpose on page 3-3
• Recovery Procedure in Case of Error in External Storage System on page 3-5
Stopping and Restarting External Storage System on Purpose
Figure 3-1 shows an example of an HNAS F configuration that includes an external storage system. The following procedures describe how to deliberately stop the external storage system for maintenance, and how to restart it after maintenance to restore the HNAS F system, based on the configuration in Figure 3-1.
Figure 3-1 An Example of HNAS F System Configuration Including an External Storage System
Note: When you need to stop the external storage system on purpose, follow the procedure described below. If you perform the wrong procedure, an error occurs in the HNAS F system (for example, the file system may be blocked, or the status of a resource group may become invalid).
To stop the external storage system on purpose:
1. Stop access from the clients.
2. Stop the cluster using File Services Manager.
3. Delete all ShadowImage or TrueCopy pairs, if you have created pairs that specify the external volumes.
4. Stop OS 0 and OS 1 using File Services Manager.
5. Disconnect the external storage system using Universal Volume Manager. For the procedure to disconnect an external storage system, see the Hitachi Universal Volume Manager User's Guide (User Guide).
6. Stop the external storage system. For the procedure to stop an external storage system, see the Hitachi Universal Volume Manager User's Guide (User Guide).
To restart the storage system and restore the HNAS F system:
1. Turn on the external storage system power supply. For the procedure to turn on an external storage system power supply, see the Hitachi Universal Volume Manager User's Guide (User Guide).
2. Confirm that the status of the external storage system is normal.
3. Execute the Check Paths & Restore Vol. command using Universal Volume Manager to restore the path to the external storage system. For the procedure to execute the Check Paths & Restore Vol. command, see the Hitachi Universal Volume Manager User's Guide (User Guide).
4. Confirm that the connection between the storage system and the external storage system has returned to normal.
5. Start OS 0 and OS 1 using File Services Manager.
6. If you deleted ShadowImage or TrueCopy pairs when you stopped the external storage system, re-create the deleted pairs.
7. Start the cluster using File Services Manager.
8. Resume access from the clients.
Recovery Procedure in Case of Error in External Storage System
This section describes the recovery procedures for cases where an error occurs in the external storage system that is part of the HNAS F system. The recovery procedures are described using the following examples:
• In Case of Error in a Disk in the External Storage System on page 3-5
• In Case of Error in a Path to the External Storage System on page 3-8
• In Case of Error in All the Paths to the External Storage System on page 3-9
In Case of Error in a Disk in the External Storage System
Figure 3-2 shows an example of the case where an error occurs in a disk in the external storage system. The situation in Figure 3-2 is as follows:
• The data that was stored on the failed disk of external storage system 1 cannot itself be restored. If you need to restore the data to the original disk after recovery of the HNAS F system, the data must have been backed up to a disk other than the failed disk.
• The clients cannot access the volumes on the failed disk. The clients can access only the volumes on the other disks.
• The storage system recognizes the status of the file system and the volume on the failed disk as blocked.
Figure 3-2 In Case of Error in a Disk in the External Storage System
The recovery procedure for an error such as that in Figure 3-2 is as follows:
1. Delete all ShadowImage or TrueCopy pairs, if you have created pairs.
2. Perform the following operations on node 0.
¢ Release the differential-data storage device using the file snapshot functionality.
¢ Delete the NFS® share, CIFS share, and file system using File Services Manager.
3. Change the execution node of resource group 0 to node 1 using File Services Manager (failover). If the status of resource group 0 is Offline, this operation is not required.
4. Stop node 0 using File Services Manager.
5. Restart OS 0 using File Services Manager.
6. Start node 0 using File Services Manager.
7. Perform one of the following operations using File Services Manager.
¢ If the status of resource group 0 was Online, change the execution node of resource group 0 to node 0 (failback).
¢ If the status of resource group 0 was Offline, start resource group 0.
8. Perform the following operations on node 1.
¢ Release the differential-data storage device using the file snapshot functionality.
¢ Delete the NFS® share, CIFS share, and file system using File Services Manager.
9. Change the execution node of resource group 1 to node 0 using File Services Manager (failover). If the status of resource group 1 is Offline, this operation is not required.
10. Stop node 1 using File Services Manager.
11. Restart OS 1 using File Services Manager.
12. Start node 1 using File Services Manager.
13. Perform one of the following operations using File Services Manager.
¢ If the status of resource group 1 was Online, change the execution node of resource group 1 to node 1 (failback).
¢ If the status of resource group 1 was Offline, start resource group 1.
14. Replace the failed disk in the external storage system to restore the status of the external storage system.
15. Execute the Check Paths & Restore Vol. command using Universal Volume Manager. For the procedure to execute the Check Paths & Restore Vol. command, see the Hitachi Universal Volume Manager User's Guide (User Guide).
16. Perform the following operations on both node 0 and node 1.
¢ Set up the differential-data storage device using the file snapshot functionality.
¢ Create the file system, and then create the NFS® share and CIFS share using File Services Manager.
17. Restore the backed-up data to the replaced disk of the external storage system if you backed up the data to a disk other than the failed disk.
18. Re-create the ShadowImage and TrueCopy pairs.
In Case of Error in a Path to the External Storage System
Figure 3-3 shows an example of the case where an error occurs in a path between the storage system and an external storage system. In Figure 3-3, an error occurs in the path between the storage system and external storage system 1, while the path between the storage system and external storage system 2 is normal.
The situation in Figure 3-3 is as follows:
• The clients cannot access the volumes of external storage system 1, but they can access the volumes of external storage system 2.
• The storage system recognizes the status of all the file systems and volumes of external storage system 1 as blocked.
Figure 3-3 In Case of Error in a Path to the External Storage System
The recovery procedure for an error such as that in Figure 3-3 is as follows:
1. Restore the failed path between the storage system and external storage system 1 (for example, by checking the connection status of the cable or replacing the switch).
2. Execute the Check Paths & Restore Vol. command using Universal Volume Manager. For the procedure to execute the Check Paths & Restore Vol. command, see the Hitachi Universal Volume Manager User's Guide (User Guide).
3. Change the execution node of resource group 0 to node 1 using File Services Manager (failover). If the status of resource group 0 is Offline, this operation is not required.
4. Stop node 0 using File Services Manager.
5. Restart OS 0 using File Services Manager.
6. Start node 0 using File Services Manager.
7. Perform one of the following operations using File Services Manager.
¢ If the status of resource group 0 was Online, change the execution node of resource group 0 to node 0 (failback).
¢ If the status of resource group 0 was Offline, start resource group 0.
8. Change the execution node of resource group 1 to node 0 using File Services Manager (failover). If the status of resource group 1 is Offline, this operation is not required.
9. Stop node 1 using File Services Manager.
10. Restart OS 1 using File Services Manager.
11. Start node 1 using File Services Manager.
12. Perform one of the following operations using File Services Manager.
¢ If the status of resource group 1 was Online, change the execution node of resource group 1 to node 1 (failback).
¢ If the status of resource group 1 was Offline, start resource group 1.
In Case of Error in All the Paths to the External Storage System
The following subsections describe the recovery procedures with two examples of the case where an error occurs on all the paths connected to the external storage system.
• In cases where both nodes use different external storage systems
• In cases where both nodes use the same external storage system
In Case that Each Node Uses a Different External Storage System
In the configuration of Figure 3-4, all user LUs of the HNAS F system are volumes of the external storage systems, and each node uses the volumes of a different external storage system. All user LUs of node 0 are volumes of external storage system 1. Therefore, if an error occurs in the path between the storage system and external storage system 1, the error affects all the paths from node 0 to external storage system 1, and no user LU is available to node 0.
The situation in Figure 3-4 is as follows:
• The HNAS F system attempts to change the execution node of resource group 0 to node 1 because no user LU can be used from node 0 (failover). However, because node 1 cannot access the volumes of external storage system 1, the failover processing fails, and srmd executable error is displayed as the error information for resource group 0.
• The status of the file system on node 1 is blocked because the failover processing of resource group 0 failed.
• The clients cannot access the volumes of external storage system 1, but they can access the volumes of external storage system 2.
• The storage system recognizes the status of all the file systems and volumes of external storage system 1 as blocked.
Figure 3-4 In Case that Error Occurs on the Path to the External Storage System that is Used for Node 1
The recovery procedure for an error such as that in Figure 3-4 is as follows:
1. Restore the failed path between the storage system and external storage system 1 (for example, by checking the connection status of the cable or replacing the switch).
2. Execute the Check Paths & Restore Vol. command using Universal Volume Manager. For the procedure to execute the Check Paths & Restore Vol. command, see the Hitachi Universal Volume Manager User's Guide (User Guide).
3. Perform a forced stop operation using File Services Manager for resource group 0, for which the status is displayed as srmd executable error.
4. Stop node 0 using File Services Manager.
5. Restart OS 0 using File Services Manager.
6. Start node 0 using File Services Manager.
7. Release the blocked status of the file system on node 1. To do so, perform the following operations in order:
a. Change the execution node of resource group 1 to node 0 using File Services Manager (failover).
b. Restart OS 1 using File Services Manager.
c. Change the execution node of resource group 1 to node 1 (failback).
8. Start resource group 0 on node 0 using File Services Manager.
In Case that Both Nodes Use the Same External Storage System
In the configuration of Figure 3-5, all user LUs of the HNAS F system are volumes of the external storage system. Only one external storage system is connected to the storage system, and only one path is set between the storage system and the external storage system. In this case, if an error occurs on that single path, none of the volumes in the external storage system can be used.
Note: In a configuration such as that in Figure 3-5, setting alternate paths is recommended to prevent the HNAS F system from being blocked by a path failure. For detailed information about alternate paths, see the Hitachi Universal Volume Manager User's Guide (User Guide).
The situation in Figure 3-5 is as follows:
• No user LUs can be used from either node 0 or node 1.
• The clients cannot access the volumes.
• The storage system recognizes the status of all the file systems and volumes as blocked.
Figure 3-5 In Case that Both Nodes Use the Same External Storage System
The recovery procedure for an error such as that in Figure 3-5 is as follows:
1. Restore the failed path between the storage system and the external storage system (for example, by checking the connection status of the cable or replacing the switch).
2. Execute the Check Paths & Restore Vol. command using Universal Volume Manager. For the procedure to execute the Check Paths & Restore Vol. command, see the Hitachi Universal Volume Manager User's Guide (User Guide).
3. Perform forced stop operations using File Services Manager for both resource group 0 and resource group 1.
4. Stop the cluster using File Services Manager.
5. Restart OS 0 and OS 1 using File Services Manager.
6. Start the cluster using File Services Manager.
7. Start both resource group 0 and resource group 1 using File Services Manager.
Volume Migration
Volume Migration is a program product that helps you eliminate system bottlenecks by distributing load concentrated on a specific disk or processor to other disks or processors in a storage system. If the usage statistics collected by Performance Monitor show that the access load is concentrated on a specific hard disk drive, the system administrator can use Volume Migration to distribute the load to another drive.
Before you use Volume Migration, carefully read the Hitachi Performance Manager User's Guide (User Guide) and the Hitachi Volume Migration User Guide, and make sure that you understand the program functions.
Volume Migration also works with user LUs in the HNAS F.
You can use Volume Migration in conjunction with Universal Volume Manager. When Universal Volume Manager is used to map volumes on an external storage system to internal volumes, you can check the usage of the volumes on the external storage system (external volumes), and of the external volume groups that contain these volumes.
Note: In the HNAS F, only user LUs can be used with Volume Migration. System LUs are excluded.
Volume Shredder
Volume Shredder is a software product that erases all of the data in a volume in a storage system. Data erased by this software cannot be recovered.
Before using the Volume Shredder functionality, make sure you understand it by carefully reading the Hitachi Virtual Storage Platform Hitachi Volume Shredder User Guide.
Volume Shredder can be used to completely erase volumes used in the HNAS F.
Note: When Volume Shredder is used to erase data from a system LU or user LU used in the HNAS F, the erased data can never be recovered. Exercise adequate caution before erasing the data.
Encryption License Key
Encryption License Key is a program product used to encrypt data on storage system volumes. By encrypting data, information leaks can be prevented in the event that a storage system, or a hard disk in the storage system, is swapped out (and accidentally used for another purpose) or stolen.
Please read the Encryption License Key User's Guide (User Guide) before using any Encryption License Key functionality.
Volumes used by HNAS F can be encrypted by using Encryption License Key.
4 Resource Management Functions
HNAS F can be connected to and used with a storage system configured using the resource management functionality provided by the storage system.
□ Storage Navigator
□ LUN Manager
□ Configuration File Loader
□ Virtual Partition Manager
Storage Navigator
Storage Navigator is a program product for remotely operating a storage system.
Before you use Storage Navigator, you must first prepare an environment that allows use of the Web browser specified for use with the storage system.
Using Storage Navigator, you can perform settings and operations in thefollowing program products:
• Configuration File Loader
• Dynamic Provisioning
• Encryption License Key
• LUN Manager
• Performance Monitor
• ShadowImage
• TrueCopy
• Universal Replicator
• Universal Volume Manager
• Virtual LVI
• Virtual Partition Manager
• Volume Migration
• Volume Shredder
LUN Manager
LUN Manager is a program product that helps you build a storage environment using a storage system.
Before you use LUN Manager, carefully read the Provisioning Guide, and make sure that you understand the program functions.
LUN Manager can be used with the HNAS F to perform the following tasks:
Installing the HNAS F:
• Create a host group for a Fibre Channel port to which a node connects
• Assign an LU to a created host group
Adding an LU used by the HNAS F:
• Add a host group to a Fibre Channel port to which a node is connected
• Add an LU to a registered host group
Note: The HNAS F allows you to create an alternate path to be used if the LU path becomes unavailable for some reason. To ensure that the alternate path behaves correctly, configure the host group and its LUs in the storage system in advance so that they can use the alternate path.
Note: The HNAS F allows you to set up a cluster encompassing the two nodes, node 0 and node 1. To ensure that the cluster works correctly, configure the host group and its LUs in the storage system in advance so that the cluster can be used.
Configuration File LoaderConfiguration File Loader is a program product that allows you to define theconfiguration information for storage system in a single file, thus allowing forthe batch setting of configuration information.
Before you use the Configuration File Loader functionality, carefully read theHitachi Storage Navigator User's Guide (User Guide) and the Hitachi SystemOperations Using Spreadsheets, and make sure that you understand theprogram functions.
Configuration File Loader outputs the configuration information as a spreadsheet. You can then use spreadsheet software or a text editor to define or edit the configuration for the storage system.
In a storage system to which a node used by the HNAS F connects, you can use Configuration File Loader to configure the host group used by the node and the LUs allocated to that host group as a batch.
The LUN Manager provided with Storage Navigator can also be used to configure the storage system. LUN Manager is useful when defining or changing individual settings. However, because Configuration File Loader loads the entire configuration as one file, it is more useful in situations that require batch processing, such as when you create or modify settings on a large scale.
Virtual Partition Manager

Virtual Partition Manager is a program product that enables logical partitioning of the resources of the storage system. Virtual Partition Manager provides cache partitioning.
The cache partitioning functionality allows you to create multiple units of virtual cache from the storage system cache memory, which can be allocated among the hosts in the system. This means that when a specific host has a high I/O workload, it does not have a negative impact on the I/O performance of the other hosts in the system.
Before you use Virtual Partition Manager, carefully read the Hitachi Virtual Partition Manager User's Guide (User Guide), and make sure that you understand the program functions.
The HNAS F can be used in conjunction with the Virtual Partition Manager provided by the storage system.
Cache partitioning
When one storage system is shared among a large number of hosts, including the host running the HNAS F, a large proportion of the storage system cache memory may be occupied by a specific host as it handles a large amount of data. Such a situation may reduce write speeds, because other hosts will need to wait for their turn to write to the cache.
The cache partitioning functionality of Virtual Partition Manager partitions the storage system cache memory into multiple units of virtual cache memory. Because each host is allocated a specific amount of cache memory, you can avoid situations in which one host uses more than its fair share.
Each unit of virtual cache memory created by the cache partitioning functionality is called a CLPR (Cache Logical Partition).
Note: When using the HNAS F in conjunction with the cache partitioning functionality of Virtual Partition Manager, make sure that the storage system LUs used by the node pair that makes up the HNAS F cluster are in the same CLPR defined on the storage system.
5 Performance Management Functions
The HNAS F can be used in conjunction with the performance management functionality provided by the storage system.
□ Performance Monitor
Performance Monitor

Performance Monitor is a program product that collects information about the utilization of resources such as hard disk drives, logical volumes, and processors built into the storage system.
Before you use Performance Monitor, carefully read the Hitachi Performance Monitor User's Guide (User Guide), and make sure that you understand the program functions.
Using Performance Monitor, you can also monitor disk workloads and traffic between a host and a storage system. In the Performance Monitor windows, resource utilization, loading, and traffic patterns are displayed as line graphs. The system administrator can use the information displayed in Performance Monitor to analyze trends in disk access or identify when I/O access is busiest.
When using the HNAS F, you can use Performance Monitor with the storage system to view information about the utilization of resources such as hard disk drives, logical volumes, and processors used by the HNAS F.
A Details of ShadowImage Operations on the HNAS F

This appendix describes the operating procedures used when the ShadowImage functionality of the storage system is used in conjunction with the HNAS F.
This appendix discusses the following topics:
□ Execution of Command Operation on HNAS F
□ Pairs Recovery from Failures on the HNAS F
Execution of Command Operation on HNAS F

The following describes the HNAS F commands used in How to use ShadowImage on page 2-6.
Note: To continue operations at the failover destination when managing the system with a node, execute the command with the remote host specified by the virtual IP address. When you connect the file system, execute the command with the -r option added, specifying a resource group name.
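For example, such a command can be issued through the virtual IP address of a resource group as follows. This is a minimal sketch only: the account, virtual IP, file system name, device-file number, and resource group name (nasroot, nas-vip01, fs01, 12, rg01) are assumptions, and ssh is stubbed with a shell function so that the fragment runs stand-alone; remove the stub to issue the real command from a remote host.

```shell
# Stub so this sketch runs anywhere; delete it to use the real ssh.
ssh() { echo "ssh $*"; }

# Address the resource group's virtual IP (not a fixed node) so the
# command reaches whichever node currently owns the group, and pass
# -r so the connected file system follows the resource group.
ssh nasroot@nas-vip01 "sudo horcimport -f fs01 -d 12 -r rg01"
```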
Creating Pairs
Commands for Creating Pairs
When creating a pair, the following commands are used.
Table A-1 Commands for Creating Pairs (HNAS F)
No. Command Description
1 sudo horcvmdefine -d device-file-number [,device-file-number...]
Reserves device files.
Table A-2 Commands for Creating Pairs (CCI)
No. Command Description
1 sudo paircreate {-g group-name | -d volume-name} -vl
Creates a volume pair.
2 sudo pairvolchk {-g group-name | -d volume-name}
Checks the pair volume status.
3 sudo pairdisplay {-g group-name | -d volume-name} -fc
Checks the pair status.
Procedure for Creating Pairs
A pair is usually created by using the following procedure. In the following table, sudo, the options of the HNAS F commands, and the options of the CCI commands are omitted. Specify the appropriate options for the actual operation.
Table A-3 Procedure for Creating Pairs
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Public Mount SMPL SMPL – –
2 Reserves S-VOL
horcvmdefine
3 Public Mount SMPL SMPL – –
4 pairdisplay
Confirm the copy target before creating a pair
5 Public Mount SMPL SMPL – –
6 paircreate
Creates a volume pair
7 Public Mount COPY COPY – –
8 pairdisplay
Confirm the volume pair status
9 Public Mount COPY COPY – –
10 pairvolchk
At this point the pair status is COPY
11 Execute pairvolchk several times
Public Mount COPY COPY – –
12 pairvolchk
When the pair status changes to PAIR, pair creation is completed
13 Public Mount PAIR PAIR – –
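As a concrete sketch of the sequence in Table A-3, the commands can be scripted as shown below. This is illustrative only: the group name (vg01) and device-file numbers are assumptions, and the HNAS F and CCI commands are stubbed with shell functions so that the sketch runs stand-alone; on a real node, remove the stubs and run the commands with sudo.

```shell
# Stubs standing in for the real HNAS F and CCI commands.
horcvmdefine() { echo "horcvmdefine $*"; }
pairdisplay()  { echo "pairdisplay $*"; }
paircreate()   { echo "paircreate $*"; }
pairvolchk()   { echo "PAIR"; }   # stub: reports the current pair status

horcvmdefine -d 10,11             # step 2: reserve the S-VOL device files
pairdisplay -g vg01 -fc           # step 4: confirm the copy target
paircreate -g vg01 -vl            # step 6: create the pair (status becomes COPY)

# steps 10-12: poll pairvolchk until the status changes from COPY to PAIR
until pairvolchk -g vg01 | grep -q PAIR; do
  sleep 60
done
echo "pair creation completed"
```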
Splitting Pairs
Commands for Splitting Pairs
When splitting a pair, the following commands are used.
Table A-4 Commands for Splitting Pairs (HNAS F)
No. Command Description
1 sudo nfsdelete -d shared-directory {-a | -H Host}
Delete an NFS share.
2 sudo nfscreate -d shared-directory -H Host
Create an NFS share.
3 sudo cifsdelete -x CIFS-share-name Delete a CIFS share.
4 sudo cifscreate -x CIFS-share-name -d shared-directory
Create a CIFS share.
5 sudo fsumount file-system-name Un-mount a file system.
6 sudo fsmount {-r | -w} file-system-name
Mount a file system.
7 sudo horcfreeze -f copy-source-file-system-name
Suppresses operations on the P-VOL and stops access from clients.
8 sudo horcunfreeze -f copy-source-file-system-name
Restarts operations on the P-VOL and access from clients.
9 sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
When LVM is not used and the file system is not tiered, the command connects the file system to the node.
10 sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
When LVM is not used and the file system is tiered, the command connects the file system to the node.
11 sudo horcvmimport -f copy-destination-file-system-name -d device-file-number [,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is not tiered, the command connects the file system to the node.
12 sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number [,device-file-number...] --tier2 device-file-number [,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is tiered, the command connects the file system to the node.
13 sudo syncumount mount-point-name#
Un-mounts the differential-data snapshot.
14 sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
Mounts the differential-data snapshot.
#: This information is required when publishing the data in the shared file system.
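The choice among commands 9 to 12 above depends only on whether LVM is used and whether the file system is tiered. That decision can be made explicit in a small wrapper; note that the wrapper below (connect_fs and its arguments) is a hypothetical helper for illustration, not an HNAS F command, and the two real connect commands are stubbed so the sketch runs stand-alone.

```shell
# Stubs for the two real connect commands.
horcimport()   { echo "horcimport $*"; }
horcvmimport() { echo "horcvmimport $*"; }

# Hypothetical helper: $1=file system name, $2=LVM used (yes/no),
# $3=tiered (yes/no), $4 (and $5)=device-file number(s).
connect_fs() {
  if [ "$2" = yes ]; then cmd=horcvmimport; else cmd=horcimport; fi
  if [ "$3" = yes ]; then
    "$cmd" -f "$1" --tier1 "$4" --tier2 "$5"   # tiered: one number per tier
  else
    "$cmd" -f "$1" -d "$4"
  fi
}

connect_fs fs01 no  no  12        # -> horcimport -f fs01 -d 12
connect_fs fs02 yes yes 12 13     # -> horcvmimport -f fs02 --tier1 12 --tier2 13
```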
Table A-5 Commands for Splitting Pairs (CCI)
No. Command Description
1 sudo pairsplit {-g group-name | -d volume-name}
Splits a volume pair.
2 sudo pairvolchk {-g group-name | -d volume-name}
Checks the pair volume status.
A-4 Details of ShadowImage Operations on the HNAS FHitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
Procedure for Splitting Pairs with P-VOL Mounted
A pair is split with the P-VOL mounted by using the following procedure. In the following table, sudo, the options of the HNAS F commands, and the options of the CCI commands are omitted. Specify the appropriate options for the actual operation.
Table A-6 Procedure for Splitting Pairs with P-VOL Mounted
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 horcfreeze
Hold access
3 Public Mount PAIR PAIR – –
4 pairsplit
Split a pair
5 Public Mount COPY COPY – –
6 pairvolchk
At this point the pair status is COPY
7 Execute pairvolchk several times
Public Mount COPY COPY – –
8 pairvolchk
When the pair status changes to PSUS, the pair is split
9 Public Mount PSUS SSUS – –
10 horcunfreeze
Cancel suppression of operations
11 Public Mount PSUS SSUS – –
12 Connect the file system#1
horcvmimport (when LVM is not used, horcimport)
13 Public Mount PSUS SSUS Un-mount Non-public
14 Mount fsmount
15 Public Mount PSUS SSUS Mount Non-public
16 Share#2 nfscreate/cifscreate
17 Public Mount PSUS SSUS Mount Public
18 When using the file snapshot functionality
syncmount
19 Public Mount PSUS SSUS Mount Public
20 Start a program that accesses the file system at the S-VOL site.
#1: Specify a file system name different from that at the P-VOL site.
#2: Specify a public directory name and CIFS name that differ from those for the P-VOL.
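The overall flow of Table A-6 can be sketched as a script. This is illustrative only: the file system names (fs01, fs01_bk), group name (vg01), device-file number, share path, and host are assumptions, and the HNAS F and CCI commands are stubbed with shell functions so that the sketch runs stand-alone; in practice each command is run with sudo on the appropriate site.

```shell
# Stubs standing in for the real HNAS F and CCI commands.
horcfreeze()   { echo "horcfreeze $*"; }
horcunfreeze() { echo "horcunfreeze $*"; }
pairsplit()    { echo "pairsplit $*"; }
pairvolchk()   { echo "PSUS"; }   # stub: reports the current pair status
horcvmimport() { echo "horcvmimport $*"; }
fsmount()      { echo "fsmount $*"; }
nfscreate()    { echo "nfscreate $*"; }

horcfreeze -f fs01                # step 2: hold client access to the P-VOL
pairsplit -g vg01                 # step 4: split the pair (status becomes COPY)
until pairvolchk -g vg01 | grep -q PSUS; do   # steps 6-8: wait for PSUS/SSUS
  sleep 60
done
horcunfreeze -f fs01              # step 10: resume client access
horcvmimport -f fs01_bk -d 12     # step 12: connect the copy under a new name
fsmount -w fs01_bk                # step 14: mount the copy
nfscreate -d /mnt/fs01_bk -H host01   # step 16: share it under a new path
```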
Procedure for Splitting Pairs with P-VOL Un-mounted
A pair is split with the P-VOL un-mounted by using the following procedure. In the following tables, sudo, the options of the HNAS F commands, and the options of the CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not used:
Table A-7 Procedure for Splitting Pairs when LVM is Not Used (with P-VOL Un-mounted)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the P-VOL site.
2 Public Mount PAIR PAIR – –
3 nfsdelete/cifsdelete
Delete NFS/CIFS shares
4 Non-public Mount PAIR PAIR – –
5 fsumount Un-mount
6 Non-public Un-mount PAIR PAIR – –
7 pairsplit
Split a pair
8 Non-public Un-mount COPY COPY – –
9 pairvolchk
At this point the pair status is COPY
10 Execute pairvolchk several times
Non-public Un-mount COPY COPY – –
11 pairvolchk
When the pair status changes to PSUS, the pair is split
12 Non-public Un-mount PSUS SSUS – –
13 fsmount Mount
14 Non-public Mount PSUS SSUS – –
15 nfscreate/cifscreate
Share
16 Public Mount PSUS SSUS – –
17 Restart a program that accesses the file system at the P-VOL site.
18 Public Mount PSUS SSUS – –
19 Connect the file system#1
horcimport
20 Public Mount PSUS SSUS Un-mount Non-public
21 Mount fsmount
22 Public Mount PSUS SSUS Mount Non-public
23 Share#2 nfscreate/cifscreate
24 Public Mount PSUS SSUS Mount Public
25 Start a program that accesses the file system at the S-VOL site.
#1: Specify a file system name different from that at the P-VOL site.
#2: Specify a public directory name and CIFS name that differ from those for the P-VOL.
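The flow of Table A-7 can likewise be sketched as a script. With the P-VOL un-mounted there is nothing to freeze, so the shares and the file system are taken down before the split. Names (fs01, fs01_bk, vg01, the share path, the host, and the device-file number) are assumptions, and the real commands are stubbed so the sketch runs stand-alone.

```shell
# Stubs standing in for the real HNAS F and CCI commands.
nfsdelete()  { echo "nfsdelete $*"; }
fsumount()   { echo "fsumount $*"; }
pairsplit()  { echo "pairsplit $*"; }
pairvolchk() { echo "PSUS"; }     # stub: reports the current pair status
fsmount()    { echo "fsmount $*"; }
nfscreate()  { echo "nfscreate $*"; }
horcimport() { echo "horcimport $*"; }

nfsdelete -d /mnt/fs01 -a         # step 3: delete the NFS/CIFS shares
fsumount fs01                     # step 5: un-mount the P-VOL file system
pairsplit -g vg01                 # step 7: split the pair
until pairvolchk -g vg01 | grep -q PSUS; do   # steps 9-11: wait for the split
  sleep 60
done
fsmount -w fs01                   # step 13: re-mount the P-VOL
nfscreate -d /mnt/fs01 -H host01  # step 15: re-create the shares
horcimport -f fs01_bk -d 12       # step 19: connect the S-VOL copy
```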
When LVM is used and a differential-data snapshot is not made available in a shared file system:

Table A-8 Procedure for Splitting Pairs when LVM is Used (with a differential-data snapshot not available in the shared file system, and with P-VOL Un-mounted)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the P-VOL site.
2 Public Mount PAIR PAIR – –
3 nfsdelete/cifsdelete
Delete NFS/CIFS shares
4 Non-public Mount PAIR PAIR – –
5 fsumount Un-mount
6 Non-public Un-mount PAIR PAIR – –
7 horcfreeze
Suppress operations
8 Non-public Un-mount PAIR PAIR – –
9 pairsplit
Split a pair
10 Non-public Un-mount COPY COPY – –
11 pairvolchk
At this point the pair status is COPY
12 Execute pairvolchk several times
Non-public Un-mount PSUS SSUS – –
13 pairvolchk
When the pair status changes to PSUS, the pair is split
14 Non-public Un-mount PSUS SSUS – –
15 horcunfreeze
Cancel suppression of operations
16 Non-public Un-mount PSUS SSUS – –
17 fsmount Mount
18 Non-public Mount PSUS SSUS – –
19 nfscreate/cifscreate
Share
20 Public Mount PSUS SSUS – –
21 syncmount
When using the file snapshot functionality
22 Public Mount PSUS SSUS – –
23 Restart a program that accesses the file system at the P-VOL site.
24 Public Mount PSUS SSUS – –
25 Connect the file system#1
horcvmimport
26 Public Mount PSUS SSUS Un-mount Non-public
27 Mount fsmount
28 Public Mount PSUS SSUS Mount Non-public
29 Share#2 nfscreate/cifscreate
30 Public Mount PSUS SSUS Mount Public
31 When using the file snapshot functionality
syncmount
32 Public Mount PSUS SSUS Mount Public
33 Start a program that accesses the file system at the S-VOL site.
#1: Specify a file system name different from that at the P-VOL site.
#2: Specify a public directory name and CIFS name that differ from those for the P-VOL.
When LVM is used and a differential-data snapshot is made available in a shared file system:

Table A-9 Procedure for Splitting Pairs When LVM is Used (made available in a shared file system) (with an unmounted P-VOL)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stops programs that access file systems at the P-VOL site.
2 Public Mount PAIR PAIR – –
3 syncumount
4 Public Mount PAIR PAIR – –
5 nfsdelete/cifsdelete
Deletes NFS/CIFS shares.
6 Non-public Mount PAIR PAIR – –
7 fsumount Un-mount
8 Non-public Un-mount PAIR PAIR – –
9 horcfreeze
Suppresses operations.
10 Non-public Un-mount PAIR PAIR – –
11 pairsplit
Splits a pair.
12 Non-public Un-mount COPY COPY – –
13 pairvolchk
At this point, the pair status is COPY.
14 Executes pairvolchk several times
Non-public Un-mount PSUS SSUS – –
15 pairvolchk
When the pair status changes to PSUS, the pair is split.
16 Non-public Un-mount PSUS SSUS – –
17 horcunfreeze
Cancels the suppression of operations.
18 Non-public Un-mount PSUS SSUS – –
19 fsmount Mount
20 Non-public Mount PSUS SSUS – –
21 nfscreate/cifscreate
Share
22 Public Mount PSUS SSUS – –
23 syncmount
24 Public Mount PSUS SSUS – –
25 Restarts programs that access file systems at the P-VOL site.
26 Public Mount PSUS SSUS – –
27 Connects to the file system.#1
horcvmimport
28 Public Mount PSUS SSUS Un-mount Non-public
29 Mount fsmount
30 Public Mount PSUS SSUS Mount Non-public
31 Share#2 nfscreate/cifscreate
32 Public Mount PSUS SSUS Mount Public
33 syncmount
34 Public Mount PSUS SSUS Mount Public
35 Starts programs that access file systems at the S-VOL site.
#1: Specify a file system name different from that of the P-VOL site.
#2: Specify a public directory name different from that of the P-VOL site.
Re-synchronizing Pairs
Commands for Re-synchronizing Pairs
When resynchronizing a pair, the following commands are used.
Table A-10 Commands for Re-synchronizing Pairs (HNAS F)
No. Command Description
1 sudo nfsdelete -d shared-directory {-a | -H Host}
Delete an NFS share.
2 sudo cifsdelete -x CIFS-share-name Delete a CIFS share.
3 sudo fsumount file-system-name Un-mount a file system.
4 sudo horcexport -f file-system-name Separate a file system.
5 sudo syncumount mount-point-name
Un-mounts the differential-data snapshot.
Table A-11 Commands for Re-synchronizing Pairs (CCI)
No. Command Description
1 sudo pairresync {-g group-name | -d volume-name}
Resynchronizes the split pairs.
2 sudo pairvolchk {-g group-name | -d volume-name}
Checks the pair volume status.
Procedure for Re-synchronizing Pairs When P-VOL Status is PSUS and S-VOL Status is SSUS
Use the following procedure to resynchronize a pair when the P-VOL status is PSUS and the S-VOL status is SSUS. In the following tables, sudo, the options of the HNAS F commands, and the options of the CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not used:
Table A-12 Procedure for Re-synchronizing Pairs when LVM is Not Used (with P-VOL inPSUS status and S-VOL in SSUS status)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the S-VOL site.
2 Public Mount PSUS SSUS Mount Public
3 Delete shares
nfsdelete/cifsdelete
4 Public Mount PSUS SSUS Mount Non-public
5 Un-mount fsumount
6 Public Mount PSUS SSUS Un-mount Non-public
7 Separate file system
horcexport
8 Public Mount PSUS SSUS – –
9 pairresync
Resynchronizes the pairs
10 Public Mount COPY COPY – –
11 pairvolchk
At this point the pair status is COPY
12 Execute pairvolchk several times
Public Mount COPY COPY – –
13 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
14 Public Mount PAIR PAIR – –
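The resynchronization flow above can be sketched as a script. Before the resynchronization, the S-VOL side is unshared, un-mounted, and separated, because its contents will be overwritten from the P-VOL. Names (fs01_bk, vg01, the share path) are assumptions, and the real commands are stubbed so the sketch runs stand-alone.

```shell
# Stubs standing in for the real HNAS F and CCI commands.
nfsdelete()  { echo "nfsdelete $*"; }
fsumount()   { echo "fsumount $*"; }
horcexport() { echo "horcexport $*"; }
pairresync() { echo "pairresync $*"; }
pairvolchk() { echo "PAIR"; }     # stub: reports the current pair status

nfsdelete -d /mnt/fs01_bk -a      # step 3: delete the S-VOL shares
fsumount fs01_bk                  # step 5: un-mount the S-VOL file system
horcexport -f fs01_bk             # step 7: separate it from the node
pairresync -g vg01                # step 9: resynchronize (status becomes COPY)
until pairvolchk -g vg01 | grep -q PAIR; do   # steps 11-13: wait for PAIR
  sleep 60
done
echo "resynchronization completed"
```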
When LVM is used and a differential-data snapshot is not made available in a shared file system:

Table A-13 Procedure for Re-synchronizing Pairs when LVM is Used (with a differential-data snapshot not available in the shared file system, and with P-VOL in PSUS status and S-VOL in SSUS status)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the S-VOL site.
2 Public Mount PSUS SSUS Mount Public
3 Delete shares#
nfsdelete/cifsdelete
4 Public Mount PSUS SSUS Mount Public
5 When using the file snapshot functionality
syncumount
6 Public Mount PSUS SSUS Mount Non-public
7 Un-mount fsumount
8 Public Mount PSUS SSUS Un-mount Non-public
9 Separate file system
horcexport
10 Public Mount PSUS SSUS – –
11 pairresync
Resynchronizes the pairs
12 Public Mount COPY COPY – –
13 pairvolchk
At this point the pair status is COPY
14 Execute pairvolchk several times
Public Mount COPY COPY – –
15 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
16 Public Mount PAIR PAIR – –
#: If a differential-data snapshot is made available to a shared file system, and if NFS shares or CIFS shares are created for the differential-data snapshot, before executing the syncumount command you need to use the nfsdelete command or the cifsdelete command to release the NFS shares or the CIFS shares for the differential-data snapshot.
When LVM is used and a differential-data snapshot is made available in a shared file system:

Table A-14 Procedure for Re-synchronizing Pairs When LVM Is Used (made available in the shared file system) (with the P-VOL in the PSUS status and the S-VOL in the SSUS status)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stops programs that access file systems at the S-VOL site.
2 Public Mount PSUS SSUS Mount Public
3 syncumount
4 Public Mount PSUS SSUS Mount Public
5 Deletes shares.#
nfsdelete/cifsdelete
6 Public Mount PSUS SSUS Mount Non-public
7 Un-mount fsumount
8 Public Mount PSUS SSUS Un-mount Non-public
9 Separates the file system.
horcexport
10 Public Mount PSUS SSUS – –
11 pairresync
Resynchronizes the pairs.
12 Public Mount COPY COPY – –
13 pairvolchk
At this point, the pair status is COPY.
14 Executes pairvolchk several times.
Public Mount COPY COPY – –
15 pairvolchk
When the pair status changes to PAIR, the resynchronization of the pair is complete.
16 Public Mount PAIR PAIR – –
#: If a differential-data snapshot is made available to a shared file system, and if NFS shares or CIFS shares are created for the differential-data snapshot, before executing the syncumount command you need to use the nfsdelete command or the cifsdelete command to release the NFS shares or the CIFS shares for the differential-data snapshot.
Restoring Pairs
Commands for Restoring Pairs
When restoring a pair, the following commands are used.
Table A-15 Commands for Restoring Pairs (HNAS F)
No. Command Description
1 sudo nfsdelete -d shared-directory {-a | -H Host}
Delete an NFS share.
2 sudo nfscreate -d shared-directory -H Host
Create an NFS share.
3 sudo cifsdelete -x CIFS-share-name Delete a CIFS share.
4 sudo cifscreate -x CIFS-share-name -d shared-directory
Create a CIFS share.
5 sudo fsumount file-system-name Un-mount a file system.
6 sudo fsmount {-r | -w} file-system-name
Mount a file system.
7 sudo horcexport -f file-system-name Separate a file system.
8 sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
When LVM is not used and the file system is not tiered, the command connects the file system to the node.
9 sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
When LVM is not used and the file system is tiered, the command connects the file system to the node.
10 sudo horcvmimport -f copy-destination-file-system-name -d device-file-number [,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is not tiered, the command connects the file system to the node.
11 sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number [,device-file-number...] --tier2 device-file-number [,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is tiered, the command connects the file system to the node.
12 sudo syncumount mount-point-name
Un-mounts the differential-data snapshot.
13 sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
Mounts the differential-data snapshot.
Table A-16 Commands for Restoring Pairs (CCI)
No. Command Description
1 sudo pairsplit {-g group-name | -d volume-name}
Splits a volume pair.
2 sudo pairresync {-g group-name | -d volume-name} -restore
Resynchronizes the P-VOL and S-VOL.
3 sudo pairvolchk {-g group-name | -d volume-name}
Checks the pair volume status.
Procedure for Restoring Pairs
A pair is restored with the volume status PSUS/SSUS by using the following procedure. In the following tables, sudo, the options of the HNAS F commands, and the options of the CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not used:
Table A-17 Procedure for Restoring Pairs (LVM is not used)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the P-VOL site.
2 Public Mount PSUS SSUS Mount Public
3 nfsdelete/cifsdelete
Delete NFS/CIFS shares
4 Non-public Mount PSUS SSUS Mount Public
5 fsumount Un-mount
6 Non-public Un-mount PSUS SSUS Mount Public
7 horcexport
Separate file system
8 – – PSUS SSUS Mount Public
9 Stop a program that accesses the file system at the S-VOL site.
10 – – PSUS SSUS Mount Public
11 Delete NFS/CIFS shares
nfsdelete/cifsdelete
12 – – PSUS SSUS Mount Non-public
13 Un-mount fsumount
14 – – PSUS SSUS Un-mount Non-public
15 Separate file system
horcexport
16 – – PSUS SSUS – –
17 pairresync -restore
Restores a pair
18 – – RCPY RCPY – –
19 pairvolchk
At this point the pair status is RCPY
20 Execute pairvolchk several times
– – RCPY RCPY – –
21 pairvolchk
When the pair status changes to PAIR, restoring is completed
22 – – PAIR PAIR – –
23 pairsplit
Splits a volume pair
24 – – PSUS COPY – –
25 pairvolchk
At this point the pair status is COPY
26 Execute pairvolchk several times
– – PSUS COPY – –
27 pairvolchk
When the pair status changes to PSUS, splitting is completed
28 – – PSUS SSUS – –
29 horcimport
Connect the file system
30 Non-public Un-mount PSUS SSUS – –
31 fsmount Mount
32 Non-public Mount PSUS SSUS – –
33 nfscreate/cifscreate
Share
34 Public Mount PSUS SSUS – –
35 Start a program that accesses the file system at the P-VOL site.
36 Connect the file system
horcimport
37 Public Mount PSUS SSUS Un-mount Non-public
38 Mount fsmount
39 Public Mount PSUS SSUS Mount Non-public
40 Share nfscreate/cifscreate
41 Public Mount PSUS SSUS Mount Public
42 Restart a program that accesses the file system at the S-VOL site.
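The restore flow above can be sketched as a script. A restore copies the S-VOL back over the P-VOL, so both sites are unshared, un-mounted, and separated first, and the pair is split again afterwards before either side is reconnected. Names (fs01, fs01_bk, vg01, the device-file numbers) are assumptions, and the real commands are stubbed so the sketch runs stand-alone; the stub's simulated status is advanced by hand to mimic the array.

```shell
PAIR_STATUS=PAIR                  # simulated status reported by the stub below
pairvolchk() { echo "$PAIR_STATUS"; }
horcexport() { echo "horcexport $*"; }
pairresync() { echo "pairresync $*"; }
pairsplit()  { echo "pairsplit $*"; }
horcimport() { echo "horcimport $*"; }

horcexport -f fs01                # step 7: separate the P-VOL file system
horcexport -f fs01_bk             # step 15: separate the S-VOL file system
pairresync -g vg01 -restore       # step 17: copy S-VOL back to P-VOL (RCPY)
until pairvolchk -g vg01 | grep -q PAIR; do   # steps 19-21: wait RCPY -> PAIR
  sleep 60
done
pairsplit -g vg01                 # step 23: split the pair again
PAIR_STATUS=PSUS                  # simulate the array finishing the split
until pairvolchk -g vg01 | grep -q PSUS; do   # steps 25-27: wait for PSUS
  sleep 60
done
horcimport -f fs01 -d 11          # step 29: reconnect the P-VOL file system
horcimport -f fs01_bk -d 12       # step 36: reconnect the S-VOL file system
```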
When LVM is used and a differential-data snapshot is not made available in a shared file system:

Table A-18 Procedure for Restoring Pairs (LVM is used and a differential-data snapshot is not made available in a shared file system)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the P-VOL site.
2 Public Mount PSUS SSUS Mount Public
3 syncumount
When using the file snapshot functionality
4 Public Mount PSUS SSUS Mount Public
5 nfsdelete/cifsdelete
Delete NFS/CIFS shares#
6 Non-public Mount PSUS SSUS Mount Public
7 fsumount Un-mount
8 Non-public Un-mount PSUS SSUS Mount Public
9 horcexport
Separate file system
10 – – PSUS SSUS Mount Public
11 Stop a program that accesses the file system at the S-VOL site.
12 – – PSUS SSUS Mount Public
13 When using the file snapshot functionality
syncumount
14 – – PSUS SSUS Mount Public
15 Delete NFS/CIFS shares
nfsdelete/cifsdelete
16 – – PSUS SSUS Mount Non-public
17 Un-mount fsumount
18 – – PSUS SSUS Un-mount Non-public
19 Separate file system
horcexport
20 – – PSUS SSUS – –
21 pairresync -restore
Restores a pair
22 – – RCPY RCPY – –
23 pairvolchk
At this point the pair status is RCPY
24 Execute pairvolchk several times
– – RCPY RCPY – –
25 pairvolchk
When the pair status changes to PAIR, restoring is completed
26 – – PAIR PAIR – –
27 pairsplit
Splits a volume pair
28 – – PSUS COPY – –
29 pairvolchk
At this point the pair status is COPY
30 Execute pairvolchk several times
– – PSUS COPY – –
31 pairvolchk
When the pair status changes to PSUS, splitting is completed
32 – – PSUS SSUS – –
33 horcvmimport
Connect the file system
34 Non-public Un-mount PSUS SSUS – –
35 fsmount Mount
36 Non-public Mount PSUS SSUS – –
37 nfscreate/cifscreate
Share
38 Public Mount PSUS SSUS – –
39 syncmount
When using the file snapshot functionality
40 Public Mount PSUS SSUS – –
41 Start a program that accesses the file system at the P-VOL site.
42 Connect the file system
horcvmimport
43 Public Mount PSUS SSUS Un-mount Non-public
44 Mount fsmount
45 Public Mount PSUS SSUS Mount Non-public
46 Share nfscreate/cifscreate
47 Public Mount PSUS SSUS Mount Public
48 When using the file snapshot functionality
syncmount
49 Public Mount PSUS SSUS Mount Public
50 Restart a program that accesses the file system at the S-VOL site.
#: In a file system targeted by the snapshot functionality, un-mount the differential-data snapshot by using the syncumount command before executing the nfsdelete or cifsdelete command.
When LVM is used and a differential-data snapshot is made available in a shared file system:

Table A-19 Procedure for Restoring Pairs (LVM is used, the differential-data snapshot is made available)
No. | P-VOL site: Command, Processing, NFS/CIFS, FS, P-VOL status | S-VOL site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Stops programs that access file systems at the P-VOL site.
2 Public Mount PSUS SSUS Mount Public
3 syncumount
4 Public Mount PSUS SSUS Mount Public
5 nfsdelete/cifsdelete
Deletes NFS/CIFS shares.#
6 Non-public Mount PSUS SSUS Mount Public
7 fsumount Un-mount
8 Non-public Un-mount PSUS SSUS Mount Public
9 horcexport
Separates the file system.
10 – – PSUS SSUS Mount Public
11 Stops programs that access file systems at the S-VOL site.
12 – – PSUS SSUS Mount Public
13 syncumount
14 – – PSUS SSUS Mount Public
15 Deletes NFS/CIFS shares.#
nfsdelete/cifsdelete
16 – – PSUS SSUS Mount Non-public
17 Un-mount fsumount
18 – – PSUS SSUS Un-mount Non-public
19 Separates the file system.
horcexport
20 – – PSUS SSUS – –
21 pairresync -restore
Restores a pair.
22 – – RCPY RCPY – –
23 pairvolchk
At this point, the pair status is RCPY.
24 Executes pairvolchk several times.
– – RCPY RCPY – –
25 pairvolchk
When the pair status changes to PAIR, the restoration is complete.
26 – – PAIR PAIR – –
27 pairsplit
Splits a volume pair.
28 – – PSUS COPY – –
29 pairvolchk
At this point, the pair status is COPY.
30 Executes pairvolchk several times.
– – PSUS COPY – –
31 pairvolchk
When the pair status changes to PSUS, the split operation is complete.
32 – – PSUS SSUS – –
33 horcvmimport
Connects the file system.
34 Non-public Un-mount PSUS SSUS – –
35 fsmount Mount
36 Non-public Mount PSUS SSUS – –
37 nfscreate/cifscreate
Share
38 Public Mount PSUS SSUS – –
39 syncmount
40 Public Mount PSUS SSUS – –
41 Starts programs that access file systems at the P-VOL site.
42 Connect the file system
horcvmimport
43 Public Mount PSUS SSUS Un-mount Non-public
44 Mount fsmount
45 Public Mount PSUS SSUS Mount Non-public
46 Share nfscreate/cifscreate
47 Public Mount PSUS SSUS Mount Public
48 syncmount
49 Public Mount PSUS SSUS Mount Public
50 Restarts programs that access file systems at the S-VOL site.
# If a differential-data snapshot is made available to a shared file system, and if NFS shares or CIFS shares are created for the differential-data snapshot, you need to use the nfsdelete command or the cifsdelete command to release the NFS shares or the CIFS shares for the differential-data snapshot before executing the syncumount command.
Deleting Pairs
Commands for Deleting Pairs
When deleting a pair, the following commands are used.
Table A-20 Commands for Deleting Pairs (HNAS F)
No. Command Description
1 sudo nfsdelete -d shared-directory {-a | -H Host}
Delete an NFS share.
2 sudo nfscreate -d shared-directory -H Host
Create an NFS share.
3 sudo cifsdelete -x CIFS-share-name Delete a CIFS share.
4 sudo cifscreate -x CIFS-share-name -d shared-directory
Create a CIFS share.
5 sudo fsumount file-system-name Un-mount a file system.
6 sudo fsmount {-r | -w} file-system-name
Mount a file system.
7 sudo fsdelete file-system-name Delete a file system.
8 sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
When LVM is not used and the file system is not tiered, the command connects the file system to the node.
9 sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
When LVM is not used and the file system is tiered, the command connects the file system to the node.
10 sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is not tiered, the command connects the file system to the node.
11 sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number...] --tier2 device-file-number[,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is tiered, the command connects the file system to the node.
12 sudo horcfreeze -f copy-source-file-system-name
Suppresses operations on the P-VOL and stops access from clients.
13 sudo horcunfreeze -f copy-source-file-system-name
Restarts operations on the P-VOL and access from clients.
14 sudo syncumount mount-point-name Un-mounts the differential-data snapshot.
15 sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
Mounts the differential-data snapshot.
16 sudo syncstop file-system-name Releases the differential-data storage device.
17 sudo horcvmdelete -d device-file-number[,device-file-number...]
Releases device files.
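To make the ordering of the HNAS F commands in Table A-20 concrete, the following dry-run sketch echoes a typical share-release sequence (delete the NFS and CIFS shares, unmount, then delete the file system). The file-system name, share directory, and CIFS share name are illustrative assumptions, not real objects; on an actual node you would replace the echo wrapper with real execution.

```shell
# Dry-run sketch of the share-release order from Table A-20.
# fs01, /mnt/fs01, and fs01_share are placeholder names (assumptions).
run() { echo "sudo $*"; }   # on a real node: run() { sudo "$@"; }

delete_sequence() {
  fs=$1
  run nfsdelete -d "/mnt/$fs" -a     # delete the NFS share first
  run cifsdelete -x "${fs}_share"    # then the CIFS share
  run fsumount "$fs"                 # unmount the file system
  run fsdelete "$fs"                 # finally delete the file system
}

delete_sequence fs01
```

Because the wrapper only echoes, the sketch can be inspected safely before any command is issued against the node.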
Table A-21 Commands for Deleting Pairs (CCI)
No. Command Description
1 sudo pairsplit {-g group-name | -d volume-name} -S
Deletes a volume pair.
2 sudo pairsplit {-g group-name | -d volume-name}
Splits a volume pair.
3 sudo pairvolchk {-g group-name | -d volume-name}
Checks the pair volume status.
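Several of the procedures below say "Execute pairvolchk several times" until a target status such as PSUS or SMPL appears. That polling step can be captured as a small helper; the function below is a generic sketch, not part of the product CLI. The status-printing command is passed in as an argument, and in practice would be a wrapper that runs sudo pairvolchk and prints only the status word.

```shell
# Poll a status-printing command until it outputs the target pair status
# (e.g. SMPL after "pairsplit -S", PSUS after "pairsplit").
wait_for_status() {
  target=$1; shift                   # desired status, then the command
  tries=0
  while [ "$tries" -lt 30 ]; do      # bound the wait at 30 attempts
    [ "$("$@")" = "$target" ] && return 0
    tries=$((tries + 1))
    sleep 1                          # pause between checks
  done
  return 1                           # target status never observed
}
```

Example use: `wait_for_status SMPL my_pairvolchk_status vg01`, where `my_pairvolchk_status` is a hypothetical wrapper that extracts the status word from the pairvolchk output.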
Procedure for Deleting Pairs when the Pair Status is PAIR (S-VOL Continuously Used-1)
The following procedure splits and then deletes a pair whose status is PAIR when the S-VOL will continue to be used after the pair is deleted. In the following table, the sudo command and the options of the HNAS F commands and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table A-22 Procedure for Deleting Pairs when the Pair Status is PAIR (S-VOLContinuously Used-1)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 horcfreeze
Suppresses the operations
3 Public Mount PAIR PAIR – –
4 pairsplit
Splits a volume pair
5 Public Mount COPY COPY – –
6 pairvolchk
The pair status at this time is COPY.
7 Execute pairvolchk several times
Public Mount COPY COPY – –
8 pairvolchk
The pair status changes to PSUS and the splitting operation completes.
9 Public Mount PSUS SSUS – –
10 horcunfreeze
Restarts the operations
11 Public Mount PSUS SSUS – –
12 pairsplit -S
Deletes a volume pair
13 Public Mount COPY COPY – –
14 pairvolchk
At this point, the pair status is COPY.
15 Execute pairvolchk several times
Public Mount COPY COPY – –
16 pairvolchk
When the pair status changes to SMPL, the deletion is complete.
17 Public Mount SMPL SMPL – –
18 Connect the file system#1
horcvmimport (when LVM is not used, horcimport)
19 Public Mount SMPL SMPL Un-mount Non-public
20 Mount fsmount
21 Public Mount SMPL SMPL Mount Non-public
22 Share#2 nfscreate/cifscreate
23 Public Mount SMPL SMPL Mount Public
24 When using the file snapshot functionality
syncmount
25 Public Mount SMPL SMPL Mount Public
26 Start a program that accesses the file system at the S-VOL site.
#1: Specify a file system name different from that at the P-VOL site.
#2: Specify a public directory name and CIFS share name that differ from those for the P-VOL.
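The flow in Table A-22 can be summarized as a single command sequence. The sketch below only echoes the commands in order; the group name, file-system names, device-file number, and host are placeholder assumptions, and the pairvolchk lines stand in for the repeated status polling described in the table.

```shell
# Dry-run of the Table A-22 order: freeze, split, unfreeze, delete,
# then attach and share the S-VOL copy. vg01/fs01/host01 are placeholders.
run() { echo "sudo $*"; }

delete_pair_keep_svol() {
  g=$1; fs=$2
  run horcfreeze -f "$fs"                 # suppress operations on the P-VOL
  run pairsplit -g "$g"                   # split the volume pair
  run pairvolchk -g "$g"                  # repeat until the status is PSUS
  run horcunfreeze -f "$fs"               # resume operations on the P-VOL
  run pairsplit -g "$g" -S                # delete the volume pair
  run pairvolchk -g "$g"                  # repeat until the status is SMPL
  run horcvmimport -f "${fs}_copy" -d 100 # connect the copied file system
  run fsmount -w "${fs}_copy"             # mount it
  run nfscreate -d "/mnt/${fs}_copy" -H host01  # share it
}

delete_pair_keep_svol vg01 fs01
```

The copy-destination file system must be given a name different from the P-VOL file system, as footnote #1 above requires.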
Procedure for Deleting Pairs when the Pair Status is PSUS (S-VOL Continuously Used-2)
The following procedure deletes a pair whose status is PSUS/SSUS when the S-VOL will continue to be used after the pair is deleted. In the following table, the sudo command and the options of the HNAS F commands and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table A-23 Procedure for Deleting Pairs When the Pair Status Is PSUS (S-VOL Continuously Used-2)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Public Mount PSUS PSUS Mount Public
2 pairsplit -S
Deletes a volume pair
3 Public Mount SMPL SMPL Mount Public
4 pairvolchk
When the pair status changes to SMPL, the deletion is complete.
5 Public Mount SMPL SMPL Mount Public
Procedure for Deleting Pairs when the Pair Status is PAIR (S-VOL Not Used-1)
The following procedure deletes a pair whose status is PAIR when the S-VOL will not be used after the pair is deleted. In the following table, the sudo command and the options of the HNAS F commands and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table A-24 Procedure for Deleting Pairs When the Pair Status Is PAIR (S-VOL Not Used-1)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 pairsplit -S
Deletes a volume pair
3 Public Mount COPY COPY – –
4 pairvolchk
At this point, the pair status is COPY.
5 Execute pairvolchk several times
Public Mount COPY COPY – –
6 pairvolchk
When the pair status changes to SMPL, the deletion is complete.
7 Public Mount SMPL SMPL – –
8 Releases device files
horcvmdelete
9 Public Mount SMPL SMPL – –
Procedure for Deleting Pairs (S-VOL Not Used-2) when the Pair Status is PSUS
The following procedure deletes a pair whose status is PSUS/SSUS when the S-VOL will not be used after the pair is deleted. In the following table, the sudo command and the options of the HNAS F commands and CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not used:
Table A-25 Procedure for Deleting Pairs When the Pair Status Is PSUS while LVM is Not Used (S-VOL Not Used-2)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the S-VOL site.
2 Public Mount PSUS PSUS Mount Public
3 Delete NFS/CIFS shares
nfsdelete/cifsdelete
4 Public Mount PSUS PSUS Mount Non-public
5 Un-mount fsumount
6 Public Mount PSUS PSUS Un-mount Non-public
7 Delete file system
fsdelete
8 Public Mount PSUS PSUS – –
9 pairsplit -S
Deletes a volume pair
10 Public Mount SMPL SMPL – –
11 pairvolchk
When the pair status changes to SMPL, the deletion is complete.
12 Public Mount SMPL SMPL – –
When LVM is used and a differential-data snapshot is not made available in a shared file system:
Table A-26 Procedure for Deleting Pairs When the Pair Status Is PSUS while LVM is Used (S-VOL Not Used-2)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Stop a program that accesses the file system at the S-VOL site.
2 Public Mount PSUS PSUS Mount Public
3 When using the file snapshot functionality
syncumount
4 Public Mount PSUS PSUS Mount Public
5 When using the file snapshot functionality
syncstop
6 Public Mount PSUS PSUS Mount Public
7 Delete NFS/CIFS shares#
nfsdelete/cifsdelete
8 Public Mount PSUS PSUS Mount Non-public
9 Un-mount fsumount
10 Public Mount PSUS PSUS Un-mount Non-public
11 Delete file system
fsdelete
12 Public Mount PSUS PSUS – –
13 pairsplit -S
Deletes a volume pair
14 Public Mount SMPL SMPL – –
15 pairvolchk
When the pair status changes to SMPL, the deletion is complete.
16 Public Mount SMPL SMPL – –
# In the target file system of the file snapshot functionality, unmount the differential-data snapshot using the syncumount command before executing the nfsdelete or cifsdelete command, and release the device that stores the differential data using the syncstop command.
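The ordering constraint in the footnote above (unmount the differential-data snapshot, release its storage device, and only then remove shares and the file system) can be sketched as a dry run. The mount-point, share, and file-system names are placeholder assumptions, and the echo wrapper keeps the sketch inert.

```shell
# Dry-run of the snapshot teardown order from Table A-26.
# fs01, /mnt/fs01_snap, and fs01_share are placeholder names.
run() { echo "sudo $*"; }

teardown_snapshot_fs() {
  fs=$1
  run syncumount "/mnt/${fs}_snap"  # unmount the differential-data snapshot first
  run syncstop "$fs"                # release the differential-data storage device
  run nfsdelete -d "/mnt/$fs" -a    # then delete the shares
  run cifsdelete -x "${fs}_share"
  run fsumount "$fs"                # unmount the file system
  run fsdelete "$fs"                # and delete it
}

teardown_snapshot_fs fs01
```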
When LVM is used and a differential-data snapshot is made available in a shared file system:
Table A-27 Procedure for Deleting Pairs When the Pair Status Is PSUS while LVM is Used (the S-VOL Is Not Used)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Stops programs that access file systems at the S-VOL site.
2 Public Mount PSUS PSUS Mount Public
3 syncumount
4 Public Mount PSUS PSUS Mount Public
Deletes NFS/CIFS shares.#
nfsdelete/cifsdelete
6 Public Mount PSUS PSUS Mount Non-public
7 syncstop
8 Public Mount PSUS PSUS Mount Non-public
9 Un-mount fsumount
10 Public Mount PSUS PSUS Un-mount Non-public
11 Deletes the file system.
fsdelete
12 Public Mount PSUS PSUS – –
13 pairsplit -S
Deletes a volume pair.
14 Public Mount SMPL SMPL – –
15 pairvolchk
When the pair status changes to SMPL, the deletion operation is complete.
16 Public Mount SMPL SMPL – –
# If a differential-data snapshot is made available to a shared file system, and if NFS shares or CIFS shares are created for the differential-data snapshot, you need to use the nfsdelete command or the cifsdelete command to release the NFS shares or the CIFS shares for the differential-data snapshot before executing the syncumount command.
Pair Recovery from Failures on the HNAS F
Commands for Pair Recovery
When recovering a pair from failures, the following commands are used.
Table A-28 Commands for Recovery (HNAS F)
No. Command Description
1 sudo nfsdelete -d shared-directory {-a | -H Host}
Delete an NFS share.
2 sudo nfscreate -d shared-directory -H Host
Create an NFS share.
3 sudo cifsdelete -x CIFS-share-name [-r resource-group-name]
Delete a CIFS share.
4 sudo cifscreate -x CIFS-share-name -d shared-directory
Create a CIFS share.
5 sudo fsumount file-system-name Un-mount a file system.
6 sudo fsmount {-r | -w} file-system-name
Mount a file system.
7 sudo fscreate file-system-name Create a file system.
8 sudo fsdelete file-system-name Delete a file system.
9 sudo lumapctl -t m --on Change the mode of allocation of a user LUN to the maintenance mode.
10 sudo lumapctl -t m --off Change the mode of allocation of a user LUN to the normal management mode.
11 sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
When LVM is not used and the file system is not tiered, the command connects the file system to the node.
12 sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
When LVM is not used and the file system is tiered, the command connects the file system to the node.
13 sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is not tiered, the command connects the file system to the node.
14 sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number...] --tier2 device-file-number[,device-file-number...] [-r resource-group-name]
When LVM is used and the file system is tiered, the command connects the file system to the node.
15 sudo syncumount mount-point-name Un-mounts the differential-data snapshot.
16 sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
Mounts the differential-data snapshot.
17 sudo syncstop file-system-name Releases the differential-data storage device.
18 sudo horcvmdefine -d device-file-number[,device-file-number...]
Reserves device files.
19 sudo horcfreeze -f copy-source-file-system-name
Suppresses operations on the P-VOL and stops access from clients.
20 sudo horcunfreeze -f copy-source-file-system-name
Restarts operations on the P-VOL and access from clients.
21 sudo horcexport -f file-system-name Separates a file system.
Table A-29 Commands for Recovery (CCI)
No. Command Description
1 sudo paircreate {-g group-name | -d volume-name} -vl
Creates a volume pair.
2 sudo pairsplit {-g group-name | -d volume-name} -S
Deletes a volume pair.
3 sudo pairvolchk {-g group-name | -d volume-name}
Checks the pair volume status.
4 sudo pairdisplay {-g group-name | -d volume-name} -fc
Checks the pair status.
5 sudo horcmstart.sh Starts CCI.
Note: To continue operation at the failover destination when the system is managed with nodes, execute the command with the remote host specified by the virtual IP address. When you connect the file system, execute the command with the -r option added, specifying a resource group name.
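After the device files have been recovered and CCI restarted, the recovery procedures re-create the pair using the CCI commands from Table A-29. The dry-run sketch below shows that ordering only; the group name and device-file number are placeholder assumptions, and the pairvolchk line stands in for the repeated polling described in the tables.

```shell
# Dry-run of the pair re-creation order used in the recovery procedures.
# vg01 and device-file number 100 are placeholders.
run() { echo "sudo $*"; }

recreate_pair() {
  g=$1
  run horcmstart.sh                 # start CCI
  run horcvmdefine -d 100           # reserve the S-VOL device file
  run paircreate -g "$g" -vl        # create the volume pair
  run pairdisplay -g "$g" -fc       # confirm the pair was created
  run pairvolchk -g "$g"            # repeat until the status is PAIR
}

recreate_pair vg01
```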
Recovery Procedure from Failures
When a pair has been suspended (PSUE) due to a failure, recover it by using the following procedure. In the following table, the sudo command and the options of the HNAS F commands and CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not Used
Table A-30 Procedure for Recovering Pairs when Using a Node (When LVM Is Not Used)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 lumapctl -t m --on
Maintenance mode
2 Public Mount PSUE PSUE Mount Public
3 pairsplit -S
Deletes a volume pair
4 Public Mount SMPL SMPL Mount Public
5 nfsdelete/cifsdelete
Delete NFS/CIFS shares
6 Non-public Mount SMPL SMPL Mount Public
7 fsumount Un-mount
8 Non-public Un-mount SMPL SMPL Mount Public
9 fsdelete Delete file system
10 – – SMPL SMPL Mount Public
11 Failover from the failed node (node #1) to the normal node (node #2)#
12 – – SMPL SMPL Mount Public
13 Stop the node #1
14 – – SMPL SMPL Mount Public
15 Restart the OS #1
16 – – SMPL SMPL Mount Public
17 Start the node #1
18 – – SMPL SMPL Mount Public
19 Failback the node #1
20 – – SMPL SMPL Mount Public
21 Failover from the node #2 to the node #1
22 – – SMPL SMPL Mount Public
23 Stop the node #2
24 – – SMPL SMPL Mount Public
25 Restart the OS #2
26 – – SMPL SMPL Mount Public
27 Start the node #2
28 – – SMPL SMPL Mount Public
29 Failback the node #2
30 – – SMPL SMPL Mount Public
31 Device file recovery from the failure (This operation should be done by service personnel)
32 – – SMPL SMPL Mount Public
33 horcmstart.sh
Starts CCI
34 – – SMPL SMPL Mount Public
35 Starts CCI horcmstart.sh
36 – – SMPL SMPL Mount Public
37 Stop a program that accesses the filesystem at S-VOL site.
38 – – SMPL SMPL Mount Public
39 Delete NFS/CIFS shares
nfsdelete/cifsdelete
40 – – SMPL SMPL Mount Non-public
41 Un-mount fsumount
42 – – SMPL SMPL Un-mount Non-public
43 horcvmdefine
Reserves S-VOL
44 – – SMPL SMPL Un-mount Non-public
45 Create a volume pair
paircreate
46 – – COPY COPY Un-mount Non-public
47 Confirm that a volume pair is created
pairdisplay
48 – – COPY COPY Un-mount Non-public
49 Check the progress of copying
pairvolchk
50 – – COPY COPY Un-mount Non-public
Execute pairvolchk several times
51 When the pair status changes to PAIR, pair creation is complete
pairvolchk
52 – – PAIR PAIR Un-mount Non-public
53 Splitting a volume pair
pairsplit
54 – – COPY COPY Un-mount Non-public
55 When the pair status changes to PSUS, pair splitting is complete
pairvolchk
56 – – SSUS PSUS Un-mount Non-public
57 horcimport
Connect the file system
58 Non-public Un-mount SSUS PSUS Un-mount Non-public
59 Deleting a volume pair
pairsplit -S
60 Non-public Un-mount COPY COPY Un-mount Non-public
61 Check the progress of copying
pairvolchk
62 Non-public Un-mount COPY COPY Un-mount Non-public
Execute pairvolchk several times
63 When the pair status changes to SMPL, confirm that the pair deletion is complete
pairvolchk
64 Non-public Un-mount SMPL SMPL Un-mount Non-public
65 fsmount Mount
66 Non-public Mount SMPL SMPL Un-mount Non-public
67 nfscreate/cifscreate
Share
68 Public Mount SMPL SMPL Un-mount Non-public
69 Separates file system
fsdelete
70 Public Mount SMPL SMPL – –
71 paircreate
Creates a volume pair
72 Public Mount COPY COPY – –
73 pairvolchk
Check the progress of copying
74 Execute pairvolchk several times
Public Mount COPY COPY – –
75 pairvolchk
When the pair status changes to PAIR, pair creation is complete
76 Public Mount PAIR PAIR – –
77 lumapctl -t m --off
Normal management mode
78 Public Mount PAIR PAIR – –
# For example, if the file system is blocked because of a failure that occurred on the drive, you need to reboot the OS to release the blocked status that is recognized by the OS.
Table A-31 Procedure for Recovering Pairs when Using a Virtual Server (When LVM Is Not Used)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Public Mount PSUE PSUE Mount Public
2 pairsplit -S
Deletes a volume pair
3 Public Mount SMPL SMPL Mount Public
4 nfsdelete/cifsdelete
Delete NFS/CIFS shares
5 Non-public Mount SMPL SMPL Mount Public
6 fsumount Un-mount
7 Non-public Un-mount SMPL SMPL Mount Public
8 fsdelete Delete file system
9 – – SMPL SMPL Mount Public
10 Fails over any virtual servers running on the same node as the virtual server that failed.
11 – – SMPL SMPL Mount Public
12 Acquires the dump
13 – – SMPL SMPL Mount Public
14 Fails back the virtual servers
15 – – SMPL SMPL Mount Public
16 Recovers the device file from the error (to be performed by service personnel).
17 – – SMPL SMPL Mount Public
18 horcmstart.sh
Starts CCI
19 – – SMPL SMPL Mount Public
20 Starts CCI horcmstart.sh
21 – – SMPL SMPL Mount Public
22 Stops the programs that access the file system at the S-VOL site.
23 – – SMPL SMPL Mount Public
24 Delete NFS/CIFS shares
nfsdelete/cifsdelete
25 – – SMPL SMPL Mount Non-public
26 Un-mount fsumount
27 – – SMPL SMPL Un-mount Non-public
28 horcvmdefine
Reserves S-VOL
29 – – SMPL SMPL Un-mount Non-public
30 Create a volume pair
paircreate
31 – – COPY COPY Un-mount Non-public
32 Confirm that a volume pair is created
pairdisplay
33 – – COPY COPY Un-mount Non-public
34 Check the progress of copying
pairvolchk
35 – – COPY COPY Un-mount Non-public
Execute pairvolchk several times
36 When the pair status changes to PAIR, pair creation is complete
pairvolchk
37 – – PAIR PAIR Un-mount Non-public
38 Splitting a volume pair
pairsplit
39 – – COPY COPY Un-mount Non-public
40 When the pair status changes to PSUS, pair splitting is complete
pairvolchk
41 – – SSUS PSUS Un-mount Non-public
42 horcimport Connect the file system
43 Non-public Un-mount SSUS PSUS Un-mount Non-public
44 Deleting a volume pair
pairsplit -S
45 Non-public Un-mount COPY COPY Un-mount Non-public
46 Check the progress of copying
pairvolchk
47 Non-public Un-mount COPY COPY Un-mount Non-public
Execute pairvolchk several times
48 When the pair status changes to SMPL, confirm that the pair deletion is complete
pairvolchk
49 Non-public Un-mount SMPL SMPL Un-mount Non-public
50 fsmount Mount
51 Non-public Mount SMPL SMPL Un-mount Non-public
52 nfscreate/cifscreate
Share
53 Public Mount SMPL SMPL Un-mount Non-public
54 Separates file system
horcexport
55 Public Mount SMPL SMPL – –
56 paircreate Creates a volume pair
57 Public Mount COPY COPY – –
58 pairvolchk Check the progress of copying
59 Execute pairvolchk several times
Public Mount COPY COPY – –
60 pairvolchk When the pair status changes to PAIR, pair creation is complete
61 Public Mount PAIR PAIR – –
When LVM is Used
Table A-32 Recovery Procedure from Failures When Using a Node (LVM is Used)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 lumapctl -t m --on
Maintenance mode
2 Public Mount PSUE PSUE Mount Public
3 pairsplit -S
Deletes a volume pair
4 Public Mount SMPL SMPL Mount Public
5 syncumount
When using the file snapshot functionality
6 Public Mount SMPL SMPL Mount Public
7 syncstop When using the file snapshot functionality
8 Public Mount SMPL SMPL Mount Public
9 nfsdelete/cifsdelete
Delete NFS/CIFS shares#1
10 Non-public Mount SMPL SMPL Mount Public
11 fsumount Un-mount
12 Non-public Un-mount SMPL SMPL Mount Public
13 fsdelete Delete file system
14 – – SMPL SMPL Mount Public
15 Failover from the failed node (node #1) to the normal node (node #2)#2
16 – – SMPL SMPL Mount Public
17 Stop the node #1
18 – – SMPL SMPL Mount Public
19 Restart the OS #1
20 – – SMPL SMPL Mount Public
21 Start the node #1
22 – – SMPL SMPL Mount Public
23 Failback the node #1
24 – – SMPL SMPL Mount Public
25 Failover from the node #2 to the node #1
26 – – SMPL SMPL Mount Public
27 Stop the node #2
28 – – SMPL SMPL Mount Public
29 Restart the OS #2
30 – – SMPL SMPL Mount Public
31 Start the node #2
32 – – SMPL SMPL Mount Public
33 Failback the node #2
34 – – SMPL SMPL Mount Public
35 Device file recovery from the failure (This operation should be done by service personnel)
36 – – SMPL SMPL Mount Non-public
37 horcmstart.sh
Starts CCI
38 – – SMPL SMPL Mount Public
39 Starts CCI horcmstart.sh
40 – – SMPL SMPL Mount Public
41 Stop a program that accesses the filesystem at S-VOL site.
42 – – SMPL SMPL Mount Public
43 When using the file snapshot functionality
syncumount
44 – – SMPL SMPL Mount Public
45 Delete NFS/CIFS shares
nfsdelete/cifsdelete
46 – – SMPL SMPL Mount Non-public
47 Un-mount fsumount
48 – – SMPL SMPL Un-mount Non-public
49 horcvmdefine
Reserves S-VOL
50 – – SMPL SMPL Un-mount Non-public
51 Create a volume pair
paircreate
52 – – COPY COPY Un-mount Non-public
53 Confirm that a volume pair is created
pairdisplay
54 – – COPY COPY Un-mount Non-public
55 Check the progress of copying
pairvolchk
56 – – COPY COPY Un-mount Non-public
Execute pairvolchk several times
57 When the pair status changes to PAIR, pair creation is complete
pairvolchk
58 – – PAIR PAIR Un-mount Non-public
59 Suppresses the operations
horcfreeze
60 – – PAIR PAIR Un-mount Non-public
61 Splitting a volume pair
pairsplit
62 – – COPY COPY Un-mount Non-public
63 When the pair status changes to PSUS, pair splitting is complete
pairvolchk
64 – – SSUS PSUS Un-mount Non-public
65 Restarts operations
horcunfreeze
66 – – SSUS PSUS Un-mount Non-public
67 horcvmimport
Connect the file system
68 Non-public Un-mount PAIR PAIR Un-mount Non-public
69 Deleting a volume pair
pairsplit -S
70 Non-public Un-mount COPY COPY Un-mount Non-public
71 Check the progress of copying
pairvolchk
72 Non-public Un-mount COPY COPY – –
Execute pairvolchk several times
73 When the pair status changes to SMPL, confirm that the pair deletion is complete
pairvolchk
74 Non-public Un-mount SMPL SMPL – –
75 fsmount Mount
76 Non-public Mount SMPL SMPL Un-mount Non-public
77 nfscreate/cifscreate
Share
78 Public Mount SMPL SMPL Un-mount Non-public
79 syncmount
When using the file snapshot functionality
80 Public Mount SMPL SMPL Un-mount Non-public
81 Separates file system
horcexport
82 Public Mount SMPL SMPL – –
83 paircreate
Creates a volume pair
84 Public Mount COPY COPY – –
85 pairvolchk
Check the progress of copying
86 Execute pairvolchk several times
Public Mount COPY COPY – –
87 pairvolchk
When the pair status changes to PAIR, pair creation is complete
88 Public Mount PAIR PAIR – –
89 lumapctl -t m --off
Normal management mode
90 Public Mount PAIR PAIR – –
#1: In the target file system of the file snapshot functionality, unmount the differential-data snapshot using the syncumount command before executing the nfsdelete or cifsdelete command, and release the device that stores the differential data using the syncstop command.
#2: For example, if the file system is blocked because of a failure that occurred on the drive, you need to reboot the OS to release the blocked status that is recognized by the OS.
Table A-33 Procedure for Recovering Pairs when Using a Virtual Server (When LVM Is Used)
No. | P-VOL Site: Command, Processing, NFS/CIFS, FS | P-VOL Status | S-VOL Status | S-VOL Site: FS, NFS/CIFS, Processing, Command
1 Public Mount PSUE PSUE Mount Public
2 pairsplit -S
Deletes a volume pair
3 Public Mount SMPL SMPL Mount Public
4 syncumount When using the file snapshot functionality
5 Public Mount PSUE PSUE Mount Public
6 syncstop When using the file snapshot functionality
7 Public Mount PSUE PSUE Mount Public
8 nfsdelete/cifsdelete
Delete NFS/CIFS shares#
9 Non-public Mount SMPL SMPL Mount Public
10 fsumount Un-mount
11 Non-public Un-mount SMPL SMPL Mount Public
12 fsdelete Delete file system
13 – – SMPL SMPL Mount Public
14 Fails over the virtual servers running on the same node as the virtual server that failed.
15 – – SMPL SMPL Mount Public
16 Acquire the dump.
17 – – SMPL SMPL Mount
Public
18 Fails back the virtual servers.
19 – – SMPL SMPL Mount
Public
20 Recovers the device file from the error (tobe performed by service personnel).
21 – – SMPL SMPL Mount
Public
22 horcmstart.sh
Starts CCI
23 – – SMPL SMPL Mount
Public
24 Starts CCI horcmstart.sh
25 – – SMPL SMPL Mount
Public
26 Stop programs that access the file systemat the S-VOL site.
27 – – SMPL SMPL Mount
Public
28 When usingthe filesnapshotfunctionality
syncumount
29 – – SMPL SMPL Mount
Public
30 Delete NFS/CIFS shares
nfsdelete/cifsdelete
31 – – SMPL SMPL Mount
Non-public
32 Un-mount fsumount
33 – – SMPL SMPL Un-mount
Non-public
34 horcvmdefine
Reserves S-VOL
Details of ShadowImage Operations on the HNAS F A-51Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
No.
P-VOL Site S-VOL Site
Command Processing NFS/CIFS FS
P-VOL
Status
S-VOL
Status
FS NFS/CIFS Processing Command
35 – – SMPL SMPL Un-mount
Non-public
36 Create avolume pair
paircreate
37 – – COPY COPY Un-mount
Non-public
38 Confirm thata volumepair iscreated
pairdisplay
39 – – COPY COPY Un-mount
Non-public
40 Check theprogress ofcopying
pairvolchk
41 – – COPY COPY Un-mount
Non-public
Execute pairvolchkseveral times.
42 When a pairstatuschange toPAIR,creating pairis completed
pairvolchk
43 – – PAIR PAIR Un-mount
Non-public
44 Suppresstheoperations
horcfreeze
45 – – PAIR PAIR Un-mount
Non-public
46 Splitting avolume pair
pairsplit
47 – – COPY COPY Un-mount
Non-public
48 When a pairstatus
pairvolchk
A-52 Details of ShadowImage Operations on the HNAS FHitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
No.
P-VOL Site S-VOL Site
Command Processing NFS/CIFS FS
P-VOL
Status
S-VOL
Status
FS NFS/CIFS Processing Command
change toPSUS,splitting pairis completed
49 – – SSUS PSUS Un-mount
Non-public
50 horcvmimport
Connect thefile system
51 Non-public
Un-mount
SSUS PSUS Un-mount
Non-public
52 Deleting avolume pair
pairsplit-S
53 Non-public
Un-mount
COPY COPY Un-mount
Non-public
54 Check theprogress ofcopying
pairvolchk
55 Non-public
Un-mount
COPY COPY Non-public
Un-mount
Execute pairvolchkseveral times.
56 When a pairstatuschange toSMPL,confirm thatsplitting pairis completed
pairvolchk
57 Non-public
Un-mount
SMPL SMPL Non-public
Un-mount
58 Restartsoperations
horcunfreeze
59 Non-public
Un-mount
SMPL SMPL Non-public
Un-mount
60 fsmount Mount
61 Non-public
Mount
SMPL SMPL Un-mount
Non-public
Details of ShadowImage Operations on the HNAS F A-53Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
No.
P-VOL Site S-VOL Site
Command Processing NFS/CIFS FS
P-VOL
Status
S-VOL
Status
FS NFS/CIFS Processing Command
62 nfscreate/cifscreate
Share
63 Public Mount
SMPL SMPL Un-mount
Non-public
64 syncmount When usingthe filesnapshotfunctionality
65 Public Mount
SMPL SMPL Un-mount
Non-public
66 Separatesfile system
horcexport
67 Public Mount
SMPL SMPL - -
68 paircreate Creates avolume pair
69 Public Mount
COPY COPY - -
70 pairvolchk Check theprogress ofcopying
71 Execute pairvolchkseveral times.
Public Mount
COPY COPY – –
72 pairvolchk When a pairstatuschange toPAIR,creating pairis completed
73 Public Mount
PAIR PAIR – –
#If the target file system uses the file snapshot functionality, beforeexecuting the nfsdelete or cifsdelete command, use the syncumountcommand to unmount the differential-data snapshots, and then use thesyncstop command to release the differential-date storage device.
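The long recovery procedure above reduces to a reverse copy from the surviving S-VOL followed by re-establishing the original pair. The sketch below traces that control flow; the phase names are placeholders and every storage command is represented only by a comment, so this is a flow sketch, not a runnable recovery script for a real node.

```shell
#!/bin/sh
# Control-flow sketch of Table A-33 (phases only; storage commands stubbed).
LOG=""
step() { LOG="$LOG$1;"; }          # records each recovery phase in order

step "reverse-copy"        # paircreate at the S-VOL site (copy S-VOL -> P-VOL)
step "split"               # horcfreeze + pairsplit, wait for the PSUS status
step "import-at-pvol"      # horcvmimport connects the copied FS to the P-VOL node
step "delete-reverse-pair" # pairsplit -S, wait for the SMPL status
step "republish"           # fsmount + nfscreate/cifscreate (+ syncmount)
step "recreate-pair"       # horcexport at the S-VOL site, then paircreate P -> S

echo "$LOG"
```

Each phase corresponds to a block of numbered steps in the table; the pairvolchk polling between phases is omitted here.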
B Details of TrueCopy and Universal Replicator Operations on the HNAS F
This appendix describes operating procedures for when the TrueCopy or Universal Replicator functionality of the storage system is used in conjunction with the HNAS F.
This chapter discusses the following topics:
□ Execution of Command Operation on HNAS F
Details of TrueCopy and Universal Replicator Operations on the HNAS F B-1Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
Execution of Command Operation on HNAS F

This appendix explains how to use, on the HNAS F, the commands described in How to use TrueCopy on page 2-57 and How to use Universal Replicator on the HNAS F on page 2-93.
Note: To continue operation at the failover destination when managing the system by node, execute the command with the remote host specifying the virtual IP address. When you connect the file system, execute the command by adding the -r option and specifying a resource group name.
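As a concrete illustration of this note, an administrator might direct an HNAS F command at the node's virtual IP and name the resource group with -r. The host name, IP address, file system name, device file number, and resource group below are all hypothetical placeholders, and the command line is only printed, not executed:

```shell
#!/bin/sh
# Hypothetical example of the note above: address the failover destination
# by its virtual IP and pass -r. All identifiers here are placeholders.
VIRTUAL_IP="192.0.2.10"        # virtual IP of the failover destination node
RESOURCE_GROUP="rsgrp01"       # resource group that owns the file system

# On a real system this line would be executed; here it is only printed.
CMD="ssh nasroot@$VIRTUAL_IP sudo horcvmimport -f fs01 -d 10 -r $RESOURCE_GROUP"
echo "$CMD"
```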
Creating Pairs
Commands for Creating Pairs
When creating a pair, the following commands are used.
Table B-1 Commands for Creating Pairs (HNAS F)

| No. | Command | Description |
|----|----|----|
| 1 | sudo horcvmdefine -d device-file-number[,device-file-number...] | Reserves device files. |
Table B-2 Commands for Creating Pairs (CCI)

| No. | Command | Description |
|----|----|----|
| 1 | sudo paircreate {-g group-name \| -d volume-name} -f never -vl | Creates a TrueCopy pair. |
| 2 | sudo paircreate {-g group-name \| -d volume-name} -f async -vl -jp P-VOL-journal-volume-ID -js S-VOL-journal-volume-ID | Creates a Universal Replicator pair. |
| 3 | sudo pairvolchk {-g group-name \| -d volume-name} -ss | Checks the status of a volume pair. |
| 4 | sudo pairdisplay {-g group-name \| -d volume-name} -fc | Checks the pair status. Use this command to check whether the pair has been created. |
Procedure for Creating Pairs
A pair is usually created by using the following procedure. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table B-3 Procedure for Creating Pairs

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | Public / Mount | SMPL | SMPL | – / – | |
| 2 | | | | | | Reserves the S-VOL (horcvmdefine) |
| 3 | | Public / Mount | SMPL | SMPL | – / – | |
| 4 | pairdisplay (Confirms the copy target before creating a pair) | | | | | |
| 5 | | Public / Mount | SMPL | SMPL | – / – | |
| 6 | paircreate (Creates a volume pair) | | | | | |
| 7 | | Public / Mount | COPY | COPY | – / – | |
| 8 | pairdisplay (Confirms the volume pair status) | | | | | |
| 9 | | Public / Mount | COPY | COPY | – / – | |
| 10 | pairvolchk (At this point the pair status is COPY) | | | | | |
| 11 | Execute pairvolchk several times. | Public / Mount | COPY | COPY | – / – | |
| 12 | pairvolchk (When the pair status changes to PAIR, pair creation is completed) | | | | | |
| 13 | | Public / Mount | PAIR | PAIR | – / – | |
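The "execute pairvolchk several times" steps are simply a polling loop on the pair status. The runnable sketch below traces the creation sequence; the group name "TCgrp" and device file number 10 are hypothetical, and pairvolchk is replaced by a stub so the loop can actually run outside an HNAS F node:

```shell
#!/bin/sh
# Control-flow sketch of Table B-3 (hypothetical names; CCI stubbed).
pairvolchk_stub() {                   # stand-in for: sudo pairvolchk -g TCgrp -ss
  [ "$1" -lt 3 ] && echo "COPY" || echo "PAIR"
}

: "remote site: sudo horcvmdefine -d 10"             # step 2: reserve the S-VOL
: "main site: sudo paircreate -g TCgrp -f never -vl" # step 6: create the pair

polls=0
status="COPY"                         # step 10: the pair status starts as COPY
while [ "$status" != "PAIR" ]; do     # steps 11-12: poll until PAIR appears
  polls=$((polls + 1))
  status=$(pairvolchk_stub "$polls")
done
echo "pair creation completed after $polls checks: $status"
```

A real script would sleep between polls and bound the number of retries.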
Splitting Pairs
Commands for Splitting Pairs
When splitting a pair, the following commands are used.
Table B-4 Commands for Splitting Pairs (HNAS F)

| No. | Command | Description |
|----|----|----|
| 1 | sudo nfsdelete -d shared-directory {-a \| -H Host} | Deletes an NFS share. |
| 2 | sudo nfscreate -d shared-directory -H Host | Creates an NFS share. |
| 3 | sudo cifsdelete -x CIFS-share-name [-r resource-group-name] | Deletes a CIFS share. |
| 4 | sudo cifscreate -x CIFS-share-name -d shared-directory | Creates a CIFS share. |
| 5 | sudo fsumount file-system-name | Un-mounts a file system. |
| 6 | sudo fsmount {-r \| -w} file-system-name | Mounts a file system. |
| 7 | sudo horcfreeze -f copy-source-file-system-name | Suppresses operations on the P-VOL and stops access from clients. |
| 8 | sudo horcunfreeze -f copy-source-file-system-name | Restarts operations on the P-VOL and access from clients. |
| 9 | sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name] | When LVM is not used and the file system is not tiered, connects the file system to the node. |
| 10 | sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name] | When LVM is not used and the file system is tiered, connects the file system to the node. |
| 11 | sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number...] [-r resource-group-name] | When LVM is used and the file system is not tiered, connects the file system to the node. |
| 12 | sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number...] --tier2 device-file-number[,device-file-number...] [-r resource-group-name] | When LVM is used and the file system is tiered, connects the file system to the node. |
| 13 | sudo syncumount mount-point-name# | Un-mounts the differential-data snapshot. |
| 14 | sudo syncmount file-system-name differential-data-snapshot-name mount-point-name | Mounts the differential-data snapshot. |

#: This information is required when publishing the data in the shared file system.
Table B-5 Commands for Splitting Pairs (CCI)

| No. | Command | Description |
|----|----|----|
| 1 | sudo pairsplit {-g group-name \| -d volume-name} -rw | Splits a volume pair. |
| 2 | sudo pairvolchk {-g group-name \| -d volume-name} | Checks the pair status. |
Procedure for Splitting Pairs with P-VOL Mounted
The following procedure splits a pair while the P-VOL is mounted. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table B-6 Procedure for Splitting Pairs with P-VOL Mounted

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | Public / Mount | PAIR | PAIR | – / – | |
| 2 | horcfreeze (Holds the access) | | | | | |
| 3 | | Public / Mount | PAIR | PAIR | – / – | |
| 4 | pairsplit -rw (Splits a pair) | | | | | |
| 5 | | Public / Mount | COPY | COPY | – / – | |
| 6 | pairvolchk (At this point the pair status is COPY) | | | | | |
| 7 | Execute pairvolchk several times. | Public / Mount | COPY | COPY | – / – | |
| 8 | pairvolchk (When the pair status changes to PSUS, pair splitting is completed) | | | | | |
| 9 | | Public / Mount | PSUS | SSUS | – / – | |
| 10 | horcunfreeze (Cancels suppression of operations) | | | | | |
| 11 | | Public / Mount | PSUS | SSUS | – / – | |
| 12 | | | | | | Connects the file system# (horcvmimport; when LVM is not used, horcimport) |
| 13 | | Public / Mount | PSUS | SSUS | Un-mount / Non-public | |
| 14 | | | | | | Mount (fsmount) |
| 15 | | Public / Mount | PSUS | SSUS | Mount / Non-public | |
| 16 | | | | | | Share (nfscreate/cifscreate) |
| 17 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 18 | | | | | | When using the file snapshot functionality (syncmount) |
| 19 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 20 | | | | | | Start a program that accesses the file system at the S-VOL. |

#: Specify the same file system name as the main site for the file system connected to the node.
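The essential ordering in this procedure is: freeze P-VOL access, split, wait for the PSUS/SSUS status, unfreeze, then bring the split copy online at the remote site. The sketch below makes that ordering executable; all names are hypothetical and pairvolchk is stubbed so the loop can run anywhere:

```shell
#!/bin/sh
# Control-flow sketch of Table B-6 (hypothetical names; CCI stubbed).
pairvolchk_stub() { [ "$1" -lt 2 ] && echo "COPY" || echo "PSUS"; }

seq=""
seq="$seq horcfreeze"                     # step 2: hold access to the P-VOL FS
seq="$seq pairsplit-rw"                   # step 4: split the pair
polls=0; status="COPY"
while [ "$status" != "PSUS" ]; do         # steps 6-8: poll until PSUS appears
  polls=$((polls + 1)); status=$(pairvolchk_stub "$polls")
done
seq="$seq horcunfreeze"                   # step 10: resume access on the main site
seq="$seq horcvmimport fsmount nfscreate" # steps 12-16: publish the copy remotely
echo "$seq"
```

Note that horcunfreeze comes only after the split has reached PSUS, so the S-VOL image is settled before clients resume writing to the P-VOL.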
Procedure for Splitting Pairs with P-VOL Un-mounted
The following procedure splits a pair while the P-VOL is un-mounted. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not used:
Table B-7 Procedure for Splitting Pairs when LVM is Not Used (with P-VOL Un-mounted)

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | Stop a program that accesses the file system at the P-VOL. | | | | | |
| 2 | | Public / Mount | PAIR | PAIR | – / – | |
| 3 | nfsdelete/cifsdelete (Deletes NFS/CIFS shares) | | | | | |
| 4 | | Non-public / Mount | PAIR | PAIR | – / – | |
| 5 | fsumount (Un-mount) | | | | | |
| 6 | | Non-public / Un-mount | PAIR | PAIR | – / – | |
| 7 | pairsplit -rw (Splits a pair) | | | | | |
| 8 | | Non-public / Un-mount | COPY | COPY | – / – | |
| 9 | pairvolchk (At this point the pair status is COPY) | | | | | |
| 10 | Execute pairvolchk several times. | Non-public / Un-mount | COPY | COPY | – / – | |
| 11 | pairvolchk (When the pair status changes to PSUS, pair splitting is completed) | | | | | |
| 12 | | Non-public / Un-mount | PSUS | SSUS | – / – | |
| 13 | fsmount (Mount) | | | | | |
| 14 | | Non-public / Mount | PSUS | SSUS | – / – | |
| 15 | nfscreate/cifscreate (Share) | | | | | |
| 16 | | Public / Mount | PSUS | SSUS | – / – | |
| 17 | Restart a program that accesses the file system at the P-VOL. | | | | | |
| 18 | | Public / Mount | PSUS | SSUS | – / – | |
| 19 | | | | | | Connects the file system# (horcimport) |
| 20 | | Public / Mount | PSUS | SSUS | Un-mount / Non-public | |
| 21 | | | | | | Mount (fsmount) |
| 22 | | Public / Mount | PSUS | SSUS | Mount / Non-public | |
| 23 | | | | | | Share (nfscreate/cifscreate) |
| 24 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 25 | | | | | | Start a program that accesses the file system at the S-VOL. |

#: Specify the same file system name as the main site for the file system connected to the node.
When LVM is used:
Table B-8 Procedure for Splitting Pairs when LVM is Used (with P-VOL Un-mounted)

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | Stop a program that accesses the file system on the P-VOL. | | | | | |
| 2 | | Public / Mount | PAIR | PAIR | – / – | |
| 3 | syncumount (When using the file snapshot functionality) | | | | | |
| 4 | | Public / Mount | PAIR | PAIR | – / – | |
| 5 | nfsdelete/cifsdelete (Deletes NFS/CIFS shares) | | | | | |
| 6 | | Non-public / Mount | PAIR | PAIR | – / – | |
| 7 | fsumount (Un-mount) | | | | | |
| 8 | | Non-public / Un-mount | PAIR | PAIR | – / – | |
| 9 | horcfreeze (Holds the access) | | | | | |
| 10 | | Non-public / Un-mount | PAIR | PAIR | – / – | |
| 11 | pairsplit -rw (Splits a pair) | | | | | |
| 12 | | Non-public / Un-mount | COPY | COPY | – / – | |
| 13 | pairvolchk (At this point the pair status is COPY) | | | | | |
| 14 | Execute pairvolchk several times. | Non-public / Un-mount | PSUS | SSUS | – / – | |
| 15 | pairvolchk (When the pair status changes to PSUS, the pair is split) | | | | | |
| 16 | | Non-public / Un-mount | PSUS | SSUS | – / – | |
| 17 | horcunfreeze (Cancels suppression of operations) | | | | | |
| 18 | | Non-public / Un-mount | PSUS | SSUS | – / – | |
| 19 | fsmount (Mount) | | | | | |
| 20 | | Non-public / Mount | PSUS | SSUS | – / – | |
| 21 | nfscreate/cifscreate (Creates NFS/CIFS shares) | | | | | |
| 22 | | Public / Mount | PSUS | SSUS | – / – | |
| 23 | Restart the programs that access the file system at the P-VOL. | | | | | |
| 24 | | Public / Mount | PSUS | SSUS | – / – | |
| 25 | | | | | | Connects the file system# (horcvmimport) |
| 26 | | Public / Mount | PSUS | SSUS | Un-mount / Non-public | |
| 27 | | | | | | Mount (fsmount) |
| 28 | | Public / Mount | PSUS | SSUS | Mount / Non-public | |
| 29 | | | | | | Creates NFS/CIFS shares (nfscreate/cifscreate) |
| 30 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 31 | | | | | | When using the file snapshot functionality (syncmount) |
| 32 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 33 | | | | | | Start the programs that access the file system on the S-VOL. |

#: Specify the same file system name as that connected to the main site.
Re-synchronizing Pairs
Commands for Re-synchronizing Pairs
When resynchronizing a pair, the following commands are used.
Table B-9 Commands for Re-synchronizing Pairs (HNAS F)

| No. | Command | Description |
|----|----|----|
| 1 | sudo nfsdelete -d shared-directory {-a \| -H Host} | Deletes an NFS share. |
| 2 | sudo cifsdelete -x CIFS-share-name [-r resource-group-name] | Deletes a CIFS share. |
| 3 | sudo fsumount file-system-name | Un-mounts a file system. |
| 4 | sudo horcexport -f file-system-name | Separates a file system. |
| 5 | sudo syncumount mount-point-name | Un-mounts the differential-data snapshot. |
Table B-10 Commands for Re-synchronizing Pairs (CCI)

| No. | Command | Description |
|----|----|----|
| 1 | sudo pairresync {-g group-name \| -d volume-name} | Resynchronizes the split pairs. |
| 2 | sudo pairvolchk {-g group-name \| -d volume-name} | Checks the pair status. |
Procedure for Re-synchronizing Pairs
The following procedure resynchronizes a pair whose volume status is PSUS/SSUS. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not used:
Table B-11 Procedure for Re-synchronizing Pairs when LVM is Not Used

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | | | | | Stop a program that accesses the file system at the S-VOL. |
| 2 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 3 | | | | | | Deletes NFS/CIFS shares (nfsdelete/cifsdelete) |
| 4 | | Public / Mount | PSUS | SSUS | Mount / Non-public | |
| 5 | | | | | | Un-mount (fsumount) |
| 6 | | Public / Mount | PSUS | SSUS | Un-mount / Non-public | |
| 7 | | | | | | Separates the file system (horcexport) |
| 8 | | Public / Mount | PSUS | SSUS | – / – | |
| 9 | pairresync (Resynchronizes the pairs) | | | | | |
| 10 | | Public / Mount | COPY | COPY | – / – | |
| 11 | pairvolchk (At this point the pair status is COPY) | | | | | |
| 12 | Execute pairvolchk several times. | Public / Mount | COPY | COPY | – / – | |
| 13 | pairvolchk (When the pair status changes to PAIR, re-synchronization is completed) | | | | | |
| 14 | | Public / Mount | PAIR | PAIR | – / – | |
When LVM is used:
Table B-12 Procedure for Re-synchronizing Pairs when LVM is Used

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | | | | | Stop the programs that access the file system at the S-VOL. |
| 2 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 3 | | | | | | When using the file snapshot functionality (syncumount) |
| 4 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 5 | | | | | | Deletes NFS/CIFS shares# (nfsdelete/cifsdelete) |
| 6 | | Public / Mount | PSUS | SSUS | Mount / Non-public | |
| 7 | | | | | | Un-mount (fsumount) |
| 8 | | Public / Mount | PSUS | SSUS | Un-mount / Non-public | |
| 9 | | | | | | Separates the file system (horcexport) |
| 10 | | Public / Mount | PSUS | SSUS | – / – | |
| 11 | pairresync (Resynchronizes the pairs) | | | | | |
| 12 | | Public / Mount | COPY | COPY | – / – | |
| 13 | pairvolchk (At this point the pair status is COPY) | | | | | |
| 14 | Execute pairvolchk several times. | Public / Mount | COPY | COPY | – / – | |
| 15 | pairvolchk (When the pair status changes to PAIR, resynchronization is completed) | | | | | |
| 16 | | Public / Mount | PAIR | PAIR | – / – | |

#: In the target file system of the file snapshot functionality, un-mount the differential-data snapshot by using the syncumount command before executing the nfsdelete or cifsdelete command.
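Before pairresync can run, the S-VOL copy must be fully withdrawn from the remote node: snapshots unmounted, shares deleted, the file system unmounted and then separated with horcexport. The runnable sketch below shows that ordering followed by the resynchronization poll; the group name is hypothetical and pairvolchk is stubbed:

```shell
#!/bin/sh
# Control-flow sketch of Tables B-11/B-12 (hypothetical names; CCI stubbed).
remote_teardown="syncumount nfsdelete/cifsdelete fsumount horcexport"
pairvolchk_stub() { [ "$1" -lt 4 ] && echo "COPY" || echo "PAIR"; }

echo "remote site teardown order: $remote_teardown"
: "main site: sudo pairresync -g TCgrp"   # resynchronize the split pair

polls=0; status="COPY"
while [ "$status" != "PAIR" ]; do         # execute pairvolchk several times
  polls=$((polls + 1)); status=$(pairvolchk_stub "$polls")
done
echo "re-synchronization completed after $polls checks: $status"
```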
Deleting Pairs
Commands for Deleting Pairs
When deleting a pair, the following commands are used.
Table B-13 Commands for Deleting Pairs (HNAS F)

| No. | Command | Description |
|----|----|----|
| 1 | sudo nfsdelete -d shared-directory {-a \| -H Host} | Deletes an NFS share. |
| 2 | sudo nfscreate -d shared-directory -H Host | Creates an NFS share. |
| 3 | sudo cifsdelete -x CIFS-share-name [-r resource-group-name] | Deletes a CIFS share. |
| 4 | sudo cifscreate -x CIFS-share-name -d shared-directory | Creates a CIFS share. |
| 5 | sudo fsumount file-system-name | Un-mounts a file system. |
| 6 | sudo fsmount {-r \| -w} file-system-name | Mounts a file system. |
| 7 | sudo fsdelete file-system-name | Deletes a file system. |
| 8 | sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name] | When LVM is not used and the file system is not tiered, connects the file system to the node. |
| 9 | sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name] | When LVM is not used and the file system is tiered, connects the file system to the node. |
| 10 | sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number...] [-r resource-group-name] | When LVM is used and the file system is not tiered, connects the file system to the node. |
| 11 | sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number...] --tier2 device-file-number[,device-file-number...] [-r resource-group-name] | When LVM is used and the file system is tiered, connects the file system to the node. |
| 12 | sudo horcfreeze -f copy-source-file-system-name | Suppresses operations on the P-VOL and stops access from clients. |
| 13 | sudo horcunfreeze -f copy-source-file-system-name | Restarts operations on the P-VOL and access from clients. |
| 14 | sudo syncumount mount-point-name | Un-mounts the differential-data snapshot. |
| 15 | sudo syncmount file-system-name differential-data-snapshot-name mount-point-name | Mounts the differential-data snapshot. |
| 16 | sudo syncstop file-system-name | Releases the differential-data storage device. |
| 17 | sudo horcvmdelete -d device-file-number[,device-file-number...] | Releases device files. |
Table B-14 Commands for Deleting Pairs (CCI)

| No. | Command | Description |
|----|----|----|
| 1 | sudo pairsplit {-g group-name \| -d volume-name} -S | Deletes a volume pair. |
| 2 | sudo pairsplit {-g group-name \| -d volume-name} -rw | Splits a volume pair. |
| 3 | sudo pairvolchk {-g group-name \| -d volume-name} | Checks the pair status. |
Procedure for Deleting Pairs when the Pair Status is PAIR (S-VOL Continuously Used-1)
The following procedure deletes a pair whose status is PAIR when the S-VOL continues to be used after the pair deletion. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table B-15 Procedure for Deleting Pairs when the Pair Status is PAIR (S-VOL Continuously Used-1)

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | Public / Mount | PAIR | PAIR | – / – | |
| 2 | horcfreeze (Suppresses the operations) | | | | | |
| 3 | | Public / Mount | PAIR | PAIR | – / – | |
| 4 | pairsplit (Splits a volume pair) | | | | | |
| 5 | | Public / Mount | COPY | COPY | – / – | |
| 6 | pairvolchk (At this point, the pair status is COPY) | | | | | |
| 7 | Execute pairvolchk several times. | Public / Mount | COPY | COPY | – / – | |
| 8 | pairvolchk (When the pair status changes to PSUS, splitting is completed) | | | | | |
| 9 | | Public / Mount | PSUS | SSUS | – / – | |
| 10 | horcunfreeze (Restarts the operations) | | | | | |
| 11 | | Public / Mount | PSUS | SSUS | – / – | |
| 12 | pairsplit -S (Deletes a volume pair) | | | | | |
| 13 | | Public / Mount | COPY | COPY | – / – | |
| 14 | pairvolchk (At this point the pair status is COPY) | | | | | |
| 15 | Execute pairvolchk several times. | Public / Mount | COPY | COPY | – / – | |
| 16 | pairvolchk (When the pair status changes to SMPL, deletion is completed) | | | | | |
| 17 | | Public / Mount | SMPL | SMPL | – / – | |
| 18 | | | | | | Connects the file system# (horcvmimport; when LVM is not used, horcimport) |
| 19 | | Public / Mount | SMPL | SMPL | Un-mount / Non-public | |
| 20 | | | | | | Mount (fsmount) |
| 21 | | Public / Mount | SMPL | SMPL | Mount / Non-public | |
| 22 | | | | | | Share (nfscreate/cifscreate) |
| 23 | | Public / Mount | SMPL | SMPL | Mount / Public | |
| 24 | | | | | | When using the file snapshot functionality (syncmount) |
| 25 | | Public / Mount | SMPL | SMPL | Mount / Public | |
| 26 | | | | | | Start a program that accesses the file system at the S-VOL. |

#: Specify the same file system name as the main site for the file system connected to the node.
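When the S-VOL is to keep serving data, the pair is first split (so the S-VOL holds a settled image), then deleted with pairsplit -S, and only then imported and published at the remote site. The sketch below makes that two-phase wait executable; the stub functions stand in for pairvolchk and all names are hypothetical:

```shell
#!/bin/sh
# Control-flow sketch of Table B-15 (hypothetical names; CCI stubbed).
pairvolchk_after_split()  { [ "$1" -lt 2 ] && echo "COPY" || echo "PSUS"; }
pairvolchk_after_delete() { [ "$1" -lt 2 ] && echo "COPY" || echo "SMPL"; }

n=0; status="COPY"                 # horcfreeze; pairsplit (steps 2-4)
while [ "$status" != "PSUS" ]; do  # poll until the split settles (steps 6-8)
  n=$((n + 1)); status=$(pairvolchk_after_split "$n")
done
split_status="$status"

n=0; status="COPY"                 # horcunfreeze; pairsplit -S (steps 10-12)
while [ "$status" != "SMPL" ]; do  # poll until deletion completes (steps 14-16)
  n=$((n + 1)); status=$(pairvolchk_after_delete "$n")
done

echo "$split_status then $status"  # remote site can now horcvmimport/fsmount/nfscreate
```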
Procedure for Deleting Pairs when the Pair Status is PSUS (S-VOL Continuously Used-2)
The following procedure deletes a pair whose status is PSUS/SSUS when the S-VOL continues to be used after the pair deletion. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table B-16 Procedure for Deleting Pairs When the Pair Status Is PSUS (S-VOL Continuously Used-2)

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 2 | pairsplit -S (Deletes a volume pair) | | | | | |
| 3 | | Public / Mount | SMPL | SMPL | Mount / Public | |
| 4 | pairvolchk (When the pair status changes to SMPL, deletion is completed) | | | | | |
| 5 | | Public / Mount | SMPL | SMPL | Mount / Public | |
Procedure for Deleting Pairs when the Pair Status is PAIR (S-VOL Not Used-1)
The following procedure deletes a pair whose status is PAIR when the S-VOL is not used after the pair deletion. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
Table B-17 Procedure for Deleting Pairs When the Pair Status Is PAIR (S-VOL Not Used-1)

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | Public / Mount | PAIR | PAIR | – / – | |
| 2 | pairsplit -S (Deletes a volume pair) | | | | | |
| 3 | | Public / Mount | COPY | COPY | – / – | |
| 4 | pairvolchk (At this point, the pair status is COPY) | | | | | |
| 5 | Execute pairvolchk several times. | Public / Mount | COPY | COPY | – / – | |
| 6 | pairvolchk (When the pair status changes to SMPL, deletion is completed) | | | | | |
| 7 | | Public / Mount | SMPL | SMPL | – / – | |
| 8 | | | | | | Releases device files (horcvmdelete) |
| 9 | | Public / Mount | SMPL | SMPL | – / – | |
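When the S-VOL will not be reused, deletion is just pairsplit -S, a wait for the SMPL status, and horcvmdelete to release the reserved device files. The runnable sketch below traces that sequence; the group name and device number are hypothetical and pairvolchk is stubbed:

```shell
#!/bin/sh
# Control-flow sketch of Table B-17 (hypothetical names; CCI stubbed).
pairvolchk_stub() { [ "$1" -lt 2 ] && echo "COPY" || echo "SMPL"; }

: "main site: sudo pairsplit -g TCgrp -S"   # step 2: delete the volume pair

polls=0; status="COPY"
while [ "$status" != "SMPL" ]; do           # steps 4-6: poll until SMPL appears
  polls=$((polls + 1)); status=$(pairvolchk_stub "$polls")
done

: "remote site: sudo horcvmdelete -d 10"    # step 8: release the device files
echo "pair deletion completed: $status"
```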
Procedure for Deleting Pairs (S-VOL Not Used-2) when the Pair Status is PSUS
The following procedure deletes a pair whose status is PSUS/SSUS when the S-VOL is not used after the pair deletion. In the following table, the sudo prefix and the options of the HNAS F and CCI commands are omitted. Specify the appropriate options for the actual operation.
When LVM is not used:
Table B-18 Procedure for Deleting Pairs When the Pair Status Is PSUS while LVM is Not Used (S-VOL Not Used-2)

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | | | | | Stop a program that accesses the S-VOL file system. |
| 2 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 3 | | | | | | Deletes NFS/CIFS shares (nfsdelete/cifsdelete) |
| 4 | | Public / Mount | PSUS | SSUS | Mount / Non-public | |
| 5 | | | | | | Un-mount (fsumount) |
| 6 | | Public / Mount | PSUS | SSUS | Un-mount / Non-public | |
| 7 | | | | | | Deletes the file system (fsdelete) |
| 8 | | Public / Mount | PSUS | SSUS | – / – | |
| 9 | pairsplit -S (Deletes a volume pair) | | | | | |
| 10 | | Public / Mount | SMPL | SMPL | – / – | |
| 11 | pairvolchk (When the pair status changes to SMPL, deletion is completed) | | | | | |
| 12 | | Public / Mount | SMPL | SMPL | – / – | |
When LVM is used:
Table B-19 Procedure for Deleting Pairs When the Pair Status Is PSUS while LVM is Used (S-VOL Not Used-2)

| No. | Main Site: Command (Processing) | Main Site: NFS/CIFS / FS | P-VOL Status | S-VOL Status | Remote Site: FS / NFS/CIFS | Remote Site: Processing (Command) |
|----|----|----|----|----|----|----|
| 1 | | | | | | Stop a program that accesses the S-VOL file system. |
| 2 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 3 | | | | | | When using the file snapshot functionality (syncumount) |
| 4 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 5 | | | | | | When using the file snapshot functionality (syncstop) |
| 6 | | Public / Mount | PSUS | SSUS | Mount / Public | |
| 7 | | | | | | Deletes NFS/CIFS shares# (nfsdelete/cifsdelete) |
| 8 | | Public / Mount | PSUS | SSUS | Mount / Non-public | |
| 9 | | | | | | Un-mount (fsumount) |
| 10 | | Public / Mount | PSUS | SSUS | Un-mount / Non-public | |
| 11 | | | | | | Deletes the file system (fsdelete) |
| 12 | | Public / Mount | PSUS | SSUS | – / – | |
| 13 | pairsplit -S (Deletes a volume pair) | | | | | |
| 14 | | Public / Mount | SMPL | SMPL | – / – | |
| 15 | pairvolchk (When the pair status changes to SMPL, deletion is completed) | | | | | |
| 16 | | Public / Mount | SMPL | SMPL | – / – | |

#: In the target file system of the file snapshot functionality, un-mount the differential-data snapshot by using the syncumount command before executing the nfsdelete or cifsdelete command, and release the device that stores the differential data by using the syncstop command.
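The footnote fixes a strict cleanup order for file systems that use the file snapshot functionality: the differential-data snapshot must be unmounted and its storage device released before the shares and the file system itself are removed. The trivial sketch below just enumerates that order (command names are the document's own; nothing is executed):

```shell
#!/bin/sh
# Cleanup ordering for a snapshot-enabled S-VOL file system (Table B-19).
order="syncumount syncstop nfsdelete/cifsdelete fsumount fsdelete"
i=1
for cmd in $order; do
  echo "$i. $cmd"   # print each cleanup step in the required order
  i=$((i + 1))
done
```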
C Operation when Failures Occurred on the HNAS F
This appendix explains eight potential failures that can occur when the TrueCopy or Universal Replicator functionality of the storage system is used in conjunction with the HNAS F.
- Operation when the Main Site or Storage System Went Down on page C-4
- Operation when the Cluster of the Main Site Went Down on page C-16
- Operation when Multiple Failures Occurred in All the Storages on the Main Site on page C-28
- Operation when Multiple Failures Occurred in a Part of Storages on the Main Site on page C-43
- Operation when Network Failures Occurred Between the Main Site and the Remote Site Causing Journal Overflow on page C-58
- Operation when Network Failures Occurred Between the Main Site and the Remote Site Without Causing Journal Overflow on page C-60
- Operation when Network Failures Occurred in the Main Site on page C-61
- Operation when Multiple Failures Occurred in Some or All Journals on the Main Site on page C-72
The commands and their important options are described here; specify the appropriate options by referring to the following manuals:
- Command References
- Hitachi Command Control Interface (CCI) User and Reference Guide
Note:
To continue operation at the failover destination, specify the virtual IP address when you execute commands from the remote host. When you connect the file system, execute the commands by adding the -r option and specifying a resource group name.
Operation when Failures Occurred on the HNAS F C-1Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
The operations described in this document are typical operations and might not resolve every problem. Evaluate the operation thoroughly before executing it.
□ Isolation when Failures Occur
□ Operation when the Main Site or Storage System Went Down
□ Operation when the Cluster of the Main Site Went Down
□ Operation when Multiple Failures Occurred in All the Storages on the Main Site
□ Operation when Multiple Failures Occurred in a Part of Storages on the Main Site
□ Operation when Network Failures Occurred Between the Main Site and the Remote Site Causing Journal Overflow
□ Operation when Network Failures Occurred Between the Main Site and the Remote Site Without Causing Journal Overflow
□ Operation when Network Failures Occurred in the Main Site
□ Operation when Multiple Failures Occurred in Some or All Journals on the Main Site
Isolation when Failures Occur

When a failure occurs, isolate it based on the following flowcharts. Failures not covered by these flowcharts might also occur; in such cases, isolate the failure while contacting service personnel. The cases are described separately, depending on whether clients can connect to the HNAS F volume.
When the failure prevents access to the HNAS F volume
When the failure does not prevent access to the HNAS F volume
Operation when the Main Site or Storage System Went Down
A disaster strikes the main site, business operations can no longer be performed there, and operations continue at the remote site. The main site cannot be restored, and a new storage system is introduced.
Commands to be Used
The following commands are used when the main site or storage system wentdown.
Table C-1 Commands for Recovery when the Main Site or Storage System Went Down (HNAS F)

1. sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
   When LVM is not used and the file system is not tiered, connects the file system to the node.
2. sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
   When LVM is not used and the file system is tiered, connects the file system to the node.
3. sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number...] [-r resource-group-name]
   When LVM is used and the file system is not tiered, connects the file system to the node.
4. sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number...] --tier2 device-file-number[,device-file-number...] [-r resource-group-name]
   When LVM is used and the file system is tiered, connects the file system to the node.
5. sudo nfsdelete -d shared-directory {-a | -H host}
   Deletes an NFS share.
6. sudo nfscreate -d shared-directory -H host
   Creates an NFS share.
7. sudo cifsdelete -x CIFS-share-name [-r resource-group-name]
   Deletes a CIFS share.
8. sudo cifscreate -x CIFS-share-name -d shared-directory
   Creates a CIFS share.
9. sudo fsumount file-system-name
   Unmounts a file system.
10. sudo fsmount {-r | -w} file-system-name
    Mounts a file system.
11. sudo horcexport -f file-system-name
    Separates a file system.
12. sudo horcsetenv HORCMINST instance-number
    Sets up or modifies the CCI environment variable.
13. sudo horcunsetenv HORCC_MRCF
    Deletes the CCI environment variable.
14. sudo horcvmdefine -d device-file-number[,device-file-number...]
    Reserves a device file.
15. sudo horcfreeze -f copy-source-file-system-name
    Suppresses operations on the P-VOL and stops access from clients.
16. sudo horcunfreeze -f copy-source-file-system-name
    Restarts operations on the P-VOL and access from clients.
17. sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
    Mounts a differential-data snapshot.
18. sudo syncumount mount-point-name
    Unmounts the differential-data snapshot.
Table C-2 Commands for Recovery when the Main Site or Storage System Went Down (CCI)

1. sudo horctakeover {-g group-name | -d volume-name} [-t timeout]
   Takes over the pair. With Universal Replicator, takeover can be executed only for individual groups, and the -t option is mandatory for async pairs.
2. sudo paircreate {-g group-name | -d volume-name} -f async -vl -jp P-VOL-journal-ID -js journal-ID
   Creates a Universal Replicator pair.
3. sudo paircreate {-g group-name | -d volume-name} -f never -vl
   Creates a TrueCopy pair.
4. sudo pairsplit {-g group-name | -d volume-name} -R
   Forcibly changes the S-VOL status to SMPL.
5. sudo pairsplit {-g group-name | -d volume-name} -rw
   Splits a volume pair.
6. sudo pairresync {-g group-name | -d volume-name} -swaps
   Resynchronizes the split pair.
7. sudo pairvolchk {-g group-name | -d volume-name}
   Checks the status of the paired volume on the applicable site.
8. sudo pairdisplay {-g group-name | -d volume-name} -fce
   Checks the pair status. The output contains columns for CTG (CT group ID), JNL (journal group ID), and AP (path number). With Universal Replicator, verify that the JNL value in the output matches the journal ID specified in the paircreate command.
9. sudo horcmstart.sh
   Starts CCI.
Recovery Procedure from Failures
When the main site is struck by a disaster, recover it by using the following procedure. In the following tables, sudo and the options of HNAS F commands and CCI commands are omitted; specify the appropriate options for the actual operation.
The procedure assumes that the S-VOL is reserved.
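The remote-site portion of the procedure (takeover, connecting the file system, mounting, and re-creating shares) can be sketched as one command sequence. This is a dry-run sketch only: the group name UR_grp, file system name fs01, device file number 0, and share directory are illustrative placeholders, not values from this guide, and the run wrapper prints each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of the remote-site recovery steps. All names are
# placeholders; remove the 'run' wrapper to execute for real.
run() { echo "+ $*"; }

run sudo horctakeover -g UR_grp -t 30    # take over the pair (async pairs require -t)
run sudo horcimport -f fs01 -d 0         # connect the file system to the node
run sudo fsmount -w fs01                 # mount the file system read/write
run sudo nfscreate -d /mnt/fs01 -H '*'   # re-create the NFS share
```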
When LVM is not used:
Table C-3 Recovery Procedure when the Main Site or Storage System Went Down (when LVM is Not Used)

(Rows that list no command show the status after the preceding step. Rows such as "Main site is down." describe events or decisions that apply to both sites.)

| No. | Main site processing (command) | NFS/CIFS | FS | P-VOL | S-VOL | FS | NFS/CIFS | Remote site processing (command) |
|-----|--------------------------------|----------|----|-------|-------|----|----------|-----------------------------------|
| 1 |  | Public | Mount | PAIR | PAIR | – | – |  |
| 2 | Main site is down. |  |  |  |  |  |  |  |
| 3 |  | – | – | – | SSUS | – | – |  |
| 4 | The user decides whether to execute takeover. |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  | Execute takeover (horctakeover) |
| 6 |  | – | – | – | SSUS | – | – |  |
| 7 |  |  |  |  |  |  |  | Connect the file system #1 (horcimport) |
| 8 |  | – | – | – | SSUS | Un-mount | Non-public |  |
| 9 |  |  |  |  |  |  |  | Mount (fsmount) |
| 10 |  | – | – | – | SSUS | Mount | Non-public | Mount completed |
| 11 |  |  |  |  |  |  |  | Create NFS/CIFS shares (nfscreate/cifscreate) |
| 12 |  | – | – | – | SSUS | Mount | Public |  |
| 13 | Start business operations at the remote site. #2 |  |  |  |  |  |  |  |
| 14 | Main site is recovered. |  |  |  |  |  |  |  |
| 15 |  | – | – | – | SSUS | Mount | Public |  |
| 16 | The cluster on the main site is started. |  |  |  |  |  |  |  |
| 17 |  | – | – | – | SSUS | Mount | Public |  |
| 18 | Set the CCI environment (horcsetenv) |  |  |  |  |  |  |  |
| 19 |  | – | – | – | SSUS | Mount | Public |  |
| 20 | Delete the CCI environment variable (horcunsetenv) |  |  |  |  |  |  |  |
| 21 |  | – | – | – | SSUS | Mount | Public |  |
| 22 |  |  |  |  |  |  |  | Delete the volume pair (pairsplit -R) |
| 23 |  | – | – | – | SMPL | Mount | Public |  |
| 24 |  |  |  |  |  |  |  | Deletion is completed when the pair status changes to SMPL (pairvolchk) |
| 25 |  | – | – | – | SMPL | Mount | Public |  |
| 26 | Start CCI (horcmstart.sh) |  |  |  |  |  |  |  |
| 27 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 28 | Reserve the old P-VOL (horcvmdefine) |  |  |  |  |  |  |  |
| 29 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 30 |  |  |  |  |  |  |  | Confirm the configuration definition file before creating the pair (pairdisplay) |
| 31 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 32 |  |  |  |  |  |  |  | Create a volume pair (paircreate) |
| 33 |  | – | – | COPY | COPY | Mount | Public |  |
| 34 |  |  |  |  |  |  |  | Confirm the volume pair status (pairdisplay) |
| 35 |  | – | – | COPY | COPY | Mount | Public |  |
| 36 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairvolchk) |
| 37 |  | – | – | COPY | COPY | Mount | Public | Execute pairvolchk several times |
| 38 |  |  |  |  |  |  |  | Pair creation is completed when the pair status changes to PAIR (pairvolchk) |
| 39 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 40 | Stop business operations at the remote site. |  |  |  |  |  |  |  |
| 41 |  |  |  |  |  |  |  | Delete NFS/CIFS shares (nfsdelete/cifsdelete) |
| 42 |  | – | – | PAIR | PAIR | Mount | Non-public |  |
| 43 |  |  |  |  |  |  |  | Un-mount (fsumount) |
| 44 |  | – | – | PAIR | PAIR | Un-mount | Non-public |  |
| 45 |  |  |  |  |  |  |  | Split the volume pair (pairsplit -rw) |
| 46 |  |  |  | COPY | COPY | Un-mount | Non-public |  |
| 47 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairvolchk) |
| 48 |  | – | – | COPY | COPY | Un-mount | Non-public | Execute pairvolchk several times |
| 49 |  |  |  |  |  |  |  | Splitting is completed when the pair status changes to PSUS (pairvolchk) |
| 50 |  | – | – | SSUS | PSUS | Un-mount | Non-public |  |
| 51 | Connect the file system #1 (horcimport) |  |  |  |  |  |  |  |
| 52 |  | Non-public | Un-mount | SSUS | PSUS | Un-mount | Non-public |  |
| 53 |  |  |  |  |  |  |  | Separate the file system (horcexport) |
| 54 |  | Non-public | Un-mount | SSUS | PSUS | – | – |  |
| 55 | Reverse-resynchronize (pairresync -swaps) |  |  |  |  |  |  |  |
| 56 |  | Non-public | Un-mount | COPY | COPY | – | – |  |
| 57 | At this point the pair status is COPY (pairvolchk) |  |  |  |  |  |  |  |
| 58 | Execute pairvolchk several times | Non-public | Un-mount | COPY | COPY | – | – |  |
| 59 | Resynchronization is completed when the pair status changes to PAIR (pairvolchk) |  |  |  |  |  |  |  |
| 60 |  | Non-public | Un-mount | PAIR | PAIR | – | – |  |
| 61 | Mount (fsmount) |  |  |  |  |  |  |  |
| 62 | Mount completed | Non-public | Mount | PAIR | PAIR | – | – |  |
| 63 | Create NFS/CIFS shares (nfscreate/cifscreate) |  |  |  |  |  |  |  |
| 64 | Share creation completed | Public | Mount | PAIR | PAIR | – | – |  |
| 65 | Resume business operations at the main site. #2 |  |  |  |  |  |  |  |
#1: For the file system connected to the node, specify the same file system name as on the main site.
#2: When a takeover is performed, the IP address cannot be taken over from the main site to the remote site. When starting a job at the remote site, unmount the client from the main site, change the IP address of the site on which the client is to be mounted to that of the remote site, and then mount the client again. When the job is resumed at the main site after the main site has been restored, unmount the client from the remote site, set the IP address of the site on which the client is to be mounted back to that of the main site, and then mount the client again.
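One way to carry out the client-side part of note #2 on a Linux NFS client is sketched below. The mount point /mnt/hnasf, export path /mnt/fs01, and service IP 192.0.2.20 are illustrative placeholders; the run wrapper prints each command instead of executing it (drop it to execute for real, which requires root).

```shell
#!/bin/sh
# Sketch of remounting an NFS client against the remote site after takeover.
# All names and addresses are placeholders.
run() { echo "+ $*"; }

run umount /mnt/hnasf                               # detach the failed site's export
run mount -t nfs 192.0.2.20:/mnt/fs01 /mnt/hnasf    # remount via the remote site's IP
```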
When LVM is used:
Table C-4 Recovery Procedure when the Main Site or Storage System Went Down (when LVM is Used)

(Rows that list no command show the status after the preceding step. Rows such as "Main site is down." describe events or decisions that apply to both sites.)

| No. | Main site processing (command) | NFS/CIFS | FS | P-VOL | S-VOL | FS | NFS/CIFS | Remote site processing (command) |
|-----|--------------------------------|----------|----|-------|-------|----|----------|-----------------------------------|
| 1 |  | Public | Mount | PAIR | PAIR | – | – |  |
| 2 | Main site is down. |  |  |  |  |  |  |  |
| 3 |  | – | – | – | SSUS | – | – |  |
| 4 | The user decides whether to execute takeover. |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  | Execute takeover (horctakeover) |
| 6 |  | – | – | – | SSUS | – | – |  |
| 7 |  |  |  |  |  |  |  | Connect the file system #1 (horcvmimport) |
| 8 |  | – | – | – | SSUS | Un-mount | Non-public |  |
| 9 |  |  |  |  |  |  |  | Mount (fsmount) |
| 10 |  | – | – | – | SSUS | Mount | Non-public | Mount completed |
| 11 |  |  |  |  |  |  |  | Create NFS/CIFS shares (nfscreate/cifscreate) |
| 12 |  | – | – | – | SSUS | Mount | Public |  |
| 13 |  |  |  |  |  |  |  | When using the file snapshot functionality (syncmount) |
| 14 |  | – | – | – | SSUS | Mount | Public |  |
| 15 | Start business operations at the remote site. #2 |  |  |  |  |  |  |  |
| 16 | Main site is recovered. |  |  |  |  |  |  |  |
| 17 |  | – | – | – | SSUS | Mount | Public |  |
| 18 | The cluster on the main site is started. |  |  |  |  |  |  |  |
| 19 |  | – | – | – | SSUS | Mount | Public |  |
| 20 | Set the CCI environment (horcsetenv) |  |  |  |  |  |  |  |
| 21 |  | – | – | – | SSUS | Mount | Public |  |
| 22 | Delete the CCI environment variable (horcunsetenv) |  |  |  |  |  |  |  |
| 23 |  | – | – | – | SSUS | Mount | Public |  |
| 24 |  |  |  |  |  |  |  | Delete the volume pair (pairsplit -R) |
| 25 |  | – | – | – | SMPL | Mount | Public |  |
| 26 |  |  |  |  |  |  |  | Deletion is completed when the pair status changes to SMPL (pairvolchk) |
| 27 |  | – | – | – | SMPL | Mount | Public |  |
| 28 | Start CCI (horcmstart.sh) |  |  |  |  |  |  |  |
| 29 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 30 | Reserve the old P-VOL (horcvmdefine) |  |  |  |  |  |  |  |
| 31 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 32 |  |  |  |  |  |  |  | Confirm the copy target before creating the pair (pairdisplay) |
| 33 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 34 |  |  |  |  |  |  |  | Create a volume pair (paircreate) |
| 35 |  | – | – | COPY | COPY | Mount | Public |  |
| 36 |  |  |  |  |  |  |  | Check whether the volume pair has been created (pairdisplay) |
| 37 |  | – | – | COPY | COPY | Mount | Public |  |
| 38 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairvolchk) |
| 39 |  | – | – | COPY | COPY | Mount | Public | Execute pairvolchk several times |
| 40 |  |  |  |  |  |  |  | Pair creation is completed when the pair status changes to PAIR (pairvolchk) |
| 41 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 42 | Stop business operations at the remote site. |  |  |  |  |  |  |  |
| 43 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 44 |  |  |  |  |  |  |  | When using the file snapshot functionality (syncumount) |
| 45 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 46 |  |  |  |  |  |  |  | Delete NFS/CIFS shares (nfsdelete/cifsdelete) |
| 47 |  | – | – | PAIR | PAIR | Mount | Non-public |  |
| 48 |  |  |  |  |  |  |  | Un-mount (fsumount) |
| 49 |  | – | – | PAIR | PAIR | Un-mount | Non-public |  |
| 50 |  |  |  |  |  |  |  | Suppress file system operations (horcfreeze) |
| 51 |  |  |  | PAIR | PAIR | Un-mount | Non-public |  |
| 52 |  |  |  |  |  |  |  | Split the volume pair (pairsplit -rw) |
| 53 |  |  |  | COPY | COPY | Un-mount | Non-public |  |
| 54 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairvolchk) |
| 55 |  | – | – | COPY | COPY | Un-mount | Non-public | Execute pairvolchk several times |
| 56 |  |  |  |  |  |  |  | Splitting is completed when the pair status changes to PSUS (pairvolchk) |
| 57 |  | – | – | SSUS | PSUS | Un-mount | Non-public |  |
| 58 |  |  |  |  |  |  |  | Cancel suppression of file system operations (horcunfreeze) |
| 59 |  | – | – | SSUS | PSUS | Un-mount | Non-public |  |
| 60 | Connect the file system #1 (horcvmimport) |  |  |  |  |  |  |  |
| 61 |  | Non-public | Un-mount | SSUS | PSUS | Un-mount | Non-public |  |
| 62 |  |  |  |  |  |  |  | Separate the file system (horcexport) |
| 63 |  | Non-public | Un-mount | SSUS | PSUS | – | – |  |
| 64 | Reverse-resynchronize (pairresync -swaps) |  |  |  |  |  |  |  |
| 65 |  | Non-public | Un-mount | COPY | COPY | – | – |  |
| 66 | At this point the pair status is COPY (pairvolchk) |  |  |  |  |  |  |  |
| 67 | Execute pairvolchk several times | Non-public | Un-mount | COPY | COPY | – | – |  |
| 68 | Resynchronization is completed when the pair status changes to PAIR (pairvolchk) |  |  |  |  |  |  |  |
| 69 |  | Non-public | Un-mount | PAIR | PAIR | – | – |  |
| 70 | Mount (fsmount) |  |  |  |  |  |  |  |
| 71 | Mount completed | Non-public | Mount | PAIR | PAIR | – | – |  |
| 72 | Create NFS/CIFS shares (nfscreate/cifscreate) |  |  |  |  |  |  |  |
| 73 | Share creation completed | Public | Mount | PAIR | PAIR | – | – |  |
| 74 | When using the file snapshot functionality (syncmount) |  |  |  |  |  |  |  |
| 75 |  | Public | Mount | PAIR | PAIR | – | – |  |
| 76 | Resume business operations at the main site. #2 |  |  |  |  |  |  |  |
#1: Specify the same file system name as that connected to the main site.
#2: When a takeover is performed, the IP address cannot be taken over from the main site to the remote site. When starting a job at the remote site, unmount the client from the main site, change the IP address of the site on which the client is to be mounted to that of the remote site, and then mount the client again. When the main site is restored and business operations are resumed at the main site, unmount the client from the remote site, set the IP address of the site on which the client is to be mounted back to that of the main site, and then mount the client again.
Operation when the Cluster of the Main Site Went Down
Assumed Scenarios
The OSs on both nodes at the main site went down. Because the OSs do not start, business operations cannot be performed at the main site and are continued at the remote site.
Commands to be Used
The following commands are used when the cluster of the main site wentdown.
Table C-5 Commands for Recovery when the Cluster of the Main Site Went Down (HNAS F)

1. sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
   When LVM is not used and the file system is not tiered, connects the file system to the node.
2. sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
   When LVM is not used and the file system is tiered, connects the file system to the node.
3. sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number...] [-r resource-group-name]
   When LVM is used and the file system is not tiered, connects the file system to the node.
4. sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number...] --tier2 device-file-number[,device-file-number...] [-r resource-group-name]
   When LVM is used and the file system is tiered, connects the file system to the node.
5. sudo nfsdelete -d shared-directory {-a | -H host}
   Deletes an NFS share.
6. sudo nfscreate -d shared-directory -H host
   Creates an NFS share.
7. sudo cifsdelete -x CIFS-share-name [-r resource-group-name]
   Deletes a CIFS share.
8. sudo cifscreate -x CIFS-share-name -d shared-directory
   Creates a CIFS share.
9. sudo fsumount file-system-name
   Unmounts a file system.
10. sudo fsmount {-r | -w} file-system-name
    Mounts a file system.
11. sudo horcexport -f file-system-name
    Separates a file system.
12. sudo horcfreeze -f copy-source-file-system-name
    Suppresses operations on the P-VOL and stops access from clients.
13. sudo horcunfreeze -f copy-source-file-system-name
    Restarts operations on the P-VOL and access from clients.
14. sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
    Mounts a differential-data snapshot.
15. sudo syncumount mount-point-name
    Unmounts the differential-data snapshot.
Table C-6 Commands for Recovery when the Cluster of the Main Site Went Down (CCI)

1. sudo horctakeover {-g group-name | -d volume-name} [-t timeout]
   Takes over the pair. With Universal Replicator, takeover can be executed only for individual groups, and the -t option is mandatory for async pairs.
2. sudo paircreate {-g group-name | -d volume-name} -f async -vl -jp P-VOL-journal-ID -js journal-ID
   Creates a Universal Replicator pair.
3. sudo paircreate {-g group-name | -d volume-name} -f never -vl
   Creates a TrueCopy pair.
4. sudo pairsplit {-g group-name | -d volume-name} -S
   Deletes a pair.
5. sudo pairsplit {-g group-name | -d volume-name} -rw
   Splits a volume pair.
6. sudo pairresync {-g group-name | -d volume-name} -swaps
   Resynchronizes the split pair.
7. sudo pairvolchk {-g group-name | -d volume-name}
   Checks the status of the paired volume on the applicable site.
8. sudo pairdisplay {-g group-name | -d volume-name} -fce
   Checks the pair status. The output contains columns for CTG (CT group ID), JNL (journal group ID), and AP (path number). With Universal Replicator, verify that the JNL value in the output matches the journal ID specified in the paircreate command.
9. sudo horcmstart.sh
   Starts CCI.
Recovery Procedure from Failures
When the cluster at the main site goes down, recover it by using the following procedure. In the following tables, sudo and the options of HNAS F commands and CCI commands are omitted; specify the appropriate options for the actual operation.
The procedure assumes that the S-VOL is reserved.
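The "Execute pairvolchk several times" steps that recur in the tables below amount to a polling loop: keep checking the pair status until it reaches the expected value. A minimal sketch follows, with check_status stubbed out in place of parsing real sudo pairvolchk output (the stub and the 10-attempt limit are illustrative assumptions).

```shell
#!/bin/sh
# Poll until the pair reaches the expected status. check_status is a stub
# standing in for 'sudo pairvolchk -g group'; replace it for real use.
check_status() { echo "PAIR"; }

want=PAIR
status=""
attempt=1
while [ "$attempt" -le 10 ]; do
    status=$(check_status)
    echo "attempt $attempt: status=$status"
    if [ "$status" = "$want" ]; then
        break
    fi
    attempt=$((attempt + 1))
    # sleep 30   # wait between checks in a real procedure
done
if [ "$status" = "$want" ]; then
    echo "pair status reached $want"
fi
```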
When LVM is not used:
Table C-7 Recovery Procedure when the Cluster of the Main Site Went Down (when LVM is Not Used)

(Rows that list no command show the status after the preceding step. Rows such as "The cluster is down." describe events or decisions that apply to both sites.)

| No. | Main site processing (command) | NFS/CIFS | FS | P-VOL | S-VOL | FS | NFS/CIFS | Remote site processing (command) |
|-----|--------------------------------|----------|----|-------|-------|----|----------|-----------------------------------|
| 1 |  | Public | Mount | PAIR | PAIR | – | – |  |
| 2 | The cluster is down. |  |  |  |  |  |  |  |
| 3 |  | – | – | PAIR | PAIR | – | – |  |
| 4 | The user decides whether to execute takeover. |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  | Execute takeover (horctakeover) |
| 6 |  |  |  | PAIR | PAIR | – | – |  |
| 7 |  |  |  |  |  |  |  | Begin deleting the pair (pairsplit -S) |
| 8 |  | – | – | SMPL | SMPL | – | – |  |
| 9 |  |  |  |  |  |  |  | Confirm that the volume pair status is SMPL (pairvolchk) |
| 10 |  | – | – | SMPL | SMPL | – | – |  |
| 11 |  |  |  |  |  |  |  | Connect the file system #1 (horcimport) |
| 12 |  | – | – | SMPL | SMPL | Un-mount | Non-public |  |
| 13 |  |  |  |  |  |  |  | Mount (fsmount) |
| 14 |  | – | – | SMPL | SMPL | Mount | Non-public | Mount completed |
| 15 |  |  |  |  |  |  |  | Create NFS/CIFS shares (nfscreate/cifscreate) |
| 16 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 17 | Start business operations at the remote site. #2 |  |  |  |  |  |  |  |
| 18 | The cluster is recovered. |  |  |  |  |  |  |  |
| 19 |  | Public | Mount | SMPL | SMPL | Mount | Public |  |
| 20 | Delete NFS/CIFS shares (nfsdelete/cifsdelete) |  |  |  |  |  |  |  |
| 21 |  | Non-public | Mount | SMPL | SMPL | Mount | Public |  |
| 22 | Un-mount (fsumount) |  |  |  |  |  |  |  |
| 23 |  | Non-public | Un-mount | SMPL | SMPL | Mount | Public |  |
| 24 | Separate the file system (horcexport) |  |  |  |  |  |  |  |
| 25 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 26 | Start CCI (horcmstart.sh) |  |  |  |  |  |  |  |
| 27 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 28 |  |  |  |  |  |  |  | Confirm the configuration definition file before creating the pair (pairdisplay) |
| 29 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 30 |  |  |  |  |  |  |  | Create a pair (paircreate) |
| 31 |  | – | – | COPY | COPY | Mount | Public |  |
| 32 |  |  |  |  |  |  |  | Confirm the volume pair status (pairdisplay) |
| 33 |  | – | – | COPY | COPY | Mount | Public |  |
| 34 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairvolchk) |
| 35 |  | – | – | COPY | COPY | Mount | Public | Execute pairvolchk several times |
| 36 |  |  |  |  |  |  |  | Pair creation is completed when the pair status changes to PAIR (pairvolchk) |
| 37 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 38 | Stop business operations at the remote site. |  |  |  |  |  |  |  |
| 39 |  |  |  |  |  |  |  | Delete NFS/CIFS shares (nfsdelete/cifsdelete) |
| 40 |  | – | – | PAIR | PAIR | Mount | Non-public |  |
| 41 |  |  |  |  |  |  |  | Un-mount (fsumount) |
| 42 |  | – | – | PAIR | PAIR | Un-mount | Non-public |  |
| 43 |  |  |  |  |  |  |  | Begin splitting the pair (pairsplit -rw) |
| 44 |  |  |  | COPY | COPY | Un-mount | Non-public |  |
| 45 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairvolchk) |
| 46 |  | – | – | COPY | COPY | Un-mount | Non-public | Execute pairvolchk several times |
| 47 |  |  |  |  |  |  |  | Splitting is completed when the pair status changes to PSUS (pairvolchk) |
| 48 |  | – | – | SSUS | PSUS | Un-mount | Non-public |  |
| 49 | Connect the file system #1 (horcimport) |  |  |  |  |  |  |  |
| 50 |  | Non-public | Un-mount | SSUS | PSUS | Un-mount | Non-public |  |
| 51 |  |  |  |  |  |  |  | Separate the file system (horcexport) |
| 52 |  | Non-public | Un-mount | SSUS | PSUS | – | – |  |
| 53 | Reverse-resynchronize (pairresync -swaps) |  |  |  |  |  |  |  |
| 54 |  | Non-public | Un-mount | COPY | COPY | – | – |  |
| 55 | Confirm pair creation (pairvolchk) |  |  |  |  |  |  |  |
| 56 | Execute pairvolchk several times | Non-public | Un-mount | COPY | COPY | – | – |  |
| 57 | Resynchronization is completed when the pair status changes to PAIR (pairvolchk) |  |  |  |  |  |  |  |
| 58 |  | Non-public | Un-mount | PAIR | PAIR | – | – |  |
| 59 | Mount (fsmount) |  |  |  |  |  |  |  |
| 60 | Mount completed | Non-public | Mount | PAIR | PAIR | – | – |  |
| 61 | Create NFS/CIFS shares (nfscreate/cifscreate) |  |  |  |  |  |  |  |
| 62 | Share creation completed | Public | Mount | PAIR | PAIR | – | – |  |
| 63 | Start business operations at the main site. #2 |  |  |  |  |  |  |  |
#1: For the file system connected to the node, specify the same file system name as on the main site.
#2: When a takeover is performed, the IP address cannot be taken over from the main site to the remote site. When starting a job at the remote site, unmount the client from the main site, change the IP address of the site on which the client is to be mounted to that of the remote site, and then mount the client again. When the job is resumed at the main site after the main site has been restored, unmount the client from the remote site, set the IP address of the site on which the client is to be mounted back to that of the main site, and then mount the client again.
When LVM is used:
Table C-8 Recovery Procedure when the Cluster of the Main Site Went Down (when LVM is Used)

(Rows that list no command show the status after the preceding step. Rows such as "The cluster is down." describe events or decisions that apply to both sites.)

| No. | Main site processing (command) | NFS/CIFS | FS | P-VOL | S-VOL | FS | NFS/CIFS | Remote site processing (command) |
|-----|--------------------------------|----------|----|-------|-------|----|----------|-----------------------------------|
| 1 |  | Public | Mount | PAIR | PAIR | – | – |  |
| 2 | The cluster is down. |  |  |  |  |  |  |  |
| 3 |  | – | – | PAIR | PAIR | – | – |  |
| 4 | The user decides whether to execute takeover. |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  | Execute takeover (horctakeover) |
| 6 |  |  |  | PAIR | PAIR | – | – |  |
| 7 |  |  |  |  |  |  |  | Begin deleting the pair (pairsplit -S) |
| 8 |  | – | – | SMPL | SMPL | – | – |  |
| 9 |  |  |  |  |  |  |  | Confirm that the volume pair status is SMPL (pairvolchk) |
| 10 |  | – | – | SMPL | SMPL | – | – |  |
| 11 |  |  |  |  |  |  |  | Connect the file system #1 (horcvmimport) |
| 12 |  | – | – | SMPL | SMPL | Un-mount | Non-public |  |
| 13 |  |  |  |  |  |  |  | Mount (fsmount) |
| 14 |  | – | – | SMPL | SMPL | Mount | Non-public | Mount completed |
| 15 |  |  |  |  |  |  |  | Create NFS/CIFS shares (nfscreate/cifscreate) |
| 16 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 17 |  |  |  |  |  |  |  | When using the file snapshot functionality (syncmount) |
| 18 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 19 | Start business operations at the remote site. #2 |  |  |  |  |  |  |  |
| 20 | The cluster on the main site is recovered. |  |  |  |  |  |  |  |
| 21 |  | Public | Mount | SMPL | SMPL | Mount | Public |  |
| 22 | When using the file snapshot functionality (syncumount) |  |  |  |  |  |  |  |
| 23 |  | Public | Mount | SMPL | SMPL | Mount | Public |  |
| 24 | Delete NFS/CIFS shares (nfsdelete/cifsdelete) |  |  |  |  |  |  |  |
| 25 |  | Non-public | Mount | SMPL | SMPL | Mount | Public |  |
| 26 | Un-mount (fsumount) |  |  |  |  |  |  |  |
| 27 |  | Non-public | Un-mount | SMPL | SMPL | Mount | Public |  |
| 28 | Separate the file system (horcexport) |  |  |  |  |  |  |  |
| 29 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 30 | Start CCI (horcmstart.sh) |  |  |  |  |  |  |  |
| 31 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 32 |  |  |  |  |  |  |  | Confirm the copy target before creating the pair (pairdisplay) |
| 33 |  | – | – | SMPL | SMPL | Mount | Public |  |
| 34 |  |  |  |  |  |  |  | Create a pair (paircreate) |
| 35 |  | – | – | COPY | COPY | Mount | Public |  |
| 36 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairdisplay) |
| 37 |  | – | – | COPY | COPY | Mount | Public |  |
| 38 |  |  |  |  |  |  |  | Check whether the volume pair has been created (pairvolchk) |
| 39 |  | – | – | COPY | COPY | Mount | Public | Execute pairvolchk several times |
| 40 |  |  |  |  |  |  |  | Pair creation is completed when the pair status changes to PAIR (pairvolchk) |
| 41 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 42 | Stop business operations at the remote site. |  |  |  |  |  |  |  |
| 43 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 44 |  |  |  |  |  |  |  | When using the file snapshot functionality (syncumount) |
| 45 |  | – | – | PAIR | PAIR | Mount | Public |  |
| 46 |  |  |  |  |  |  |  | Delete NFS/CIFS shares (nfsdelete/cifsdelete) |
| 47 |  | – | – | PAIR | PAIR | Mount | Non-public |  |
| 48 |  |  |  |  |  |  |  | Un-mount (fsumount) |
| 49 |  | – | – | PAIR | PAIR | Un-mount | Non-public |  |
| 50 |  |  |  |  |  |  |  | Suppress operations (horcfreeze) |
| 51 |  | – | – | PAIR | PAIR | Un-mount | Non-public |  |
| 52 |  |  |  |  |  |  |  | Begin splitting the pair (pairsplit -rw) |
| 53 |  | – | – | COPY | COPY | Un-mount | Non-public |  |
| 54 |  |  |  |  |  |  |  | At this point the pair status is COPY (pairvolchk) |
| 55 |  | – | – | COPY | COPY | Un-mount | Non-public | Execute pairvolchk several times |
| 56 |  |  |  |  |  |  |  | Splitting is completed when the pair status changes to PSUS (pairvolchk) |
| 57 |  | – | – | SSUS | PSUS | Un-mount | Non-public |  |
| 58 |  |  |  |  |  |  |  | Cancel suppression of operations (horcunfreeze) |
| 59 |  | – | – | SSUS | PSUS | Un-mount | Non-public |  |
| 60 | Connect the file system #1 (horcvmimport) |  |  |  |  |  |  |  |
| 61 |  | Non-public | Un-mount | SSUS | PSUS | Un-mount | Non-public |  |
| 62 |  |  |  |  |  |  |  | Separate the file system (horcexport) |
| 63 |  | Non-public | Un-mount | SSUS | PSUS | – | – |  |
| 64 | Reverse-resynchronize (pairresync -swaps) |  |  |  |  |  |  |  |
| 65 |  | Non-public | Un-mount | COPY | COPY | – | – |  |
| 66 | At this point the pair status is COPY (pairvolchk) |  |  |  |  |  |  |  |
| 67 | Execute pairvolchk several times | Non-public | Un-mount | COPY | COPY | – | – |  |
| 68 | Resynchronization is completed when the pair status changes to PAIR (pairvolchk) |  |  |  |  |  |  |  |
| 69 |  | Non-public | Un-mount | PAIR | PAIR | – | – |  |
| 70 | Mount (fsmount) |  |  |  |  |  |  |  |
| 71 | Mount completed | Non-public | Mount | PAIR | PAIR | – | – |  |
| 72 | Create NFS/CIFS shares (nfscreate/cifscreate) |  |  |  |  |  |  |  |
| 73 | Share creation completed | Public | Mount | PAIR | PAIR | – | – |  |
| 74 | When using the file snapshot functionality (syncmount) |  |  |  |  |  |  |  |
| 75 |  | Public | Mount | PAIR | PAIR | – | – |  |
| 76 | Start business operations at the main site. #2 |  |  |  |  |  |  |  |
#1: Specify the same file system name as that connected to the main site.
#2: When a takeover is performed, the IP address cannot be taken over from the main site to the remote site. When starting a job at the remote site, unmount the client from the main site, change the IP address of the site on which the client is to be mounted to that of the remote site, and then mount the client again. When the main site is restored and business operations are resumed at the main site, unmount the client from the remote site, set the IP address of the site on which the client is to be mounted back to that of the main site, and then mount the client again.
Operation when Multiple Failures Occurred in All the Storages on the Main Site
Assumed Scenarios
When multiple failures occur in all the storage systems used by the user LUs on the main site, business operations cannot be performed at the main site and are continued at the remote site. Note that ShadowImage operations are not performed.
The failed storage on the main site is recovered by replacement (performed by service personnel).
Commands to be Used
The following commands are used when multiple failures occur in all the storage systems used by the user LUs on the main site.
Table C-9 Commands for Recovery when Multiple Failures Occurred in All the Storages Used in the User LU on the Main Site (HNAS F)

1. sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
   When LVM is not used and the file system is not tiered, connects the file system to the node.
2. sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
   When LVM is not used and the file system is tiered, connects the file system to the node.
3. sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number...] [-r resource-group-name]
   When LVM is used and the file system is not tiered, connects the file system to the node.
4. sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number...] --tier2 device-file-number[,device-file-number...] [-r resource-group-name]
   When LVM is used and the file system is tiered, connects the file system to the node.
5. sudo nfsdelete -d shared-directory {-a | -H host}
   Deletes an NFS share.
6. sudo nfscreate -d shared-directory -H host
   Creates an NFS share.
7. sudo cifsdelete -x CIFS-share-name [-r resource-group-name]
   Deletes a CIFS share.
8. sudo cifscreate -x CIFS-share-name -d shared-directory
   Creates a CIFS share.
9. sudo fsumount file-system-name
   Unmounts a file system.
10. sudo fsmount {-r | -w} file-system-name
    Mounts a file system.
11. sudo fsdelete file-system-name
    Deletes a file system.
13. sudo horcvmdefine -d device-file-number[,device-file-number...]
    Reserves a device file.
14. sudo horcfreeze -f copy-source-file-system-name
    Suppresses operations on the P-VOL and stops access from clients.
15. sudo horcunfreeze -f copy-source-file-system-name
    Restarts operations on the P-VOL and access from clients.
16. sudo horcexport -f file-system-name
    Separates a file system.
17. sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
    Mounts a differential-data snapshot.
18. sudo syncumount mount-point-name
    Unmounts the differential-data snapshot.
19. sudo syncstop file-system-name
    Releases the device that stores the differential data.
20. sudo lumapctl -t m --on
    Changes the user LU allocation mode to maintenance mode.
21. sudo lumapctl -t m --off
    Changes the user LU allocation mode to the normal management mode.
Table C-10 Commands for Recovery when Multiple Failures Occurred in Allthe Storages Used in the User LU on the Main Site (CCI)
No. Command Description
1 sudo horctakeover {-g group-name | -dvolume-name} [-t time out]
Takeover the pair. When using UniversalReplicator, you can only executetakeover for individual groups. The -toption is mandatory with async.
C-30 Operation when Failures Occurred on the HNAS FHitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
No. Command Description
2 sudo paircreate {-g group-name | -dvolume-name} -f async –vl -jp P-VOL-journal-ID -js journal-ID
Creates a Universal Replicator pair.
3 sudo paircreate { -g group-name | -dvolume-name -f never -vl
Creates a TrueCopy pair.
4 sudo pairsplit {-g group-name | -d volume-name} -S
Deletes a pair.
5 sudo pairsplit {-g group-name | -d volume-name} -rw
Splits a volume pair.
6 sudo pairresync {-g group-name | -d volume-name} -swaps
Resynchronizes the split pairs.
7 sudo pairvolchk {-g group-name | -d volume-name}
Checks the status of the paired volume on the applicable site.
8 sudo pairdisplay {-g group-name | -d volume-name} -fce
Checks the pair status. The command output contains lines corresponding to CTG (CT group ID), JNL (journal group ID), and AP (path number). When using Universal Replicator, verify that the JNL in the command output is the one specified in the paircreate command.
9 sudo horcmstart.sh Starts CCI.
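The procedures below repeatedly run pairvolchk until the pair reaches a given status. As an illustrative sketch only, such polling could be wrapped in a small helper. The group name "vg01", the 10-second interval, and the exact "status = ..." text in the pairvolchk output are assumptions, not values from this guide.

```shell
# Hypothetical helper: poll pairvolchk until the pair reaches a target
# status. All names and timings here are placeholders.
wait_for_pair_status() {
  target="$1"; group="$2"
  while true; do
    # Extract the status word from output such as
    # "pairvolchk : Volstat is P-VOL.[status = PAIR]" (assumed format).
    status=$(sudo pairvolchk -g "$group" 2>/dev/null |
      sed -n 's/.*status = \([A-Z]*\).*/\1/p')
    [ "$status" = "$target" ] && break
    sleep 10    # execute pairvolchk several times, as the tables instruct
  done
  echo "pair status is now $target"
}
```

A step such as "When the pair status changes to PAIR, pair creation is completed" then becomes a single call, for example `wait_for_pair_status PAIR vg01`.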
Recovery Procedure from Failures
When multiple failures have occurred in all the storages used in the user LU on the main site, recover them by using the following procedure. In the following table, the sudo command and the options of the HNAS F commands and the CCI commands are omitted. Specify the appropriate options for the actual operation. It is a prerequisite that the S-VOL is reserved.
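As a rough sketch only, the first remote-site steps of the procedure below (takeover through share creation) might look as follows. The group name "vg01", file system name "fs01", device file number, shared directory, and host are placeholders, and the command options are abbreviated as in the table.

```shell
# Sketch of remote-site steps 5-15 of the recovery procedure (LVM not used).
# All names and numbers are placeholders, not values from this guide.
remote_site_takeover() {
  sudo horctakeover -g vg01             # step 5: execute takeover
  sudo pairsplit -g vg01 -S             # step 7: delete the volume pair
  sudo pairvolchk -g vg01               # step 9: repeat until the status is SMPL
  sudo horcimport -f fs01 -d 0          # step 11: connect the file system to the node
  sudo fsmount -w fs01                  # step 13: mount the file system read/write
  sudo nfscreate -d /mnt/fs01 -H host01 # step 15: create the NFS share
}
```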
When LVM is not used:
Table C-11 Recovery Procedure when Multiple Failures Occurred in All the Storages Used in the User LU on the Main Site (When LVM is Not Used)
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL status | Remote Site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 Multiple failures occurred in all user LUs.
3 – – PSUE SSUS – –
4 The customer decides whether to execute takeover.
5 Execute takeover
horctakeover
6 – – PSUE SSUS – –
7 Delete a volume pair
pairsplit -S
8 – – SMPL SMPL – –
9 When the pair status changes to SMPL, deletion is completed
pairvolchk
10 – – SMPL SMPL – –
11 Connect the file system #1
horcimport
12 – – SMPL SMPL Un-mount
Non-public
13 Mount fsmount
14 – – SMPL SMPL Mount Non-public
15 Share nfscreate/cifscreate
16 – – SMPL SMPL Mount Public
17 Start the business operation at the remote site. #2
18 – – SMPL SMPL Mount Public
19 lumapctl -t m --on
Maintenance mode
20 – – SMPL SMPL Mount Public
21 nfsdelete/cifsdelete
Deletes the share
22 Non-public
Mount SMPL SMPL Mount Public
23 fsumount Un-mount
24 Non-public
Un-mount
SMPL SMPL Mount Public
25 fsdelete Deletes a file system
26 – – SMPL SMPL Mount Public
27 Failover from the failed node (node #1) to the normal node (node #2) #3
28 – – SMPL SMPL Mount Public
29 Stop the node #1
30 – – SMPL SMPL Mount Public
31 Restart the OS #1
32 – – SMPL SMPL Mount Public
33 Failback the node #1
34 – – SMPL SMPL Mount Public
35 Start the node #1
36 – – SMPL SMPL Mount Public
37 Failover from the node #2 to the node #1
38 – – SMPL SMPL Mount Public
39 Stop the node #2
40 – – SMPL SMPL Mount Public
41 Restart the OS #2
42 – – SMPL SMPL Mount Public
43 Failback the node #2
44 – – SMPL SMPL Mount Public
45 Start the node #2
46 – – SMPL SMPL Mount Public
47 Stop the cluster in the main site. #3
48 – – SMPL SMPL Mount Public
49 Shut down the OS in the main site.
50 – – SMPL SMPL Mount Public
51 Replace and format the failed storage in the main site.
52 – – SMPL SMPL Mount Public
53 Start the OS in the main site.
54 – – SMPL SMPL Mount Public
55 Start the cluster in the main site.
56 – – SMPL SMPL Mount Public
57 horcmstart.sh
Start CCI
58 – – SMPL SMPL Mount Public
59 horcvmdefine
Reserves the old P-VOL
60 – – SMPL SMPL Mount Public
61 Confirm the configuration definition file before creating the pair
pairdisplay
62 – – SMPL SMPL Mount Public
63 Create a volume pair
paircreate
64 – – COPY COPY Mount Public
65 Confirm the volume pair status
pairdisplay
66 – – COPY COPY Mount Public
67 At this point the pair status is COPY
pairvolchk
68 – – COPY COPY Mount Public When a pair status changes to PAIR, pair creation is completed
69 When the pair status changes to PAIR, pair creation is completed
pairvolchk
70 – – PAIR PAIR Mount Public
71 Stop the business operation at the remote site.
72 Delete NFS/CIFS shares
nfsdelete/cifsdelete
73 – – PAIR PAIR Mount Non-public
74 Un-mount fsumount
75 – – PAIR PAIR Un-mount
Non-public
76 Begin splitting the pair
pairsplit -rw
77 – – COPY COPY Un-mount
Non-public
78 At this point the pair status is COPY
pairvolchk
79 – – COPY COPY Un-mount
Non-public
Execute pairvolchk several times
80 When the pair status changes to PSUS, pair splitting is completed
pairvolchk
81 – – SSUS PSUS Un-mount
Non-public
82 horcimport
Connect the file system #1
83 Non-public
Un-mount
SSUS PSUS Un-mount
Non-public
84 Separate the file system
horcexport
85 Non-public
Un-mount
SSUS PSUS – –
86 pairresync -swaps
Reverse resynchronize
87 pairvolchk
Non-public
Un-mount
COPY COPY – –
88 Confirm pair creation
89 Execute pairvolchk several times
Non-public
Un-mount
COPY COPY – –
90 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
91 Non-public
Un-mount
PAIR PAIR – –
92 fsmount Mount
93 Mount completed
Non-public
Mount PAIR PAIR – –
94 nfscreate/cifscreate
Share
95 Share creation completed
Public Mount PAIR PAIR – –
96 lumapctl-t m --off
Normal management mode
97 Public Mount PAIR PAIR – –
98 Start the business operation at the main site. #2
#1: For the file system connected to the node, specify the same file system name as on the main site.
#2: When the takeover is performed, the IP address cannot be taken over from the primary site to the secondary site. When starting a job at the secondary site, un-mount the client from the primary site, change the IP address of the site on which the client is to be mounted to that of the secondary site, and then mount the client again. When the job is resumed at the primary site because the primary site has been restored, un-mount the client from the secondary site, return the IP address of the site on which the client is to be mounted to that of the primary site, and then mount the client again.
#3: When the file system is blocked due to a drive failure etc., the OS needs to be restarted so that the blockade status recognized by the OS will be released.
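The client-side remount described in note #2 above can be sketched as follows. This is an illustration only: the server name, export path, and mount point are placeholders, and the actual remount must match the client's operating system and mount options.

```shell
# Hypothetical client-side sequence implied by note #2: the IP address does
# not follow the takeover, so an NFS client is detached from one site and
# reattached to the other. All arguments are placeholders.
remount_nfs_client() {
  new_server="$1"; export_path="$2"; mount_point="$3"
  umount "$mount_point"                                   # detach from the old site
  mount -t nfs "$new_server:$export_path" "$mount_point"  # reattach to the new site
}
```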
When LVM is used:
Table C-12 Recovery Procedure when Multiple Failures Occurred in All the Storages Used in the User LU on the Main Site (When LVM is Used)
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL status | Remote Site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 Failures occurred in all user LUs.
3 – – PSUE SSUS – –
4 The user determines whether to execute takeover.
5 Execute takeover
horctakeover
6 – – PSUE SSUS – –
7 Delete the volume pair
pairsplit -S
8 – – SMPL SMPL – –
9 When the pair status changes to SMPL, deletion is completed
pairvolchk
10 – – SMPL SMPL – –
11 Connect the file system #1
horcvmimport
12 – – SMPL SMPL Un-mount
Non-public
13 Mount fsmount
14 – – SMPL SMPL Mount Non-public
Mount completed
15 Create NFS/CIFS shares
nfscreate/cifscreate
16 – – SMPL SMPL Mount Public
17 When using the file snapshot functionality
syncmount
18 – – SMPL SMPL Mount Public
19 Start business operations at the remote site. #2
20 Non-public
Mount SMPL SMPL Mount Public
21 lumapctl -t m --on
Maintenance mode
22 – – SMPL SMPL Mount Public
23 syncumount
When using the file snapshot functionality
24 – – SMPL SMPL Mount Public
25 syncstop When using the file snapshot functionality
26 – – SMPL SMPL Mount Public
27 nfsdelete/cifsdelete
Deletes the share #3
28 Non-public
Mount SMPL SMPL Mount Public
29 fsumount Un-mount
30 Non-public
Un-mount
SMPL SMPL Mount Public
31 fsdelete Deletes the file system
32 – – SMPL SMPL Mount Public
33 Failover from the failed node (node #1) tothe normal node (node #2) #4
34 – – SMPL SMPL Mount Public
35 Stop the node #1
36 – – SMPL SMPL Mount Public
37 Restart the OS #1
38 – – SMPL SMPL Mount Public
39 Failback the node #1
40 – – SMPL SMPL Mount Public
41 Start the node #1
42 – – SMPL SMPL Mount Public
43 Failover from the node #2 to the node #1
44 – – SMPL SMPL Mount Public
45 Stop the node #2
46 – – SMPL SMPL Mount Public
47 Restart the OS #2
48 – – SMPL SMPL Mount Public
49 Failback the node #2
50 – – SMPL SMPL Mount Public
51 Start the node #2
52 – – SMPL SMPL Mount Public
53 Stop the cluster in the main site. #4
54 – – SMPL SMPL Mount Public
55 Shut down the OS in the main site.
56 – – SMPL SMPL Mount Public
57 Replace and format the faulty storage at the main site.
58 – – SMPL SMPL Mount Public
59 Start the OS at the main site.
60 – – SMPL SMPL Mount Public
61 Start the cluster at the main site.
62 – – SMPL SMPL Mount Public
63 horcmstart.sh
Start CCI
64 – – SMPL SMPL Mount Public
65 horcvmdefine
Reserves the old P-VOL
66 – – SMPL SMPL Mount Public
67 Confirm the copy target before creating the pair
pairdisplay
68 – – SMPL SMPL Mount Public
69 Create the volume pair
paircreate
70 – – COPY COPY Mount Public
71 Check whether the volume pair has been created
pairdisplay
72 – – COPY COPY Mount Public
73 Confirm that the volume pair has been created
pairvolchk
74 – – COPY COPY Mount Public Execute pairvolchk several times
75 When the pair status changes to PAIR, pair creation is completed
pairvolchk
76 – – PAIR PAIR Mount Public
77 Stop business operations at the remote site.
78 – – PAIR PAIR Mount Public
79 When usingthe filesnapshotfunctionality
syncumount
80 – – PAIR PAIR Mount Public
81 Delete NFS/CIFS shares.
nfsdelete/cifsdelete
82 – – PAIR PAIR Mount Non-public
83 Un-mount fsumount
84 – – PAIR PAIR Un-mount
Non-public
85 Suppress operations
horcfreeze
86 – – PAIR PAIR Un-mount
Non-public
87 Begin splitting the pair
pairsplit -rw
88 – – COPY COPY Un-mount
Non-public
89 At this point the pair status is COPY
pairvolchk
90 – – COPY COPY Un-mount
Non-public
Execute pairvolchk several times
91 When the pair status changes to PSUS, splitting is completed
pairvolchk
92 – – SSUS PSUS Un-mount
Non-public
93 Cancel suppression of operations
horcunfreeze
94 – – SSUS PSUS Un-mount
Non-public
95 horcvmimport
Connect the file system #1
96 Non-public
Un-mount
SSUS PSUS Un-mount
Non-public
97 Separate the file system
horcexport
98 Non-public
Un-mount
SSUS PSUS – –
99 pairresync -swaps
Reverse resynchronize
100 Non-public
Un-mount
COPY COPY – –
101 pairvolchk
At this point the pair status is COPY
102 Execute pairvolchk several times
Non-public
Un-mount
COPY COPY – –
103 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
104 Non-public
Un-mount
PAIR PAIR – –
105 fsmount Mount
106 Mount completed
Non-public
Mount PAIR PAIR – –
107 nfscreate/cifscreate
Create NFS/CIFS shares
108 Shares created
Public Mount PAIR PAIR – –
109 syncmount
When using the file snapshot functionality
110 Public Mount PAIR PAIR – –
111 lumapctl -t m --off
Normal management mode
112 Public Mount PAIR PAIR – –
113 Resume business operations at the main site. #2
#1: Specify the same file system name as that connected to the main site.
#2: When a takeover is performed, the IP address cannot be taken over from the main site to the remote site. When starting a job at the remote site, unmount the client from the primary site, change the IP address of the site on which the client is to be mounted to that of the remote site, and then mount the client again. When the main site is restored and business operations are resumed at the main site, unmount the client from the remote site, set the IP address of the site on which the client is to be mounted back to that of the main site, and then mount the client again.
#3: In the target file system of the file snapshot functionality, un-mount the differential-data snapshot using the syncumount command before executing the nfsdelete or cifsdelete command, and release the device which stores the differential data using the syncstop command.
#4: When the file system is blocked due to a drive failure etc., the OS needs to be restarted to release the blocked status recognized by the OS.
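The ordering that note #3 requires for snapshot-enabled file systems can be sketched as a single helper. The file system name, snapshot mount point, and shared directory are placeholders, and the nfsdelete options are abbreviated as in the command tables.

```shell
# Sketch of the teardown order from note #3 when the file snapshot
# functionality is in use. All names are placeholders.
teardown_snapshot_share() {
  fs="$1"; snap_mount="$2"; share_dir="$3"
  sudo syncumount "$snap_mount"      # 1. un-mount the differential-data snapshot first
  sudo nfsdelete -d "$share_dir" -a  # 2. then delete the share
  sudo syncstop "$fs"                # 3. finally release the differential-data device
}
```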
Operation when Multiple Failures Occurred in a Part of the Storages on the Main Site
Assumed Scenarios
When multiple failures occur in a part of the storages on the main site, business operations cannot be performed on the main site and are continued on the remote site. Note that ShadowImage operations are not performed.
The failed storage on the main site is recovered by replacement (performed by service personnel).
Commands to be Used
The following commands are used when multiple failures occurred in a part of the storages used in the user LU on the main site.
Table C-13 Commands for Recovery when Multiple Failures Occurred in a Part of the Storages Used in the User LU on the Main Site (HNAS F)
No. Command Description
1 sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
When LVM is not used and the file system is not tiered, the command connects the file system to the node.
2 sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
When LVM is not used and the file system is tiered, the command connects the file system to the node.
3 sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[, device-file-number ...] [-r resource-group-name]
When LVM is used and the file system is not tiered, the command connects the file system to the node.
4 sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[, device-file-number ...] --tier2 device-file-number[, device-file-number ...] [-r resource-group-name]
When LVM is used and the file system is tiered, the command connects the file system to the node.
5 sudo nfsdelete -d shared-directory {-a | -H Host}
Delete an NFS share.
6 sudo nfscreate -d shared-directory -H Host
Create an NFS share.
7 sudo cifsdelete -x CIFS-share-name [-r resource-group-name]
Delete a CIFS share.
8 sudo cifscreate -x CIFS-share-name -d shared-directory
Create a CIFS share.
9 sudo fsumount file-system-name Un-mount a file system.
10 sudo fsmount {-r | -w} file-system-name
Mount a file system.
11 sudo fsdelete file-system-name Delete a file system.
12 sudo horcvmdefine -d device-file-number[, device-file-number ...]
Reserves device files.
13 sudo horcfreeze -f copy-source-file-system-name
Suppresses operations on the P-VOL and stops access from clients.
14 sudo horcunfreeze -f copy-source-file-system-name
Restarts operations on the P-VOL and access from clients.
15 sudo horcexport -f file-system-name Separates a file system.
16 sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
Mounts the differential-data snapshot.
17 sudo syncumount mount-point-name Un-mounts the differential-data snapshot.
18 sudo syncstop file-system-name Releases the device which stores the differential data.
19 sudo lumapctl -t m --on Changes the mode of allocation of a user LU to the maintenance mode.
20 sudo lumapctl -t m --off Changes the mode of allocation of a user LU to the normal management mode.
Table C-14 Commands for Recovery when Multiple Failures Occurred in a Part of the Storages Used in the User LU on the Main Site (CCI)
No. Command Description
1 sudo horctakeover {-g group-name | -d volume-name} [-t timeout]
Takes over the pair. When using Universal Replicator, you can only execute takeover for individual groups. The -t option is mandatory for async pairs.
2 sudo paircreate {-g group-name | -d volume-name} -f async -vl -jp P-VOL-journal-ID -js journal-ID
Creates a Universal Replicator pair.
3 sudo paircreate {-g group-name | -d volume-name} -f never -vl
Creates a TrueCopy pair.
4 sudo pairsplit {-g group-name | -d volume-name} -S
Deletes a pair.
5 sudo pairsplit {-g group-name | -d volume-name} -rw
Splits a volume pair.
6 sudo pairresync {-g group-name | -d volume-name} -swaps
Resynchronizes the split pairs.
7 sudo pairvolchk {-g group-name | -d volume-name}
Checks the status of the paired volume on the applicable site.
8 sudo pairdisplay {-g group-name | -d volume-name} -fce
Checks the pair status. The command output contains lines corresponding to CTG (CT group ID), JNL (journal group ID), and AP (path number). When using Universal Replicator, verify that the JNL in the command output is the one specified in the paircreate command.
9 sudo horcmstart.sh Starts CCI.
Recovery Procedure from Failures
When multiple failures have occurred in a part of the storages used in the user LU on the main site, recover them by using the following procedure. In the following table, the sudo command and the options of the HNAS F commands and the CCI commands are omitted. Specify the appropriate options for the actual operation. It is a prerequisite that the S-VOL is reserved.
When using Universal Replicator, you cannot take over individual volumes. This means that even when a fault is localized to a single LU, you will need to take over the entire group.
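The group-level takeover described above can be sketched as follows. The group name "vg01" and the 30-second timeout are placeholders; as noted in the command tables, the -t option is mandatory for async pairs.

```shell
# Sketch of a whole-group takeover for Universal Replicator: even when only
# one LU has failed, the entire group is taken over. Arguments are
# placeholders, not values from this guide.
takeover_whole_group() {
  group="$1"
  sudo horctakeover -g "$group" -t 30
}
```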
When LVM is not used:
Table C-15 Recovery Procedure when Multiple Failures Occurred in a Part of the Storages Used in the User LU on the Main Site (When LVM is Not Used)
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL status | Remote Site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 Multiple failures occurred in a part of the user LUs.
3 – – PSUE SSUS – –
4 The customer decides whether to execute takeover.
5 Execute takeover
horctakeover
6 – – PSUE SSUS – –
7 Delete a volume pair
pairsplit -S
8 – – SMPL SMPL – –
9 When the pair status changes to SMPL, deletion is completed
pairvolchk
10 – – SMPL SMPL – –
11 Connect the file system #1
horcimport
12 – – SMPL SMPL Un-mount
Non-public
13 Mount fsmount
14 – – SMPL SMPL Mount Non-public
Mount completed
15 Share nfscreate/cifscreate
16 – – SMPL SMPL Mount Public
17 Start the business operation at the remote site. #2
18 – – SMPL SMPL Mount Public
19 lumapctl -t m --on
Maintenance mode
20 – – SMPL SMPL Mount Public
21 nfsdelete/cifsdelete
Delete NFS/CIFS shares
22 Non-public
Mount SMPL SMPL Mount Public
23 fsumount Un-mount
24 Non-public
Un-mount
SMPL SMPL Mount Public
25 fsdelete Delete the file system
26 – – SMPL SMPL Mount Public
27 Replace and format the failed storage in the main site.
28 – – SMPL SMPL Mount Public
29 Failover from node1 (the blocked node) to node2 (the non-blocked node) in the main site. #3
30 – – SMPL SMPL Mount Public
31 Stop the node1 in the main site.
32 – – SMPL SMPL Mount Public
33 Reboot the OS1 in the main site.
34 – – SMPL SMPL Mount Public
35 Failback in the main site1.
36 – – SMPL SMPL Mount Public
37 Stop the node1 in the main site.
38 – – SMPL SMPL Mount Public
39 Failover from the node2 to the node1 in themain site.
40 – – SMPL SMPL Mount Public
41 Stop the node2 in the main site.
42 – – SMPL SMPL Mount Public
43 Reboot the OS2 in the main site.
44 – – SMPL SMPL Mount Public
45 Failback in the main site2.
46 – – SMPL SMPL Mount Public
47 Start the node2 in the main site.
48 – – SMPL SMPL Mount Public
49 horcmstart.sh
Start CCI
50 – – SMPL SMPL Mount Public
51 horcvmdefine
Reserves the old P-VOL
52 – – SMPL SMPL Mount Public
53 Confirm the configuration definition file before creating the pair
pairdisplay
54 – – SMPL SMPL Mount Public
55 Create a volume pair
paircreate
56 – – COPY COPY Mount Public
57 Confirm the volume pair status
pairdisplay
58 – – COPY COPY Mount Public
59 At this point the pair status is COPY
pairvolchk
60 – – COPY COPY Mount Public Execute pairvolchk several times
61 When the pair status changes to PAIR, pair creation is completed
pairvolchk
62 – – PAIR PAIR Mount Public
63 Stop the business operation at the remote site.
64 Delete NFS/CIFS shares
nfsdelete/cifsdelete
65 – – PAIR PAIR Mount Non-public
66 Un-mount fsumount
67 – – PAIR PAIR Un-mount
Non-public
68 Begin splitting the pair
pairsplit -rw
69 – – COPY COPY Un-mount
Non-public
70 At this point the pair status is COPY
pairvolchk
71 – – COPY COPY Un-mount
Non-public
Execute pairvolchk several times
72 When the pair status changes to PSUS, pair splitting is completed
pairvolchk
73 – – SSUS PSUS Un-mount
Non-public
74 horcimport
Connect the file system #1
75 Non-public
Un-mount
SSUS PSUS Un-mount
Non-public
76 Separate the file system
horcexport
77 Non-public
Un-mount
SSUS PSUS – –
78 pairresync -swaps
Reverse resynchronize
79 Non-public
Un-mount
COPY COPY – –
80 pairvolchk
At this point the pair status is COPY
81 Execute pairvolchk several times
Non-public
Un-mount
COPY COPY – –
82 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
83 Non-public
Un-mount
PAIR PAIR – –
84 fsmount Mount
85 Mount completed
Non-public
Mount PAIR PAIR – –
86 nfscreate/cifscreate
Share
87 Share creation completed
Public Mount PAIR PAIR – –
88 lumapctl-t m --off
Normal management mode
89 Public Mount PAIR PAIR – –
90 Resume business operations at the main site. #2
#1: For the file system connected to the node, specify the same file system name as on the main site.
#2: When the takeover is performed, the IP address cannot be taken over from the primary site to the secondary site. When starting a job at the secondary site, un-mount the client from the primary site, change the IP address of the site on which the client is to be mounted to that of the secondary site, and then mount the client again. When the job is resumed at the primary site because the primary site has been restored, un-mount the client from the secondary site, return the IP address of the site on which the client is to be mounted to that of the primary site, and then mount the client again.
#3: When the file system is blocked due to a drive failure etc., the OS needs to be restarted so that the blockade status recognized by the OS will be released.
When LVM is used:
Table C-16 Recovery Procedure when Multiple Failures Occurred in a Part of the Storage Used in the User LU on the Main Site (When LVM is Used)
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL status | Remote Site: S-VOL status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 Failures occurred in a part of the user LUs.
3 – – PSUE SSUS – –
4 The user determines whether to executetakeover.
5 Executetakeover
horctakeover
6 – – PSUE SSUS – –
7 Begin volume pair deletion
pairsplit -S
8 – – SMPL SMPL – –
9 When the pair status changes to SMPL, deletion is completed
pairvolchk
10 – – SMPL SMPL – –
11 Connect the file system #1
horcvmimport
12 – – SMPL SMPL Un-mount
Non-public
13 Mount fsmount
14 – – SMPL SMPL Mount Non-public
Mount completed
15 Create NFS/CIFS shares
nfscreate/cifscreate
16 – – SMPL SMPL Mount Public
17 When using the file snapshot functionality
syncmount
18 – – SMPL SMPL Mount Public
19 Start business operations at the remote site. #2
20 – – SMPL SMPL Mount Public
21 lumapctl -t m --on
Maintenance mode
22 – – SMPL SMPL Mount Public
23 syncumount
When using the file snapshot functionality
24 – – SMPL SMPL Mount Public
25 syncstop When using the file snapshot functionality
26 – – SMPL SMPL Mount Public
27 nfsdelete/cifsdelete
Delete NFS/CIFS shares. #3
28 Non-public
Mount SMPL SMPL Mount Public
29 fsumount Un-mount
30 Non-public
Un-mount
SMPL SMPL Mount Public
31 fsdelete Delete the file system
32 – – SMPL SMPL Mount Public
33 Replace and format the faulty storage atthe main site.
34 – – SMPL SMPL Mount Public
35 Failover from node1 (the blocked node) to node2 (the non-blocked node) at the main site. #4
36 – – SMPL SMPL Mount Public
37 Stop node1 at the main site.
38 – – SMPL SMPL Mount Public
39 Reboot OS1 at the main site.
40 – – SMPL SMPL Mount Public
41 Failback in the main site1.
42 – – SMPL SMPL Mount Public
43 Start node1 at the main site.
44 – – SMPL SMPL Mount Public
45 Failover from node2 to node1 at the main site.
46 – – SMPL SMPL Mount Public
47 Stop node2 at the main site.
48 – – SMPL SMPL Mount Public
49 Reboot OS2 at the main site.
50 – – SMPL SMPL Mount Public
51 Failback in the main site2.
52 – – SMPL SMPL Mount Public
53 Start node2 at the main site.
54 – – SMPL SMPL Mount Public
55 horcmstart.sh
Start CCI
56 – – SMPL SMPL Mount Public
57 horcvmdefine
Reserves the old P-VOL
58 – – SMPL SMPL Mount Public
59 Confirm the copy target before creating the pair
pairdisplay
60 – – SMPL SMPL Mount Public
61 Create a volume pair
paircreate
62 – – COPY COPY Mount Public
63 Check whether the volume pair has been created
pairdisplay
64 – – COPY COPY Mount Public
65 Confirm that the pair has been created
pairvolchk
66 – – COPY COPY Mount Public Execute pairvolchk several times
67 When the pair status changes to PAIR, pair creation is completed
pairvolchk
68 – – PAIR PAIR Mount Public
69 Stop business operations at the remotesite.
70 – – PAIR PAIR Mount Public
71 When using the file snapshot functionality
syncumount
72 – – PAIR PAIR Mount Public
73 Delete NFS/CIFS shares.
nfsdelete/cifsdelete
74 – – PAIR PAIR Mount Non-public
75 Un-mount fsumount
76 – – PAIR PAIR Un-mount
Non-public
77 Suppress operations
horcfreeze
78 – – PAIR PAIR Un-mount
Non-public
79 Begin splitting the pair
pairsplit -rw
80 – – COPY COPY Un-mount
Non-public
81 At this point the pair status is COPY
pairvolchk
82 – – COPY COPY Un-mount
Non-public
Execute pairvolchk several times
83 When the pair status changes to PSUS, splitting is completed
pairvolchk
84 – – SSUS PSUS Un-mount
Non-public
85 Cancel suppression of operations
horcunfreeze
86 – – SSUS PSUS Un-mount
Non-public
87 horcvmimport
Connect the file system #1
88 Non-public
Un-mount
SSUS PSUS Un-mount
Non-public
89 Separate the file system
horcexport
90 Non-public
Un-mount
SSUS PSUS – –
91 pairresync -swaps
Reverse resynchronize
92 Non-public
Un-mount
COPY COPY – –
93 pairvolchk
At this point the pair status is COPY
94 Execute pairvolchk several times
Non-public
Un-mount
COPY COPY – –
95 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
96 Non-public
Un-mount
PAIR PAIR – –
97 fsmount Mount
98 Mount completed
Non-public
Mount PAIR PAIR – –
99 nfscreate/cifscreate
Create NFS/CIFS shares
100 Shares created
Public Mount PAIR PAIR – –
101 syncmount
When using the file snapshot functionality
102 Public Mount PAIR PAIR – –
103 lumapctl -t m --off
Normal management mode
104 Public Mount PAIR PAIR – –
105 Resume business operations at the main site. #2
#1: Specify the same file system name as that connected to the main site.
#2: When a takeover is performed, the IP address cannot be taken over from the main site to the remote site. When starting a job at the remote site, unmount the client from the primary site, change the IP address of the site on which the client is to be mounted to that of the remote site, and then mount the client again. When the main site is restored and business operations are resumed at the main site, unmount the client from the remote site, set the IP address of the site on which the client is to be mounted back to that of the main site, and then mount the client again.
#3: In the target file system of the file snapshot functionality, un-mount the differential-data snapshot using the syncumount command before executing the nfsdelete or cifsdelete command, and release the device which stores the differential data using the syncstop command.
#4: When the file system is blocked due to a drive failure or a similar fault, the OS needs to be restarted to release the blocked status recognized by the OS.
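The client-side remount described in note #2 can be sketched as a small dry-run helper. This is an illustrative sketch only: the NFS export path, mount point, and site IP address below are hypothetical placeholders (not values from this manual), and the helper prints the commands for review rather than executing them.

```shell
# Dry-run sketch of the client remount in note #2: unmount the share,
# then remount it from the other site's IP address. All names and
# addresses are hypothetical examples.
remount_from_site() {
  site_ip="$1"        # IP address of the site to mount from
  export_path="$2"    # exported file system path, e.g. /mnt/fs01
  mount_point="$3"    # local mount point on the client
  # Print instead of executing so the sequence can be reviewed first.
  echo "umount $mount_point"
  echo "mount -t nfs $site_ip:$export_path $mount_point"
}

# Example: fail a client over from the main site to the remote site.
remount_from_site 192.0.2.20 /mnt/fs01 /mnt/share
```

Running the same helper with the main site's IP address sketches the reverse move when operations return to the main site.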
Operation when Network Failures Occurred Between the Main Site and the Remote Site Causing Journal Overflow
Assumed Scenarios
When using Universal Replicator, communication was lost between the main site and the remote site because a cable was disconnected. This led to the journal overflowing.
Commands to be Used
Table C-17 Commands for Recovery when Network Failures Occurred Between the Main Site and the Remote Site Causing Journal Overflow (CCI)
No. Command Description
1 sudo pairresync {-g group-name | -d volume-name}
Resynchronizes split pairs.
2 sudo pairvolchk {-g group-name | -d volume-name}
Checks the status of the paired volume on the applicable site.
Recovery Procedure from Failures
Use the following procedure to recover from a situation in which a network failure has occurred between the main site and the remote site, causing the journal to overflow. The options of the sudo command, the HNAS F commands, and the CCI commands are omitted in the following table. Specify the appropriate options when using the commands.
Table C-18 Recovery Procedure when Network Failures Occurred Between the Main Site and the Remote Site Causing Journal Overflow
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL Status | Remote Site: S-VOL Status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 Journal overflows.
3 Public Mount SSUS PSUS – –
4 Reconnect the cable between the main siteand the remote site.
5 Public Mount SSUS PSUS – –
6 pairresync
Resynchronize the pair
7 Public Mount COPY COPY – –
8 pairvolchk
At this point the pair status is COPY
9 Execute pairvolchk several times
Public Mount COPY COPY – –
10 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
11 Public Mount PAIR PAIR – –
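Steps 6 through 10 above (resynchronize, then run pairvolchk repeatedly until the pair status reaches PAIR) amount to a polling loop, which can be sketched as below. This is a sketch under assumptions: PAIRVOLCHK_CMD stands in for a wrapper around the real `sudo pairvolchk {-g group-name | -d volume-name}` invocation that prints only the status token (COPY, PAIR, PSUS, ...), and the retry count and interval are illustrative defaults.

```shell
# Sketch of "execute pairvolchk several times": poll the pair status
# until it reaches the target state or we give up. PAIRVOLCHK_CMD is a
# placeholder for a wrapper that prints only the status token.
wait_for_pair_status() {
  target="$1"
  max_tries="${2:-30}"
  tries=0
  while [ "$tries" -lt "$max_tries" ]; do
    status=$($PAIRVOLCHK_CMD)
    if [ "$status" = "$target" ]; then
      echo "pair status is $target"
      return 0
    fi
    tries=$((tries + 1))
    sleep "${POLL_INTERVAL:-10}"   # seconds between checks (illustrative)
  done
  echo "gave up waiting for $target (last status: $status)" >&2
  return 1
}
```

On a live node this would follow step 6 (`sudo pairresync ...`) and be called as `wait_for_pair_status PAIR`; the same loop applies wherever the tables say "Execute pairvolchk several times" until COPY becomes PAIR or PSUS.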
Operation when Network Failures Occurred Between the Main Site and the Remote Site Without Causing Journal Overflow
Assumed Scenarios
When using Universal Replicator, communication was lost between the main site and the remote site because a cable was disconnected, but the journal did not overflow.
Recovery Procedure from Failures
Use the following procedure to recover from a situation in which a network failure occurred between the main site and the remote site, but the journal did not overflow.
Table C-19 Recovery Procedure when Network Failures Occurred Between the Main Site and the Remote Site Without Causing Journal Overflow
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL Status | Remote Site: S-VOL Status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 Cable is disconnected.
3 Public Mount PAIR PAIR – –
4 Reconnect the cable between the main siteand the remote site.
Operation when Network Failures Occurred in the Main Site
Assumed Scenarios
Access from the client is not possible due to switch failures in the main site.
Commands to be Used
Table C-20 Commands for Recovery when a Network Failure Occurred within the Main Site (HNAS F)
No. Command Description
1 sudo horcimport -f copy-destination-file-system-name -d device-file-number [-r resource-group-name]
When LVM is not used and the file system is not tiered, the command connects the file system to the node.
2 sudo horcimport -f copy-destination-file-system-name --tier1 device-file-number --tier2 device-file-number [-r resource-group-name]
When LVM is not used and the file system is tiered, the command connects the file system to the node.
3 sudo horcvmimport -f copy-destination-file-system-name -d device-file-number[,device-file-number ...] [-r resource-group-name]
When LVM is used and the file system is not tiered, the command connects the file system to the node.
4 sudo horcvmimport -f copy-destination-file-system-name --tier1 device-file-number[,device-file-number ...] --tier2 device-file-number[,device-file-number ...] [-r resource-group-name]
When LVM is used and the file system is tiered, the command connects the file system to the node.
5 sudo nfsdelete -d shared-directory {-a |-H Host}
Delete an NFS share.
6 sudo nfscreate -d shared-directory -H Host
Create an NFS share.
7 sudo cifsdelete -x CIFS-share-name [-r resource-group-name]
Delete a CIFS share.
8 sudo cifscreate -x CIFS-share-name -d shared-directory
Create a CIFS share.
9 sudo fsumount file-system-name Un-mount a file system.
10 sudo fsmount file-system-name Mount a file system.
11 sudo horcfreeze -f copy-source-file-system-name
Suppresses operations on the P-VOL and stops access from clients.
12 sudo horcunfreeze -f copy-source-file-system-name
Restarts operations on the P-VOL and access from clients.
13 sudo horcexport -f file-system-name Separate a file system.
14 sudo syncmount file-system-name differential-data-snapshot-name mount-point-name
Mounts the differential-data snapshot.
15 sudo syncumount mount-point-name Un-mounts the differential-data snapshot.
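The four connect variants in rows 1 through 4 of Table C-20 differ only in the command name (horcimport versus horcvmimport, depending on LVM use) and in the device options (-d versus --tier1/--tier2, depending on tiering). A dry-run helper that assembles the matching command line might look like the following sketch; the file system and device-file-number arguments in the example are placeholders, and the helper only prints the command rather than running it.

```shell
# Dry-run sketch: pick the connect command from Table C-20 based on
# whether LVM is used and whether the file system is tiered.
build_import_cmd() {
  use_lvm="$1"   # yes|no
  tiered="$2"    # yes|no
  fs="$3"        # copy-destination file system name (placeholder)
  dev1="$4"      # device file number (tier 1 when tiered)
  dev2="$5"      # device file number for tier 2 (tiered case only)
  if [ "$use_lvm" = "yes" ]; then
    cmd="horcvmimport"
  else
    cmd="horcimport"
  fi
  if [ "$tiered" = "yes" ]; then
    echo "sudo $cmd -f $fs --tier1 $dev1 --tier2 $dev2"
  else
    echo "sudo $cmd -f $fs -d $dev1"
  fi
}

# Example: LVM not used, non-tiered file system (row 1 of Table C-20).
build_import_cmd no no fs01 12
```

The optional -r resource-group-name argument from the table is omitted here for brevity; append it when the file system belongs to a specific resource group.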
Table C-21 Commands for Recovery when a Network Failure Occurred within the Main Site (CCI)
No. Command Description
1 sudo horctakeover {-g group-name | -d volume-name} [-t timeout]
Takes over the pair. When using Universal Replicator, you can only execute a takeover for individual groups. The -t option is mandatory with async.
2 sudo paircreate {-g group-name | -d volume-name} -f async -vl -jp P-VOL-journal-ID -js journal-ID
Creates a Universal Replicator pair.
3 sudo paircreate {-g group-name | -d volume-name} -f never -vl
Creates a TrueCopy pair.
4 sudo pairsplit {-g group-name | -d volume-name} -S
Deletes a pair.
5 sudo pairsplit {-g group-name | -d volume-name} -rw
Splits a volume pair.
6 sudo pairresync {-g group-name | -d volume-name} -swaps
Resynchronizes the split pairs.
7 sudo pairvolchk {-g group-name | -d volume-name}
Checks the status of the paired volume on the applicable site.
8 sudo pairdisplay {-g group-name | -d volume-name} -fce
Checks the pair status. The command output contains lines corresponding to CTG (CT group ID), JNL (journal group ID), and AP (path number). When using Universal Replicator, verify that the JNL in the command output is the one specified in the paircreate command.
9 sudo horcmstart.sh Starts CCI.
Recovery Procedure from Failures
A switch failure is usually resolved by replacing the switch. However, when you want to start operations at the remote site immediately, recover using the following procedure. The options of the sudo command, the HNAS F commands, and the CCI commands are omitted in the following table. Specify the appropriate options for the actual operation.
When LVM is not used:
Table C-22 Recovery Procedure when Switch Failures Occurred (when LVM is Not Used)
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL Status | Remote Site: S-VOL Status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 A switch failure occurs.
3 Public Mount PAIR PAIR – –
4 The user decides whether to execute a takeover.
5 Execute takeover
horctakeover
6 Public Mount PAIR PAIR – –
7 Begin pair deletion
pairsplit -S
8 Public Mount SMPL SMPL – –
9 Confirm that the volume pair status is SMPL
pairvolchk
10 Public Mount SMPL SMPL – –
11 Connect the file system #1
horcimport
12 Public Mount SMPL SMPL Un-mount
Non-public
13 Mount fsmount
14 Public Mount SMPL SMPL Mount Non-public
Mount completed
15 Share nfscreate/cifscreate
16 Public Mount SMPL SMPL Mount Public
17 Start the business operation at the remotesite. #2
18 Replace the switch.
19 nfsdelete/cifsdelete
Delete NFS/CIFS shares
20 Non-public
Mount SMPL SMPL Mount Public
21 fsumount Un-mount
22 Non-public
Un-mount
SMPL SMPL Mount Public
23 horcexport
Separate file system
24 – – SMPL SMPL Mount Public
25 Confirm the copy target before creating the pair
pairdisplay
26 – – SMPL SMPL Mount Public
27 Create pair paircreate
28 – – COPY COPY Mount Public
29 Confirm the volume pair status
pairdisplay
30 – – COPY COPY Mount Public
31 Confirm the volume pair status
pairvolchk
32 – – COPY COPY Mount Public Execute pairvolchk several times
33 When the pair status changes to PAIR, pair creation is completed
pairvolchk
34 – – PAIR PAIR Mount Public
35 Stop the business operation at the remote site.
36 Delete NFS/CIFS shares
nfsdelete/cifsdelete
37 – – PAIR PAIR Mount Non-public
38 Un-mount fsumount
39 – – PAIR PAIR Un-mount
Non-public
40 Begin splitting the pair
pairsplit -rw
41 – – COPY COPY Un-mount
Non-public
42 At this point the pair status is COPY
pairvolchk
43 – – COPY COPY Un-mount
Non-public
Execute pairvolchk several times
44 When the pair status changes to PSUS, splitting the pair is completed
pairvolchk
45 – – SSUS PSUS Un-mount
Non-public
46 horcimport
Connect the file system #1
47 Non-public
Un-mount
SSUS PSUS Un-mount
Non-public
48 Separate file system
horcexport
49 Non-public
Un-mount
SSUS PSUS – –
50 pairresync -swaps
Resynchronize
51 Non-public
Un-mount
COPY COPY – –
52 pairvolchk
At this point the pair status is COPY
53 Execute pairvolchk several times
Non-public
Un-mount
COPY COPY – –
54 pairvolchk
When the pair status changes to PAIR, resynchronizing the pair is completed
55 Non-public
Un-mount
PAIR PAIR – –
56 fsmount Mount
57 Mount completed
Non-public
Mount PAIR PAIR – –
58 nfscreate/cifscreate
Share
59 Share creation completed
Public Mount PAIR PAIR – –
60 Start the business operation at the main site. #2
#1: Specify the same file system name as that used at the main site for the file system connected to the node.
#2: When the takeover is performed, the IP address cannot be taken over from the primary site to the secondary site. When starting a job at the secondary site, un-mount the client from the primary site, change the IP address of the site on which the client is to be mounted to that of the secondary site, and then mount the client again. When the job is resumed at the primary site because the primary site has been restored, un-mount the client from the secondary site, return the IP address of the site on which the client is to be mounted to that of the primary site, and then mount the client again.
When LVM is used:
Table C-23 Recovery Procedure when Switch Failures Occurred (when LVM is Used)
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL Status | Remote Site: S-VOL Status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 A switch failure occurs.
3 Public Mount PAIR PAIR – –
4 The user determines whether to execute a takeover.
5 Execute takeover
horctakeover
6 Public Mount PAIR PAIR – –
7 Begin pair deletion
pairsplit -S
8 Public Mount SMPL SMPL – –
9 Confirm that the volume pair status is SMPL
pairvolchk
10 Public Mount SMPL SMPL – –
11 Connect the file system #1
horcvmimport
12 Public Mount SMPL SMPL Un-mount
Non-public
13 Mount fsmount
14 Public Mount SMPL SMPL Mount Non-public
Mount completed
15 Create NFS/CIFS shares
nfscreate/cifscreate
16 Public Mount SMPL SMPL Mount Public
17 When using the file snapshot functionality
syncmount
18 Public Mount SMPL SMPL Mount Public
19 Start business operations at the remote site. #2
20 Replace the switch.
21 syncumount
When using the file snapshot functionality
22 Public Mount SMPL SMPL Mount Public
23 nfsdelete/cifsdelete
Delete NFS/CIFS shares. #3
24 Non-public
Mount SMPL SMPL Mount Public
25 fsumount Un-mount
26 Non-public
Un-mount
SMPL SMPL Mount Public
27 horcexport
Separate file system
28 – – SMPL SMPL Mount Public
29 Confirm the copy target before creating the pair
pairdisplay
30 – – SMPL SMPL Mount Public
31 Create pair paircreate
32 – – COPY COPY Mount Public
33 Check whether the volume pair has been created
pairdisplay
34 – – COPY COPY Mount Public
35 Check whether the volume pair has been created
pairvolchk
36 – – COPY COPY Mount Public Execute pairvolchk several times
37 When the pair status changes to PAIR, pair creation is completed
pairvolchk
38 – – PAIR PAIR Mount Public
39 Stop business operations at the remote site.
40 – – PAIR PAIR Mount Public
41 When using the file snapshot functionality
syncumount
42 – – PAIR PAIR Mount Public
43 Delete NFS/CIFS shares.
nfsdelete/cifsdelete
44 – – PAIR PAIR Mount Non-public
45 Un-mount fsumount
46 – – PAIR PAIR Un-mount
Non-public
47 Suppress operations
horcfreeze
48 – – PAIR PAIR Un-mount
Non-public
49 Begin splitting the pair
pairsplit -rw
50 – – COPY COPY Un-mount
Non-public
51 At this point the pair status is COPY
pairvolchk
52 – – COPY COPY Un-mount
Non-public
Execute pairvolchk several times
53 When the pair status changes to PSUS, splitting is completed
pairvolchk
54 – – SSUS PSUS Un-mount
Non-public
55 Cancel suppression of operations
horcunfreeze
56 – – SSUS PSUS Un-mount
Non-public
57 horcvmimport
Connect the file system #1
58 Non-public
Un-mount
SSUS PSUS Un-mount
Non-public
59 Separate file system
horcexport
60 Non-public
Un-mount
SSUS PSUS – –
61 pairresync -swaps
Resynchronize
62 Non-public
Un-mount
COPY COPY – –
63 pairvolchk
At this point the pair status is COPY
64 Execute pairvolchk several times
Non-public
Un-mount
COPY COPY – –
65 pairvolchk
When the pair status changes to PAIR, resynchronization is completed
66 Non-public
Un-mount
PAIR PAIR – –
67 fsmount Mount
68 Mount completed
Non-public
Mount PAIR PAIR – –
69 nfscreate/cifscreate
Create NFS/CIFS shares
70 Shares created
Public Mount PAIR PAIR – –
71 syncmount
When using the file snapshot functionality
72 Public Mount PAIR PAIR – –
73 Resume business operations at the main site. #2
#1: Specify the same file system name as that connected to the main site.
#2: When a takeover is performed, the IP address cannot be taken over from the main site to the remote site. When starting a job at the remote site, unmount the client from the primary site, change the IP address of the site on which the client is to be mounted to that of the remote site, and then mount the client again. When the main site is restored and business operations are resumed at the main site, unmount the client from the remote site, set the IP address of the site on which the client is to be mounted back to that of the main site, and then mount the client again.
#3: In the target file system of the file snapshot functionality, un-mount the differential-data snapshot using the syncumount command before executing the nfsdelete or cifsdelete command, and release the device which stores the differential data using the syncstop command.
Operation when Multiple Failures Occurred in Some or All Journals on the Main Site
Assumed Scenarios
Multiple failures occurred on all or some of the journals on the main site.
Commands to be Used
Table C-24 Commands for Recovery when Multiple Failures Occurred in All or Some Journals on the Main Site
No. Command Description
1 sudo paircreate {-g group-name | -d volume-name} -f async -vl -jp P-VOL-journal-ID -js journal-ID
Creates a Universal Replicator pair.
2 sudo pairsplit {-g group-name | -d volume-name} -S
Deletes a volume pair.
3 sudo pairvolchk {-g group-name | -d volume-name}
Checks the status of the paired volume on the applicable site.
4 sudo pairdisplay {-g group-name | -d volume-name} -fce
Checks the pair status. The command output contains lines corresponding to CTG (CT group ID), JNL (journal group ID), and AP (path number). When using Universal Replicator, verify that the JNL in the command output is the one specified in the paircreate command.
Recovery Procedure from Failures
Use the following procedure to recover from multiple failures of all or some of the journals on the main site. The options of the sudo command, the HNAS F commands, and the CCI commands are omitted in the following table. Specify the appropriate options when using the commands.
Table C-25 Recovery Procedure when Multiple Failures Occurred in Some or All Journals on the Main Site
No. | Main Site: Command, Processing, NFS/CIFS, FS, P-VOL Status | Remote Site: S-VOL Status, FS, NFS/CIFS, Processing, Command
1 Public Mount PAIR PAIR – –
2 Journal failure occurs.
3 Public Mount PSUE SSUS – –
4 pairsplit -S
Begin deleting the pair
5 Public Mount SMPL SMPL – –
6 At the main site, clear the definition for the journal(s) where the fault occurred.
7 Public Mount SMPL SMPL – –
8 Replace and format the faulty storage at the main site.
9 Public Mount SMPL SMPL – –
10 Define the journal(s).
11 Public Mount SMPL SMPL – –
12 pairdisplay
Confirm the copy target before creating the pair
13 Public Mount SMPL SMPL – –
14 paircreate
Create a pair
15 Public Mount COPY COPY – –
16 pairdisplay
Check whether the volume pair has been created
17 Public Mount COPY COPY – –
18 pairvolchk
At this point the pair status is COPY
19 Public Mount COPY COPY – –
20 pairvolchk
When the pair status changes to PAIR, pair creation is completed
21 Public Mount PAIR PAIR – –
D System Option Settings
This appendix describes the system option settings for storage systems.
□ mode495
mode495
If a replication function (ShadowImage) is used by a script which does not use the horcexport command (supported by Hitachi NAS Base Suite 06-03-00-00), set the system option mode to 495 for the storage system.
E Acronyms
This section lists the acronyms used in the HNAS F manuals.
□ Acronyms used in the HNAS F manuals.
Acronyms used in the HNAS F manuals
The following acronyms are used in the HNAS F manuals.
ABE Access Based Enumeration
ACE access control entry
ACL access control list
AJP Apache JServ Protocol
API application programming interface
ARP Address Resolution Protocol
ASCII American Standard Code for Information Interchange
ASN Abstract Syntax Notation
BDC Backup Domain Controller
BMC baseboard management controller
CA certificate authority
CHA channel adapter
CHAP Challenge-Handshake Authentication Protocol
CIFS Common Internet File System
CIM Common Information Model
CLI command line interface
CPU central processing unit
CSR certificate signing request
CSV comma-separated values
CTL controller
CU control unit
CV custom volume
DACL discretionary access control list
DAR Direct Access Recovery
DB database
DBMS database management system
DC domain controller
DEP data execution prevention
DES Data Encryption Standard
DFS distributed file system
DIMM dual in-line memory module
DLL dynamic-link library
DN distinguished name
DNS Domain Name System
DOM Document Object Model
DOS Disk Operating System
DRAM dynamic random access memory
DSA digital signature algorithm
DTD Document Type Definition
ECC error-correcting code
EUC Extended UNIX Code
FC Fibre Channel
FIB forwarding information base
FIFO First In, First Out
FQDN fully qualified domain name
FTP File Transfer Protocol
FV Fixed Volume
FXP File Exchange Protocol
GbE Gigabit Ethernet
GID group identifier
GMT Greenwich Mean Time
GPL GNU General Public License
GUI graphical user interface
HBA host bus adapter
H-LUN host logical unit number
HPFS High Performance File System
HSSO HiCommand single sign-on
HTML HyperText Markup Language
HTTP Hypertext Transfer Protocol
HTTPS Hypertext Transfer Protocol Secure
I/O input/output
ICAP Internet Content Adaptation Protocol
ICMP Internet Control Message Protocol
ID identifier
IP Internet Protocol
IP-SW IP switch
JDK Java Development Kit
JIS Japanese Industrial Standards
JSP JavaServer Pages
KDC Key Distribution Center
LACP Link Aggregation Control Protocol
LAN local area network
LBA logical block addressing
LCD Local Configuration Datastore
LDAP Lightweight Directory Access Protocol
LDEV logical device
LDIF LDAP Data Interchange Format
LDKC logical disk controller
LED light-emitting diode
LF Line Feed
LTS long term support
LU logical unit
LUN logical unit number
LUSE logical unit size expansion
LVI Logical Volume Image
LVM Logical Volume Manager
MAC Media Access Control
MD5 Message-Digest algorithm 5
MIB management information base
MMC Microsoft Management Console
MP microprocessor
MSS maximum segment size
MTU maximum transmission unit
NAS Network-Attached Storage
NAT network address translation
NDMP Network Data Management Protocol
NetBIOS Network Basic Input/Output System
NFS Network File System
NIC network interface card
NIS Network Information Service
NTFS New Technology File System
NTP Network Time Protocol
OID object identifier
ORB object request broker
OS operating system
PAP Password Authentication Protocol
PC personal computer
PCI Peripheral Component Interconnect
PDC Primary Domain Controller
PDU protocol data unit
PID process identifier
POSIX Portable Operating System Interface for UNIX
PP program product
RADIUS Remote Authentication Dial In User Service
RAID Redundant Array of Independent Disks
RAM random access memory
RAS Reliability Availability Serviceability
RCS Revision Control System
RD relational database
RFC Request for Comments
RID relative identifier
RPC remote procedure call
RSA Rivest, Shamir, and Adleman
SACL system access control list
SAN storage area network
SAS Serial Attached SCSI
SATA serial ATA
SAX Simple API for XML
SCSI Small Computer System Interface
SFTP SSH File Transfer Protocol
SHA secure hash algorithm
SID security identifier
SJIS Shift JIS
SLPR Storage Logical Partition
SMB Server Message Block
SMD5 Salted Message Digest 5
SMTP Simple Mail Transfer Protocol
SNMP Simple Network Management Protocol
SOAP Simple Object Access Protocol
SP service pack
SSD solid-state drive
SSH Secure Shell
SSHA Salted Secure Hash Algorithm
SSL Secure Sockets Layer
SSO single sign-on
SVGA Super Video Graphics Array
TCP Transmission Control Protocol
TFTP Trivial File Transfer Protocol
TOS type of service
TTL time to live
UAC User Account Control
UDP User Datagram Protocol
UID user identifier
UNC Universal Naming Convention
URI Uniform Resource Identifier
URL Uniform Resource Locator
UTC Coordinated Universal Time
UTF UCS Transformation Format
VDEV Virtual Device
VLAN virtual LAN
VLL Virtual LVI/LUN
WADL Web Application Description Language
WAN wide area network
WINS Windows Internet Name Service
WORM Write Once, Read Many
WS workstation
WWN World Wide Name
WWW World Wide Web
XDR External Data Representation
XFS extended file system
XML Extensible Markup Language
Index
C
Command Control Interface 2-2
Command device 2-3
D
Dynamic Provisioning 3-2
Dynamic Tiering 3-2
E
Encryption License Key 3-14
J
journal volume 2-2
L
LU 2-2
P
P-VOL 2-2
performance management function 1-2
primary volume 2-2
program product 1-2
R
replication functions 1-2, 2-2
resource management function 1-2
S
S-VOL 2-2
secondary volume 2-2
secondary volumes 2-2
ShadowImage 2-2
T
TrueCopy 2-2
U
Universal Replicator 2-2
Universal Volume Manager 3-3
V
volume management function 1-2
volume management functions 3-1
Volume Migration 3-14
Volume Shredder 3-14
Hitachi NAS Platform F1000 Series Enterprise Array Features Administrator's Guide
Hitachi Data Systems
Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com
Regional Contact Information
Americas: +1 408 970 [email protected]
Europe, Middle East, and Africa: +44 (0)1753 [email protected]
Asia Pacific: +852 3189 [email protected]
MK-92NAS094-02