
EMC® VNX Series

Release 7.0

Using VNX Multi-Path File System

P/N 300-011-804

REV A05

EMC Corporation
Corporate Headquarters:

Hopkinton, MA 01748-9103

1-508-435-1000

www.EMC.com

Copyright © 2006-2012 EMC Corporation. All rights reserved.

Published June 2012

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.


Contents

Preface.....................................................................................................5

Chapter 1: Introduction...........................................................................7

System requirements....................................................................................9

Restrictions..................................................................................................10

User interface choices.................................................................................11

Related information.....................................................................................12

Chapter 2: Concepts.............................................................................13

Understanding MPFS threads.....................................................................15

Overview of EMC VNX MPFS over FC and iSCSI......................................15

VNX MPFS architectures............................................................................15

VNX MPFS over Fibre Channel..........................................................16

VNX MPFS over iSCSI.......................................................................17

VNX MPFS over iSCSI/FC ................................................................18

Planning considerations..............................................................................20

Compatibility with MPFS.............................................................................20

Chapter 3: Configuring.........................................................................23

MPFS configuration summary.....................................................................24

Steps for configuring a VNX for file..............................................................24

Verify file system compatibility.....................................................................25

Start MPFS..................................................................................................27

Operating MPFS through a firewall.............................................................28

Mount a file system for servers...................................................................28

Exporting a file system path for MPFS servers..................................28

Stop MPFS..................................................................................................29


Chapter 4: Managing............................................................................31

Set the threads variable..............................................................................32

Delete configuration parameters.................................................................33

Add threads.................................................................................................34

Delete threads.............................................................................................34

Reset default values....................................................................................35

View MPFS statistics...................................................................................35

Viewing MPFS protocol statistics.......................................................36

Viewing MPFS performance statistics................................................37

Blade statistics...................................................................................38

MPFS session statistics.....................................................................40

File statistics.......................................................................................41

Listing open sessions.........................................................................43

Resetting statistics.............................................................................44

Chapter 5: Troubleshooting..................................................................45

EMC E-Lab Interoperability Navigator.........................................................46

Error messages...........................................................................................46

EMC Training and Professional Services....................................................47

Installing MPFS software.............................................................................47

Mounting and unmounting a file system......................................................49

Miscellaneous issues..................................................................................54

Glossary..................................................................................................57

Index.......................................................................................................63


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.


Special notice conventions

EMC uses the following conventions for special notices:

Note: Emphasizes content that is of exceptional importance or interest but does not relate to personal injury or business/data loss.

NOTICE: Identifies content that warns of potential business or data loss.

CAUTION: Indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

WARNING: Indicates a hazardous situation which, if not avoided, could result in death or serious injury.

DANGER: Indicates a hazardous situation which, if not avoided, will result in death or serious injury.

Where to get help

EMC support, product, and licensing information can be obtained as follows:

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support website (registration required) at http://Support.EMC.com.

Troubleshooting — Go to the EMC Online Support website. After logging in, locate the applicable Support by Product page.

Technical support — For technical support and service requests, go to EMC Customer Service on the EMC Online Support website. After logging in, locate the applicable Support by Product page, and choose either Live Chat or Create a service request. To open a service request through EMC Online Support, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Note: Do not request a specific support representative unless one has already been assigned to your particular system problem.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications.

Please send your opinion of this document to:

[email protected]


Chapter 1: Introduction

The EMC VNX Multi-Path File System (MPFS) combines the industry-standard file sharing of network-attached storage (NAS) and the high performance and efficient data delivery of a storage area network (SAN) into one unified storage network.

The MPFS file system accelerates data access by providing separate transports for file data (file content) and metadata (control data). Only metadata passes through the server, and all file content is accessed directly from the system, which decreases overall network traffic. In addition, servers access file data from an EMC Symmetrix system or EMC VNX for block over iSCSI, Fibre Channel, or Fibre Channel over Ethernet (FCoE) connections. This increases the speed with which the VNX for file can deliver files to the servers.

Note: A server is a Linux, Windows, HP-UX, AIX, or Solaris server unless specified.

An MPFS-enabled file system:

Enables data access at channel speeds

Reduces network traffic

Improves server performance

Enhances VNX for file performance and system scalability

Allows file sharing among heterogeneous clients

This document is part of the VNX for file information set and is intended for use by system administrators responsible for supporting High Performance Computing (HPC), grid computing, distributed computing, virtualization, or simply backing up file systems by using MPFS.

Topics included are:

System requirements on page 9
Restrictions on page 10


User interface choices on page 11
Related information on page 12


System requirements

Table 1 on page 9 describes the EMC® VNX MPFS software, hardware, network, and storage configurations.

Table 1. System requirements

VNX/MPFS software:
EMC VNX OE for File version 7.0
MPFS software packages for Linux, Windows, UNIX, AIX, or Solaris

Linux software (Linux servers running one of the following):
CentOS 5 with update 6 or later (iSCSI only)
CentOS 6 with update 1 or later (iSCSI only)
Red Hat Enterprise Linux 4 with update 6 or later
Red Hat Enterprise Linux 5 with update 6 or later (without Itanium)
Red Hat Enterprise Linux 6 base or later
SuSE Linux Enterprise Server 10 with SP2 or later (SP3 without Itanium)
SuSE Linux Enterprise Server 11 with SP1 or later (without Itanium)

Windows software (Windows servers running one of the following):
Windows 2003 with SP2
Windows 2003 x64 with SP2
Windows 2003 R2 with SP2
Windows 2003 R2 x64 with SP2
Windows XP with SP3
Windows XP x64 with SP2
Windows Vista with SP2
Windows Vista x64 with SP2
Windows 2008 with SP2
Windows 2008 x64 with SP2
Windows 2008 R2 x64
Windows 7
Windows 7 x64

Solaris software (Solaris servers running one of the following):
Sun Solaris version 5.9 (Sparc)
Sun Solaris version 5.10 (Sparc)
Sun Solaris version 5.10 (AMD)

AIX software (AIX servers running one of the following):
IBM AIX 5.3 with ML04 and ML07
IBM AIX 6.1 with ML00
IBM AIX 6.1 with TL05, TL06, or TL07

UNIX software (UNIX servers running one of the following):
HP-UX version 11.23 with V2
HP-UX version 11.23 (Itanium) with V2
HP-UX version 11.31 (Itanium) with V3

VMware software:
VMware ESX version 3.0.1 and 3.0.2 (FC) or later
VMware ESX version 3.5.1 (FC and iSCSI) (optional) or later

Hardware:
VNX for file
EMC Symmetrix® system or VNX for block

Network:
Server with a Fibre Channel connection to the system. For an iSCSI configuration, an OS-based iSCSI initiator and an IP connection to a switch are required.

Storage:
Symmetrix system or VNX for block

Note: The EMC E-Lab Interoperability Navigator provides specific information on server version support.

Restrictions

These restrictions apply when using MPFS:

The EMC VNX for file MPFS over iSCSI configurations do not use the iSCSI target feature. They rely on the iSCSI initiator and the FC switch, FCoE switch, or the VNX for block as the iSCSI target.


The Nolock Common Internet File System (CIFS) locking policy is the default setting for MPFS; it is also the only locking policy supported on an MPFS-enabled file system.

Only the Nolock locking policy is compatible with MPFS on a VNX for file. A server cannot properly mount a file system when a blade is running an incompatible locking policy. Managing VNX for file for a Multiprotocol Environment provides more information on locking policies.

Before enabling any new features, ensure that the file system is compatible with MPFS. Verify file system compatibility on page 25 provides the appropriate command syntax.

MPFS improves performance significantly when large file transfers (sequential I/Os) are common. MPFS does not greatly benefit a configuration that deals with many small, random I/Os.

When both Checkpoint and VNX Replicator are active, MPFS system performance is reduced. The performance reduction is caused by additional CPU use and I/O overhead of the block copy operation.

An MPFS file system with a stripe size of 256 KB generally achieves optimal performance.

To ensure continuous availability of file systems in the unlikely event of a blade failure, configure each MPFS-enabled blade for automatic failover to a standby blade. Configuring Standbys on VNX provides more information on configuring a standby blade.

When exporting a file system on a blade by using the server_export command and the ro (read-only) option, MPFS disregards the read-only option and writes to the file system.

When an MPFS-enabled file system is extended by using the nas_fs command through the command line interface (CLI) or by using EMC Unisphere software, the server loses the MPFS connection. Rezone the server to see the added disks; the server enables MPFS on the file system after rezoning. The VNX Command Reference Manual provides information on the nas_fs command. Managing VNX Volumes and File Systems Manually provides information on extending file systems. Both documents are available on the EMC Powerlink® website (registration required) at http://Powerlink.EMC.com.

MPFS is compatible with the EMC Common AntiVirus Agent (CAVA) and Common Event Publishing Agent (CEPA). However, CAVA or CEPA cannot share the same server with the MPFS software in a Windows environment.

MPFS Windows software version 3.2 or earlier supports global shares but does not support NetBIOS shares. MPFS Windows software version 3.2.1 and later support both global shares and NetBIOS shares.

User interface choices

This document describes how to configure MPFS by using the CLI.


Related information

Specific information related to the features and functionality described in this document is included in:

EMC VNX MPFS over iSCSI Applied Best Practices Guide

EMC VNX Series MPFS over FC and iSCSI v6.0 Linux Clients Product Guide

EMC VNX Series MPFS over FC and iSCSI v6.0 Windows Clients Product Guide

EMC VNX Series MPFS over FC v4.0 HP-UX, AIX and Solaris Clients Product Guide

EMC VNX MPFS for Linux Clients Release Notes

EMC VNX MPFS for Windows Clients Release Notes

EMC VNX MPFS for HP-UX and Solaris Clients Release Notes

EMC VNX MPFS for AIX Clients Release Notes

Using International Character Sets with VNX

EMC VNX documentation on the EMC Online Support website

The complete set of EMC VNX series customer publications is available on the EMC Online Support website. To search for technical documentation, go to http://Support.EMC.com. After logging in to the website, click the VNX Support by Product page to locate information for the specific feature required.

VNX wizards

Unisphere software provides wizards for performing setup and configuration tasks. The Unisphere online help provides more details on the wizards.

When you select a system in Unisphere, the wizards are displayed on the right as shown in Figure 1 on page 12.

Figure 1. Unisphere wizard main screen


Chapter 2: Concepts

MPFS allows servers to access shared data concurrently over iSCSI or Fibre Channel connections to Symmetrix or VNX for block systems. The components needed are:

VNX for file with MPFS

Symmetrix or VNX for block system

Linux, Windows, UNIX, AIX, or Solaris server

Each component is described in Table 2 on page 13.

Table 2. MPFS components

VNX for file with MPFS: The key component of the MPFS architecture. Blades configured with MPFS own and control access to the file system.

Symmetrix or VNX for block system: A high-performance unified storage cached disk system designed for online data storage. The VNX for file interacts with the system to provide fast, reliable, and secure access to storage. Note: System support requires MPFS software version 4.0 or later.

Linux, Windows, UNIX, AIX, or Solaris server: MPFS is enabled on a server and interacts with the MPFS-configured platform for synchronization, access control, and metadata management. The server accesses the blade through network file system (NFS) and CIFS for file access, and through the File Mapping Protocol (FMP) for MPFS.

The major concepts of MPFS are:


Understanding MPFS threads on page 15
Overview of EMC VNX MPFS over FC and iSCSI on page 15
VNX MPFS architectures on page 15
Planning considerations on page 20
Compatibility with MPFS on page 20


Understanding MPFS threads

When MPFS is started, 16 threads are run, which is the default number of MPFS threads. The maximum number of threads is 128, which is also the best practice for MPFS. If system performance is slow, gradually increase the number of threads allotted for the blade to improve system performance. Add threads conservatively, as the blade allocates 16 KB of memory to accommodate each new thread. The optimal number of threads depends on the network configuration, the number of servers, and the workload.
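The sizing rules above (16 threads by default, a 128-thread maximum, and 16 KB of blade memory per thread) can be sketched as follows. The helper and its names are illustrative, not part of any EMC tool; only the numbers come from the text.

```python
# Sketch of MPFS thread sizing: default count, maximum, and per-thread memory.
DEFAULT_THREADS = 16   # threads started by default
MAX_THREADS = 128      # hard maximum, also the stated best practice
KB_PER_THREAD = 16     # memory the blade allocates per thread

def thread_memory_kb(threads: int) -> int:
    """Return the blade memory (KB) consumed by the given thread count."""
    if not 1 <= threads <= MAX_THREADS:
        raise ValueError(f"thread count must be 1..{MAX_THREADS}, got {threads}")
    return threads * KB_PER_THREAD

print(thread_memory_kb(DEFAULT_THREADS))  # 256
print(thread_memory_kb(MAX_THREADS))      # 2048
```

So the 128-thread best practice costs the blade 128 x 16 KB = 2048 KB of memory, which is why threads should be added gradually rather than all at once.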

Overview of EMC VNX MPFS over FC and iSCSI

EMC VNX MPFS over Fibre Channel lets Linux, Windows, UNIX, AIX, or Solaris servers access shared data concurrently over Fibre Channel connections, whereas EMC VNX MPFS over iSCSI lets servers access shared data concurrently over an iSCSI connection.

MPFS uses common Internet Protocol local area network (IP LAN) topology to transport data and metadata to and from the servers.

Without the MPFS file system, servers can access shared data by using standard NFS or CIFS protocols; the MPFS file system accelerates data access by providing separate transports for file data (file content) and metadata (control data).

For an FC-enabled server, data is transferred directly between the server and storage over a Fibre Channel SAN.

For an iSCSI-enabled server, data is transferred over the IP LAN between the server and storage for a VNX series or VNX VG2/VG8 configuration.

Metadata passes through the VNX for file (and the IP network), which includes the network-attached storage (NAS) portion of the configuration.

VNX MPFS architectures

Three basic VNX MPFS architectures are available:

VNX MPFS over Fibre Channel

VNX MPFS over iSCSI

VNX MPFS over iSCSI/FC

The FC architecture consists of these configurations:

EMC® VNX5300, VNX5500, VNX5700, or VNX7500 over Fibre Channel

VNX VG2/VG8 over Fibre Channel

The iSCSI architecture consists of these configurations:

VNX5300, VNX5500, VNX5700, or VNX7500 over iSCSI


VNX VG2/VG8 over iSCSI

The iSCSI/FC architecture consists of these configurations:

VNX5300, VNX5500, VNX5700, or VNX7500 over iSCSI/FC

VNX VG2/VG8 over iSCSI/FC

Note: CLARiiON CX3 and CX4 systems are supported in VNX VG2/VG8 configurations.

VNX MPFS over Fibre Channel

The VNX MPFS over Fibre Channel architecture consists of the following:

VNX with MPFS — A NAS device configured with a VNX and MPFS software

VNX for block or EMC Symmetrix® system

Servers with MPFS software connected to a VNX through the IP LAN, VNX for block, or Symmetrix system by using Fibre Channel architecture

Figure 2 on page 16 shows the VNX over Fibre Channel configuration where the servers are connected to a VNX series (VNX5300, VNX5500, VNX5700, or VNX7500) by using an IP switch and one or more FC or Fibre Channel over Ethernet (FCoE) switches. A VNX series is a VNX for file and VNX for block in a single cabinet. In a smaller configuration of one or two servers, the servers can be connected directly to the VNX series without the use of FC or FCoE switches.


Figure 2. VNX over Fibre Channel


Figure 3 on page 17 shows the VNX VG2/VG8 over Fibre Channel configuration. In this diagram, the servers are connected to a VNX for block or a Symmetrix system by using a VNX VG2/VG8, an IP switch, and an optional FC switch or FCoE switch.


Figure 3. VNX VG2/VG8 over Fibre Channel

VNX MPFS over iSCSI

The VNX MPFS over iSCSI architecture consists of the following:

VNX with MPFS — A NAS device configured with a VNX and MPFS software

VNX for block or Symmetrix system

Servers with MPFS software connected to a VNX through the IP LAN, VNX for block, orSymmetrix system by using iSCSI architecture


Figure 4 on page 18 shows the VNX over iSCSI configuration where the servers are connected to a VNX series by using one or more IP switches.


Figure 4. VNX over iSCSI

Figure 5 on page 18 shows the VNX VG2/VG8 over iSCSI configuration where the servers are connected to a VNX for block or Symmetrix system with a VNX VG2/VG8 by using one or more IP switches.


Figure 5. VNX VG2/VG8 over iSCSI

VNX MPFS over iSCSI/FC

The VNX MPFS over iSCSI/FC architecture consists of the following:

VNX with MPFS — A NAS device that is configured with a VNX and MPFS software

VNX for block or Symmetrix system


Servers with MPFS software connected to a VNX through the IP LAN, Symmetrix system, or VNX for block by using iSCSI/FC architecture

Figure 6 on page 19 shows the VNX over iSCSI/FC configuration where the servers are connected to a VNX series by using one or more IP switches and an FC switch or an FCoE switch.


Figure 6. VNX over iSCSI/FC

Figure 7 on page 19 shows the VNX VG2/VG8 over iSCSI/FC configuration where the servers are connected to a VNX for block or Symmetrix system with a VNX VG2/VG8 by using one or more IP switches and an FC switch or an FCoE switch.


Figure 7. VNX VG2/VG8 over iSCSI/FC


Planning considerations

When configuring or planning to run MPFS, consider:

The compatibility of other VNX for file features with MPFS

Where MPFS fits into the VNX for file configuration process

Compatibility with MPFS

Table 3 on page 20 lists the features supported with MPFS-enabled blades.

Table 3. MPFS-supported NAS features

Each entry lists the product or feature, whether it is supported with MPFS and with non-MPFS access, and any notes.

CIFS mandatory locking (MPFS: No; non-MPFS: Yes): Restrictions on page 10 provides more information.

Common AntiVirus Agent (CAVA) (MPFS: Yes; non-MPFS: Yes): Provides virus checking of files written with MPFS Windows servers.

Common Event Publishing Agent (CEPA) (MPFS: Yes; non-MPFS: Yes): A mechanism whereby applications can register to receive event notification and context from the VNX for file.

LAN Backup (MPFS: Yes; non-MPFS: Yes): Fall through to normal NAS protocols also occurs during a LAN backup if the backup application is not FileMover-aware and data is migrated to secondary storage.

NDMP (MPFS: Yes; non-MPFS: Yes): N/A

EMC Rainfinity® (MPFS: Yes; non-MPFS: No): N/A

EMC SnapSure (MPFS: Yes; non-MPFS: Yes): Only the Production File System (PFS) can be used with MPFS. MPFS is not used when accessing ckpt-type file systems. The administrator is not notified when the restore is complete. The restore completion status can be checked in the server_log file and can also be configured by using the SVFS facility of nas_event. Refer to the EMC Support Matrix for information on MPFS Data Mover capacity.

EMC TimeFinder®/FS (MPFS: Yes; non-MPFS: Yes): N/A

EMC VNX Data Migration Service (MPFS: Yes; non-MPFS: Yes): After the system is migrated, it is used as an MPFS file system. During the migration, the data is called by using standard NAS protocol, even if it is mounted as MPFS on the server side.

EMC VNX Event Enabler (MPFS: Yes; non-MPFS: Yes): Only available with the purchase of the Security and Compliance Suite.

EMC VNX File-Level Retention Capability (MPFS: Yes; non-MPFS: Yes): N/A

EMC VNX FileMover (MPFS: Yes; non-MPFS: Yes): Included and available when the customer purchases the Operating Environment for File. All the files stored on primary storage are retrieved through MPFS. All files stored on secondary storage fall through to normal NAS protocols, either NFS or CIFS.

EMC VNX Replicator (MPFS: Yes; non-MPFS: Yes): Only available with the purchase of the Remote Replication Suite. Supported on the source file system only. Refer to the EMC Support Matrix for information on MPFS Data Mover capacity.

User, Group, and Tree Quotas (MPFS: Yes; non-MPFS: Yes): When the quota limit is close, all traffic falls back to standard NAS.


Chapter 3: Configuring

The tasks to configure MPFS are:

MPFS configuration summary on page 24
Steps for configuring a VNX for file on page 24
Verify file system compatibility on page 25
Start MPFS on page 27
Operating MPFS through a firewall on page 28
Mount a file system for servers on page 28
Stop MPFS on page 29


MPFS configuration summary

Refer to the EMC Support Matrix for configuration information and storage system specifications. After logging in to Powerlink, go to Support > Interoperability and Product Lifecycle Information > Interoperability Matrices.

Related information on page 12 lists specific documentation related to the features and functionality of the MPFS file system.

Steps for configuring a VNX for file

Table 4 on page 24 lists the configuration tasks for the VNX for file and shows where MPFS fits in this process. All of the documents listed in this table are available on the EMC Powerlink website at http://Powerlink.EMC.com.

Table 4. VNX for file configuration tasks

1. Configure failover: Configure a blade as a standby for a primary blade. (Reference: Configuring Standbys on VNX)

2. Set up network interfaces: Set up the network interfaces that enable users to connect to the blades and retrieve files. (Reference: Configuring and Managing Networking on VNX)

3. Review and test system configuration: Verify the hardware configuration of the blades. Test the blades to ensure they are accessible through the network. (Reference: Configuring and Managing Networking on VNX)

4. Configure network services: Configure NIS, DNS, and NTP. (Reference: Configuring VNX Naming Services)

5. Create volumes: Create the volume configuration required to support file systems. (Reference: Managing Volumes and File Systems for VNX Manually)

6. Create file systems: Create the file systems that contain user files. (Reference: Managing Volumes and File Systems for VNX Manually)

7. Create a mount point: Create a network access point for each blade. (Reference: Managing Volumes and File Systems for VNX Manually)

8. Mount a file system: Mount the file system, specifying the options appropriate for your application. (References: NFS users: Configuring NFS on VNX; CIFS users: Configuring and Managing CIFS on VNX)

9. Export a file system: Make the network access point available for NFS and CIFS users. (References: NFS users: Configuring NFS on VNX; CIFS users: Configuring and Managing CIFS on VNX)

10. Configure blades to use CIFS: Configure the blades to become members of a Windows domain and establish security policies. (Reference: Configuring and Managing CIFS on VNX)

11. Configure VNX for file for MPFS. (Reference: Using VNX Multi-Path File System)

12. Install, configure, and run MPFS. (References: EMC VNX Series MPFS over FC and iSCSI v6.0 Linux Clients Product Guide; EMC VNX Series MPFS over FC and iSCSI v6.0 Windows Clients Product Guide; EMC VNX Series MPFS over FC v4.0 HP-UX, AIX, and Solaris Clients Product Guide)

Verify file system compatibility

The Action section shows the MPFS command syntax with an example. The Output section indicates whether a file system is compatible with the MPFS protocol. The Notes list possible reasons for incompatibility. Use this command to verify file system compatibility with the MPFS protocol.


Action

To verify file system compatibility (by receiving the mount status on each mounted blade), use this command syntax:

$ server_mpfs <movername> -mountstatus

where:

<movername> = name of the blade.

The ALL option, in place of the movername, runs the command for all blades.

Example:

To receive the mount status on server_2, type:

$ server_mpfs server_2 -mountstatus


Output

server_2 :
fs                       mpfs compatible?   reason
--                       ----------------   ------
testing_renaming         no                 not a ufs file system
server2_fs1_ckpt         no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_1   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_2   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_3   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_4   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_5   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_6   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_7   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_8   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_9   no                 volume structure not FMP compatible
mpfs_fs2_lockdb_ckpt_10  no                 volume structure not FMP compatible
root_fs_common           no                 not a ufs file system
mpfs_fs2                 yes
mpfs_fs1                 mounted
server2_fs1              yes
root_fs_2                yes

Note: Possible reasons for incompatibility include:

The disk mark is not 8 KB aligned because it was created with Celerra software version 6.0 or earlier. The 8 KB alignment is available for file systems configured with VNX for file software version 7.0 or later.

The volume has a stripe size that is too small. For best performance, use a stripe size of 256 KB.

A non Universal Extended File System (non-UxFS) is used, for example, a checkpoint file system created with the VNX Snapshots feature.

Start MPFS

Use this command to start MPFS. Set the number of threads to run between 1 and 128.


Action

To start the MPFS service, use this command syntax:

$ server_setup <movername> -P mpfs -option start=<n>

where:

<movername> = name of the blade.

<n> = number of MPFS threads to start.

Example:

To start the MPFS service with 32 threads on server_2, type:

$ server_setup server_2 -P mpfs -option start=32

Note: The default number of threads is 16. The best practice for MPFS is 128 threads.

Note: If the server_mpfs server_2 -set threads command is run after the server_setup -P mpfs -option start command, threads are added and removed dynamically.

Output

server_2: done

Operating MPFS through a firewall

If a firewall resides between the Linux, Windows, UNIX, AIX, or Solaris server and the VNX for file, the firewall must allow access to the following ports for the servers:

Server: 625 (Windows), 6907 (Linux, UNIX, AIX, and Solaris)

VNX for file: 4656 (FMP), 2049 (NFS), 1234 (mountd), 111 (portmap/rpcbind)

Note: Ports 2049 and 4656 must both be open to run the FMP service.

Mount a file system for servers

Servers require only standard NFS or CIFS procedures to mount a file system.

After the MPFS software is installed on the server, network connectivity is established between the blade and server, and the MPFS service is started on the blade, the server can mount and access the MPFS file system.

Exporting a file system path for MPFS servers

No special procedures are required to export a file system path for servers. When exporting NFS and CIFS file systems, MPFS access is automatically enabled. When exporting a file system with the read-only (ro) option, MPFS disregards the option and can write to the file system.
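On a Linux firewall that sits between the servers and the VNX for file, the port list above translates into rules like the following. This is a hedged configuration sketch, not from the original document: the chain choice and the absence of interface or address restrictions are assumptions to adapt to your environment.

```shell
# Sketch: permit MPFS-related traffic toward the VNX for file.
# Port numbers come from the list above; all other details are assumptions.
iptables -A FORWARD -p tcp --dport 4656 -j ACCEPT  # FMP
iptables -A FORWARD -p tcp --dport 2049 -j ACCEPT  # NFS
iptables -A FORWARD -p tcp --dport 1234 -j ACCEPT  # mountd
iptables -A FORWARD -p tcp --dport 111  -j ACCEPT  # portmap/rpcbind
```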

Stop MPFS

Use this command to stop MPFS. When MPFS is stopped, the previously entered configuration information (for example, the number of threads that run when starting MPFS) is retained and invoked when MPFS is restarted.

Action

To stop the MPFS service, use this command syntax:

$ server_setup <movername> -P mpfs -option stop

where:

<movername> = name of the blade.

Example:

To stop the MPFS service on server_2, type:

$ server_setup server_2 -P mpfs -option stop

Output

server_2: done


Chapter 4: Managing

The tasks to manage MPFS are:

Set the threads variable on page 32
Delete configuration parameters on page 33
Add threads on page 34
Delete threads on page 34
Reset default values on page 35
View MPFS statistics on page 35


Set the threads variable

Set the threads by using the server_mpfs command before or after starting the MPFS service. Depending on when the threads are set, the results are different, as explained in Table 5 on page 32.

Table 5. Thread number settings/results

Action: Set threads before executing the server_setup -P mpfs -o start=<n> command.
Result: Overrides the default number of threads with the number specified.

Action: Set threads after executing the server_setup -P mpfs -o start=<n> command.
Result: Adds and removes threads dynamically.

Note: The configuration values set with the server_mpfs command are recorded in a configuration file on the blades. The ALL option runs the command for all blades.

Note: Variable names are case-sensitive.

Use this command to set the number of MPFS threads.

Action

To set the value for the variable, use this command syntax:

$ server_mpfs <movername> -set var=<value>

where:

<movername> = name of the blade.

var=<value> = the specified variable and value. Currently, the only valid variable is threads.

Example:

To start the MPFS protocol with 32 threads on server_2, type:

$ server_mpfs server_2 -set threads=32

Note: When running the server_mpfs command before the server_setup -P mpfs -o start command, the threads option overrides the default value for the number of threads in the configuration file.

Note: When running the server_mpfs command before the server_setup -P mpfs -o start=<n> command, the threads option determines the number of threads that MPFS starts by default.

Note: The default number of threads is set to the number specified. In the example, it is set to 32.


Output

server_2: done

Delete configuration parameters

Use this command to delete MPFS configuration parameters. Do not stop the service by setting threads=0; instead, use the server_setup command.

Action

To stop the MPFS service that is running and delete the current configuration of the service, use this command syntax:

$ server_setup <movername> -P mpfs -option delete

where:

<movername> = name of the blade.

Example:

To stop the MPFS service and delete the MPFS configuration on server_2, type:

$ server_setup server_2 -P mpfs -option delete

Output

server_2: done


Add threads

When MPFS is started, 16 threads are run, which is the default number of MPFS threads. The maximum number of threads is 128, which is also the best practice for MPFS. If system performance is slow, gradually increase the number of threads allotted to the blade to improve system performance. Add threads conservatively, because the blade allocates 16 KB of memory to accommodate each new thread. The optimal number of threads depends on the network configuration, the number of servers, and the workload.

Action

To increase the number of threads running on a blade, use this command syntax:

$ server_mpfs <movername> -add <number_of_threads>

where:

<movername> = name of the blade.

<number_of_threads> = number of MPFS threads added from the previous total for the specific server.

Example:

To increase the number of MPFS threads running on server_2 by 16, type:

$ server_mpfs server_2 -add 16

Output

server_2: done
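Because each additional thread costs the blade about 16 KB of memory (per the note above), the cost of growing the pool can be sanity-checked with shell arithmetic. This is an illustrative sketch; the thread counts are the default and maximum values from the text:

```shell
# Extra blade memory needed to grow the MPFS thread pool from the
# default of 16 threads to the 128-thread maximum, at 16 KB per thread.
current=16
target=128
per_thread_kb=16
extra_kb=$(( (target - current) * per_thread_kb ))
echo "${extra_kb} KB"   # (128 - 16) * 16 = 1792 KB
```

Roughly 1.75 MB, which is why adding threads conservatively is affordable on most configurations.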

Delete threads

Delete threads from a blade while MPFS is running. The number of MPFS threads is between 1 and 128, with 128 threads being the best practice for MPFS.

Action

To decrease the number of threads running on a blade, use this command syntax:

$ server_mpfs <movername> -delete <number_of_threads>

where:

<movername> = name of the blade.

<number_of_threads> = number of MPFS threads deleted from the previous total for the specific server.

Example:

To decrease the number of MPFS threads running on server_2 by 16, type:

$ server_mpfs server_2 -delete 16

Output

server_2: done


Reset default values

Currently, threads is the only supported variable. The options -Default and -Default var are functionally equivalent.

Action

To reset the threads variable to its default value of 16, use this command syntax:

$ server_mpfs <movername> -Default

where:

<movername> = name of the blade.

Example:

To restore default values on server_2, type:

$ server_mpfs server_2 -Default

Note: Without a variable entry, the command resets all variables to their default values. The only valid variable is threads.

Set the threads variable on page 32 provides more information.

Output

server_2: done

View MPFS statistics

The following sections describe commands used to view statistics related to the server andVNX for file:

Viewing MPFS protocol statistics

Viewing MPFS performance statistics


Viewing MPFS protocol statistics

Table 6 on page 36 defines server statistics. Use this command to view server statistics for an MPFS-enabled blade.

Action

To view MPFS protocol statistics, use this command syntax:

$ server_mpfs <movername> -Stats

where:

<movername> = name of the blade.

Example:

To view statistics for server_2, type:

$ server_mpfs server_2 -Stats

Output

server_2 :
Server ID=server_2
FMP Threads=16
Max Threads Used=1
FMP Open Files=0
FMP Port=4656
HeartBeat Time Interval=30

Note: The output for this example reflects that server_2 is running the default number of threads (16).

Note: Table 6 on page 36 provides detailed information on the statistics output when using the -Stats option.

Table 6. Server protocol statistics

Statistic: Server ID
Description: Unique identifier for the blade. The default blade is server_<x>, where <x> is the VNX for file cabinet’s slot number, unless it is changed with the server_name command.
Comment: The server uses the Server ID to identify blades that are running the MPFS service. The default Server ID is unique to all blades in a single VNX for file cabinet, but is duplicated in multiple VNX for file environments. The server requires a unique Server ID for each blade. Use the server_name command to rename any blades with duplicate Server IDs.


Table 6. Server protocol statistics (continued)

Statistic: FMP Threads
Description: Number of available FMP threads for servicing server requests.
Comment: If required for performance reasons, use server_mpfs to change this value. Set the threads variable on page 32 provides more information.

Statistic: Max Threads Used
Description: Maximum number of threads used.
Comment: N/A

Statistic: FMP Open Files
Description: Number of files currently opened by the server.
Comment: N/A

Statistic: FMP Port
Description: FMP port where the VNX for file receives requests.
Comment: N/A

Statistic: Heartbeat Time Interval
Description: Time interval in which the server must renew the session’s connection; otherwise, the session terminates.
Comment: N/A

Viewing MPFS performance statistics

The following sections describe a summary of procedures for viewing MPFS performance statistics.


Blade statistics

Use this command to view performance statistics for a blade.

Action

To view MPFS performance statistics for a blade, use this command syntax:

$ server_stats <movername>

where:

<movername> = name of the blade.

Example:

To view the statistics for server_2, type:

$ server_stats server_2

Output

server_2 : server MPFS statistics
------------------------------
                          total    avg msec  high msec
                          -------  --------  ---------
open():                   44835    2.73      52
getMap():                 3130     0.03      4
allocSpace():             19490    2.14      180
mount():                  177      4.72      32
commit():                 53135    0.91      72
release():                575      0.06      4
close():                  480      0.02      4
nfs/cifs sharing delays:  34149    3.06      6084
notify replies (delay):   17745    1.37      6084

                          total
                          -------
notify msgs sent:         17745
notify replies failed:    0
conflicts (total):        34678
conflicts (lock):         0
conflicts (sharing):      34678
conflicts (starvation):   0
open files:               0
open sessions:            0
throughput for last 273230.03 sec:
  1461.67 blks/sec read
  5464.12 blks/sec written


Notes

where (columns):

total = cumulative number of operations of this type.

avg msec = average time spent performing an operation of this type, in milliseconds.

high msec = longest time spent performing an operation of this type, in milliseconds.

where (rows):

open() = number of files the server opened.

getMap() = number of times the server reads a file and has no extent information; it runs a getmap.

allocSpace() = number of times the server writes to a file and has no extent information; it runs a getmap.

mount() = number of mounts.

commit() = number of commits (such as NFS commit).

release() = number of times the server releases an extent, due to a notify or file close.

close() = number of file close commands the server sends to the VNX for file.

nfs/cifs sharing delays = number of times and duration that NFS or CIFS threads were delayed while waiting for a VNX for file to release locked resources.

notify replies (delay) = number of notify rpcs received from the server.

notify msgs sent = number of notify server messages sent to the server.

notify replies failed = number of notify replies with a NOTIF_ERROR returned.

conflicts (total) = number of shared conflicts.

conflicts (lock) = number of conflicts caused by conflicting MPFS range lock requests.

conflicts (sharing) = number of conflicts caused by file sharing between MPFS requests and CIFS/NFS requests.

conflicts (starvation) = number of conflicts caused by the triggering of the starvation prevention mechanism in the MPFS module.

open files = number of files opened.

open sessions = number of active sessions.

throughput for last 273230.03 sec: = interval over which the throughput averages were measured, in seconds.

1461.67 blks/sec read = throughput for file reads, measured in 8 KB blocks per second.

5464.12 blks/sec written = throughput for file writes, measured in 8 KB blocks per second.
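Since the throughput counters are reported in 8 KB blocks per second, they can be converted to MB/s with a quick awk calculation. This is an illustrative sketch using the sample figures from the output above:

```shell
# Convert MPFS throughput from 8 KB blocks/sec (per the Notes above) to MB/s.
blks_read=1461.67       # sample value from the server_stats output
blks_written=5464.12    # sample value from the server_stats output
mb_read=$(awk -v b="$blks_read" 'BEGIN { printf "%.2f", b * 8 / 1024 }')
mb_written=$(awk -v b="$blks_written" 'BEGIN { printf "%.2f", b * 8 / 1024 }')
echo "read: ${mb_read} MB/s, written: ${mb_written} MB/s"
```

With these sample figures the blade averaged roughly 11.42 MB/s read and 42.69 MB/s written over the interval.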


MPFS session statistics

Use this command to view performance statistics for a particular MPFS session.

Action

To view timing and counting statistics for a particular MPFS session, use this command syntax:

$ server_stats <movername> -session <sessionid>

where:

<movername> = name of the blade.

<sessionid> = IP address of the server associated with the desired session.

Example:

To view the statistics for a particular MPFS session, type:

$ server_stats server_2 -session xxx.xx.xx.xxx

Output

server_2 :
------------------------------
session MPFS statistics
------------------------------
session = xxx.xx.xx.xxx
                     total   avg msec  high msec
                     ------  --------  ---------
open():              1106    0.04      4
getMap():            470     0.05      4
allocSpace():        636     26.83     2720
mount():             85      5.55      16
commit():            81764   57.17     680
release():           1105    0.03      4
close():             1105    0.03      4
notify (delay):      0       0.00      0

                     total
                     ------
conflicts (generated):  0
conflicts (notified):   0
open files:             1
throughput for last 9628.88 sec:
  6247.87 blks/sec read
  8376.78 blks/sec written


Notes

where (columns):

total = cumulative number of operations of this type.

avg msec = average time spent performing an operation of this type, in milliseconds.

high msec = longest time spent performing an operation of this type, in milliseconds.

where (rows):

open() = number of files the server opened.

getMap() = number of times the server reads a file and has no extent information; it runs a getmap.

allocSpace() = number of times the server writes to a file and has no extent information; it runs a getmap.

mount() = number of mounts.

commit() = number of commits (such as NFS commit).

release() = number of times the server releases an extent, due to a notify or file close.

close() = number of file close commands the server sends to the VNX for file.

notify (delay) = time the server spent replying to the VNX for file.

conflicts (generated) = number of conflicts generated.

conflicts (notified) = number of conflicts notified.

open files = number of files opened.

throughput for last 9628.88 sec: = interval over which the throughput averages were measured, in seconds.

6247.87 blks/sec read = throughput for file reads, measured in 8 KB blocks per second.

8376.78 blks/sec written = throughput for file writes, measured in 8 KB blocks per second.

File statistics

Use this command to view performance statistics for a particular file.

Action

To view the counting statistics for a particular file, use this command syntax:

$ server_stats <movername> -file <filepath>

where:

<movername> = name of the blade.

<filepath> = fully qualified filename for the desired file in the form /<fs_name> or <fullpath of file>.

Example:

To view the statistics for a particular MPFS file, type:

$ server_stats server_2 -file /ufs1/mpfs/file.dat


Output

-------------------------
file MPFS statistics
-------------------------
file = /ufs1/mpfs/file.dat
                total   avg msec  high msec
                -----   --------  ---------
open():         5       0.00      0
getMap():       4291    0.01      4
allocSpace():   22727   2.49      200
commit():       28      1094.57   8368
release():      19      0.00      0

                total
                -----
close():        5
conflicts:      1

Notes

where (columns):

total = cumulative number of operations of this type.

avg msec = average time spent performing an operation of this type, in milliseconds.

high msec = longest time spent performing an operation of this type, in milliseconds.

where (rows):

open() = number of files the server opened.

getMap() = number of times the server reads a file and has no extent information; it runs a getmap.

allocSpace() = number of times the server writes to a file and has no extent information; it runs a getmap.

commit() = number of commits (such as NFS commit).

release() = number of times the server releases an extent, due to a notify or file close.

close() = number of file close commands the server sends to the VNX for file.

conflicts = number of shared conflicts.


Listing open sessions

Use this command to list the active server sessions for a particular blade or for all blades.

Action

To list the active server sessions on the blade, use this command syntax:

$ server_stats <movername> -list

where:

<movername> = name of the blade.

Example:

To list the open server sessions, type:

$ server_stats server_2 -list

Output

Active MPFS sessions
(clientid/timestamp)
----------------------------
xxx.xx.xxx.xxx 0 sec 0 usec
xxx.xx.xxx.xxx 1087815431 sec 690034 usec
xxx.xx.xxx.xxx 29644763 sec 2080141652 usec
xxx.xx.xxx.xxx 1087820585 sec 140034 usec

Note: The output lists the IP addresses of the servers that are connected to the blade along with the duration of the server’s session.


Resetting statistics

Use this command to reset the statistics associated with a file or session.

Action

To reset the statistics associated with the specified file or session, use this command syntax:

$ server_stats <movername> -z -session <sessionid>

where:

<movername> = name of the blade.

<sessionid> = IP address of the server associated with the desired session.

Example:

To reset the statistics associated with a specified file or session, type:

$ server_stats server_2 -z -session xxx.xx.xxx.xxx

Output

server_2: done


Chapter 5: Troubleshooting

As part of an effort to continuously improve and enhance the performance and capabilities of its product lines, EMC periodically releases new versions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your EMC Customer Support Representative.

Problem Resolution Roadmap for VNX contains additional information about using the EMC Online Support website and resolving problems.

Topics included are:

EMC E-Lab Interoperability Navigator on page 46
Error messages on page 46
EMC Training and Professional Services on page 47
Installing MPFS software on page 47
Mounting and unmounting a file system on page 49
Miscellaneous issues on page 54


EMC E-Lab Interoperability Navigator

The EMC E-Lab Interoperability Navigator is a searchable, web-based application that provides access to EMC interoperability support matrices. It is available on the EMC Online Support website at http://Support.EMC.com. After logging in, locate the applicable Support by Product page, find Tools, and click E-Lab Interoperability Navigator.

Error messages

All event, alert, and status messages provide detailed information and recommended actions to help you troubleshoot the situation.

To view message details, use any of these methods:

Unisphere software:

• Right-click an event, alert, or status message and select to view Event Details, Alert Details, or Status Details.

CLI:

• Type nas_message -info <MessageID>, where <MessageID> is the message identification number.

Celerra Error Messages Guide:

• Use this guide to locate information about messages that are in the earlier-release message format.

EMC Online Support website:

• Use the text from the error message's brief description or the message's ID to search the Knowledgebase on the EMC Online Support website. After logging in to EMC Online Support, locate the applicable Support by Product page, and search for the error message.


EMC Training and Professional Services

EMC Customer Education courses help you learn how EMC storage products work together within your environment to maximize your entire infrastructure investment. EMC Customer Education features online and hands-on training in state-of-the-art labs conveniently located throughout the world. EMC customer training courses are developed and delivered by EMC experts. Go to the EMC Online Support website at http://Support.EMC.com for course and registration information.

EMC Professional Services can help you implement your system efficiently. Consultants evaluate your business, IT processes, and technology, and recommend ways that you can leverage your information for the most benefit. From business plan to implementation, you get the experience and expertise that you need without straining your IT staff or hiring and training new personnel. Contact your EMC Customer Support Representative for more information.

Installing MPFS software

This section explains the problems that are encountered when installing the MPFS software.

Problem

Installation of the MPFS software fails with an error message similar to the following:


Installing ./EMCmpfs-6.0.1.6-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...  ##################################### [100%]
1:EMCmpfs     ##################################### [100%]
The kernel that you are running, 2.6.22.18-0.2-default, is not supported by MPFS.
The following kernels are supported by MPFS on SuSE:
SuSE-2.6.16.46-0.12-default
SuSE-2.6.16.46-0.12-smp
SuSE-2.6.16.46-0.14-default
SuSE-2.6.16.46-0.14-smp
SuSE-2.6.16.53-0.8-default
SuSE-2.6.16.53-0.8-smp
SuSE-2.6.16.53-0.16-default
SuSE-2.6.16.53-0.16-smp
SuSE-2.6.16.60-0.21-default
SuSE-2.6.16.60-0.21-smp
SuSE-2.6.16.60-0.27-default
SuSE-2.6.16.60-0.27-smp
SuSE-2.6.16.60-0.37-default
SuSE-2.6.16.60-0.37-smp
SuSE-2.6.16.60-0.60.1-default
SuSE-2.6.16.60-0.60.1-smp
SuSE-2.6.16.60-0.69.1-default
SuSE-2.6.16.60-0.69.1-smp
SuSE-2.6.5-7.282-default
SuSE-2.6.5-7.282-smp
SuSE-2.6.5-7.283-default
SuSE-2.6.5-7.283-smp
SuSE-2.6.5-7.286-default
SuSE-2.6.5-7.286-smp
SuSE-2.6.5-7.287.3-default
SuSE-2.6.5-7.287.3-smp
SuSE-2.6.5-7.305-default
SuSE-2.6.5-7.305-smp
SuSE-2.6.5-7.308-default
SuSE-2.6.5-7.308-smp

Cause

The kernel being used is not supported.

Solution

Use a supported OS kernel. The EMC VNX MPFS for Linux Clients Release Notes provide a list of supported kernels.

Problem

The MPFS software does not run or the MPFS daemon did not start.

Cause

The MPFS software is not installed.


Solution

1. Use RPM to verify the installation by typing:

rpm -q EMCmpfs

If the MPFS software is installed properly, the command displays output similar to this:

EMCmpfs-6.0.x.x

Note: Alternately, use the mpfsctl version command to verify that the MPFS software is installed on the Linux server. The mpfsctl man page or the EMC VNX MPFS over FC and iSCSI v6.0 Linux Product Guide provides additional information.

2. Use the ps command to verify that the MPFS daemon has started:

ps -ef | grep mpfsd

The output looks like this if the MPFS daemon has started:

root 847 1 0 15:19 ? 00:00:00 /usr/sbin/mpfsd

3. If the ps command output does not show the MPFS daemon process running, start MPFS as root using the following command:

# /etc/rc.d/init.d/mpfs start

Mounting and unmounting a file system

This section explains the problems that are encountered when mounting or unmounting a file system.

Problem

The mount command displays messages about unknown file systems.

Cause

An option was specified that is not supported by the mount command.

Solution

1. Display the mount_mpfs man page to find supported options:

man mount_mpfs

2. Run the mount command again with the correct options.


Problem

The mount command displays this message:

mount: must be root to use mount

Cause

Permissions are required to use the mount command.

Solution

Log in as root and try the mount command again.

Problem

The mount command displays this message:

nfs mount: get_fh <hostname>:: RPC: Rpcbind failure - RPC: Timed out

Cause

The VNX for file or NFS specified is down.

Solution

Verify that the correct server name was specified and that the server is up with an exported file system.

Problem

The mount command displays this message:

mount -t mpfs xx.xx.x.xxx:/rcfs /mnt/mpfs
Volume 'APM000643042520000-0008' not found.
Error mounting /mnt/mpfs via MPFS

Cause

The MPFS mount operation could not find the physical disk associated with the specified file system.


Solution

Use the mpfsinq command or the mount -o rescan command to verify that the physical disk drive associated with the file system is connected to the server over Fibre Channel and is accessible from the server.

Problem

The mount command displays a message similar to:

mount: /<filesystem>: No such file or directory

Cause

No mount point exists.

Solution

Create a mount point and try the mount command again.

Problem

The mount command displays this message:

mount: fs type mpfs not supported by kernel

Cause

The MPFS software is not installed.

Solution

Install the MPFS software and try the mount command again.

Problem

A file system is not unmounted. The umount command displays this message:

umount: Device busy

Cause

Existing processes were using the file system when an attempt was made to unmount it, or the umount command was issued from the file system itself.


Solution

1. Use the fuser command to identify all processes using the file system.

2. Use the kill -9 command to stop all processes.

3. Run the umount command again.

Problem

The mount command hangs.

Cause

The server specified with the mount command does not exist or is not available.

Solution

1. Interrupt the mount command by using the interrupt key combinations (usually Ctrl-C).

2. Try to reach the server by using the ping command.

3. If the ping command succeeds, retry the mount.

Problem

The mount command displays this message:

permission denied.

Cause 1

Permissions are required to access the file system specified in the mount command.

Solution 1

Ensure that the file system is exported with the right permissions, or set the right permissions for the file system. The VNX Command Reference Manual provides more information.

Cause 2

You are not the root user on the server.

Solution 2

Use the su command to become the root user.


Problem

The mount command displays this message:

RPC program not registered

Cause

The server specified in the mount command is not an NFS server or a VNX for file.

Solution

Verify whether the correct server name was specified and the server has an exported file system.

Problem

The mount command logs this message in the /var/log/messages file:

Couldn’t find device during mount.

Cause

The MPFS mount operation could not find the physical disk associated with the specifiedfile system.

Solution

Use either the fdisk command or the mpfsinq command to verify that the physical disk drive associated with the file system is connected to the server over Fibre Channel and is accessible from the server.

Problem

The mount command displays this message:

RPC: Unknown host.

Cause

The server name specified in the mount command does not exist on the network.

Solution

1. Ensure that the correct server name is specified in the mount command.


2. If the correct name was not specified, verify whether the host’s /etc/hosts file or the NIS/DNS map contains an entry for the server.

3. If the server does appear in /etc/hosts or the NIS/DNS map, verify whether the server responds to the ping command.

4. If the ping command succeeds, try using the server’s IP address instead of its name in the mount command.
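The name-resolution check in step 2 can be sketched as a small shell helper. The function name and the host names passed to it are illustrative, not part of the product:

```shell
# Report whether a server name resolves via /etc/hosts or NIS/DNS;
# if it does not, retry the mount with the server's IP address instead.
check_name() {
    if getent hosts "$1" >/dev/null 2>&1; then
        echo "resolves"
    else
        echo "no entry for $1 in /etc/hosts or NIS/DNS"
    fi
}
check_name localhost               # localhost resolves on any host
check_name nonexistent.invalid     # the .invalid TLD never resolves
```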

Problem

The mount command displays this message:

# mount -t mpfs ka0abc12s401:/server4fs1 /mnt
mount: fs type mpfs not supported by kernel

Cause

The MPFS software is not installed on the server.

Solution

1. Install the MPFS software on the server.

2. Run the mount command again.

Miscellaneous issues

The following miscellaneous issues are encountered with a Linux server.

Problem

The user cannot write to a mounted file system.

Cause

Write permission is required on the file system or the file system is mounted as read-only.

Solution

1. Verify that you have write permission on the file system.

2. Try unmounting the file system and remounting it in read/write mode.


Problem

The following message appears:

NFS server not responding

Cause

VNX is unavailable due to a network-related problem, a reboot, or a shutdown.

Solution

Verify whether the server responds to the ping command. Also try unmounting and remounting the file system.

Problem

Removing the MPFS software package fails.

Cause 1

The MPFS software is not installed on the Linux server.

Solution 1

Verify that the MPFS software package name is spelled correctly, with uppercase and lowercase letters specified. If the MPFS software package name is spelled correctly, verify that the MPFS software is installed on the Linux server by typing:

# rpm -q EMCmpfs

If the MPFS software is installed properly, the command displays output similar to:

EMCmpfs-6.0.20-xxx

If the MPFS software is not installed, the output is similar to:

Package "EMCmpfs" was not found.

Cause 2

Trying to remove the MPFS software package while one or more MPFS-mounted file systems are active, and I/O is taking place on the active file system. A message appears on the Linux server similar to:

ERROR: Mounted MPFS filesystems found on the system.
Please unmount all MPFS filesystems before removing
the product.


Solution 2

1. Stop the I/O.

2. Unmount all active MPFS file systems by using the umount command.

3. Restart the removal process.


Glossary

A

AV engine
Third-party antivirus software running on a Windows Server that works with the Common AntiVirus Agent (CAVA).

See also AV server, CAVA, VC client, and virus definition file.

AV server
Windows Server configured with the CAVA and a third-party antivirus engine.

See also AV engine, CAVA, and VC client.

C

CAVA
See Common AntiVirus Agent.

CEPA
See Common Event Publishing Agent.

checkpoint
Point-in-time, logical image of a PFS. A checkpoint is a file system and is also referred to as a checkpoint file system or an EMC SnapSure file system.

See also Production File System.

CIFS
See Common Internet File System.

client
Front-end device that requests services from a server, often across a network.

command line interface (CLI)
Interface for typing commands through the Control Station to perform tasks that include the management and configuration of the database and Data Movers and the monitoring of statistics for VNX for file cabinet components.


Common AntiVirus Agent (CAVA)
Application developed by EMC that runs on a Windows Server and communicates with a standard antivirus engine to scan CIFS files stored on VNX for file.

See also AV engine, AV server, and VC client.

Common Event Publishing Agent (CEPA)
EMC-provided agent running on a Windows Server that provides details of events occurring on the Windows Server. It can communicate with VNX for file to display a list of events that occurred.

Common Internet File System (CIFS)
File-sharing protocol based on the Microsoft Server Message Block (SMB). It allows users to share file systems over the Internet and intranets.

D

daemon
UNIX process that runs continuously in the background, but does nothing until it is activated by another process or triggered by a particular event.

Data Mover
In VNX for file, a cabinet component that is running its own operating system that retrieves data from a storage device and makes it available to a network client. This is also referred to as a blade.

Domain Name System (DNS)
Name resolution software that allows users to locate computers on a UNIX network or TCP/IP network by domain name. The DNS server maintains a database of domain names, hostnames, and their corresponding IP addresses, and services provided by the application servers.

See also ntxmap.

E

extent
Set of adjacent physical blocks.

F

Fibre Channel
Nominally 1 Gb/s data transfer interface technology, although the specification allows data transfer rates from 133 Mb/s up to 4.25 Gb/s. Data can be transmitted and received simultaneously. Common transport protocols, such as Internet Protocol (IP) and Small Computer Systems Interface (SCSI), run over Fibre Channel. Consequently, a single connectivity technology can support high-speed I/O and networking.

File Mapping Protocol (FMP)
File system protocol used to exchange file layout information between an application server and VNX for file.

See also MPFS.


file system
Method of cataloging and managing the files and directories on a system.

FLARE
Embedded operating system in VNX for block disk arrays.

I

Internet Protocol address (IP address)
Address uniquely identifying a device on any TCP/IP network. Each address consists of four octets (32 bits), represented as decimal numbers separated by periods. An address is made up of a network number, an optional subnetwork number, and a host number.

Internet SCSI (iSCSI)
Protocol for sending SCSI packets over TCP/IP networks.

iSCSI
See Internet SCSI.

iSCSI initiator∞2"2( ¨µ´∑∂∞µª, ∞´¨µª∞≠∞¨´ ©¿ ® ºµ∞∏º¨ ∞2"2( µ®¥¨, æØ∞™Ø ©¨Æ∞µ∫ ®µ ∞2"2( ∫¨∫∫∞∂µ ©¿ ∞∫∫º∞µÆ ®™∂¥¥®µ´ ª∂ ªØ¨ ∂ªØ¨π ¨µ´∑∂∞µª (ªØ¨ ª®πƨª).

iSCSI target∞2"2( ¨µ´∑∂∞µª, ∞´¨µª∞≠∞¨´ ©¿ ® ºµ∞∏º¨ ∞2"2( µ®¥¨, æØ∞™Ø ¨ø¨™ºª¨∫ ™∂¥¥®µ´∫ ∞∫∫º¨´ ©¿ ªØ¨∞2"2( ∞µ∞ª∞®ª∂π.

K

kernel
Software responsible for interacting most directly with the computer's hardware. The kernel manages memory, controls user access, maintains file systems, handles interrupts and errors, performs input and output services, and allocates computer resources.

M

mount point
Local subdirectory to which a mount operation attaches a subdirectory of a remote file system.

MPFS
See Multi-Path File System.

MPFS session"∂µµ¨™ª∞∂µ ©¨ªæ¨¨µ ®µ ,/%2 ™≥∞¨µª ®µ´ 5-7 ≠∂π ≠∞≥¨.

MPFS share
Shared resource designated for multiplexed communications by using the MPFS file system.

Multi-Path File System (MPFS)
VNX for file feature that allows heterogeneous servers with MPFS software to concurrently access, directly over Fibre Channel or iSCSI channels, shared data stored on an EMC Symmetrix or VNX for block storage system. MPFS adds a lightweight protocol called File Mapping Protocol (FMP) that controls metadata operations.

N

network basic input/output system (NetBIOS)
Network programming interface and protocol developed for IBM personal computers.

network file system (NFS)
Network file system (NFS) is a network file system protocol that allows a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks.

Network Information Service (NIS)
Distributed data lookup service that shares user and system information across a network, including usernames, passwords, home directories, groups, hostnames, IP addresses, and netgroup definitions.

Network Time Protocol (NTP)
Protocol used to synchronize the realtime clock in a computer with a network time source.

network-attached storage (NAS)
Specialized file server that connects to the network. A NAS device, such as VNX for file, contains a specialized operating system and a file system, and processes only I/O requests by supporting popular file-sharing protocols such as NFS and CIFS.

ntxmap"º∫ª∂¥∞¡¨´ ∫∂≠ªæ®π¨ º∫¨´ ª∂ ∫º∑∑∂πª ¥®∑∑∞µÆ π¨∏º∞𨥨µª∫ ∞µ ® ¥º≥ª∞∑π∂ª∂™∂≥ ¨µΩ∞π∂µ¥¨µª.

P

Production File System (PFS)
Production File System on VNX for File. A PFS is built on Symmetrix volumes or VNX for Block LUNs and mounted on a Data Mover in the VNX for File.

R

replication
Service that produces a read-only, point-in-time copy of a source file system. The service periodically updates the copy, making it consistent with the source file system.

S

storage area network (SAN)
Network of data storage disks. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage.

See also network-attached storage (NAS).

stripe size
Number of blocks in one stripe of a stripe volume.


T

thread
Sequential flow of control in a computer program. A thread consists of address space, a stack, local variables, and global variables.

V

virus definition file
File containing information for a virus protection program that protects a computer from the newest, most destructive viruses. This file is sometimes referred to as a virus signature update file, a virus pattern update file, or a virus identity (IDE) file.

See also AV engine, AV server, CAVA, and VC client.

virus-checking client (VC client)
Virus-checking agent component of VNX for file software that runs on the Data Mover.

See also AV engine, AV server, and CAVA.


Index

A

adding threads 34
architectures 15

B

blade
  automatic failover 11
  listing open sessions 43
  locking policy 11
  performance statistics 38
  protocol statistics 36
  server_mount command 11
  standby 11

C

checkpoint 11

D

deleting configuration parameters 33
deleting threads 34

E

EMC E-Lab Navigator 46
error messages 46
exporting a file system path 28
exporting file system 11

F

Fibre Channel architecture 16
file system
  incompatibility 27
  mount status, verifying 26
  verifying compatibility 25

file system compatibility 11
firewall FMP port numbers 28

G

global shares 11

H

hardware 10

I

iSCSI architecture 17

M

messages, error 46
metadata 15
miscellaneous Linux server issues 54
mount status, verifying 26
mounting a file system 28, 49
MPFS
  components 13
  file statistics 41
  software installation 47
  software requirements 9

N

NAS features 20
nas_fs command 11
NetBIOS shares 11

network 10

O

operating through a firewall 28
overview 7

P

performance 11
performance statistics 37
planning considerations 20
protocol statistics 36

R

related documentation 12
replicator 11
resetting default thread values 35
resetting statistics 44
restrictions 10

S

session statistics 40
setting threads 32
starting MPFS 27
stopping MPFS 29
storage 10
stripe size 11
support for shares 11

system requirements 9

T

threads
  adding 34
  default 15
  maximum 15
  resetting default values 35
  setting 32

troubleshooting 45

U

unmount a file system 49
unmounting, troubleshooting 51

V

VNX
  over Fibre Channel 16
  over iSCSI 18
  over iSCSI/FC 19

VNX for block
  over Fibre Channel 17
  over iSCSI 17
  over iSCSI/FC 19

VNX VG2/VG8
  over Fibre Channel 17
  over iSCSI 18
  over iSCSI/FC 19
