Symantec NetBackup™ Backup Planning and Performance Tuning Guide: UNIX, Windows, and Linux, Release 6.5


Page 1: Netbackup Planning Guide 307083

Symantec NetBackup™ Backup Planning and Performance Tuning Guide

UNIX, Windows, and Linux

Release 6.5


Symantec NetBackup™ Backup Planning and Performance Tuning Guide

The software described in this book is furnished under a license agreement and may be used only in accordance with the terms of the agreement.

Documentation version 6.5

Legal Notice

Copyright © 2010 Symantec Corporation. All rights reserved.

Symantec, the Symantec Logo, and NetBackup are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

Portions of this software are derived from the RSA Data Security, Inc. MD5 Message-Digest Algorithm. Copyright 1991-92, RSA Data Security, Inc. Created 1991. All rights reserved.

This Symantec product may contain third party software for which Symantec is required to provide attribution to the third party (“Third Party Programs”). Some of the Third Party Programs are available under open source or free software licenses. The License Agreement accompanying the Software does not alter any rights or obligations you may have under those open source or free software licenses. Please see the Third Party Legal Notice Appendix to this Documentation or the TPIP ReadMe File accompanying this Symantec product for more information on the Third Party Programs.

The product described in this document is distributed under licenses restricting its use, copying, distribution, and decompilation/reverse engineering. No part of this document may be reproduced in any form by any means without prior written authorization of Symantec Corporation and its licensors, if any.

THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

The Licensed Software and Documentation are deemed to be commercial computer software as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19 "Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in Commercial Computer Software or Commercial Computer Software Documentation", as applicable, and any successor regulations. Any use, modification, reproduction, release, performance, display, or disclosure of the Licensed Software and Documentation by the U.S. Government shall be solely in accordance with the terms of this Agreement.


Symantec Corporation
350 Ellis Street
Mountain View, CA 94043

http://www.symantec.com


Technical Support

Symantec Technical Support maintains support centers globally. Technical Support’s primary role is to respond to specific queries about product features and functionality. The Technical Support group also creates content for our online Knowledge Base. The Technical Support group works collaboratively with the other functional areas within Symantec to answer your questions in a timely fashion. For example, the Technical Support group works with Product Engineering and Symantec Security Response to provide alerting services and virus definition updates.

Symantec’s support offerings include the following:

■ A range of support options that give you the flexibility to select the right amount of service for any size organization

■ Telephone and/or web-based support that provides rapid response and up-to-the-minute information

■ Upgrade assurance that delivers automatic software upgrade protection

■ Global support purchased on a regional business hours or 24 hours a day, 7 days a week basis

■ Premium service offerings that include Account Management Services

For information about Symantec’s support offerings, you can visit our web site at the following URL:

www.symantec.com/business/support/

All support services will be delivered in accordance with your support agreement and the then-current enterprise technical support policy.

Contacting Technical Support

Customers with a current support agreement may access Technical Support information at the following URL:

www.symantec.com/business/support/

Before contacting Technical Support, make sure you have satisfied the system requirements that are listed in your product documentation. Also, you should be at the computer on which the problem occurred, in case it is necessary to replicate the problem.

When you contact Technical Support, please have the following information available:

■ Product release level


■ Hardware information

■ Available memory, disk space, and NIC information

■ Operating system

■ Version and patch level

■ Network topology

■ Router, gateway, and IP address information

■ Problem description:

■ Error messages and log files

■ Troubleshooting that was performed before contacting Symantec

■ Recent software configuration changes and network changes

Licensing and registration

If your Symantec product requires registration or a license key, access our technical support web page at the following URL:

www.symantec.com/business/support/

Customer service

Customer service information is available at the following URL:

www.symantec.com/business/support/

Customer Service is available to assist with non-technical questions, such as the following types of issues:

■ Questions regarding product licensing or serialization

■ Product registration updates, such as address or name changes

■ General product information (features, language availability, local dealers)

■ Latest information about product updates and upgrades

■ Information about upgrade assurance and support contracts

■ Information about the Symantec Buying Programs

■ Advice about Symantec's technical support options

■ Nontechnical presales questions

■ Issues that are related to CD-ROMs or manuals


Support agreement resources

If you want to contact Symantec regarding an existing support agreement, please contact the support agreement administration team for your region as follows:

Asia-Pacific and Japan: [email protected]

Europe, Middle-East, and Africa: [email protected]

North America and Latin America: [email protected]

Additional enterprise services

Symantec offers a comprehensive set of services that allow you to maximize your investment in Symantec products and to develop your knowledge, expertise, and global insight, which enable you to manage your business risks proactively.

Enterprise services that are available include the following:

Managed Services

Managed Services remove the burden of managing and monitoring security devices and events, ensuring rapid response to real threats.

Consulting Services

Symantec Consulting Services provide on-site technical expertise from Symantec and its trusted partners. Symantec Consulting Services offer a variety of prepackaged and customizable options that include assessment, design, implementation, monitoring, and management capabilities. Each is focused on establishing and maintaining the integrity and availability of your IT resources.

Education Services

Education Services provide a full array of technical training, security education, security certification, and awareness communication programs.

To access more information about enterprise services, please visit our web site at the following URL:

www.symantec.com/business/services/

Select your country or language from the site index.


Contents

Technical Support .... 4

Section 1  Backup planning and configuration guidelines .... 13

Chapter 1  NetBackup capacity planning .... 15

    Purpose of this guide .... 15
    Disclaimer .... 16
    Introduction .... 16
    Analyzing your backup requirements .... 18
    Designing your backup system .... 20
        Calculate the required data transfer rate for your backups .... 21
        Calculate how long it takes to back up to tape .... 22
        Calculate how many tape drives are needed .... 24
        Calculate the required data transfer rate for your network(s) .... 26
        Calculate the size of your NetBackup catalog .... 27
        Calculate the size of the EMM database .... 31
        Calculate media needed for full and incremental backups .... 32
        Calculate the size of the tape library needed to store your backups .... 34
        Estimate your SharedDisk storage requirements .... 35
        Based on your previous findings, design your master server .... 38
        Estimate the number of master servers needed .... 40
        Design your media server .... 42
        Estimate the number of media servers needed .... 43
        Design your NOM server .... 45
        Summary .... 49
    Questionnaire for capacity planning .... 50

Chapter 2  Master server configuration guidelines .... 53

    Managing NetBackup job scheduling .... 53
        Delays in starting jobs .... 53
        Delays in running queued jobs .... 54


        Delays in shared disk jobs becoming active .... 55
        Job delays caused by unavailable media .... 55
        Delays after removing a media server .... 55
        Limiting factors for job scheduling .... 56
        Staggering the submission of jobs for better load distribution .... 57
        Adjusting the server’s network connection options .... 57
        Using NOM to monitor jobs .... 58
        Testing for disaster recovery .... 59
    Miscellaneous considerations .... 60
        Selection of storage units .... 60
        Disk staging .... 63
        File system capacity .... 63
    NetBackup catalog strategies .... 63
        Catalog backup types .... 64
        Guidelines for managing the catalog .... 65
        Catalog backup not finishing in the available window .... 65
        Catalog compression .... 66
    Merging, splitting, or moving servers .... 66
    Guidelines for policies .... 67
        Include and exclude lists .... 67
        Critical policies .... 68
        Schedule frequency .... 68
    Managing logs .... 68
        Optimizing the performance of vxlogview .... 68
        Interpreting legacy error logs .... 69

Chapter 3  Media server configuration guidelines .... 73

    Network and SCSI/FC bus bandwidth .... 73
    How to change the threshold for media errors .... 74
        Adjusting media_error_threshold .... 75
        About tape I/O error handling .... 75
    Reloading the st driver without restarting Solaris .... 77
    Media Manager drive selection .... 78

Chapter 4  Media configuration guidelines .... 79

    Dedicated or shared backup environment .... 79
    Suggestions for pooling .... 79
    Disk versus tape .... 80


Chapter 5  Best practices .... 83

    Best practices: SAN Client .... 83
    Best practices: Flexible Disk Option .... 85
    Best practices: new tape drive technologies .... 85
    Best practices: tape drive cleaning .... 85
        Frequency-based cleaning .... 86
        TapeAlert (reactive cleaning, or on-demand cleaning) .... 86
        Robotic cleaning .... 87
    Best practices: storing tape cartridges .... 88
    Best practices: recoverability .... 88
        Suggestions for data recovery planning .... 89
    Best practices: naming conventions .... 91
        Policy names .... 91
        Schedule names .... 91
        Storage unit and storage group names .... 91
    Best practices: duplication .... 92

Section 2  Performance tuning .... 93

Chapter 6  Measuring performance .... 95

    Measuring performance: overview .... 95
    Controlling system variables for consistent testing conditions .... 96
        Server variables .... 96
        Network variables .... 97
        Client variables .... 98
        Data variables .... 98
    Evaluating performance .... 99
        Table of All Log Entries report .... 102
    Evaluating UNIX system components .... 104
        Monitoring CPU load (UNIX) .... 104
        Measuring performance independent of tape or disk output .... 104
    Evaluating Windows system components .... 106
        Monitoring CPU load (Windows) .... 107
        Monitoring memory use .... 107
        Monitoring disk load .... 108
        Increasing disk performance .... 109

Chapter 7  Tuning the NetBackup data transfer path .... 111

    Tuning the data transfer path: overview .... 111
    The data transfer path .... 112
    Basic tuning suggestions for the data path .... 112


        Tuning suggestions .... 113
    NetBackup client performance .... 115
    NetBackup network performance .... 117
        Network interface settings .... 117
        Network load .... 118
        Setting the network buffer size for the NetBackup media server .... 118
        Setting the NetBackup client communications buffer size .... 120
        Using socket communications (the NOSHM file) .... 121
        Using multiple interfaces .... 123
    NetBackup server performance .... 123
        Shared memory (number and size of data buffers) .... 124
        Changing parent and child delay values .... 134
        Using NetBackup wait and delay counters .... 135
        Understanding the two-part communication process .... 135
        Estimating the impact of Inline copy on backup performance .... 147
        Fragment size and NetBackup restores .... 148
        Other restore performance issues .... 152
    NetBackup storage device performance .... 156

Chapter 8  Tuning other NetBackup components .... 159

    Multiplexing and multiple data streams .... 159
        When to use multiplexing and multiple data streams .... 160
        Effects of multiplexing and multistreaming on backup and restore .... 162
    Resource allocation .... 162
        Improving the assignment of resources to queued jobs .... 162
        Sharing reservations .... 165
        Disabling the sharing of reservations .... 165
        Adjusting the resource monitoring interval .... 167
        Disabling on-demand unloads .... 167
    Encryption .... 168
    Compression .... 168
        How to enable compression .... 168
        Tape versus client compression .... 168
    Using encryption and compression .... 170
    NetBackup Java .... 170
    Vault .... 170
    Fast recovery with Bare Metal Restore .... 170
    Backing up many small files .... 170
        Notes on FlashBackup performance .... 171


    Adjusting the read buffer for raw partition backup .... 174
    Adjusting the allocation size of the snapshot mount point volume for NetBackup for VMware .... 174
    NetBackup Operations Manager (NOM) .... 174
        Adjusting the NOM server heap size .... 174
        Adjusting the NOM web server heap size .... 175
        Adjusting the Sybase cache size .... 176
        Saving NOM databases and database logs on separate hard disks .... 178
        Defragmenting NOM databases .... 181
        Purge data periodically .... 182
        NOM performance and floating-point calculations .... 182

Chapter 9  Tuning disk I/O performance .... 183

    Hardware selection .... 183
    Hardware performance hierarchy .... 183
        Performance hierarchy level 1 .... 185
        Performance hierarchy level 2 .... 186
        Performance hierarchy level 3 .... 187
        Performance hierarchy level 4 .... 187
        Performance hierarchy level 5 .... 189
        Notes on performance hierarchies .... 189
    Hardware configuration examples .... 190
    Tuning software for better performance .... 192

Chapter 10  OS-related tuning factors .... 195

    Kernel tuning (UNIX) .... 195
    Kernel parameters on Solaris 8 and 9 .... 195
        Example settings for kernel parameters .... 197
    Kernel parameters on Solaris 10 .... 198
        Recommendations on particular Solaris 10 parameters .... 199
    Message queue and shared memory parameters on HP-UX .... 200
        Note on shmmax .... 201
        Changing the HP-UX kernel parameters .... 201
    Changing kernel parameters on Linux .... 202
    About data buffer size (Windows) .... 203
    Adjusting data buffer size (Windows) .... 203
    Other Windows issues .... 205
        Troubleshooting NetBackup’s configuration files on Windows .... 205
        Disable antivirus software when backing up Windows files .... 205


Appendix A Additional resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

    Performance tuning information at Vision online ................. 207
    Performance monitoring utilities ................................ 207
    Freeware tools for bottleneck detection ......................... 207
    Mailing list resources .......................................... 208

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209


Section 1: Backup planning and configuration guidelines

■ Chapter 1. NetBackup capacity planning

■ Chapter 2. Master server configuration guidelines

■ Chapter 3. Media server configuration guidelines

■ Chapter 4. Media configuration guidelines

■ Chapter 5. Best practices



Chapter 1: NetBackup capacity planning

This chapter includes the following topics:

■ Purpose of this guide

■ Disclaimer

■ Introduction

■ Analyzing your backup requirements

■ Designing your backup system

■ Questionnaire for capacity planning

Purpose of this guide

Veritas NetBackup is a high-performance data protection application. Its architecture is designed for large and complex distributed computing environments. NetBackup provides scalable storage servers (master and media servers) that can be configured for network backup, recovery, archiving, and file migration services.

This manual is for administrators who want to analyze, evaluate, and tune NetBackup performance. It is intended to answer questions such as the following: How big should the NetBackup master server be? How can the server be tuned for maximum performance? How many CPUs and disk or tape drives are needed? How can backups be configured to run as fast as possible? How can recovery times be improved? What tools can characterize or measure how NetBackup handles data?


Note: The most critical factors in performance are based on hardware rather than software. Compared to software, hardware and its configuration have roughly four times greater effect on performance. Although this guide provides some hardware configuration assistance, it assumes for the most part that your devices are correctly configured.

Disclaimer

It is assumed you are familiar with NetBackup and your applications, operating systems, and hardware. The information in this manual is advisory only, presented in the form of guidelines. Changes to an installation that are undertaken as a result of the information in this guide should be verified in advance for appropriateness and accuracy. Some of the information that is contained herein may apply only to certain hardware or operating system architectures.

Note: The information in this manual is subject to change.

Introduction

To estimate your backup requirements, the first step is to understand your environment. Many performance issues can be traced to hardware or environmental issues. An understanding of the entire backup data path is important to determine the maximum performance you can expect from your installation.

Every backup environment has a bottleneck. It may be a fast bottleneck, but it determines the maximum performance obtainable with your system.

Example:

In this environment, backups run slowly (in other words, they do not complete in the scheduled backup window). Total throughput is 80 MB per second.

What makes the backups run slowly? How can NetBackup or the environment be configured to increase backup performance in this situation?

Figure 1-1 shows this environment.


Figure 1-1 Dedicated NetBackup environment

[Figure: NetBackup clients send data over a Gigabit Ethernet LAN to a dedicated NetBackup server (4 CPUs, 8 GB RAM), which writes over a 2 Gb Fibre Channel SAN to a tape library with 4 LTO gen 3 drives.]

The explanation is the following: the 4 LTO gen 3 tape drives have a combined throughput of 320 MB (megabytes) per second, and the 2 Gbit SAN connection from the server to the tape drives has a theoretical throughput of 200 MB per second. The LAN, however, having a speed of 1 gigabit per second, has a theoretical throughput of 125 MB per second. In practice, 1-gigabit throughput is unlikely to exceed 70% utilization. Therefore, the best delivered data rate is about 90 MB per second to the NetBackup server.

The throughput may be even lower when you consider the following:

■ TCP/IP packet headers

■ TCP-window size constraints

■ Router hops (packet latency for ACK packets delays the sending of the next data packet)

■ Host CPU utilization

■ File system overhead

■ Other LAN users' activity

Since the LAN is the slowest element in the configuration, it is the first place to look to increase backup performance.
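The bottleneck analysis above can be sketched in a few lines of code: model each component of the data path as a throughput ceiling and take the minimum. This is an illustrative sketch, not a NetBackup tool; the function name is invented here, and the rates come from the Figure 1-1 example environment.

```python
# Find the bottleneck in a backup data path: the slowest component
# caps end-to-end throughput. Rates are in MB per second.
def find_bottleneck(components):
    name = min(components, key=components.get)
    return name, components[name]

# Values from the Figure 1-1 example environment.
path = {
    "4 x LTO gen 3 tape drives": 320.0,                    # combined drive rate
    "2 Gb Fibre Channel SAN": 200.0,                       # theoretical
    "1 Gb Ethernet LAN at 70% utilization": 125.0 * 0.70,  # ~87.5 delivered
}

name, rate = find_bottleneck(path)
print(f"Bottleneck: {name}, about {rate:.0f} MB/sec")
```

Adding a faster component anywhere other than the LAN would not raise the delivered rate, which is why the LAN is the first place to look.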


Analyzing your backup requirements

Many elements influence your backup strategy. You must analyze and compare these factors and then make backup decisions according to your site’s priorities.

When you plan your installation’s NetBackup capacity, ask yourself the following questions:

■ Which systems need to be backed up?
Identify all systems that need to be backed up. List each system separately so that you can identify any that require more resources to back up. Document which computers have disk drives, tape drives, or libraries attached, and write down the model type of each drive or library. Identify any applications on these computers that need to be backed up, such as Oracle, DB2, or MS-Exchange. In addition, record each host name, operating system and version, database type and version, network technology (for example, 1000BaseT), and location.

■ How much data is to be backed up?
Calculate how much data you need to back up. Include the total disk space on each individual system, including that for databases. Remember to add the space on mirrored disks only once. By calculating the total size for all disks, you can design a system that takes future growth into account. Consider the future by estimating how much data you will need to back up in six months to a few years from now.

Consider the following:

■ Do you plan to back up databases or raw partitions?
If you plan to back up databases, identify the database engines, their version numbers, and the method to back them up. NetBackup can back up several database engines and raw file systems, and databases can be backed up while they are online or offline. To back up any database while it is online, you need a NetBackup database agent for your particular database engine. With a Snapshot Client backup of databases using raw partitions, you back up as much data as the total size of your raw partition. Also, remember to add the size of your database backups to your final calculations when figuring out how much data you need to back up.

■ Do you plan to back up special application servers such as MS-Exchange or Lotus Notes?
If you plan on backing up any application servers, you must identify their types and application release numbers. As previously mentioned, you may need a special NetBackup agent to properly back up your particular servers.

■ What types of backups are needed and how often should they take place?


The frequency of your backups has a direct impact on your:

■ Disk or tape requirements

■ Data transfer rate considerations

■ Restore opportunities.

To properly size your backup system, you must decide on the type and frequency of your backups. Will you perform daily incremental and weekly full backups? Monthly or bi-weekly full backups?

■ How much time is available to run each backup?
What is the window of time that is available to complete each backup? The length of a window dictates several aspects of your backup strategy. For example, you may want a larger window of time to back up multiple, high-capacity servers. Or you may consider the use of advanced NetBackup features such as synthetic backups, a local snapshot method, or FlashBackup.

■ How long should backups be retained?
An important factor while designing your backup strategy is to consider your policy for backup expiration. The amount of time a backup is kept is also known as the "retention period." A fairly common policy is to expire your incremental backups after one month and your full backups after six months. With this policy, you can restore any daily file change from the previous month and restore data from full backups for the previous six months. The length of the retention period depends on your own unique requirements and business needs, and perhaps regulatory requirements. However, the length of your retention period is directly proportional to the number of tapes you need and the size of your NetBackup catalog database. Your NetBackup catalog database keeps track of all the information on all your disk drives and tapes. The catalog size is tightly tied to your retention period and the frequency of your backups. Also, database management daemons and services may become bottlenecks.

■ If backups are sent off site, how long must they remain off site?
If you plan to send tapes off site as a disaster recovery option, identify which tapes to send off site and how long they remain off site. You might decide to duplicate all your full backups, or only a select few. You might also decide to duplicate certain systems and exclude others. As tapes are sent off site, you must buy new tapes to replace them until they are recycled back from off-site storage. If you forget this detail, you may run out of tapes when you most need them.

■ What is your network technology?
If you plan to back up any system over a network, note the network types. The next section explains how to calculate the amount of data you can transfer over those networks in a given time.


See “Designing your backup system” on page 20.
Based on the amount of data that you want to back up and the frequency of those backups, consider installing a private network for backups.

■ What new systems do you expect to add in the next six months?
Plan for future growth when designing your backup system. Analyze the potential growth of your system to ensure that your current backup solution can accommodate future requirements. Remember to add any resulting growth factor that you incur to your total backup solution.

■ Will user-directed backups or restores be allowed?
Allow users to do their own backups and restores, to reduce the time it takes to initiate certain operations. However, user-directed operations can also result in higher support costs and the loss of some flexibility. User-directed operations can monopolize media and tape drives when you most need them. They can also generate more support calls and training issues while the users become familiar with the new backup system. Decide whether user access to some of your backup systems’ functions is worth the potential costs.

Other factors to consider when planning your backup capacity include:

■ Data type
What are the types of data: text, graphics, database? How compressible is the data? How many files are involved? Will the data be encrypted? (Note that encrypted backups may run slower.)
See “Encryption” on page 168.

■ Data location
Is the data local or remote? What are the characteristics of the storage subsystem? What is the exact data path? How busy is the storage subsystem?

■ Change management
Because hardware and software infrastructure change over time, create an independent test-backup environment to ensure your production environment works with the changed components.

Designing your backup system

The ideas and examples that follow are based on standard and ideal calculations. Your numbers can differ based on your particular environment, data, and compression rates.

After an analysis of your backup requirements, you can begin designing your backup system.

Use the following subsections in order:


■ Calculate the required data transfer rate for your backups

■ Calculate how long it takes to back up to tape

■ Calculate how many tape drives are needed

■ Calculate the required data transfer rate for your network(s)

■ Calculate the size of your NetBackup catalog

■ Calculate the size of the EMM database

■ Calculate media needed for full and incremental backups

■ Calculate the size of the tape library needed to store your backups

■ Estimate your SharedDisk storage requirements

■ Based on your previous findings, design your master server

■ Estimate the number of master servers needed

■ Design your media server

■ Estimate the number of media servers needed

■ Design your NOM server

■ Summary

Calculate the required data transfer rate for your backups

This section helps calculate the rate of transfer your system must achieve to complete a backup of all your data in the allowed time window. Use the following formula to calculate your ideal data transfer rate for full and incremental backups:

Ideal data transfer rate = (amount of data to back up) / (backup window)

On average, the daily change in data for many systems is between 10 and 20 percent. For incremental backups, assume a daily change of 20% of the amount of data to back up, and divide that amount by the backup window to get the required incremental backup rate.

If you run cumulative-incremental backups, account for overlapping changes in the data. For example, if the same 20% of the data changes daily, your cumulative-incremental backup is smaller than if a different 20% changes every day.

Example: Calculating your ideal data transfer rate during the week

Assumptions:

Amount of data to back up during a full backup = 500 gigabytes


Amount of data to back up during an incremental backup = 20% of a full backup

Daily backup window = 8 hours

Solution 1:

Full backup = 500 gigabytes

Ideal data transfer rate = 500 gigabytes per 8 hours = 62.5 gigabytes per hour

Solution 2:

Incremental backup = 100 gigabytes

Ideal data transfer rate = 100 gigabytes per 8 hours = 12.5 gigabytes per hour

For your ideal weekend transfer rate, divide the amount of data that must be backed up by the length of the weekend backup window.
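The formula and the two solutions above reduce to one line of arithmetic. This sketch is illustrative only; the function name is invented here, and the values come from the example assumptions.

```python
# Ideal data transfer rate = (amount of data to back up) / (backup window).
def ideal_rate_gb_per_hour(data_gb, window_hours):
    return data_gb / window_hours

full_rate = ideal_rate_gb_per_hour(500, 8)         # full backup: 62.5 GB/hour
incr_rate = ideal_rate_gb_per_hour(500 * 0.20, 8)  # 20% incremental: 12.5 GB/hour
print(full_rate, incr_rate)
```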

Calculate how long it takes to back up to tape

When you know your ideal data transfer rates for backups, you can figure out what kind of tape drive technology meets your needs. With the length of your backup windows and the amount of data to back up, you can calculate the number of required tape drives.

Table 1-1 lists the transfer rates for several tape drive technologies.

Table 1-1 Tape drive data transfer rates

Drive         Gigabytes per hour    Gigabytes per hour
              (no compression)      (2:1 compression)
LTO gen 1     54                    108
LTO gen 2     105                   216
LTO gen 3     288                   576
SDLT 320      57                    115
LTO gen 4     422                   844
SDLT 600      129                   259
STK 9940B     108                   252 (2.33:1)
STK T10000    422                   844
TS1120        350                   700

The values are those published by their individual manufacturers and observed in real-life situations. Keep in mind that device manufacturers list optimum rates for their devices. In reality, it is quite rare to achieve those values when a system has to deal with the following: the overhead of the operating system, CPU loads, bus architecture, data types, and other hardware and software issues.

The gigabytes per hour values from Table 1-1 represent transfer rates for several devices, with and without compression. When you design your backup system, consider the nature of both your data and your environment. Estimate on the conservative side when planning capacity.

To calculate the length of your backups using a particular tape drive, use the following formula:

Backup time = (amount of data to back up) / ((number of drives) * (tape drive transfer rate))

Faster tape drives are not always better

The fastest tape technology may not be the most appropriate for your site. Consider the following. The figures in Table 1-1 are the maximum speeds that the tape drives can achieve. But tape drives also have a minimum speed below which they start to operate inefficiently. This figure is known as the "minimum streaming speed" and is usually around 40% of the native (no compression) speed of the device. If the drive receives data at less than minimum streaming speed, it operates in a stop-and-start mode ("shoe shining"). In this mode the drive empties the data buffers faster than they can be filled and has to stop while the buffers fill up again. When the buffers are full, the drive must start up and reposition the tape before it writes the next data block. This stop-and-start behavior damages both the tape and the drive, and it also slows down the backup further. For this reason, the fastest tape technology may not always be the most appropriate one to use.

Note on using disk drives for staging slower backups

Unlike tape drives, disk devices (including VTLs) do not have a minimum streaming speed. It may therefore be a good strategy to stage slower backups to disk before duplicating them to tape: the duplication of the backup image from disk runs faster than the original slow backup.

Example: Calculating the data transfer rate required

Assumptions:

Amount of data to back up during a full backup = 1000 gigabytes (1 terabyte)

Daily backup window = 8 hours


Ideal transfer rate (data / (backup window)) = 1000 gigabytes per 8 hours = 125 gigabytes per hour

Solution 1:

Tape drive = 1 drive, LTO gen 2

Tape drive transfer rate = 105 gigabytes per hour

Backup time = 1000 gigabytes / ((1 drive) * (105 gigabytes per hour)) = 9.52 hours

At 105 gigabytes per hour, an LTO gen 2 tape drive takes 9.52 hours to perform a 1000-gigabyte backup. A single LTO gen 2 tape drive cannot perform the backup in eight hours. You need a faster tape drive or another LTO gen 2 tape drive.

Solution 2:

Tape drive = 1 drive, LTO gen 3

Tape drive transfer rate = 288 gigabytes per hour

Backup time = 1000 gigabytes / ((1 drive) * (288 gigabytes per hour)) = 3.47 hours

With a data transfer rate of 288 gigabytes per hour, a single LTO gen 3 tape drive takes 3.47 hours to perform a 1000-gigabyte backup.

Several factors can influence the transfer rates of your tape drives, so you may obtain higher or lower rates. These example solutions are approximations of what you can expect.

Note also that a backup of encrypted data may take more time.

See “Encryption” on page 168.
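As a rough sketch of the two solutions above (not a NetBackup tool; the function name is invented here), the backup-time formula is a single division:

```python
# Backup time in hours = data / (number of drives * per-drive rate).
def backup_hours(data_gb, num_drives, drive_gb_per_hour):
    return data_gb / (num_drives * drive_gb_per_hour)

lto2 = backup_hours(1000, 1, 105)  # LTO gen 2: ~9.52 hours, misses an 8-hour window
lto3 = backup_hours(1000, 1, 288)  # LTO gen 3: ~3.47 hours, fits comfortably
print(round(lto2, 2), round(lto3, 2))
```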

Calculate how many tape drives are needed

Here is the formula:

Number of drives = (amount of data to back up) / ((backup window) * (tape drive transfer rate))

Example: Calculating the number of tape drives needed to perform a backup

Assumptions:

Amount of data to back up = 1000 gigabytes (1 terabyte)

Backup window = 8 hours

Solution 1:


Tape drive type = LTO gen 2

Tape drive transfer rate = 105 gigabytes per hour

Number of drives = 1000 gigabytes / ((8 hours) * (105 gigabytes per hour)) = 1.19 = 2 drives

Solution 2:

Tape drive type = LTO gen 3

Tape drive transfer rate = 288 gigabytes per hour

Number of drives = 1000 gigabytes / ((8 hours) * (288 gigabytes per hour)) = 0.43 = 1 drive

You can calculate the number of drives that are needed to perform a backup. It is difficult, however, to spread the data streams evenly across all drives. To spread your data, experiment with various backup schedules, NetBackup policies, and your hardware configuration.

See “Basic tuning suggestions for the data path” on page 112.
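The drive-count formula rounds up to the next whole drive, as the two solutions above show. A minimal sketch (hypothetical helper name; example values from this section):

```python
import math

# Number of drives = ceil(data / (backup window * per-drive rate)).
def drives_needed(data_gb, window_hours, drive_gb_per_hour):
    return math.ceil(data_gb / (window_hours * drive_gb_per_hour))

lto2_drives = drives_needed(1000, 8, 105)  # 1.19 rounds up to 2 drives
lto3_drives = drives_needed(1000, 8, 288)  # 0.43 rounds up to 1 drive
print(lto2_drives, lto3_drives)
```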

To calculate how many tape devices you need, you must calculate how many tape devices you can attach to a drive controller.

To calculate the maximum number of drives that you can attach to a controller, you need the manufacturers’ maximum transfer rates for drives and controllers. Failure to use maximum transfer rates for your calculations can result in saturated controllers and unpredictable results.

Table 1-2 displays the transfer rates for several drive controllers.

Table 1-2 Drive controller data transfer rates

Drive controller           Theoretical megabytes    Theoretical gigabytes
                           per second               per hour
ATA-5 (ATA/ATAPI-5)        66                       237.6
Wide Ultra 2 SCSI          80                       288
iSCSI                      100                      360
1 gigabit Fibre Channel    100                      360
SATA/150                   150                      540
Ultra-3 SCSI               160                      576
2 gigabit Fibre Channel    200                      720
SATA/300                   300                      1080
Ultra320 SCSI              320                      1152
4 gigabit Fibre Channel    400                      1440

In practice, your transfer rates might be slower because of the inherent overhead of several variables. Variables include your file system layout, system CPU load, and memory usage.

Calculate the required data transfer rate for your network(s)

For backups over a network, you must move data from your client(s) to your media server(s) fast enough to finish backups within your backup window. Table 1-3 shows the typical transfer rates of some common network technologies. To calculate the required data transfer rate, use this formula:

Required network data transfer rate = (amount of data to back up) / (backup window)

Table 1-3 Network data transfer rates

Network technology       Theoretical gigabytes    Typical gigabytes
                         per hour                 per hour
100BaseT (switched)      36                       25
1000BaseT (switched)     360                      250
10000BaseT (switched)    3600                     2500

Additional information is available on the importance of matching network bandwidth to your tape drives.

See “Network and SCSI/FC bus bandwidth” on page 73.

Example: Calculating network transfer rates

Assumptions:

Amount of data to back up = 500 gigabytes

Backup window = 8 hours

Required network transfer rate = 500 gigabytes / 8 hours = 62.5 gigabytes per hour

Solution 1: Network Technology = 100BaseT (switched)


Typical transfer rate = 25 gigabytes per hour

A single 100BaseT network has a transfer rate of 25 gigabytes per hour. This network cannot handle your required data transfer rate of 62.5 gigabytes per hour.

In this case, you would have to explore other options, such as the following:

■ Backing up your data over a faster network (1000BaseT)

■ Backing up large servers to dedicated tape drives (SAN media servers)

■ Backing up over SAN connections by means of SAN Client

■ Performing off-host backups using Snapshot Client to present a snapshot directly to a media server

■ Performing your backups during a longer time window

■ Performing your backups over faster dedicated networks

Solution 2: Network Technology = 1000BaseT (switched)

Typical transfer rate = 250 gigabytes per hour

Based on Table 1-3, a single 1000BaseT network has a transfer rate of 250 gigabytes per hour. This network technology can handle your backups with room to spare.

Calculate the data transfer rates for your networks to help you identify your potential bottlenecks. Several solutions for dealing with multiple networks and bottlenecks are available.

See “Basic tuning suggestions for the data path” on page 112.
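A feasibility check like the two solutions above can be automated: compute the required rate and compare it against the typical (not theoretical) rate of each candidate network. This sketch uses the typical rates from Table 1-3; the function name is illustrative.

```python
# Compare the required backup rate against a network's typical throughput.
def network_is_sufficient(data_gb, window_hours, typical_gb_per_hour):
    required_gb_per_hour = data_gb / window_hours
    return typical_gb_per_hour >= required_gb_per_hour

# 500 GB in an 8-hour window requires 62.5 GB/hour.
print(network_is_sufficient(500, 8, 25))   # 100BaseT: False
print(network_is_sufficient(500, 8, 250))  # 1000BaseT: True
```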

Calculate the size of your NetBackup catalog

An important factor when designing your backup system is to calculate how much disk space is needed to store your NetBackup catalog. Your catalog keeps track of all the files that have been backed up.

The catalog’s size depends on the following variables, for both full and incremental backups:

■ The number of files being backed up

■ The frequency and the retention period of the backups

You can use either of two methods to calculate the NetBackup catalog size. In both cases, since data volumes grow over time, you should factor in expected growth when calculating total disk space used.

Note that catalog compression can reduce the size of your catalog. NetBackup automatically decompresses the catalog only for the images and time period that are needed for the restore. You can also use catalog archiving to reduce the space requirements for the catalog. More information is available on catalog compression and catalog archiving.

See the NetBackup Administrator's Guide, Volume I.

Note: If you select NetBackup's True Image Restore option, your catalog becomes larger than a catalog without this option selected. True Image Restore collects the information that is required to restore directories to their contents at the time of any selected full or incremental backup. The additional information that NetBackup collects for incremental backups is the same as the information that is collected for full backups. As a result, incremental backups take much more disk space when you collect True Image Restore information.

First method

This method for calculating catalog size can be quite precise. It requires certain details: the number of files that are held online and the number of backups (full and incremental) that are retained at any time.

To calculate the size in gigabytes for a particular backup, use the following formula:

Catalog size = (120 bytes * number of files in all backups) / 1 GB

To use this method, you must determine the approximate number of copies of each file that is held in backups and the typical file size. The number of copies can usually be estimated as follows:

Number of copies of each file that is held in backups = number of full backups + 10% of the number of incremental backups held

Example: Calculating the size of your NetBackup catalog

Assumptions:

Number of full backups per month: 4

Retention period for full backups: 6 months

Total number of full backups retained: 24

Number of incremental backups per month: 25

Total number of files that are held online (total number of files in a full backup): 17,500,000

Solution:

Number of copies of each file retained:

24 + (25 * 10%) = 26.5


NetBackup catalog size for each file retained:

(120 * 26.5 copies) = 3180 bytes

Total catalog space required:

(3180 * 17,500,000 files) /1 GB = 54 GB
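The whole first-method calculation can be wrapped into one hypothetical helper (the function name is invented here; 120 bytes per retained copy of a file, per the formula above). The exact result depends on whether 1 GB is taken as 10^9 or 2^30 bytes, which is why this lands near, rather than exactly on, the 54 GB figure worked above.

```python
# First method: catalog bytes = 120 * (retained copies per file) * (files online).
def catalog_size_gb(files_online, fulls_retained, incrementals_retained):
    copies = fulls_retained + 0.10 * incrementals_retained  # 24 + 2.5 = 26.5
    return 120 * copies * files_online / 1e9  # decimal gigabytes assumed

size = catalog_size_gb(17_500_000, 24, 25)  # roughly mid-50s of gigabytes
print(size)
```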

Second method

An alternative method for calculating the catalog size is the following: multiply the total amount of data in the production environment (not the total size of all backups) by a small percentage (such as 2%). Note that 2% is an example; this section helps you calculate a percentage that is appropriate for your environment.

Note: You can calculate catalog size by means of a small percentage only for environments in which it is easy to determine the following: the typical file size, typical retention policies, and typical incremental change rates. In some cases, the catalog size that is obtained using this method may vary significantly from the eventual catalog size.

To use this method, you must determine the approximate number of copies of each file that is held in backups and the typical file size. The number of copies can usually be estimated as follows:

Number of copies of each file that is held in backups = number of full backups + 10% of the number of incremental backups held

The multiplying percentage can be calculated as follows:

Multiplying percentage = (120 bytes * number of copies of each file that is held in backups / average file size) * 100%

Then, the size of the catalog can be estimated as:

Size of the catalog = total disk space used * multiplying percentage

Example: Calculating the size of your NetBackup catalog

Assumptions:

Number of full backups per month: 4

Retention period for full backups: 6 months

Total number of full backups retained: 24

Number of incremental backups per month: 25

Average file size: 70 KB

Total disk space that is used on all servers in the domain: 1.4 TB


Solution:

Number of copies of each file retained:

24 + (25 * 10%) = 26.5

NetBackup catalog size for each file retained:

(120 * 26.5 copies) = 3180 bytes

Multiplying percentage:

(3180/70000) * 100% = 4.5%

Total catalog space required:

(1,400 GB * 4.5%) = 63 GB
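The second method, as worked above, can be sketched the same way (helper name invented here; 120 bytes per catalog entry and the example values from this section):

```python
# Second method: fraction = 120 * copies per file / average file size;
# catalog size = total production data * fraction.
def catalog_size_gb(total_data_gb, copies_per_file, avg_file_size_bytes):
    fraction = 120 * copies_per_file / avg_file_size_bytes  # ~0.045 here
    return total_data_gb * fraction

size = catalog_size_gb(1400, 24 + 0.10 * 25, 70_000)
print(round(size))  # 64 (the guide rounds the percentage to 4.5%, giving 63 GB)
```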

Recommendation for sizing the catalog

The size of the catalog depends on the number of files in the backups and the number of copies of the backups that are retained. As a result, the catalog has the potential to grow quite large. You should consider two additional factors, however, when estimating the ultimate size of the catalog: can it be backed up in an acceptable time window, and can the general housekeeping activities complete within their execution windows? The time that is required to complete a catalog backup depends on the amount of space the catalog occupies. The time that is required for the housekeeping activities depends on the number of entries in the catalog. Note that NetBackup’s catalog archiving feature can be used to move older catalog data to other disk or tape storage. Catalog archiving can reduce the size of the catalog on disk and thus reduce the backup time. Catalog archiving, however, does not decrease the amount of time that is required for housekeeping activities.

In general, Symantec recommends that you plan your environment so that the following criteria are met:

■ The amount of data that is held in the online catalog should not exceed 750 GB. Catalog archiving can be used to keep the online portion of the catalog below this value.

■ The total number of catalog entries should not exceed 1,000,000. This number equals the total of all retained copies of all backups from all clients held both online and in the catalog archive.

Note that the actual limits of acceptable catalog performance are influenced by the speed of the storage and the power of the server. Your performance may vary significantly from the guideline figures provided in this section.


Note: If you expect your catalog to exceed these limits, consider deploying multiple NetBackup domains in your environment.

More information on catalog archiving is available in the NetBackup Administrator’s Guide, Volume I.

Calculate the size of the EMM database

By default, the EMM database resides on the NetBackup master server. Other configurations are possible.

See “Merging, splitting, or moving servers” on page 66.

The size of the NetBackup database (NBDB) determines the amount of space that is needed for the EMM database.

Note: This space must be included when determining size requirements for a master server or media server, depending on where the EMM database is installed.

Space for the NBDB on the EMM database is required in the following two locations:

UNIX

/usr/openv/db/data

/usr/openv/db/staging

Windows

install_path\NetBackupDB\data

install_path\NetBackupDB\staging

Calculate the required space for the NBDB in each of the two directories, as follows:

60 MB + (2 KB * number of volumes that are configured for EMM) + (5 KB * number of images in disk storage other than BasicDisk) + (5 KB * number of disk volumes * number of media servers)

where EMM is the Enterprise Media Manager, and volumes are NetBackup (EMM) media volumes. Note that 60 MB is the default amount of space that is needed for the NBDB database that the EMM database uses. It includes pre-allocated space for configuration information for devices and storage units.

Note: During NetBackup installation, the install script looks for 60 MB of free space in the /data directory. If the directory has insufficient space, the installation fails. The space in /staging is only required when a hot catalog backup is run.


The EMM transaction log occupies a portion of the space in the /data directory that NBDB requires. This transaction log is only truncated (not removed) when a hot catalog backup is performed. The log continues to grow indefinitely if a hot catalog backup is not made at regular intervals.

Example: Calculating the space needed for the EMM database

Assuming there are 1000 EMM volumes to back up, the total space that is needed for the EMM database in /usr/openv/db/data is:

60 MB + (2 KB * 1000 volumes) + (5 KB * 1000 SharedDisk images) + (5 KB * 10 SharedDisk volumes * 4 media servers) = 67.2 MB

The same amount of space is required in /usr/openv/db/staging. The amount of space that is required may grow over time as the NBDB database increases in size.
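The formula and the worked example above can be checked with a short script. This is a sketch, not part of NetBackup; the helper name is illustrative, and it follows the guide's arithmetic, which treats 1 MB as 1000 KB:

```python
def nbdb_space_mb(volumes, disk_images, disk_volumes, media_servers):
    """Estimate the NBDB space (in MB) needed in each of the data and
    staging directories, per the guide's formula."""
    base_mb = 60  # pre-allocated space for device and storage-unit configuration
    kb = (2 * volumes            # EMM media volumes
          + 5 * disk_images      # images on disk storage other than BasicDisk
          + 5 * disk_volumes * media_servers)
    return base_mb + kb / 1000.0  # the guide's example treats 1 MB as 1000 KB

# The worked example: 1000 volumes, 1000 SharedDisk images,
# 10 SharedDisk volumes, 4 media servers.
print(nbdb_space_mb(1000, 1000, 10, 4))  # 67.2
```

Remember that this figure is needed twice: once for /data and once for /staging.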

Note: The 60 MB of space is pre-allocated. The space is derived from the following separate databases that were consolidated into the EMM database in NetBackup 6.0: globDB, ltidevs, robotic_def, namespace.chksum, ruleDB, poolDB, volDB, mediaDB, storage_units, stunit_groups, SSOhosts, and the media errors database.

Additional details are available on the files and database information that are included in the EMM database. See the NetBackup Release Notes, in the section titled "Enterprise Media Manager Databases."

Calculate media needed for full and incremental backups

Calculate how many tapes are needed to store and retrieve your backups.

The number of tapes depends on the following:

■ The amount of data that you back up

■ The frequency of your backups

■ The planned retention periods

■ The capacity of the media that is used to store your backups.

If you expect your site's workload to increase over time, you can ease the pain of future upgrades by planning for expansion. Design your initial backup architecture so it can evolve to support more clients and servers. Invest in the faster, higher-capacity components that can serve your needs beyond the present.

A formula for calculating your tape needs is shown here:

Number of tapes = (Amount of data to back up) / (Tape capacity)


To calculate how many tapes are needed based on all your requirements, the previous formula can be expanded to the following:

Number of tapes = ((Amount of data to back up) * (Frequency of backups) * (Retention period)) / (Tape capacity)

Table 1-4 lists some common tape capacities.

Table 1-4 Tape capacities

Drive         Theoretical gigabytes    Theoretical gigabytes
              (no compression)         (2:1 compression)

LTO gen 1     100                      200
LTO gen 2     200                      400
LTO gen 3     400                      800
LTO gen 4     800                      1600
SDLT 320      160                      320
SDLT 600      300                      600
STK 9940B     200                      400
STK T10000    500                      1000
STK T10000B   1000                     2000
TS1120        700                      1400

Example: Calculating how many tapes are needed to store all your backups

Preliminary calculations:

Size of full backups = 500 gigabytes * 4 (per month) * 6 months = 12 terabytes

Size of incremental backups = (20% of 500 gigabytes) * 30 * 1 month = 3 terabytes

Total data tracked = 12 terabytes + 3 terabytes = 15 terabytes

Solution 1:

Tape drive type = LTO gen 2

Tape capacity without compression = 200 gigabytes

Tape capacity with compression = 400 gigabytes

Without compression:


Tapes that are needed for full backups = 12 terabytes/200 gigabytes = 60

Tapes that are needed for incremental backups = 3 terabytes/200 gigabytes = 15

Total tapes needed = 60 + 15 = 75 tapes

With 2:1 compression:

Tapes that are needed for full backups = 12 terabytes/400 gigabytes = 30

Tapes that are needed for incremental backups = 3 terabytes/400 gigabytes = 7.5

Total tapes needed = 30 + 7.5 = 37.5 tapes

Solution 2:

Tape drive type = LTO gen 3

Tape capacity without compression = 400 gigabytes

Tape capacity with compression = 800 gigabytes

Without compression:

Tapes that are needed for full backups = 12 terabytes/400 gigabytes = 30

Tapes that are needed for incremental backups = 3 terabytes/400 gigabytes = 7.5 ~= 8

Total tapes needed = 30 + 8 = 38 tapes

With 2:1 compression:

Tapes that are needed for full backups = 12 terabytes/800 gigabytes = 15

Tapes that are needed for incremental backups = 3 terabytes/800 gigabytes = 3.75 ~= 4

Total tapes needed = 15 + 4 = 19 tapes
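The arithmetic in Solution 2 can be reproduced with a short script. This is a sketch (the function name is illustrative); it rounds partial tapes up, as Solution 2 does:

```python
import math

def tapes_needed(data_tb, tape_capacity_gb):
    """Round the number of tapes up, since a partially filled tape
    still occupies a physical tape (the guide's Solution 2 rounds
    7.5 up to 8 the same way)."""
    return math.ceil(data_tb * 1000 / tape_capacity_gb)

# Solution 2: LTO gen 3, 12 TB of full backups and 3 TB of incrementals.
for capacity_gb in (400, 800):  # without and with 2:1 compression
    total = tapes_needed(12, capacity_gb) + tapes_needed(3, capacity_gb)
    print(capacity_gb, total)  # 400 -> 38, 800 -> 19
```

The same function applied to Solution 1's LTO gen 2 capacities gives 75 tapes without compression, matching the example.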

Calculate the size of the tape library needed to store your backups

To calculate how many robotic library tape slots are needed to store all your backups, take the number of backup tapes that was calculated in a previous section and add tapes for catalog backup and for cleaning.

See “Calculate media needed for full and incremental backups” on page 32.

The formula is the following:

Tape slots needed = (the number of tapes that are needed for backups) + (the number of tapes that are needed for catalog backups) + 1 (for a cleaning tape)

A typical number of tapes for catalog backups is 2.
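The slot formula reduces to a one-liner; as a sketch (illustrative names, using Solution 2's 38 backup tapes and the typical 2 catalog tapes):

```python
def tape_slots_needed(backup_tapes, catalog_tapes=2, cleaning_tapes=1):
    """Slots = backup tapes + catalog-backup tapes + a cleaning tape."""
    return backup_tapes + catalog_tapes + cleaning_tapes

print(tape_slots_needed(38))  # 41 slots for Solution 2's 38 tapes
```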

Additional tapes may be needed for the following:


■ If you plan to duplicate tapes or to reserve some media for special (non-backup) use, add those tapes to the formula.

■ Add the tapes that are needed for future data growth. Make sure your system has a viable upgrade path as new tape drives become available.

Estimate your SharedDisk storage requirements

This section helps you estimate the following:

■ The number of media servers to use as storage servers in each SharedDisk disk pool. A storage server mounts the storage, and writes data to and reads data from disk storage. A SharedDisk disk pool is the storage destination of a NetBackup storage unit. See the NetBackup Shared Storage Guide.

■ The number of disk volumes to use in each SharedDisk disk pool.

Guidelines for number of storage servers

To determine the number of media servers to be configured as storage servers for each SharedDisk disk pool, consider the following:

■ The more clients that use the disk pool, the more storage servers that are needed per disk pool.

■ The more Fibre Channel connections that are configured on the media servers, the more storage servers that are needed.

■ Client data has to be transferred both in and out of the media server. The lower the average bandwidth of each media server, the more storage servers that are needed.

■ The more often backups are run, and the larger each backup is, the greater the number of storage servers that are needed.

Note that the number of storage servers affects the following:

■ NetBackup device selection takes longer with a larger number of storage servers, especially for the backups that use the Multiple copies option (Inline Copy).

■ As the number of storage servers increases, they may consume more resources from the disk arrays. The hardware and software of the arrays may have to be adjusted accordingly.

■ As the number of storage servers increases, background monitoring communication traffic also increases, which causes more database activity.

The communication traffic consists of the following:


■ The Disk Polling Service and the Media Performance Monitor Service within nbrmms on the media server.

■ The Resource Event Manager service and the Disk Service Manager within nbemm on the EMM server.

Recommendation for number of storage servers

Symantec recommends a conservative estimate for the number of storage servers per disk pool. For most NetBackup installations, a rough guideline is approximately 10 to 50 storage servers per disk pool.

Note: You can easily add or remove storage servers from a disk pool.

Guidelines for number of disk volumes

To determine the number of disk volumes to use in each SharedDisk disk pool, consider the following:

■ Limitations in the disk volume size that the disk array imposes. If the array's maximum volume size is relatively small, more volumes may be required.

■ Limitations in the file system size that the operating system of the media server (used as a storage server) imposes. A volume cannot exceed the size of the file system. The smaller each file system, the more volumes that are required to store a given amount of data.

■ The number of storage servers in each SharedDisk disk pool. The more storage servers, the more disk volumes that are needed.

You can use the following to determine, roughly, the minimum and the maximum number of disk volumes that are needed in each SharedDisk disk pool:

Minimum: You should have at least two disk volumes per storage server. (Note that a disk volume can only be used by one storage server at a time.) If the total number of disk volumes is smaller than the number of storage servers, NetBackup runs out of disk volumes to mount.

Maximum: If all clients on the storage servers use Fibre Channel, the number of disk volumes in a SharedDisk disk pool need not exceed the total number of Fibre Channel connections available. That is, NetBackup does not require more SharedDisk disk volumes than the total number of Fibre Channel connections available. Other site requirements, however, may suggest a higher number of disk volumes.


The number of disk volumes affects the following areas:

Storage sharing: During media and device selection, NetBackup tries to find the best storage server. A larger number of disk volumes improves the probability that NetBackup can find an eligible disk volume for the storage server. The result is better load balancing across media servers.

Raw I/O performance: A larger number of disk volumes is more likely to allow a disk volume configuration that addresses the physical details of the disk array. The configuration allows the use of separate spindles for separate backup jobs, to eliminate disk head contention.

Image fragmentation: Given a constant amount of storage space, fewer disk volumes result in less image fragmentation, because images are less likely to span volumes. As a result, fewer multiple-disk-volume restores are required, which improves restore performance.

Mount time: Mount times for disk volumes are in the range of one to three seconds. Fewer disk volumes make it more likely that a new backup job starts on a disk volume that is already mounted, which saves time. LUN masking increases mount times, whereas SCSI persistent reservation results in much shorter mount times. Note: The length of the mount time is not related to the size of the backup. The longer the backup job, the less noticeable is the delay due to mount time.

Media and device selection: As with a larger number of storage servers, a larger number of disk volumes makes media and device selection more time consuming. Media and device selection is especially time consuming for the backups that use the Multiple copies option (Inline Copy).

System monitoring and administration: Disk pool monitoring and administration activities increase with the number of disk volumes in a disk pool. Fewer disk volumes require less time to monitor or administer.

Recommendation

The recommended number of disk volumes to have in a disk pool depends on whether the NetBackup clients are connected to the storage server by means of a LAN (potentially slow) or Fibre Channel (considerably faster).

Consider the following:

■ NetBackup clients that are connected to the storage server on the LAN


The number of disk volumes should be between two and four times the number of storage servers.

■ NetBackup clients that are connected to the storage server on Fibre Channel (SAN clients)
You should have at least one disk volume per logical client connection on the Fibre Channel. If four SAN clients use two Fibre Channel connections (two clients per FC connection), then four disk volumes are recommended.
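These two recommendations can be sketched as a rough sizing helper. The function name is illustrative and the helper simply combines the low end of the LAN guideline with the one-volume-per-SAN-connection rule; a real environment would typically apply one rule or the other:

```python
def recommended_disk_volumes(storage_servers, san_client_connections=0):
    """Rough minimum per the guidelines: twice the number of storage
    servers for LAN clients (the low end of the 2x-4x range), plus one
    disk volume per logical SAN client connection."""
    return 2 * storage_servers + san_client_connections

# LAN example: 10 storage servers -> at least 20 disk volumes.
print(recommended_disk_volumes(10))  # 20

# The guide's SAN example: four SAN clients over two Fibre Channel
# connections (two logical client connections each) -> four volumes.
print(recommended_disk_volumes(0, san_client_connections=4))  # 4
```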

Further information

Further information is available on SharedDisk performance considerations and best practices.

See the tech note Best practices for disk layout with the NetBackup Flexible Disk option.

http://seer.entsupport.symantec.com/docs/305161.htm

Further information on how to configure SharedDisk is available.

See the NetBackup Shared Storage Guide.

Based on your previous findings, design your master server

To design and configure a master server, do the following:

■ Perform an analysis of your initial backup requirements.
See "Analyzing your backup requirements" on page 18.

■ Perform the calculations that are outlined in the previous sections of the current chapter.

You can design a master server when the following design constraints are known:

■ Amount of data to back up

■ Size of the NetBackup catalog

■ Number of tape drives needed

■ Number of disk drives needed

■ Number of networks needed

Configuring the master server

When you have the design constraints, use the following procedure as an outline to configure your master server.


To configure the master server

1 Acquire a dedicated server.

2 Add tape drives and controllers, for saving your backups.

3 Add disk drives and controllers, for saving your backups, and for OS andNetBackup catalog.

4 Add network cards.

5 Add memory.

6 Add CPUs.

Notes on master server configuration

In some cases, it may not be practical to design a server to back up all of your systems. You might have one or more large servers that cannot be backed up over a network within your backup window. In such cases, back up those servers by means of their own locally-attached drives or drives that use the Shared Storage Option. Although this section discusses how to design a master server, you can use this information to add the drives and components to your other servers.

The next example shows how to configure a master server using the design elements from the previous sections.

Example: Designing your master server

Assumptions:

Amount of data to back up during full backups = 500 gigabytes

Amount of data to back up during incremental backups = 100 gigabytes

Tape drive type = LTO gen 3

Tape drives needed = 1

Network technology = 1 gigabit

Network cards needed = 1

Size of NetBackup catalog after 6 months = 60 gigabytes

For the size of the catalog, refer to the example under "Calculate the size of your NetBackup catalog."

Solution:

CPUs needed for network cards = 1

CPUs needed for tape drives = 1


CPUs needed for OS = 1

Total CPUs needed = 1 + 1 + 1 = 3

Memory for network cards = 16 megabytes

Memory for tape drives = 256 megabytes

Memory for OS and NetBackup = 1 gigabyte

Total memory needed = 16 + 256 + 1000 = 1.272 gigabytes

More information is available on the values used in this solution.

See “Design your media server” on page 42.

In this solution, your master server needs 3 CPUs and 1.272 gigabytes of memory. In addition, you need 60 gigabytes of disk space to store your NetBackup catalog. You also need the disks and drive controllers to install your operating system and NetBackup (2 gigabytes should be ample for most installations). This server also requires one SCSI card, or another faster adapter, for use with the tape drive (and robot arm), and a single gigabit Ethernet card for network backups.
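The totals in this solution follow directly from the per-component values in Tables 1-6 and 1-7. As a sketch (component labels are illustrative, and memory is summed in MB with 1 GB counted as 1000 MB, as in the example):

```python
# Per-component requirements, taken from the example solution
# (which draws on Tables 1-6 and 1-7).
cpu_per_component = {
    "network_card": 1,        # one CPU for the network card
    "tape_drive": 1,          # one CPU for the tape drive
    "os_and_netbackup": 1,    # one CPU for the OS and NetBackup
}
memory_mb_per_component = {
    "network_card": 16,
    "tape_drive": 256,
    "os_and_netbackup": 1000,  # 1 GB, counted as 1000 MB
}

total_cpus = sum(cpu_per_component.values())
total_memory_gb = sum(memory_mb_per_component.values()) / 1000.0

print(total_cpus, total_memory_gb)  # 3 1.272
```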

When designing your master server, begin with a dedicated server for optimum performance. In addition, consult with your server's hardware manufacturer to ensure that the server can handle your other components. In most cases, servers have specific restrictions on the number and mixture of hardware components that can be supported concurrently. If you overlook this last detail, it can cripple the best of plans.

Estimate the number of master servers needed

As a rule, the number of master servers is proportional to the number of media servers.

To determine how many master servers are required, consider the following:

■ The master server must be able to periodically communicate with all its media servers. If you have too many media servers, the master server may be overloaded.

■ Consider business-related requirements. For example, if an installation has different applications that require different backup windows, a single master may have to run backups continually. As a result, resources for catalog cleaning, catalog backup, or other maintenance activity may be insufficient. If you use the legacy offline catalog backup feature, the time window for the catalog backup may not be adequate.

■ If possible, design your configuration with one master server per firewall domain. In addition, do not share robotic tape libraries between firewall domains.


■ As a rule, the number of clients (separate physical hosts) per master server is not a critical factor for NetBackup. The backup processing that clients perform has little or no impact on the NetBackup server. Exceptions do exist. For example, if all clients have database extensions, or all clients run ALL_LOCAL_DRIVES backups at the same time, server performance may be affected.

■ Plan your configuration so that it contains no single point of failure. Provide sufficient redundancy to ensure high availability of the backup process. More tape drives or media may reduce the number of media servers that are needed per master server.

■ Do not plan to run more than about 10,000 jobs per 12-hour period on a single master server.
See "Limits to scalability" on page 42.

■ Consider limiting the number of media servers that are handled by a master to the lower end of the estimates in the following table.
A well-managed NetBackup environment may be able to handle more media servers than the numbers that are listed in this table. Your backup operations, however, may be more efficient and manageable with fewer, larger media servers. The variation in the number of media servers per master server for each scenario in the table depends on the following: the number of jobs submitted, multiplexing, multiple data streams, and network capacity.

Table 1-5 provides estimates of processor and memory requirements for a master server.

Table 1-5 Number of media servers for a master server

Number of    Minimum   Maximum number    Maximum number of media
processors   RAM       of jobs per day   servers per master server

1            2 GB      500               5
2            4 GB      2,000             20
4            8 GB      5,000             50
8            16 GB     10,000            100

These estimates are based on the number of media servers and the number of jobs the master server must support. This table is for guidance only; it is based on test data and customer experiences. The amount of RAM and number of processors may need to be increased based on other site-specific factors.


In this table, a processor is defined as a state-of-the-art CPU. An example is a 3 GHz processor for an x86 system, or the equivalent on a RISC platform such as Sun SPARC.

Type of processor

For a NetBackup master server, Symantec recommends using multiple discrete processors instead of a single multi-core processor. The individual cores in a multi-core processor lack the resources to support some of the CPU-intensive processes that run on a master server. Two physical dual-core processors (for a total of four processors) are better than a single quad-core processor.

Definition of a job

For the purposes of Table 1-5, a job is defined as an individual backup stream.

Database and application backup policies, and policies that use the ALL_LOCAL_DRIVES directive, usually launch multiple backup streams (thus multiple jobs) at the same time.

Limits to scalability

Note the following limits to scalability, regardless of the size of the master server:

■ The maximum rate at which jobs can be launched is about 1 job every 4 seconds. Therefore, a domain cannot run much more than 10,000 jobs in a 12-hour backup window.

■ A single domain cannot support significantly more than 100 media servers and SAN media servers.

Design your media server

You can use a media server to back up other systems and reduce or balance the load on your master server. With NetBackup, disk storage control and the robotic control of a library can be on either the master server or the media server.

The information in the following two tables is a rough estimate only, intended as a guideline for planning.

Table 1-6 CPUs per master server or media server component

Component          How many and what kind of component            Number of CPUs per component

Network cards      2-3 100BaseT cards                             1
                   1 ATM card                                     1
                   1-2 gigabit Ethernet cards with coprocessor    1
Tape drives        2 LTO gen 3 drives                             1
                   2-3 SDLT 600 drives                            1
                   2-3 LTO gen 2 drives                           1
OS and NetBackup                                                  1

Table 1-7 Memory per master server or media server component

Component                 Type of component    Memory per component

Network cards                                  16 MB
Tape drives               LTO gen 3 drive      256 MB
                          SDLT 600 drive       128 MB
                          LTO gen 2 drive      128 MB
OS and NetBackup                               1 GB
OS, NetBackup, and NOM                         1 or more GB
NetBackup multiplexing                         8 MB * (# streams) * (# drives)

Estimate the number of media servers needed

The following are guidelines for estimating the number of media servers needed:

■ I/O performance is generally more important than CPU performance.

■ Consider CPU, I/O, and memory expandability when choosing a server.

■ Consider how many CPUs are needed.
See Table 1-6 on page 42.
Here are some general guidelines:
Experiments (with Sun Microsystems) indicate that a useful, conservative estimate is the following: 5 MHz of CPU capacity per 1 MB per second of data movement in and out of the NetBackup media server. Keep in mind that the operating system and other applications also use the CPU. This estimate is for the power available to NetBackup itself.
Example:

A system that backs up clients to tape at 10 MB per second needs 100 MHz of CPU power, as follows:

■ 50 MHz to move data from the network to the NetBackup server.

■ 50 MHz to move data from the NetBackup server to tape.
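This rule of thumb can be expressed as a one-line estimate (a sketch; the function name is illustrative):

```python
def media_server_cpu_mhz(throughput_mb_per_sec):
    """5 MHz of CPU per MB/s, counted twice: once to move data from
    the network into the media server, and once to move it from the
    media server out to tape."""
    return 2 * 5 * throughput_mb_per_sec

print(media_server_cpu_mhz(10))  # 100, matching the example above
```

Remember that this covers only NetBackup's share of the CPU; the OS and other applications need capacity on top of it.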

■ Consider how much memory is needed.
See Table 1-7 on page 43.
At least 512 megabytes of RAM is recommended if the server is running a Java GUI. NetBackup uses shared memory for local backups. NetBackup buffer usage affects how much memory is needed. Keep in mind that non-NetBackup processes need memory in addition to what NetBackup needs.
A media server moves data from disk (on relevant clients) to storage (usually disk or tape). The server must be carefully sized to maximize throughput. Maximum throughput is attained when the server keeps its tape devices streaming.
More information is available on tape streaming.
See "NetBackup storage device performance" on page 156.

Media server factors to consider for sizing include the following:

■ Disk storage access time

■ Number of SharedDisk storage servers

■ Adapter (for example, SCSI) speed

■ Bus (for example, PCI) speed

■ Tape or disk device speed

■ Network interface (for example, 100BaseT) speed

■ Amount of system RAM

■ Other applications, if the host is non-dedicated

Your platform must be able to drive all network interfaces and keep all tape devices streaming.


Design your NOM server

Before setting up a NetBackup Operations Manager (NOM) server, review the recommendations and requirements that are listed in the NetBackup Operations Manager Guide.

Some of the considerations are the following:

■ The NOM server should be configured as a fixed host with a static IP address.

■ Symantec recommends that you not install the NOM server software on the same server as NetBackup master or media server software. Installing NOM on a master server may affect security and performance.

Sizing considerations

The size of your NOM server depends largely on the number of NetBackup objects that NOM manages.

The NetBackup objects that determine the NOM server size are the following:

■ Number of master servers to manage

■ Number of policies

■ Number of jobs that are run per day

■ Number of media

Based on these factors, the following NOM server components should be sized accordingly:

■ Disk space (for installed NOM binary + NOM database)

■ Type and number of CPUs

■ RAM

The next section describes the NOM database and how it affects disk space requirements, followed by sizing guidelines for NOM.

NOM database

The Sybase database that NOM uses is similar to the database that NetBackup uses. The database is installed as part of the NOM server installation.

Note the following:

■ After you configure NOM, NOM disk space depends on the volume of data initially loaded on the NOM server from the managed NetBackup servers.

The initial data load on the NOM server is in turn dependent on the following data present in the managed master servers:


■ Number of policy data records

■ Number of job data records

■ Number of media data records

■ The rate of NOM database growth depends on the quantity of managed data. This data can be policy data, job data, or media data.

For optimal performance and scalability, Symantec recommends that you manageapproximately a month of historical data.

Information is available on how to adjust database values for better NOM performance.

See “NetBackup Operations Manager (NOM)” on page 174.

Sizing guidelines

The following guidelines are presented in groups according to the number of objects that your NOM server manages. The guidelines are intended for planning purposes, and do not represent fixed recommendations or restrictions.

The guidelines assume that your NOM server is a stand-alone host (the host does not act as a NetBackup master server).

Note: Installation of NOM server software on the NetBackup master server is not recommended.

Sizing guidelines for NOM

Consult the following tables to choose the installation category that matches your site. Each category is based on several factors as listed in the table. Based on your NetBackup installation category, you can determine the minimum hardware requirements for installing NOM.

Table 1-8 applies to NOM 6.5.

Table 1-8 NetBackup installation categories for NOM 6.5

Category  Maximum master  Maximum jobs  Maximum jobs in  Maximum   Maximum  Maximum
          servers         per day       the database     policies  alerts   media

A         3               1,000         100,000          5,000     1,000    10,000
B         10              10,000        500,000          50,000    10,000   300,000
C         40              75,000        1,000,000        50,000    200,000  300,000

Table 1-9 applies to NOM 6.5.2.

Table 1-9 NetBackup installation categories for NOM 6.5.2

Category  Maximum master  Maximum jobs  Maximum jobs in  Maximum   Maximum  Maximum
          servers         per day       the database     policies  alerts   media

A         3               1,000         100,000          5,000     1,000    10,000
B         10              10,000        500,000          50,000    10,000   300,000
C         60              100,000       1,000,000        50,000    200,000  300,000

Table 1-10 applies to NOM 6.5.4.

Table 1-10 NetBackup installation categories for NOM 6.5.4

Category  Maximum master  Maximum jobs  Maximum jobs in  Maximum   Maximum  Maximum
          servers         per day       the database     policies  alerts   media

A         3               1,000         100,000          5,000     1,000    10,000
B         10              10,000        500,000          50,000    10,000   300,000
C         100             150,000       5,000,000        50,000    500,000  300,000

Note: If your NetBackup installation is larger than the installations listed here, the behavior of NOM is unpredictable. In this case, Symantec recommends using multiple NOM servers.

Table 1-11 lists the minimum hardware requirements and recommended settings for installing NOM based on your NetBackup installation category (A, B, or C).


Table 1-11 Minimum hardware requirements and recommended settings for NOM

Category  OS                            Number    RAM    Average database  Recommended Sybase  Recommended heap size
                                        of CPUs          growth per day    cache size          (Web server and NOM server)

A         Windows 2000/2003             1         2 GB   3 MB              512 MB              512 MB
A         Solaris 8/9/10                1         2 GB   3 MB              512 MB              512 MB
A         Red Hat Enterprise Linux 4/5  1         2 GB   3 MB              512 MB              512 MB
B         Windows 2000/2003             2         4 GB   30 MB             1 GB                1 GB
B         Solaris 8/9/10                2         4 GB   30 MB             1 GB                1 GB
B         Red Hat Enterprise Linux 4/5  2         2 GB   30 MB             1 GB                1 GB
C         Windows 2000/2003             2         8 GB   150 MB            2 GB                1.2-1.4 GB on a 32-bit platform; 2 GB on a 64-bit platform
C         Solaris 8/9/10                2         8 GB   150 MB            2 GB                2 GB
C         Red Hat Enterprise Linux 4/5  2         8 GB   150 MB            2 GB                2 GB

Note: For improved performance, Symantec recommends installing NOM in a 64-bit environment. NOM has been tested on 64-bit platforms (Solaris, Windows, RHEL) with NetBackup 6.5.4.

For example, if your NetBackup setup falls in installation category B (Windows environment), your NOM system must meet the following requirements:

■ Number of CPUs required: 2

■ RAM: 4 GB

For optimal performance, the average database growth rate on your NOM system should be 30 MB per day or lower. The recommended Sybase cache size is 1 GB. The recommended heap size for the NOM server and Web server is 1 GB.

Related topics:

See “Adjusting the NOM server heap size” on page 174.

See “Adjusting the NOM web server heap size” on page 175.

Symantec recommends that you adjust the Sybase cache size after installing NOM. After you install NOM, the database size can grow rapidly as you add more master servers.

See “Adjusting the Sybase cache size” on page 176.

Summary

Design a solution that can do a full backup and incremental backups of your largest system within your time window. The remainder of the backups can happen over successive days.

Eventually, your site may outgrow its initial backup solution. By following the guidelines in this chapter, you can add more capacity at a future date without having to redesign your strategy. With proper design, you can create a backup strategy that can grow with your environment.

The number and location of the backup devices are dependent on factors such as the following:

■ The amount of data on the target systems

■ The available backup and restore windows

■ The available network bandwidth

■ The speed of the backup devices

If one drive causes backup-window time conflicts, another can be added, providing an aggregate rate of two drives. The trade-off is that the second drive imposes extra CPU, memory, and I/O loads on the media server.

If backups do not complete in the allocated window, increase your backup window or decrease the frequency of your full and incremental backups.

Another approach is to reconfigure your site to speed up overall backup performance. Before you make any such change, you should understand what determines your current backup performance. List or diagram your site network and systems configuration. Note the maximum data transfer rates for all the components of your backup configuration and compare these against the required rate for your backup window. This approach helps you identify the slowest components and, consequently, the cause of your bottlenecks. Some likely areas for bottlenecks include the networks, tape drives, client OS load, and file system fragmentation.

Questionnaire for capacity planning

Use this questionnaire to describe the characteristics of your systems and how they are to be used. This data can help determine your NetBackup client configurations and backup requirements.

Table 1-12 contains the questionnaire.

Table 1-12 Backup questionnaire

■ System name: Any unique name to identify the computer. Host name or any unique name for each system.

■ Vendor: The hardware vendor who made the system (for example, Sun, HP, IBM, generic PC).

■ Model: For example: Sun T5220, HP DL580, Dell PowerEdge 6850.

■ OS version: For example: Solaris 10, HP-UX 11i, Windows 2003 DataCenter.

■ Building / location: Identify the physical location by room, building, or campus.

■ Total storage: Total available internal and external storage capacity.

■ Used storage: Total used internal and external storage capacity. If the amount of data to be backed up is substantially different from the amount of storage capacity used, note that fact.

■ Type of external array: For example: Hitachi, EMC, EMC CLARiiON, STK.

■ Network connection: For example: 100MB, gigabit, T1. You should know whether the LAN is a switched network.

■ Database (DB): For example: Oracle 8.1.6, SQL Server 7.

■ Hot backup required?: A hot backup requires the optional database agent if backing up a database.

■ Key application: For example: Exchange server, accounting system, software developer's code repository, NetBackup critical policies.

■ Backup window: For example: incremental backups run M-F from 11 PM to 6 AM; full backups run all day Sunday. This information helps locate potential bottlenecks and how to configure a solution.

■ Retention policy: For example: incremental backups for 2 weeks, full backups for 13 weeks. This information helps determine how to size the number of slots that are needed in a library.

■ Existing backup media: Type of media currently used for backups.

■ Comments?: Any special situations to be aware of? Any significant patches on the operating system? Will the backups be over a WAN? Do the backups need to go through a firewall?

Master server configuration guidelines

This chapter includes the following topics:

■ Managing NetBackup job scheduling

■ Miscellaneous considerations

■ NetBackup catalog strategies

■ Merging, splitting, or moving servers

■ Guidelines for policies

■ Managing logs

Managing NetBackup job scheduling

This section discusses NetBackup job scheduling.

Delays in starting jobs

The NetBackup Policy Execution Manager (nbpem) may not begin a backup at exactly the time a backup policy's schedule window opens. This delay can happen when you define a schedule or modify an existing schedule with a window start time close to the current time.

For instance, suppose you create a schedule at 5:50 PM, and specify that backups should start at 6:00 PM. You complete the policy definition at 5:55 PM. At 6:00 PM, you expect to see a backup job for the policy start, but it does not. Instead, the job takes several more minutes to start.


The explanation is the following: NetBackup receives and queues policy change events as they happen, but processes them periodically as configured in the Policy Update Interval setting. (The Policy Update Interval is set under Host Properties > Master Server > Properties > Global Settings. The default is 10 minutes.) The backup does not start until the first time NetBackup processes policy changes after the policy definition is completed at 5:55 PM. NetBackup may not process the changes until 6:05 PM. For each policy change, NetBackup determines what needs to be done and updates its work list accordingly.

Delays in running queued jobs

If jobs are queued and only one job runs at a time, use the State Details column in the Activity Monitor to see the reason for the job being queued. The State Details column was introduced in NetBackup 6.5.2.

If jobs are queued and only one job runs at a time, set one or more of the following to allow jobs to run simultaneously:

■ Host Properties > Master Server > Properties > Global Attributes > Maximum jobs per client (should be greater than 1).

■ Policy attribute Limit jobs per policy (should be greater than 1).

■ Policy schedule attribute Media multiplexing (should be greater than 1).

■ Check storage unit properties.

Check the following:

■ Is the storage unit enabled to use multiple drives (Maximum concurrent write drives)? If you want to increase this value, remember to set it to fewer than the number of drives available to this storage unit. Otherwise, restores and other non-backup activities cannot run while backups to the storage unit are running.

■ Is the storage unit enabled for multiplexing (Maximum streams per drive)? You can write a maximum of 32 jobs to one tape at the same time.

For example, you can run multiple jobs to a single storage unit if you have multiple drives (Maximum concurrent write drives set to greater than 1). Or, you can set up multiplexing to a single drive if Maximum streams per drive is set to greater than 1. If both Maximum concurrent write drives and Maximum streams per drive are greater than 1, you can run multiple streams to multiple drives, assuming Maximum jobs per client is set high enough.
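As a simplified sketch (assuming the settings below are the only limits in effect, which real scheduling is not), the number of jobs a tape storage unit can accept at once is roughly the product of its write drives and streams per drive:

```python
# Estimate how many jobs can run at once against a tape storage unit.
# Simplified sketch; actual NetBackup scheduling also considers client,
# policy, and resource limits not modeled here.

def max_concurrent_jobs(max_write_drives, max_streams_per_drive,
                        limit_jobs_per_policy=None):
    """Upper bound on simultaneous jobs to one storage unit."""
    jobs = max_write_drives * max_streams_per_drive
    if limit_jobs_per_policy is not None:
        jobs = min(jobs, limit_jobs_per_policy)
    # NetBackup multiplexes at most 32 jobs to a single tape.
    return min(jobs, max_write_drives * 32)

# Example: 2 drives, multiplexing 4 streams per drive -> up to 8 jobs.
print(max_concurrent_jobs(2, 4))
```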


Delays in shared disk jobs becoming active

Shared disk jobs are generally slower to become active than tape jobs, because shared disk jobs must wait for a disk volume to be mounted. Tape jobs become active as soon as the resources are allocated.

Tape jobs

The NetBackup Job Manager (nbjm) requests resources from the NetBackup Resource Broker (nbrb) for the job. nbrb allocates the resources and gives the resources to nbjm. nbjm makes the job active. nbjm starts bpbrm, which in turn starts bptm; bptm mounts the tape medium in the drive.

Shared disks

The NetBackup Job Manager (nbjm) requests job resources from nbrb. nbrb allocates the resources and initiates the shared disk mount. When the mount completes, nbrb gives the resources to nbjm. nbjm makes the job active.

Job delays caused by unavailable media

The job fails if no other storage units are usable, in any of the following circumstances:

■ If the media in a storage unit are not configured or are unusable (such as expired)

■ The maximum mounts setting was exceeded

■ The wrong pool was selected

If media are unavailable, consider the following:

■ Add new media

■ Or change the media configuration to make media available (such as changing the volume pool or the maximum mounts).

If the media in a storage unit are usable but are busy, the job is queued. In the NetBackup Activity Monitor, the "State Details" column indicates why the job is queued, such as "media are in use." (The same information is available in the Job Details display. Right-click on the job and select "Details.") If the media are in use, the media eventually stop being used and the job runs.

Delays after removing a media server

A job may be queued by the NetBackup Job Manager (nbjm) if the media server is not available. The job is not queued because of communication timeouts, but because EMM knows the media server is down and the NetBackup Resource Broker (nbrb) queues the request to be retried later.

If no other media servers are available, EMM queues the job under the following circumstances:

■ If a media server is configured in EMM but the server has been physically removed, turned off, or disconnected from the network

■ If the network is down

The Activity Monitor should display the reason for the job queuing, such as "media server is offline." Once the media server is online again in EMM, the job starts. In the meantime, if other media servers are available, the job runs on another media server.

If a media server is not configured in EMM, regardless of the physical state of the media server, EMM does not select that media server for use. If no other media servers are available, the job fails.

To permanently remove a media server from the system, consult the "Decommissioning a media server" section in the NetBackup Administrator's Guide, Volume II.

Limiting factors for job scheduling

For every manual backup job, one bprd process may run if the -w (wait for status) option was specified on the command. When many requests are submitted to NetBackup simultaneously, NetBackup increases its use of memory. The number of requests may eventually affect the overall performance of the system. This type of performance degradation is associated with the way a given operating system handles memory requests. It may affect the functioning of all applications that currently run on the system, not only NetBackup.

Note: In the UNIX (Java) Administration Console, the Activity Monitor may not update if there are thousands of jobs to view. In this case, you may need to change the memory setting by means of the NetBackup Java command jnbSA with the -mx option.

See the "INITIAL_MEMORY, MAX_MEMORY" subsection in the NetBackup Administrator's Guide for UNIX and Linux, Volume I. Note that this situation does not affect NetBackup's ability to continue running jobs.


Staggering the submission of jobs for better load distribution

When the backup window opens, Symantec recommends scheduling jobs to start in small groups periodically, rather than starting all jobs at the same time. If the submission of jobs is staggered, the NetBackup Resource Broker (nbrb) can allocate job resources more quickly.

You can use this formula to calculate the number of jobs to be submitted at one time:

number of jobs to be submitted at one time = (number of physical drives * MPX value * 1.5) + (number of shared disk volumes * 4)

This formula is a guideline. Further tuning of your system may be required.
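The guideline formula can be written directly as code; the drive, MPX, and volume counts in the example are illustrative.

```python
def jobs_per_batch(physical_drives, mpx_value, shared_disk_volumes):
    """Guideline for how many jobs to submit at one time:
    (drives * MPX * 1.5) + (shared disk volumes * 4)."""
    return int(physical_drives * mpx_value * 1.5 + shared_disk_volumes * 4)

# Example: 4 physical drives with MPX 2, plus 3 shared disk volumes.
print(jobs_per_batch(4, 2, 3))  # -> 24
```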

Adjusting the server's network connection options

When many jobs run simultaneously, the CPU utilization of the master server may become very high. To reduce utilization and improve performance, adjust the network connection options for the local computer.


To adjust the network connection options for the local computer

1 In the NetBackup Administration Console, use the Host Properties > Master Server > Master Server Properties > Firewall dialog box.

2 Or add the following bp.conf entry to the UNIX master server:

CONNECT_OPTIONS = localhost 1 0 2

For an explanation of the CONNECT_OPTIONS values, refer to the NetBackup Administrator's Guide for UNIX and Linux, Volume II.

The NetBackup Troubleshooting Guide also provides information on network connectivity issues.

Using NOM to monitor jobs

NetBackup Operations Manager (NOM) can be used to monitor the performance of NetBackup jobs. NOM can also manage and monitor dozens of NetBackup installations in multiple locations.

Some of NOM’s features are the following:

■ Web-based interface for efficient, remote administration across multiple NetBackup servers from a single, centralized console.


■ Policy-based alert notification, using predefined alert conditions to specify typical issues or thresholds within NetBackup.

■ Operational reporting, on issues such as backup performance, media utilization, and rates of job success.

■ Consolidated job and job policy views per server (or group of servers), for filtering and sorting job activity.

For more information on the capabilities of NOM, click Help from the title bar of the NOM console. Or see the NetBackup Operations Manager Guide.

Testing for disaster recovery

The following techniques may help in disaster recovery testing.

To prevent the expiration of empty media

1 Go to the following directory:

UNIX

cd /usr/openv/netbackup/bin

Windows

install_path\NetBackup\bin

2 Enter the following:

mkdir bpsched.d

cd bpsched.d

echo 0 > CHECK_EXPIRED_MEDIA_INTERVAL


To prevent the expiration of images

1 Go to the following directory:

UNIX

cd /usr/openv/netbackup/bin

Windows

cd install_path\NetBackup\bin

2 Enter the following:

UNIX

touch NOexpire

Windows

echo 0 > NOexpire

See “Remove the NOexpire file when it is not needed” on page 60.

To prevent backups from starting

1 Turn off bprd (NetBackup Request Manager), to suspend scheduling of new jobs by nbpem.

You can use the Activity Monitor in the NetBackup Administration Console.

2 Restart bprd to resume scheduling.

Remove the NOexpire file when it is not needed

After disaster recovery testing, it may be best to remove the "NOexpire" touch file. If you leave the "NOexpire" touch file in place, the NetBackup catalog continues to grow because images do not expire.

Miscellaneous considerations

Consider the following issues when planning for or troubleshooting NetBackup.

Selection of storage units

Many different NetBackup mechanisms write backup images to storage devices, such as backup policies, storage lifecycle policies, staging storage units, Vault duplication, and ad hoc (manual) duplication. When writing a backup image to storage, you can tell NetBackup how to select a storage unit or let NetBackup choose the storage unit.

The following sections discuss the pros and cons of specifying a particular storage unit versus allowing NetBackup to choose from a group. The more narrowly defined the storage unit designation is, the faster NetBackup can assign a storage unit, and the sooner the job starts.

"Any Available" storage destination

For most backup operations, the default is to let NetBackup choose the storage unit (a storage destination of "Any Available"). Any Available may work well in small configurations that include relatively few storage units and media servers.

However, Any Available is NOT recommended for the following:

■ Configurations with many storage units and media servers.

■ Configurations with more sophisticated disk technologies (such as SharedDisk, AdvancedDisk, PureDisk, OpenStorage). With these newer disk technologies, Any Available causes NetBackup to analyze all options to choose the best one available.

In general, if the configuration includes many storage units, many volumes within many disk pools, and many media servers, the deep analysis that Any Available requires can delay job initiation when many jobs (backup or duplication) are requested during busy periods of the day. Instead, specify a particular storage unit, or narrow NetBackup's search by means of storage unit groups (depending on how storage units and groups are defined).

For more details on Any Available, see the NetBackup Administrator's Guide, Volume I.

In addition, note the following about Any Available:

■ When Any Available is specified, NetBackup operates in prioritized mode, as described in the next section. In other words, NetBackup selects the first available storage unit in the order in which they were originally defined.

■ Do not specify "Any Available" for multiple copies (Inline Copy) from a backup or from any method of duplication. The methods of duplication include Vault, staging disk storage units, lifecycle policies, or manual duplication through the Administration Console or command line. Instead, specify a particular storage unit.


Storage unit groups

Storage unit groups contain a specific list of storage units for NetBackup to choose from. Only the storage units that are specified in the group are candidates for the job.

You can configure a storage unit group to choose a storage unit in any of the following ways:

■ Prioritized. Choose the first storage unit in the list that is not busy, down, or out of media.

■ Failover. Choose the first storage unit in the list that is not down or out of media.

■ Round robin. Choose the storage unit that is the least recently selected.

■ Media server load balancing. NetBackup avoids sending jobs to busy media servers. This option is not available for the storage unit groups that contain a BasicDisk storage unit.

You can use the New or Change Storage Unit Group dialog in the NetBackup Administration Console.

For these options, NetBackup gives preference to a storage unit that a local media server can access.

More information on these options is available.

See the NetBackup online Help for storage unit groups.

See the NetBackup Administrator’s Guide, Volume I.

In addition, note the following about storage unit groups: the more narrowly defined your storage units and storage unit groups are, the less time it takes NetBackup to select a resource to start a job.

In complex environments with large numbers of jobs required, the following are good choices:

■ Fewer storage units per storage unit group.

■ Fewer media servers per storage unit. In the storage unit definition, avoid using "Any Available" media server when drives are shared among multiple media servers.

■ Fewer disk volumes in a disk pool.

■ Fewer concurrent jobs.

Examples are the following:

■ Less multiplexing


■ Fewer tape drives in each storage unit

Disk staging

With disk staging, images can be created on disk initially, then copied later to another media type (as determined in the disk staging schedule). The media type for the final destination is typically tape, but can be disk. This two-stage process leverages the advantages of disk-based backups in the near term, while preserving the advantages of tape-based backups for the long term.

Note that disk staging can be used to increase backup speed. For more information, refer to the NetBackup Administrator's Guide, Volume I.

File system capacity

Ample file system space must exist for NetBackup to record its logging or catalog entries on each master server, media server, and client. If logging or catalog entries exhaust available file system space, NetBackup ceases to function. Symantec recommends that you be able to increase the size of the file system through volume management. The disk that contains the NetBackup master catalog should be protected with mirroring or RAID hardware or software technology.
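As a rough safeguard, free space on the file system that holds the catalog can be checked periodically. This is a sketch, not a NetBackup feature; the path and the 20% threshold below are assumptions for illustration.

```python
# Warn when the file system holding the NetBackup catalog runs low on
# space (illustrative sketch; path and threshold are assumptions).
import shutil

def catalog_space_ok(path, min_free_fraction=0.20):
    """Return True if at least min_free_fraction of the file system is free."""
    usage = shutil.disk_usage(path)
    return (usage.free / usage.total) >= min_free_fraction

# Example (hypothetical UNIX master server catalog location):
# if not catalog_space_ok("/usr/openv/netbackup/db"):
#     print("WARNING: catalog file system below 20% free")
```

Such a check could run from cron and alert an administrator before volume management action becomes urgent.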

NetBackup catalog strategies

The NetBackup catalog resides on the disk of the NetBackup master server.

The catalog consists of the following parts:

■ Image database: Contains information about what has been backed up. It is by far the largest part of the catalog.

■ NetBackup data in relational databases: Includes the media and volume data describing media usage and volume information that is used during the backups.

■ NetBackup configuration files: Policy, schedule, and other flat files that are used by NetBackup.

For more information on the catalog, refer to "Catalog Maintenance and Performance Optimization" in the NetBackup Administrator's Guide, Volume I.

The NetBackup catalogs on the master server tend to grow large over time and eventually fail to fit on a single tape.

Figure 2-1 shows the layout of the first few directory levels of the NetBackup catalogs on the master server.


Figure 2-1 Directory layout on the master server (UNIX)

/usr/openv/
    /var                 License key and authentication information
    /var/global          server.conf, databases.conf
    /db/data             Relational database files: NBDB.db, EMM_DATA.db,
                         EMM_INDEX.db, NBDB.log, BMRDB.db, BMRDB.log,
                         BMR_DATA.db, BMR_INDEX.db, vxdbms.conf
    /Netbackup/db        Configuration files and the image database:
                         /class, /class_template, /client, /config,
                         /error, /images (/client_1, /client_2), /jobs,
                         /media, /vault
    /Netbackup/vault

Catalog backup types

In addition to the existing cold catalog backups (which require that no jobs be running), NetBackup allows online "hot" catalog backups. Hot catalog backups can be performed while other jobs are running.

Note: For NetBackup 6.0 and later, Symantec recommends schedule-based, incremental hot catalog backups with periodic full backups.


Guidelines for managing the catalog

Consider the following:

■ NetBackup catalog pathnames (cold catalog backups). In the file list, use absolute pathnames for the locations of the NetBackup and Media Manager catalog paths. Include the server name in the path. Take this precaution in case the media server that performs the backup is changed.

■ Back up the catalog using an online, hot catalog backup. This type of catalog backup is for highly active NetBackup environments in which continual backup activity occurs. It is considered an online, hot method because it can be performed while regular backup activity takes place. This type of catalog backup is policy-based and can span more than one tape. It also allows for incremental backups, which can significantly reduce catalog backup times for large catalogs.

■ Store the catalog on a separate file system (UNIX systems only). The NetBackup catalog can grow quickly depending on backup frequency, retention periods, and the number of files being backed up. With the NetBackup catalog data on its own file system, catalog growth does not affect other disk resources, root file systems, or the operating system. Information is available on how to move the catalog (UNIX systems only). See "Catalog compression" on page 66.

■ Change the location of the NetBackup relational database files. The location of the NetBackup relational database files can be changed or split into multiple directories, for better performance. For example, by placing the transaction log file (NBDB.log) on a physically separate drive, you gain better protection against disk failure and increased efficiency in writing to the log file. Refer to the procedure in the "NetBackup relational database" appendix of the NetBackup Administrator's Guide, Volume I.

■ Delay to compress catalog. The default value for this parameter is 0, which means that NetBackup does not compress the catalog. As your catalog increases in size, you may want to use a value between 10 and 30 days for this parameter. When you restore old backups, NetBackup automatically uncompresses the files as needed, with minimal performance impact. See "Catalog compression" on page 66.

Catalog backup not finishing in the available window

If cold catalog backups do not finish in the backup window, or hot catalog backups run a long time, consider the following:


■ Use catalog archiving. Catalog archiving reduces the size of online catalog data by relocating the large catalog .f files to secondary storage. NetBackup administration continues to require regularly scheduled catalog backups, but without the large amount of online catalog data, the backups are faster.

■ Split the NetBackup domain into two or more smaller domains, each with its own master server and media servers. Off load some policies, clients, and backup images from the current master server to a master in one of the new domains. Make sure that each master has a window large enough to allow its catalog backup to finish. Because a media server can be connected to only one master server, additional media servers may be needed. For assistance in splitting the current NetBackup domain, contact Symantec Consulting.

■ If the domain includes NetBackup 5.x media servers (or has included any since the master server was upgraded to NetBackup 6.x), verify that the media servers still exist, and that they are online and reachable from the master server. Unreachable 5.x media servers can cause the image cleaning operation to time out repeatedly; these timeouts slow down the backup. Upgrade all media servers to NetBackup 6.x as soon as possible after upgrading the master server, to avoid such issues.

Catalog compression

When the NetBackup image catalog becomes too large for the available disk space, you can manage this situation in either of the following ways:

■ Compress the image catalog

■ Move the image catalog (UNIX systems only).

For details, refer to "Moving the image catalog" and "Compressing and uncompressing the image catalog" in the NetBackup Administrator's Guide, Volume I.

Note that NetBackup compresses images after each backup session, regardless of whether any backups were successful. The compression happens right before the execution of the session_notify script and the backup of the catalog. The actual backup session is extended until compression is complete.

Merging, splitting, or moving servers

The master server schedules and maintains backup information for a given set of systems. The Enterprise Media Manager (EMM) and its database maintain centralized device and media-related information for all servers that are part of the configuration. By default, the EMM server and the NetBackup Relational Database (NBDB) that contains the EMM data are located on the master server. A large and dynamic datacenter can expect to periodically reconfigure the number and organization of its backup servers.

Centralized management, reporting, and maintenance are the benefits of a centralized NetBackup environment. When a master server has been established, it is possible to merge its databases with another master server. Merging databases gives control over its server backups to the new master server.

If the backup load on a master server has grown and backups do not finish in the backup window, consider splitting the master server into two master servers.

You can merge or split NetBackup master servers or EMM servers. You can also convert a media server to a master server, or a master server to a media server. However, the procedures are complex and require a detailed knowledge of NetBackup database interactions. Merging or splitting NetBackup and EMM databases to another server is not recommended without a Symantec consultant. A consultant can determine the changes needed, based on your specific configuration and requirements.

Guidelines for policies

The following items may have performance implications.

Include and exclude lists

Consider the following:

■ Do not use excessive wild cards in file lists. When wildcards are used, NetBackup compares every file name against the wild cards, which decreases NetBackup performance. Instead of placing /tmp/* (UNIX) or C:\Temp\* (Windows) in an include or exclude list, use /tmp/ or C:\Temp.

■ Use exclude lists to exclude large, unneeded files. Reduce the size of your backups by using exclude lists for the files your installation does not need to preserve. For instance, you may decide to exclude temporary files. Use absolute paths for your exclude list entries, so that valuable files are not inadvertently excluded. Before adding files to the exclude list, confirm with the affected users that their files can be safely excluded. Should disaster (or user error) strike, not being able to recover files costs much more than backing up extra data.


When a policy specifies that all local drives be backed up (ALL_LOCAL_DRIVES), nbpem initiates a parent job (nbgenjob). nbgenjob connects to the client and runs bpmount -i to get a list of mount points. Then nbpem initiates a job with its own unique job identification number for each mount point. Next, the client bpbkar starts a stream for each job. Only then does NetBackup read the exclude list. When the entire job is excluded, bpbkar exits with status 0, stating that it sent 0 of 0 files to back up. The resulting image files are treated the same as the images from any other successful backup. The images expire in the normal fashion when the expiration date in the image header files specifies they are to expire.

Critical policies

For online hot catalog backups, identify the policies that are crucial to recovering your site in the event of a disaster. For more information on hot catalog backup and critical policies, refer to the NetBackup Administrator's Guide, Volume I.

Schedule frequency

Minimize how often you back up the files that have not changed, and minimize your consumption of bandwidth, media, and other resources. To do so, limit full backups to monthly or quarterly, followed by weekly cumulative incremental backups and daily incremental backups.

Managing logs

Consider the following issues related to logging.

Optimizing the performance of vxlogview

The vxlogview command is used for viewing logs created by unified logging (VxUL). The vxlogview command displays log messages faster when you use the -i option to specify a log file ID.

For example:

vxlogview -i nbrb -n 0

In this example, -i nbrb specifies the file ID for the NetBackup Resource Broker process (originator ID 118). vxlogview searches only the log files that were created by nbrb. That is, it searches only the files that contain 118 as the originator ID in the log file name. By limiting the log files that it has to search, vxlogview can return a result faster.


Note: The -i option works only for NetBackup processes that create unified log files with an originator ID in the file name. Such processes are referred to as services. The following is an example of such a log file name:

UNIX:

/usr/openv/logs/51216-118-2201360136-041029-0000000000.log

Windows:

install_path\logs\51216-118-2201360136-041029-0000000000.log

where -118- is the originator ID of nbrb.
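Because the originator ID is embedded as the second hyphen-separated field of the file name, it can be extracted directly. This parsing sketch assumes the standard VxUL file-name layout shown above.

```python
# Extract the originator ID from a unified (VxUL) log file name.
# Assumes the layout: productID-originatorID-hostID-date-rotation.log
import os

def originator_id(log_path):
    name = os.path.basename(log_path)   # strip the directory portion
    fields = name.split("-")
    return int(fields[1])               # second field is the originator ID

print(originator_id(
    "/usr/openv/logs/51216-118-2201360136-041029-0000000000.log"))  # -> 118
```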

As a rule, a NetBackup process is a service if it appears in the Activity Monitor of the NetBackup Administration Console, under the Daemons tab (UNIX) or Services tab (Windows).

Important note: If the process named on the vxlogview -i option is not a service (that is, it does not write log files with an originator ID in the file name), vxlogview returns "No log files found." In that case, use the -o option instead of -i. For example:

vxlogview -o mds -n 0

In this example, vxlogview searches all unified log files for messages logged by mds (the EMM Media and Device Selection component).

More vxlogview examples are available in the NetBackup Troubleshooting Guide.

Interpreting legacy error logs

This section describes the fields in the legacy log files that are written to the following locations:

UNIX

/usr/openv/netbackup/db/error

Windows

install_path\NetBackup\db\error

On UNIX, there is a link to the most current file in the error directory. The link is called daily_messages.log.

The information in these logs provides the basis for the NetBackup ALL LOG ENTRIES report. For more information on legacy logging and unified logging (VxUL), refer to the NetBackup Troubleshooting Guide.

Here is a sample message from an error log:


1021419793 1 2 4 nabob 0 0 0 *NULL* bpjobd TERMINATED bpjobd

Table 2-1 defines the various fields in this message (the fields are delimited by blanks).

Table 2-1    Meaning of daily_messages log fields

Field  Definition                                   Value
1      Time this event occurred (ctime)             1021419793 (= number of seconds
                                                    since 1970)
2      Error database entry version                 1
3      Type of message                              2
4      Severity of error:                           4
         1: Unknown
         2: Debug
         4: Informational
         8: Warning
         16: Error
         32: Critical
5      Server on which error was reported           nabob
6      Job ID (included if pertinent to the         0
       log entry)
7      (optional entry)                             0
8      (optional entry)                             0
9      Client on which error occurred, if           *NULL*
       applicable. Otherwise *NULL*
10     Process which generated the error message    bpjobd
11     Text of error message                        TERMINATED bpjobd
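The fields of Table 2-1 can be pulled apart with standard tools. A sketch using the sample entry above (the awk field numbers follow the table; this is an illustration, not a NetBackup utility):

```shell
# Split a daily_messages entry into selected Table 2-1 fields.
# Fields are blank-delimited; the message text runs from field 11 to end of line.
entry="1021419793 1 2 4 nabob 0 0 0 *NULL* bpjobd TERMINATED bpjobd"
summary=$(echo "$entry" | awk '{
  msg = $11
  for (i = 12; i <= NF; i++) msg = msg " " $i
  printf "server=%s severity=%s process=%s message=%s", $5, $4, $10, msg
}')
echo "$summary"
```

For the sample line, this prints: server=nabob severity=4 process=bpjobd message=TERMINATED bpjobd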

Table 2-2 lists the values for the message type, which is the third field in the log message.


Table 2-2    Message types

Type value  Definition of this message type
1           Unknown
2           General
4           Backup
8           Archive
16          Retrieve
32          Security
64          Backup status
128         Media device
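The type values in Table 2-2 can be decoded with a small lookup. A sketch (the shell function name is illustrative):

```shell
# Map the numeric message type (field 3 of a log entry) to its Table 2-2 name.
type_name() {
  case "$1" in
    1)   echo "Unknown" ;;
    2)   echo "General" ;;
    4)   echo "Backup" ;;
    8)   echo "Archive" ;;
    16)  echo "Retrieve" ;;
    32)  echo "Security" ;;
    64)  echo "Backup status" ;;
    128) echo "Media device" ;;
    *)   echo "unrecognized" ;;
  esac
}
type_name 2
```

The sample entry earlier in this section has type 2, so this prints "General".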


Chapter 3. Media server configuration guidelines

This chapter includes the following topics:

■ Network and SCSI/FC bus bandwidth

■ How to change the threshold for media errors

■ Reloading the st driver without restarting Solaris

■ Media Manager drive selection

Network and SCSI/FC bus bandwidth

Configure the number of tape drives that the Fibre Channel connection can support. Keep in mind the amount of data that is pushed to the media server from the clients. Tape drive wear and tear are much reduced and efficiency is increased if the data stream matches the tape drive capacity and is sustained.

Note: Make sure that both your inbound network connection and your SCSI/FC bus have enough bandwidth to feed all of your tape drives.

Example:

iSCSI (360 GB/hour)

Two LTO gen 3 drives, each rated at approximately 300 GB/hour (2:1 compression)

In this example, the two drives together can consume up to 600 GB/hour, more than the 360 GB/hour that the iSCSI bus provides, so only one tape drive streams. Add a second iSCSI bus, or move to a connection that is fast enough to efficiently feed data to the tape drives.


How to change the threshold for media errors

Some backup failures can occur because there is no media available. In that case, execute the following script and run the NetBackup Media List report to check the status of media:

UNIX

/usr/openv/netbackup/bin/goodies/available_media

Windows

install_path\NetBackup\bin\goodies\available_media

The NetBackup Media List report may show that some media is frozen and therefore cannot be used for backups.

I/O errors that recur can cause NetBackup to freeze media. The NetBackup Troubleshooting Guide describes how to deal with this issue; for example, see under NetBackup error code 96. You can also configure the NetBackup error threshold value, as described in this section.

Each time a read, write, or position error occurs, NetBackup records the time, media ID, type of error, and drive index in the EMM database. Then NetBackup scans to see whether that media has had "m" of the same errors within the past "n" hours. The variable "m" is a tunable parameter known as media_error_threshold. The default value of media_error_threshold is 2 errors. The variable "n" is known as time_window. The default value of time_window is 12 hours.

If a tape volume has more than media_error_threshold errors, NetBackup takes the appropriate action.

If the volume has not been previously assigned for backups, NetBackup does the following:

■ Sets the volume status to FROZEN

■ Selects a different volume

■ Logs an error

If the volume is in the NetBackup media catalog and was previously selected for backups, NetBackup does the following:

■ Sets the volume to SUSPENDED

■ Aborts the current backup

■ Logs an error


Adjusting media_error_threshold

You can adjust the NetBackup media error threshold as follows.

To adjust the NetBackup media error thresholds

◆ Use the nbemmcmd command on the media server:

UNIX

/usr/openv/netbackup/bin/admincmd/nbemmcmd -changesetting

-time_window unsigned integer -machinename string

-media_error_threshold unsigned integer -drive_error_threshold

unsigned integer

Windows

install_path\NetBackup\bin\admincmd\nbemmcmd.exe -changesetting

-time_window unsigned integer -machinename string

-media_error_threshold unsigned integer -drive_error_threshold

unsigned integer

For example, if the -drive_error_threshold is set to the default value of 2, the drive is downed after 3 errors in 12 hours. If the -drive_error_threshold is set to 6, it takes 7 errors in the same 12-hour period before the drive is downed.

NetBackup freezes a tape volume or downs a drive for which these values are exceeded. For more detail on the nbemmcmd command, refer to the man page or to the NetBackup Commands Guide.

About tape I/O error handling

This section does not concern the number of times NetBackup retries a backup or restore that fails. That behavior is controlled by the global configuration parameter "Backup Tries" for backups and the bp.conf entry RESTORE_RETRIES for restores. The algorithm described here determines whether I/O errors on tape should cause media to be frozen or drives to be downed.

When a read, write, or position error occurs on tape, the error returned by the operating system does not identify whether the tape or the drive caused the error. To prevent the failure of all backups in a given timeframe, bptm tries to identify a bad tape volume or drive based on past history.

To do so, bptm uses the following logic:

■ Each time an I/O error occurs on a read, write, or position, bptm logs the error in the following file.

UNIX


/usr/openv/netbackup/db/media/errors

Windows

install_path\NetBackup\db\media\errors

The error message includes the time of the error, media ID, drive index, and type of error. Examples of the entries in this file are the following:

07/21/96 04:15:17 A00167 4 WRITE_ERROR

07/26/96 12:37:47 A00168 4 READ_ERROR

■ Each time an entry is made, the past entries are scanned. The scan determines whether the same media ID or drive has had this type of error in the past "n" hours. "n" is known as the time_window. The default time window is 12 hours. During the history search for the time_window entries, EMM notes the past errors that match the media ID, the drive, or both. The purpose is to determine the cause of the error. For example: if a media ID gets write errors on more than one drive, the tape volume may be bad and NetBackup freezes the volume. If more than one media ID gets a particular error on the same drive, the drive goes to a "down" state. If only past errors are found on the same drive with the same media ID, EMM assumes that the volume is bad and freezes it.

■ The freeze or down operation is not performed on the first error. Note two other parameters: media_error_threshold and drive_error_threshold. For both of these parameters the default is 2. For a freeze or down to happen, more than the threshold number of errors must occur. By default, at least three errors must occur in the time window for the same drive or media ID. If either media_error_threshold or drive_error_threshold is 0, a freeze or down occurs the first time an I/O error occurs. media_error_threshold is looked at first, so if both values are 0, a freeze overrides a down. Symantec does not recommend that these values be set to 0. A change to the default values is not recommended without good reason. One obvious change would be to set very large threshold values, which would disable the mechanism so that a tape is never frozen or a drive never downed. Freezing and downing are primarily intended to benefit backups. If read errors occur on a restore, freezing the media has little effect: NetBackup still accesses the tape to perform the restore. In the restore case, downing a bad drive may help.
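The counting step of this logic can be sketched with awk. The entries below follow the errors-file format shown earlier; the sketch is a simplification that ignores the time window and the drive-versus-media disambiguation, and uses the documented default threshold of 2:

```shell
# Count errors per media ID (field 3) and per drive index (field 4), and flag
# anything that exceeds the threshold (a freeze or down needs MORE than
# threshold errors, i.e. at least 3 with the default of 2).
out=$(awk -v thresh=2 '
  { media[$3]++; drive[$4]++ }
  END {
    for (m in media) if (media[m] > thresh) print "freeze candidate:", m
    for (d in drive) if (drive[d] > thresh) print "down candidate: drive", d
  }' <<'EOF'
07/21/96 04:15:17 A00167 4 WRITE_ERROR
07/21/96 05:10:02 A00167 4 WRITE_ERROR
07/21/96 06:22:41 A00167 5 WRITE_ERROR
07/26/96 12:37:47 A00168 4 READ_ERROR
EOF
)
echo "$out"
```

Here A00167 gets write errors on two drives, so per the rules above EMM would freeze the volume; the simplified count also flags drive 4 only because it does not attribute those errors to the one bad volume.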


Reloading the st driver without restarting Solaris

The devfsadmd daemon enhances device management in Solaris. This daemon can dynamically reconfigure devices during the boot process and in response to kernel event notification.

The devfsadm command, located in /usr/sbin, is the command form of devfsadmd. devfsadm replaces drvconfig (for management of the physical device tree /devices) and devlinks (for management of logical devices in /dev). devfsadm also replaces the commands for specific device class types, such as /usr/sbin/tapes.

Without restarting the server, you can recreate tape devices for NetBackup after changing the /kernel/drv/st.conf file. Do the following.

To reload the st driver without restarting

1 Turn off the NetBackup and Media Manager daemons.

2 Obtain the module ID for the st driver in kernel:

/usr/sbin/modinfo | grep SCSI

The module ID is the first field in the line that corresponds to the SCSI tape driver.

3 Unload the st driver from the kernel:

/usr/sbin/modunload -i "module id"

4 Run one (not all) of the following commands:

/usr/sbin/devfsadm -i st

/usr/sbin/devfsadm -c tape

/usr/sbin/devfsadm -C -c tape

(Use the last command to enforce cleanup if dangling logical links are present in /dev.)

The devfsadm command recreates the device nodes in /devices and the device links in /dev for tape devices.

5 Reload the st driver:

/usr/sbin/modload st

6 Restart the NetBackup and Media Manager daemons.
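The module-ID extraction in step 2 can be scripted. A sketch (the modinfo output line here is illustrative, not from a real system):

```shell
# The module ID is the first field of the modinfo line for the st driver.
# On a live system: modinfo_line=$(/usr/sbin/modinfo | grep SCSI)
modinfo_line=" 96 60d9dfd8 13ba2 141 1 st (SCSI tape Driver)"
module_id=$(echo "$modinfo_line" | awk '{print $1}')
echo "$module_id"
# Step 3 then becomes: /usr/sbin/modunload -i "$module_id"
```

This avoids retyping the module ID by hand between steps 2 and 3.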


Media Manager drive selection

When EMM determines which storage unit to use, it attempts to select a drive that matches the storage unit selection criteria. The criteria, for example, may be media server, robot number, robot type, or density. EMM prefers loaded drives over unloaded drives (a loaded drive removes the overhead of loading media in a drive). If no loaded drives are available, EMM attempts to select the best usable drive that is suited for the job. In general, EMM prefers non-shared drives over shared drives, and it attempts to select the least recently used drive.


Chapter 4. Media configuration guidelines

This chapter includes the following topics:

■ Dedicated or shared backup environment

■ Suggestions for pooling

■ Disk versus tape

Dedicated or shared backup environment

Your backup environment can be dedicated or shared. Dedicated SANs are secure but expensive. Shared environments cost less, but require more work to make them secure. A SAN installation with a database may require the performance of a RAID 1 array. An installation backing up a file structure may satisfy its needs with RAID 5 or NAS.

Suggestions for pooling

The following are useful conventions for media pools (formerly known as volume pools):

■ Configure a scratch pool for management of scratch tapes. If a scratch pool exists, EMM can move volumes from that pool to other pools that do not have volumes available.

■ Use the available_media script in the goodies directory. You can put the available_media report into a script. The script redirects the report output to a file and emails the file to the administrators daily or weekly. The script helps track the tapes that are full, frozen, suspended, and so on. By means of a script, you can also filter the output of the available_media report to generate custom reports. To monitor media, you can also use the NetBackup Operations Manager (NOM). For instance, NOM can be configured to issue an alert if fewer than X media are available, or if more than X% of the media are frozen or suspended.
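A minimal sketch of such a custom report follows. The report layout below is invented for illustration; adjust the awk column to match the actual available_media output at your site:

```shell
# Summarize an available_media-style report: count frozen and suspended media,
# assuming the media status appears in the last column of each line.
summarize_media_report() {
  awk '$NF == "FROZEN"    { f++ }
       $NF == "SUSPENDED" { s++ }
       END { printf "frozen=%d suspended=%d\n", f, s }' "$1"
}

# Hypothetical report content:
cat > /tmp/media_report.txt <<'EOF'
A00167 HCART NetBackup FROZEN
A00168 HCART NetBackup ACTIVE
A00169 HCART NetBackup SUSPENDED
EOF

summary=$(summarize_media_report /tmp/media_report.txt)
echo "$summary"
```

The summary (or the full report file) can then be mailed to the administrators on a daily cron schedule, for example with mailx.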

■ Use the none pool for cleaning tapes.

■ Do not create more pools than you need. In most cases, you need only 6 to 8 pools. The pools include a global scratch pool, catalog backup pool, and the default pools that are created by the installation. The existence of too many pools causes the library capacity to become fragmented across the pools. Consequently, the library becomes filled with many tapes that are partially full.

Disk versus tape

Disk is now a common backup medium. Backup data on disk generally provides faster restores.

Tuning disk-based storage for performance is similar to tuning tape-based storage. The optimal buffer settings for a site can vary according to its configuration. It takes thorough testing to determine these settings.

Disk-based storage can be useful if you have a lot of incremental backups and the percentage of data change is small. If the volume of data in incremental copies is insufficient to ensure efficient writing to tape, consider disk storage. After writing the data to disk, you can use staging or storage lifecycle policies to copy batches of the images to tape. This arrangement can produce faster backups and prevent wear and tear on your tape drives.

Consider the following factors when backing up a dataset to disk or tape:

■ Short or long retention period: Disk is well suited for short retention periods; tape is better suited for longer retention periods.

■ Intermediate (staging) or long-term storage: Disk is suited for staging; tape for long-term storage.

■ Incremental or full backup: Disk is better suited to low-volume incremental backups.

■ Synthetic backups: Synthetic full backups are faster when incremental backups are stored on disk.

■ Data recovery time: Restore from disk is usually faster than from tape.


■ Multi-streamed restore: Must a restore of the data be multi-streamed from tape? If so, do not stage the multi-streamed backup to disk before writing it to tape.

■ Speed of the backups: If client backups are too slow to keep the tape in motion, send the backups to disk. Later, staging or lifecycle policies can move the backup images to tape.

■ Size of the backups: If the backups are too small to keep the tape in motion, send the backups to disk. Small backups may include incrementals and frequent backups of small database log files. Staging or lifecycle policies can later move the backup images to tape.

The following are some benefits of backing up to disk rather than tape:

■ No need to multiplex: Backups to disk do not need to be multiplexed. Multiplexing is important with tape because it creates a steady flow of data which keeps the tape in motion efficiently (tape streaming). However, multiplexing to tape slows down a subsequent restore. More information is available on tape streaming. See "NetBackup storage device performance" on page 156.

■ Instant access to data: Most tape drives have a "time to data" of close to two minutes. Time is required to move the tape from its slot, load it into the drive, and seek to an appropriate place on the tape. Disk has an effective time to data of 0 seconds. Restoring a large file system from 30 different tapes can add almost two hours to the restore: a two-minute delay per tape for load and seek, and a possible two-minute delay per tape for rewind and unload.

■ Fewer full backups: With tape-based systems, full backups must be done regularly because of the "time to data" issue. If full backups are not done regularly, a restore may require too many tapes from incremental backups. As a result, the time to restore increases, as does the chance that a single tape may cause the restore to fail.


Chapter 5. Best practices

This chapter includes the following topics:

■ Best practices: SAN Client

■ Best practices: Flexible Disk Option

■ Best practices: new tape drive technologies

■ Best practices: tape drive cleaning

■ Best practices: storing tape cartridges

■ Best practices: recoverability

■ Best practices: naming conventions

■ Best practices: duplication

Best practices: SAN Client

The NetBackup SAN Client feature is designed for a computer that has the following characteristics:

■ Contains critical data that requires high bandwidth for backups

■ Is not a candidate for converting to a media server

A SAN Client performs a fast backup over a Fibre Channel SAN to a media server. Media servers that have been enabled for SAN Client backups are called Fibre Transport Media Servers.


Note: The FlashBackup option should not be used with SAN Clients. Restores of FlashBackup backups use the LAN path rather than the SAN path from media server to the client. The LAN may not be fast enough to handle a full volume (raw partition) restore of a FlashBackup backup.

A Symantec technote contains information on SAN Client performance parameters and best practices:

http://entsupport.symantec.com/docs/293110

The main point of the technote is that effective use of a SAN Client depends on the proper configuration of the correct hardware. (Refer to the technote for in-depth details.)

In brief, the technote contains the following information:

■ A list of the HBAs that NetBackup 6.5 supports. Further information is available on supported hardware; see the NetBackup 6.5 hardware compatibility list: http://entsupport.symantec.com/docs/284599

■ Tips on how to deploy a Fibre Channel SAN between the SAN Client and the Fibre Transport Media Server.

■ A list of supported operating systems and HBAs for the Fibre Transport Media Server, and a list of tuning parameters that affect media server performance.

■ A similar list of supported operating systems and tuning parameters for the SAN Client.

■ A description of recommended architectures as a set of best practices

The document includes the following:

■ To make best use of high-speed disk storage, or to manage a pool of SAN Clients for maximum backup speed, dedicate Fibre Transport Media Servers for SAN Client backup only. Do not share media servers for LAN-based backups.

■ For a more cost-effective use of media servers, Fibre Transport Media Servers can be shared for both SAN Client and LAN-based backups.

■ Use multiple data streams for higher data transfer speed from the SAN Client to the Fibre Transport Media Server. Multiple data streams can be combined with multiple HBA ports on the Fibre Transport Media Server. The maximum number of simultaneous connections to the Fibre Transport Media Server is 16.


Further information is available on SAN Client.

See the NetBackup SAN Client and Fibre Transport Troubleshooting Guide:

http://entsupport.symantec.com/docs/288437

Best practices: Flexible Disk Option

With the Flexible Disk Option, NetBackup can fully use file systems native to the host operating system of the media server. NetBackup assumes full ownership of the file systems and also uses the storage server capabilities of the host operating system.

The Flexible Disk Option enables two disk type storage units: AdvancedDisk and SharedDisk. AdvancedDisk does not require any specialized hardware, while SharedDisk depends on the availability of SAN-attached storage. Both disk types are managed as disk pools within NetBackup.

See “Estimate your SharedDisk storage requirements” on page 35.

A Symantec tech note contains information on performance considerations and best practices for the Flexible Disk Option. The tech note is titled Best practices for disk layout with the NetBackup Flexible Disk option.

http://seer.entsupport.symantec.com/docs/305161.htm

Best practices: new tape drive technologies

Symantec provides a white paper on best practices for migrating your NetBackup installation to new tape technologies:

"Best Practices: Migrating to or Integrating New Tape Drive Technologies in Existing Libraries," available at www.support.veritas.com.

Recent tape drives offer noticeably higher capacity than the previous generation of tape drives targeted at the open-systems market. Administrators may want to take advantage of these higher-capacity, higher-performance tape drives, but are concerned about how to integrate them into an existing tape library. The white paper discusses different methods for doing so and the pros and cons of each.

Best practices: tape drive cleaning

You can use the following tape drive cleaning methods in a NetBackup installation:

■ Frequency-based cleaning


■ TapeAlert (on-demand cleaning)

■ Robotic cleaning

Refer to the NetBackup Administrator’s Guide, Volume II, for details on how to use these methods. Following are brief summaries of each method.

Frequency-based cleaning

NetBackup does frequency-based cleaning by tracking the number of hours a drive has been in use. When this time reaches a configurable parameter, NetBackup creates a job that mounts and exercises a cleaning tape. This practice cleans the drive in a preventive fashion. The advantage of this method is that typically there are no drives unavailable awaiting cleaning. No limitation exists as to the platform type or robot type. On the downside, cleaning is done more often than necessary. Frequency-based cleaning adds system wear and takes time that could be used to write to the drive. This method is also hard to tune. When new tapes are used, drive cleaning is needed less frequently; the need for cleaning increases as the tape inventory ages. This increases the amount of tuning administration that is needed and, consequently, the margin of error.

TapeAlert (reactive cleaning, or on-demand cleaning)

TapeAlert (on-demand cleaning) allows reactive cleaning for most drive types. TapeAlert allows a tape drive to notify EMM when it needs to be cleaned; EMM then performs the cleaning. You must have a cleaning tape configured in at least one library slot to use this feature. TapeAlert is the recommended cleaning solution if it can be implemented.

Certain drives at some firmware levels do not support this type of reactive cleaning. If reactive cleaning is not supported, frequency-based cleaning may be substituted. This solution is not vendor or platform specific. Symantec has not tested the specific firmware levels. The vendor should be able to confirm whether the TapeAlert feature is supported.

How TapeAlert works

To understand drive-cleaning TapeAlerts, it is important to understand the TapeAlert interface to a drive. The interface operates over the SCSI bus and is based on a Log Sense page, which contains 64 alert flags. The conditions that cause a flag to be set and cleared are device-specific and device-vendor specific.


The Log Sense page is configured by means of a Mode Select page. The Mode Sense/Select configuration of the TapeAlert interface is compatible with the SMART diagnostic standard for disk drives.

NetBackup reads the TapeAlert Log Sense page at the beginning and end of a write or read job. TapeAlert flags 20 to 25 are used for cleaning management, although some drive vendors’ implementations vary. NetBackup uses TapeAlert flag 20 (Clean Now) and TapeAlert flag 21 (Clean Periodic) to determine when to clean a drive.

When NetBackup selects a drive for a backup, bptm reviews the Log Sense page for status. If one of the clean flags is set, the drive is cleaned before the job starts. If a backup is in progress and a clean flag is set, the flag is not read until a tape is dismounted from the drive.

If a job spans media and, during the first tape, one of the clean flags is set, the following occurs: the cleaning light comes on and the drive is cleaned before the second piece of media is mounted in the drive.

The implication is that the present job concludes its ongoing write despite a TapeAlert Clean Now or Clean Periodic message. That is, the TapeAlert does not require the loss of what has been written to tape so far. This is true regardless of the number of NetBackup jobs that are involved in writing out the rest of the media.

If a large number of media become FROZEN as a result of having implemented TapeAlert, other media or tape drive issues are likely to exist.

Disabling TapeAlert

Use the following procedure.

To disable TapeAlert

◆ Create a touch file called NO_TAPEALERT, as follows:

UNIX:

/usr/openv/volmgr/database/NO_TAPEALERT

Windows:

install_path\volmgr\database\NO_TAPEALERT

Robotic cleaning

Robotic cleaning is not proactive, and is not subject to the limitations described in the previous section. Because it is reactive, unnecessary cleanings are eliminated and frequency tuning is not an issue. The drive can spend more time moving data, rather than in maintenance operations.

EMM does not support library-based cleaning for most robots, because robotic library and operating system vendors implement this type of cleaning in different ways.

Best practices: storing tape cartridges

Follow the tape manufacturer’s recommendations for storing tape cartridges.

Best practices: recoverability

Recovering from data loss involves both planning and technology to support your recovery objectives and time frames. The methods and procedures you adopt for your installation should be documented and tested regularly to ensure that your installation can recover from a disaster.

Table 5-1 describes how you can use NetBackup and other tools to recover from various mishaps or disasters.

Table 5-1    Methods and procedures for recoverability

Operational risk            Recovery possible?  Methods and procedures
File deleted before backup  No                  None
File deleted after backup   Yes                 Standard NetBackup restore procedures
Backup client failure       Yes                 Data recovery using NetBackup
Media failure               Yes                 Backup image duplication: create
                                                multiple backup copies
Media server failure        Yes                 Manual failover to alternate server
Master server failure       Yes                 Deploy the master server in a cluster,
                                                for automatic failover
Loss of backup database     Yes                 NetBackup database recovery
No NetBackup software       Yes                 If multiplexing was not used, recovery
                                                of media without NetBackup, using GNU tar
Complete site disaster      Yes                 Vaulting and off-site media storage


Additional material may be found in the following books:

The Resilient Enterprise, Recovering Information Services from Disasters, by Symantec and industry authors. Published by Symantec Software Corporation.

Blueprints for High Availability, Designing Resilient Distributed Systems, by Evan Marcus and Hal Stern. Published by John Wiley and Sons.

Implementing Backup and Recovery: The Readiness Guide for the Enterprise, by David B. Little and David A. Chapa. Published by Wiley Technology Publishing.

Suggestions for data recovery planning

You should have a well-documented and tested plan to recover from a logical error, an operator error, or a site disaster.

More information on disaster recovery is available.

See the NetBackup Troubleshooting Guide.

See the NetBackup Administrator's Guide, Volumes I & II.

For recovery planning, use the following preparatory measures:

■ Always use a scheduled hot catalog backup. Refer to "Catalog Recovery from an Online Backup" in the NetBackup Troubleshooting Guide.

■ Review the disaster recovery plan often. Review your site-specific recovery procedures and verify that they are accurate and up-to-date. Also, verify that the more complex systems, such as the NetBackup master and media servers, have procedures for rebuilding the computers with the latest software.

■ Perform test recoveries on a regular basis. Implement a plan to perform restores of various systems to alternate locations. This plan should include selecting random production backups and restoring the data to a non-production system. A checksum can then be performed on one or many of the restored files and compared to the actual production data. Be sure to include off-site storage as part of this testing. The end-user or application administrator can also help determine the integrity of the restored data.

■ Use and protect the NetBackup catalog

Do the following:

■ Back up the NetBackup catalog to two tapes. The catalog contains information vital for NetBackup recovery. Its loss can result in hours or days of recovery time using manual processes. The cost of a single tape is a small price to pay for the added insurance of rapid recovery in the event of an emergency.

■ Back up the catalog after each backup. If a hot catalog backup is used, an incremental catalog backup can be done after each backup session. Busy backup environments should also use a scheduled hot catalog backup, because their backup sessions end infrequently. In the event of a catastrophic failure, the recovery of images is slow if some images are not available. If a manual backup occurs shortly before the master server or the drive that contains the backed-up files crashes, the manual backup must be imported to recover the most recent version of the files.

■ Record the IDs of catalog backup tapes. Record the catalog tapes in the site run book or another public location to ensure rapid identification in the event of an emergency. If the catalog tapes are not identified, time may be lost by scanning every tape in a library to find them. The utility vmphyinv can be used to mount all tapes in a robotic library and identify the catalog tape(s). The vmphyinv utility identifies cold catalog tapes.

■ Designate label prefixes for catalog backups. Make it easy to identify the NetBackup catalog data in times of emergency. Label the catalog tapes with a unique prefix such as "DB" on the tape bar code, so your operators can find the catalog tapes without delay.

■ Place NetBackup catalogs in specific robot slots. Place a catalog backup tape in the first or last slot of a robot to identify the tape in an emergency. This practice also allows for easy tape movement if manual tape handling is necessary.

■ Put the NetBackup catalog on different online storage than the data being backed up. Your catalogs should not reside on the same disks as production data. If a disk drive loses production data, it can also lose any catalog data that resides on it, resulting in increased downtime.

■ Regularly confirm the integrity of the NetBackup catalog. On a regular basis, such as quarterly or after major operations or personnel changes, walk through the process of recovering a catalog from tape. This essential part of NetBackup administration can save hours in the event of a catastrophe.


Best practices: naming conventions

Use a consistent naming convention on all NetBackup master servers. Use lowercase for all names. Case-related issues can occur when the installation comprises UNIX and Windows master and media servers.

This section provides examples.

Policy names

One good naming convention for policies is platform_datatype_server(s).

Example 1: w2k_filesystems_trundle

This policy name designates a policy for a single Windows server doing file system backups.

Example 2: w2k_sql_servers

This policy name designates a policy for backing up a set of Windows 2000 SQL servers. Several servers may be backed up by this policy. Servers that are candidates for being included in a single policy are those running the same operating system and with the same backup requirements. Grouping servers within a single policy reduces the number of policies and eases the management of NetBackup.

Schedule names

Create a generic scheme for schedule names. One recommended set of schedule names is daily, weekly, and monthly. Another recommended set of names is incremental, cumulative, and full. This convention keeps the management of NetBackup to a minimum. It also helps with the implementation of Vault, if your site uses Vault.

Storage unit and storage group names

A good naming convention for storage units is to name the storage unit after the media server and the type of data being backed up.

Two examples: mercury_filesystems and mercury_databases

where "mercury" is the name of the media server, and "filesystems" and "databases" identify the type of data being backed up.

Best practices: duplication

Note the following about image duplication:

■ When duplicating an image, specify a volume pool that is different from the volume pool of the original image. (Use the Setup Duplication Variables dialog of the NetBackup Administration Console.)

■ If multiple duplication jobs are active simultaneously (such as duplication jobs for Vault), specify a different storage unit or volume pool for each job. Using different destinations may prevent media swapping among the duplication jobs.

Section 2: Performance tuning

■ Chapter 6. Measuring performance

■ Chapter 7. Tuning the NetBackup data transfer path

■ Chapter 8. Tuning other NetBackup components

■ Chapter 9. Tuning disk I/O performance

■ Chapter 10. OS-related tuning factors

■ Appendix A. Additional resources

Measuring performance

This chapter includes the following topics:

■ Measuring performance: overview

■ Controlling system variables for consistent testing conditions

■ Evaluating performance

■ Evaluating UNIX system components

■ Evaluating Windows system components

Measuring performance: overview

The final measure of NetBackup performance is the following:

■ The length of time that is required for backup operations to complete (usually known as the backup window)

■ Or the length of time that is required for a critical restore operation to complete

However, measuring and improving performance calls for metrics that are more reliable and reproducible than wall-clock time. This chapter discusses these metrics in more detail.

After establishing accurate metrics as described here, you can measure the current performance of NetBackup and your system components to compile a baseline performance benchmark. With a baseline, you can apply changes in a controlled way. By measuring performance after each change, you can accurately measure the effect of each change on NetBackup performance.

Controlling system variables for consistent testing conditions

For reliable performance evaluation, eliminate as many unpredictable variables as possible to create a consistent backup environment. Only a consistent environment can produce reliable and reproducible performance measurements. This section explains some of the variables to consider as they relate to the NetBackup server, the network, the NetBackup client, or the data itself.

Server variables

Eliminate all other NetBackup activity from your environment when you measure the performance of a particular NetBackup operation. One area to consider is the automatic scheduling of backup jobs by the NetBackup scheduler.

When policies are created, they are usually set up to allow the NetBackup scheduler to initiate the backups. The NetBackup scheduler initiates backups according to the following: traditional NetBackup frequency-based scheduling, or on certain days of the week, month, or other time interval. This process is called calendar-based scheduling. As part of the backup policy, the Start Window indicates when the NetBackup scheduler can start backups using either frequency-based or calendar-based scheduling. When you perform backups to test performance, this scheduling might interfere. The NetBackup scheduler may initiate backups unexpectedly, especially if the operations you intend to measure run for an extended period of time.

Running a performance test without interference from other jobs

Use the following procedure to run a test. This procedure helps prevent the NetBackup scheduler from running other backups during the test.

To run a test

1 Create a policy specifically for performance testing.

2 Leave the schedule’s Start Window field blank.

This policy prevents the NetBackup scheduler from initiating any backups automatically for that policy.

3 To prevent the NetBackup scheduler from running backup jobs unrelated to the performance test, consider setting all other backup policies to inactive.

You can use the Deactivate command from the NetBackup Administration Console. You must reactivate the policies after the test, when you want to start running backups again.

4 Before you start the performance test, check the Activity Monitor to make sure no NetBackup jobs are in progress.

5 To gather more logging information, set the legacy and unified logging levels higher and create the appropriate legacy logging directories.

By default, NetBackup logging is set to a minimum level.

For details on how to use NetBackup logging, refer to the logging chapter of the NetBackup Troubleshooting Guide. Keep in mind that higher logging levels consume more disk space.

6 From the policy you created for testing, run a backup on demand.

Click Actions > Manual Backup in the NetBackup Administration Console.

Or, you can use a user-directed backup to run the performance test. However, the Manual Backup option is preferred. With a manual backup, the policy contains the entire definition of the backup job, including the clients and files that are part of the performance test. If you run the backup manually from the policy, you can be certain which policy is used for the backup. This approach makes it easier to change and test individual backup settings from the policy dialog.

7 During the performance test, check for non-NetBackup activity on the server and try to reduce or eliminate it.

8 Check the NetBackup Activity Monitor after the performance test for any unexpected activity that may have occurred during the test, such as a restore job.

Network variables

Network performance is key to optimum performance with NetBackup. Ideally, you should use a separate network for testing, to prevent unrelated network activity from skewing the results.

In many cases, a separate network is not available. If not, ensure that non-NetBackup activity is kept to a minimum during the test. If possible, schedule the test when backups are not active. Even occasional bursts of network activity may be enough to skew the test results. If you share the network with production backups occurring for other systems, you must account for this activity during the test.

Another network variable is host name resolution. NetBackup depends heavily upon the timely resolution of host names to operate correctly. If you have any delays in host name resolution, try to eliminate that delay. An example of such a delay is a reverse name lookup to identify a server name from an incoming connection from an IP address. You can use the HOSTS (Windows) or /etc/hosts (UNIX) file for host name resolution on systems in your test environment.

Client variables

Make sure the client system is relatively quiescent during performance testing. A lot of activity, especially disk-intensive activity such as Windows virus scanning, can limit the data transfer rate and skew the test results.

Do not allow another NetBackup server, such as a production server, to access the client during the test. NetBackup may attempt to back up the same client to two different servers at the same time. The results of a performance test that is in progress can be severely affected.

Different file systems have different performance characteristics. It may not be valid to compare data throughput on UNIX VxFS or Windows FAT file systems to UNIX NFS or Windows NTFS systems. For such a comparison, factor the difference between the file systems into your performance tests and into any conclusions.

Data variables

Monitoring the data you back up improves the repeatability of performance testing. If possible, move the data you use for testing to its own drive or logical partition (not a mirrored drive). Defragment the drive before you begin performance testing. For testing restores, start with an empty disk drive or a recently defragmented disk drive with ample empty space.

For testing backups to tape, always start each test with an empty piece of media, as follows:

■ Expire existing images for that piece of media through the Catalog node of the NetBackup Administration Console, or run the bpexpdate command.

■ Another approach is to use the bpmedia command to freeze any media containing existing backup images so that NetBackup selects a new piece of media for the backup operation. This step reduces the impact of tape positioning on the performance test and yields more consistent results between tests. It also reduces mounting and unmounting of media that has NetBackup catalog images and that cannot be used for normal backups.

When you test restores from tape, always restore from the same backup image on the tape to achieve consistent results between tests.

A large set of data generates a more reliable, reproducible test than a small set of data. A performance test with a small data set would probably be skewed by startup and shutdown overhead within the NetBackup operation. These variables are difficult to keep consistent between test runs and are therefore likely to produce inconsistent test results. A large set of data minimizes the effect of startup and shutdown times.

Design the data set to represent the makeup of the data in the intended production environment. If the data set in the production environment contains many small files on file servers, the data set for the tests should also contain many small files. A representative data set can more accurately predict the NetBackup performance that can be expected in a production environment.

The type of data can help reveal bottlenecks in the system. Files that contain non-compressible (random) data cause the tape drive to run at its lower rated speed. As long as the other components of the data transfer path can keep up, you can identify the tape drive as the bottleneck. On the other hand, files containing highly compressible data can be processed at higher rates by the tape drive when hardware compression is enabled. The result may be a higher overall throughput and may expose the network as the bottleneck.

Many values in NetBackup provide data amounts in kilobytes and rates in kilobytes per second. For greater accuracy, divide by 1024 rather than rounding off to 1000 when you convert from kilobytes to megabytes, or from kilobytes per second to megabytes per second.
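The conversion above amounts to one division; a minimal sketch (the figures are illustrative, not taken from a real NetBackup report):

```python
def kb_to_mb(kilobytes):
    """Convert a NetBackup kilobyte figure to megabytes using the
    binary factor 1024 (not 1000)."""
    return kilobytes / 1024.0

# The same factor applies to rates (KB/sec -> MB/sec).
print(kb_to_mb(524288))   # 512.0 MB
print(kb_to_mb(10240))    # 10.0 MB/sec when the input is a KB/sec rate
# Dividing by 1000 instead would overstate the result by about 2.4 percent.
```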

Evaluating performance

You can obtain NetBackup data throughput statistics from two locations: the NetBackup Activity Monitor and the NetBackup All Log Entries report. Select the location according to the type of NetBackup operation you want to measure: non-multiplexed backup, restore, or multiplexed backup.

You can obtain statistics for all three types of operations from the NetBackup All Log Entries report. You can obtain statistics for non-multiplexed backup or restore operations from the NetBackup Activity Monitor. For multiplexed backups, obtain the overall statistics from the All Log Entries report. Wait until all the individual backup operations that are part of the multiplexed backup are complete. In this case, the statistics available in the Activity Monitor for each of the individual backup operations are relative only to that operation. The statistics do not reflect the total data throughput to the tape drive.

The statistics from these two locations may differ because of differences in rounding techniques in the Activity Monitor versus the All Log Entries report. For a given type of operation, choose either the Activity Monitor or the All Log Entries report, and consistently record your statistics only from that location. In both locations, the data-streaming speed is reported in kilobytes per second. If a backup or restore is repeated, the reported speed can vary between repetitions depending on many factors, including the availability of system resources and system utilization. The reported speed can be used to assess the performance of the data-streaming process.

The statistics from the NetBackup error logs show the actual amount of time reading and writing data to and from tape. The statistics do not include the time for mounting and positioning the tape. You should cross-reference the information from the error logs with data from the bpbkar log on the NetBackup client. (The bpbkar log shows the end-to-end elapsed time of the entire process.) Such cross-references can indicate how much time was spent on operations unrelated to reading and writing tape.
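The cross-reference is a simple subtraction. A minimal sketch, where the function name and the figures are illustrative rather than anything NetBackup emits:

```python
def non_tape_overhead(bpbkar_elapsed_sec, tape_io_sec):
    """Time spent on work other than reading/writing tape.

    bpbkar_elapsed_sec: end-to-end elapsed seconds from the client
    bpbkar log.
    tape_io_sec: seconds spent reading/writing tape, from the NetBackup
    error-log statistics (these exclude mount and positioning time).
    """
    return bpbkar_elapsed_sec - tape_io_sec

# A job that ran 600 seconds end to end but spent 450 seconds on tape I/O
# spent 150 seconds elsewhere (network, shared memory, file scanning, ...).
print(non_tape_overhead(600, 450))  # 150
```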

To evaluate performance through the NetBackup Activity Monitor

1 Run the backup or restore job.

2 Open the NetBackup Activity Monitor.

3 Verify that the backup or restore completed successfully.

The Status column should contain a zero (0).

4 View the log details for the job by selecting the Actions > Details menu option, or by double-clicking the entry for the job.

5 Select the Detailed Status tab.

6 Obtain the NetBackup performance statistics from the following fields in the Activity Monitor:

■ Start Time / End Time: These fields show the time window during which the backup or restore job took place.

■ Elapsed Time: This field shows the total elapsed time from when the job was initiated to job completion. It can be used as an indication of total wall clock time for the operation.

■ KB per Second: The data throughput rate.

■ Kilobytes: Compare this value to the amount of data. Although it should be comparable, the NetBackup data amount is slightly higher because of administrative information (metadata) that is saved for the backed-up data.

For example, if you display properties for a directory that contains 500 files, each 1 megabyte in size, the directory shows a size of 500 megabytes. (500 megabytes is 524,288,000 bytes, or 512,000 kilobytes.) The NetBackup report may show 513,255 kilobytes written, reporting 1255 kilobytes more than the file size of the directory. This report is true for a flat directory. Subdirectory structures may diverge due to the way the operating system tracks used and available space on the disk.

Note that the operating system may report how much space was allocated for the files in question, not only how much data is present. If the allocation block size is 1 kilobyte, 1000 1-byte files report a total size of 1 megabyte, although 1 kilobyte of data is all that exists. The greater the number of files, the larger this discrepancy may become.
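The allocation effect described above can be reproduced with a short calculation. A sketch, using the hypothetical block size and file sizes from the example:

```python
import math

def allocated_kb(file_sizes_bytes, block_size_bytes=1024):
    """Kilobytes of disk space allocated for a set of files when each
    file occupies a whole number of allocation blocks."""
    blocks = sum(math.ceil(size / block_size_bytes)
                 for size in file_sizes_bytes)
    return blocks * block_size_bytes // 1024

# 1000 one-byte files with a 1 KB allocation block:
sizes = [1] * 1000
print(allocated_kb(sizes))  # 1000 KB (about 1 MB) allocated...
print(sum(sizes))           # ...for only 1000 bytes (about 1 KB) of data
```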

To evaluate performance through the All Log Entries report

1 Run the backup or restore job.

2 Run the All Log Entries report from the NetBackup reports node in the NetBackup Administration Console. Be sure that the Date/Time Range that you select covers the time period during which the job was run.

3 Verify that the job completed successfully by searching for entries such as the following:

For a backup: "the requested operation was successfully completed"

For a restore: "successfully read (restore) backup id..."

4 Obtain the NetBackup performance statistics from the messages in the report.

See “Table of All Log Entries report” on page 102.

Table of All Log Entries report

Table 6-1 lists messages from the All Log Entries report.

The messages vary according to the locale setting of the master server.

Table 6-1 Messages in All Log Entries report

Entry: started backup job for client <name>, policy <name>, schedule <name> on storage unit <name>
Statistic: The Date and Time fields for this entry show the time at which the backup job started.

Entry: successfully wrote backup id <name>, copy <number>, <number> Kbytes
Statistic: For a multiplexed backup, this entry shows the size of the individual backup job. The Date and Time fields indicate when the job finished writing to the storage device. The overall statistics for the multiplexed backup group, which include the data throughput rate to the storage device, are found in a subsequent entry.

Entry: successfully wrote <number> of <number> multiplexed backups, total Kbytes <number> at Kbytes/sec
Statistic: For multiplexed backups, this entry shows the overall statistics for the multiplexed backup group, including the data throughput rate.

Entry: successfully wrote backup id <name>, copy <number>, fragment <number>, <number> Kbytes at <number> Kbytes/sec
Statistic: For non-multiplexed backups, this entry combines the information in the previous two entries for multiplexed backups. The single entry shows the size of the backup job, the data throughput rate, and when the job finished writing to the storage device (in the Date and Time fields).

Entry: the requested operation was successfully completed
Statistic: The Date and Time fields for this entry show the time at which the backup job completed. This value is later than the "successfully wrote" entry (in a previous message): it includes extra processing time at the end of the job for tasks such as NetBackup image validation.

Table 6-1 Messages in All Log Entries report (continued)

Entry: begin reading backup id <name>, (restore), copy <number>, fragment <number> from media id <name> on drive index <number>
Statistic: The Date and Time fields for this entry show when the restore job started reading from the storage device. (Note that the latter part of the entry is not shown for restores from disk, because it does not apply.)

Entry: successfully restored from backup id <name>, copy <number>, <number> Kbytes
Statistic: For a multiplexed restore, this entry shows the size of the individual restore job. (As a rule, all restores from tape are multiplexed restores, because non-multiplexed restores require additional action from the user.) The Date and Time fields indicate when the job finished reading from the storage device. The overall statistics for the multiplexed restore group, including the data throughput rate, are found in a subsequent entry.

Entry: successfully restored <number> of <number> requests <name>, read total of <number> Kbytes at <number> Kbytes/sec
Statistic: For multiplexed restores, this entry shows the overall statistics for the multiplexed restore group, including the data throughput rate.

Entry: successfully read (restore) backup id media <number>, copy <number>, fragment <number>, <number> Kbytes at <number> Kbytes/sec
Statistic: For non-multiplexed restores, this entry combines the information from the previous two entries for multiplexed restores. The single entry shows the size of the restore job, the data throughput rate, and when the job finished reading from the storage device (in the Date and Time fields). As a rule, only restores from disk are treated as non-multiplexed restores.

Additional information

For other NetBackup operations, the NetBackup All Log Entries report has entries that are similar to those in Table 6-1. For example, it has entries for image duplication operations that create additional copies of a backup image. The entries may be useful for analyzing the performance of NetBackup.

The bptm debug log file for tape backups (or the bpdm log file for disk backups) contains the entries that are in the All Log Entries report. The log also has additional detail about the operation that may be useful. One example is the message on intermediate data throughput rate for multiplexed backups:

... intermediate after number successful, number Kbytes at number Kbytes/sec

This message is generated whenever an individual backup job completes that is part of a multiplexed backup group. For example, the debug log file for a multiplexed backup group that consists of three individual backup jobs may include two intermediate status lines, then the final (overall) throughput rate.
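If you analyze many such log lines, a small parser can help. The pattern below is a sketch based on the message format quoted above; the exact wording can vary by NetBackup release and locale, so treat the regular expression and the sample line as assumptions to adjust against your own bptm logs:

```python
import re

# Pattern modeled on the "intermediate" message shown above (assumed format).
PATTERN = re.compile(
    r"intermediate after (\d+) successful, (\d+) Kbytes at (\d+) Kbytes/sec")

def parse_intermediate(line):
    """Return jobs completed, Kbytes written, and KB/sec rate, or None."""
    m = PATTERN.search(line)
    if m is None:
        return None
    jobs, kbytes, rate = (int(g) for g in m.groups())
    return {"jobs": jobs, "kbytes": kbytes, "kb_per_sec": rate}

# Hypothetical log line:
sample = "... intermediate after 2 successful, 204800 Kbytes at 5120 Kbytes/sec"
print(parse_intermediate(sample))
```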

For a backup operation, the bpbkar debug log file also contains additional detail about the operation that may be useful.

Note that writing the debug log files during the NetBackup operation introduces overhead that may not be present in a production environment. Factor that additional overhead into your calculations.

The information in the All Log Entries report is also found in /usr/openv/netbackup/db/error (UNIX) or install_path\NetBackup\db\error (Windows).

See the NetBackup Troubleshooting Guide to learn how to set up NetBackup to write these debug log files.

Evaluating UNIX system components

In addition to your evaluation of NetBackup’s performance, you should also verify that common system resources are in adequate supply.

Monitoring CPU load (UNIX)

Use the vmstat utility to monitor CPU and memory use. Add up the "us" and "sy" CPU columns to get the total CPU load on the system. (Refer to the vmstat man page for details.) The vmstat scan rate indicates the amount of swapping activity taking place.

The sar command also provides insight into UNIX memory usage.

Measuring performance independent of tape or disk output

You can measure the disk (read) component of NetBackup’s speed independent of the network components and tape components. This section describes two different techniques. The first, using bpbkar, is easier. The second may be helpful in more limited circumstances.

In these procedures, the master server is the client.

Using bpbkar

Try this procedure.

To measure disk I/O using bpbkar

1 Turn on the legacy bpbkar log by ensuring that the bpbkar directory exists.

UNIX

/usr/openv/netbackup/logs/bpbkar

Windows

install_path\NetBackup\logs\bpbkar

2 Set logging level to 1.

3 Enter the following:

UNIX

/usr/openv/netbackup/bin/bpbkar -nocont -dt 0 -nofileinfo -nokeepalives file system > /dev/null

Where file system is the path being backed up.

Windows

install_path\NetBackup\bin\bpbkar32 -nocont X:\ > NUL

Where X:\ is the path being backed up.

4 Check how long it took NetBackup to move the data from the client disk:

UNIX: The start time is the first PrintFile entry in the bpbkar log. The end time is the entry "Client completed sending data for backup." The amount of data is given in the entry "Total Size."

Windows: Check the bpbkar log for the entry "Elapsed time."
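From those log entries you can derive an approximate client disk-read rate. A minimal sketch; the timestamps and Total Size figure below are made up for illustration:

```python
from datetime import datetime

def disk_read_rate_mb_per_sec(total_size_kb, start, end):
    """Approximate disk-read rate from the bpbkar log: the "Total Size"
    value in KB plus the start/end times identified in step 4 above."""
    elapsed = (end - start).total_seconds()
    return (total_size_kb / 1024.0) / elapsed

# Hypothetical 300-second read of 3,072,000 KB (3000 MB):
start = datetime(2010, 1, 1, 10, 0, 0)
end = datetime(2010, 1, 1, 10, 5, 0)
print(disk_read_rate_mb_per_sec(3072000, start, end))  # 10.0 MB/sec
```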

Using the bpdm_dev_null touch file (UNIX only)

For UNIX systems, the following procedure is a useful follow-on to the bpbkar procedure. The bpbkar procedure may show that the disk read performance is not the bottleneck. If it is not the bottleneck, the following bpdm_dev_null procedure may be helpful. If the bpdm_dev_null procedure shows poor performance, the bottleneck is in the data transfer between the client bpbkar process and the server bpdm process. The problem may involve the network, or shared memory (such as not enough buffers, or buffers that are too small).

You can change shared memory settings.

See “Shared memory (number and size of data buffers)” on page 124.

Caution: If not used correctly, the following procedure can lead to data loss. Touching the bpdm_dev_null file redirects all disk backups to /dev/null, not only those backups that use the storage unit created by this procedure. You should disable active production policies for the duration of this test and remove the bpdm_dev_null touch file as soon as the test is complete.

To measure disk I/O using the bpdm_dev_null touch file (UNIX only)

1 Enter the following:

touch /usr/openv/netbackup/bpdm_dev_null

The bpdm_dev_null file redirects any backup that uses a disk storage unit to /dev/null.

2 Create a new disk storage unit, using /tmp or some other directory as the image directory path.

3 Create a policy that uses the new disk storage unit.

4 Run a backup from this policy.

NetBackup creates a file in the storage unit directory as if this backup were a real backup to disk. This image file is 0 bytes long.

5 To remove the zero-length file and clear the NetBackup catalog of a backup that cannot be restored, run this command:

/usr/openv/netbackup/bin/admincmd/bpexpdate -backupid backupid -d 0

where backupid is the name of the file that resides in the storage unit directory.

Evaluating Windows system components

In addition to your evaluation of NetBackup’s performance, you should also verify that common system resources are in adequate supply. You may want to use the Windows Performance Monitor utility. For information about using the Performance Monitor, refer to your Microsoft documentation.

The Performance Monitor organizes information by object, counter, and instance.

An object is a system resource category, such as a processor or physical disk. Properties of an object are counters. Counters for the Processor object include % Processor Time, which is the default counter, and Interrupts/sec. Duplicate counters are handled by instances. For example, to monitor the % Processor Time of a specific CPU on a multiple-CPU system, the Processor object is selected. Then the % Processor Time counter for that object is selected, followed by the specific CPU instance for the counter.

In the Performance Monitor, you can view data in real-time format or collect the data in a log for future analysis. Specific components to evaluate include CPU load, memory use, and disk load.

Note: You should use a remote host for monitoring of the test host, to reduce load that might otherwise skew results.

Monitoring CPU load (Windows)

To determine if the system has enough power to accomplish the requested tasks, monitor the % Processor Time counter for the Processor object to determine how hard the CPU is working. Also monitor the Processor Queue Length counter for the System object to determine how many processes are actively waiting for the processor.

For % Processor Time, values of 0 to 80 percent are generally safe. Values from 80 percent to 90 percent indicate that the system is heavily loaded. Consistent values over 90 percent indicate that the CPU is a bottleneck.

Spikes close to 100 percent are normal and do not necessarily indicate a bottleneck. However, if sustained loads close to 100 percent are observed, consider tuning the system to decrease process load, or upgrade to a faster processor.

Sustained Processor Queue Lengths greater than 2 indicate that too many threads are waiting to be executed. To correctly monitor the Processor Queue Length counter, the Performance Monitor must track a thread-related counter. If you consistently see a queue length of 0, verify that a non-zero value can be displayed.

Note: The default scale for the Processor Queue Length may not be equal to 1. Be sure to read the data correctly. For example, if the default scale is 10x, then a reading of 40 means that only 4 processes are waiting.
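The scale correction in the note is simply a division; a one-line sketch:

```python
def actual_queue_length(chart_reading, scale_factor):
    """Convert a Performance Monitor chart reading back to the real
    Processor Queue Length when a display scale factor is applied."""
    return chart_reading / scale_factor

# With the 10x scale from the note, a chart reading of 40
# corresponds to 4 waiting processes:
print(actual_queue_length(40, 10))  # 4.0
```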

Monitoring memory use

Memory is a critical resource for increasing the performance of backup operations.

When you examine memory usage, view information on the following:

■ Committed Bytes: Displays the size of virtual memory that has been committed, as opposed to reserved. Committed memory must have disk storage available, or must not require the disk storage because the main memory is large enough. If the number of Committed Bytes approaches or exceeds the amount of physical memory, you may encounter issues with page swapping.

■ Page Faults/sec: A count of the page faults in the processor. A page fault occurs when a process refers to a virtual memory page that is not in its Working Set in main memory. A high page fault rate may indicate insufficient memory.

Monitoring disk load

To use disk performance counters to monitor disk performance in the Performance Monitor, you may need to enable the counters. Windows may not have enabled the disk performance counters by default for your system.

On Windows 2000, the counters are set by default.

To get more information about disk performance counters

◆ Enter the following:

diskperf -help

To enable the counters and allow disk monitoring

1 Enter the following:

diskperf -y

2 Reboot the system.

To disable the counters and cancel disk monitoring

1 Enter the following:

diskperf -n

2 Reboot the system.

To monitor disk performance

1 Use the %Disk Time counter for the PhysicalDisk object.

Track the percentage of elapsed time that the selected disk drive is servicing read or write requests.

2 Monitor the Avg. Disk Queue Length counter and watch for values greater than 1 that last for more than one second.

Values greater than 1 for more than a second indicate that multiple processes are waiting for the disk to service their requests.

Increasing disk performance

You can use the following techniques to increase disk performance:

■ Check the fragmentation level of the data. A highly fragmented disk limits throughput levels. Use a disk maintenance utility to defragment the disk.

■ Consider adding additional disks to the system to increase performance. If multiple processes attempt to log data simultaneously, divide the data among multiple physical disks.

■ Determine if the data transfer involves a compressed disk. Windows drive compression adds overhead to disk read or write operations, with adverse effects on NetBackup performance. Use Windows compression only if it is needed to avoid a disk-full condition.

■ Consider converting to a system with a Redundant Array of Inexpensive Disks (RAID). Though more expensive, RAID devices offer greater throughput and (depending on the RAID level) improved reliability.

■ Determine what type of controller technology drives the disk. A different system might yield better results. See "Example: Calculating the number of tape drives needed to perform a backup" on page 24.

Tuning the NetBackup data transfer path

This chapter includes the following topics:

■ Tuning the data transfer path: overview

■ The data transfer path

■ Basic tuning suggestions for the data path

■ NetBackup client performance

■ NetBackup network performance

■ NetBackup server performance

■ NetBackup storage device performance

Tuning the data transfer path: overview

This chapter contains information on ways to optimize NetBackup. This chapter is not intended to provide tuning advice for particular systems. If you would like help fine-tuning your NetBackup installation, please contact Symantec Consulting Services.

First ensure that your system meets NetBackup’s recommended minimumrequirements. Refer to the NetBackup Installation Guide and NetBackup ReleaseNotes for information about these requirements. Additionally, Symantecrecommends that you have the most recent NetBackup software patch installed.

Many performance issues can be traced to hardware or other environmentalissues. You must understand the entire data transfer path to determine themaximum obtainable performance in your environment. Poor performance is


often the result of poor planning, which results from unrealistic expectations of components of the transfer path.

The data transfer path

The slowest component in the backup system limits the overall performance of NetBackup. For example, a fast tape drive that is combined with an overloaded server yields poor performance. A fast tape drive on a slow network also yields poor performance.

The backup system is referred to as the data transfer path. The path usually starts at the data on the disk and ends with a backup copy on tape or disk.

This chapter subdivides the standard NetBackup data transfer path into four components: the NetBackup client, the network, the NetBackup server, and the storage device.

This chapter discusses NetBackup performance evaluation and improvement from a testing perspective. It describes ways to isolate performance variables to learn the effect of each variable on overall system performance. It also describes how to optimize NetBackup performance with regard to those variables. It may not be possible to optimize every variable on your production system.

The requirements for database backups may not be the same as for file system backups. This information applies to file system backups unless otherwise noted.

Basic tuning suggestions for the data path

In every backup system there is always room for improvement. Obtaining the best performance from a backup infrastructure is not complex, but it requires careful review of the many factors that can affect processing. The first step is to gain an accurate assessment of each hardware, software, and networking component in the backup data path. Many performance problems can be resolved before you attempt to change NetBackup parameters.

NetBackup software offers plenty of resources to help isolate performance problems and assess the impact of configuration changes. However, it is essential to thoroughly test both backup and restore processes after making any changes to the NetBackup configuration parameters.

This section provides practical ideas to improve your backup system performance and avoid bottlenecks.

You can find background details in the following NetBackup manuals:

■ NetBackup Administrator’s Guide, Volumes I & II


■ NetBackup Troubleshooting Guide

Tuning suggestions

Consider the following:

■ Use multiplexing. Multiplexing writes multiple data streams from several clients to a single tape drive or several tape drives. Multiplexing can improve the backup performance of slow clients, multiple slow networks, and many small backups (such as incremental backups). Multiplexing reduces the time each job waits for a device to become available. It thereby makes the best use of the transfer rate of your storage devices.
See “Restore of a multiplexed image” on page 150.
Refer also to the NetBackup Administrator’s Guide, Volume II, for more information about using multiplexing.

■ Stripe a disk volume across drives. A striped set of disks can pull data from all drives concurrently, to allow faster data transfers between disk drives and tape drives.

■ Maximize the use of your backup windows. You can configure all your incremental backups to happen at the same time every day. You can also stagger the execution of your full backups across multiple days. Large systems can be backed up over the weekend while smaller systems are spread over the week. You can start full backups earlier than the incremental backups; they might finish before the incremental backups and return all or most of your backup window to finish the incremental backups.

■ Convert large clients to SAN Clients or SAN Media Servers. A SAN Client is a client that is backed up over a SAN connection to a media server rather than over a LAN. SAN Client technology is for large databases and application servers where large data files are rapidly read from disk and streamed across the SAN. SAN Client is not suitable for file servers where the disk read speed is relatively slow.
See “Best practices: SAN Client” on page 83.

■ Use dedicated private networks to decrease backup times and network traffic. Dedicate one or more networks to backups, to reduce backup time and reduce or eliminate network traffic on your enterprise networks. In addition, you can convert to faster technologies and back up your systems at any time without affecting the enterprise network’s performance. This approach assumes that users do not mind the system loads while backups take place.

■ Avoid a concentration of servers on one network.


If many large servers back up over the same network, convert some of them to media servers or attach them to private backup networks. Either approach decreases backup times and reduces network traffic for your other backups.

■ Use dedicated backup servers to perform your backups. For a backup server, use a dedicated system for backups only. Using a server that also runs several applications unrelated to backups can severely affect your performance and maintenance windows.

■ Consider the requirements of backing up your catalog. Remember that the NetBackup catalog needs to be backed up. To facilitate NetBackup catalog recovery, the master server should have access to a dedicated tape drive, either stand-alone or within a robotic library.

■ Try to level the backup load. You can use multiple drives to reduce backup times. To spread the load across multiple drives, you may need to reconfigure the streams or the NetBackup policies.

■ Consider bandwidth limiting. Bandwidth limiting lets you restrict the network bandwidth that is consumed by one or more NetBackup clients on a network. The bandwidth setting appears under Host Properties > Master Servers, Properties. The actual limiting occurs on the client side of the backup connection. This feature only restricts bandwidth during backups. Restores are unaffected.
When a backup starts, NetBackup reads the bandwidth limit configuration, determines the appropriate bandwidth value, and passes it to the client. As the number of active backups increases or decreases on a subnet, NetBackup dynamically adjusts the bandwidth limiting on that subnet. If additional backups are started, the NetBackup server instructs the other NetBackup clients that run on that subnet to decrease their bandwidth setting. Similarly, bandwidth per client is increased if the number of clients decreases. Changes to the bandwidth value occur on a periodic basis rather than as backups stop and start. This characteristic can reduce the number of bandwidth value changes.

■ Try load balancing. NetBackup provides ways to balance loads between servers, clients, policies, and devices. Note that these settings may interact with each other: compensating for one issue can cause another. The best approach is to use the defaults unless you anticipate or encounter an issue.

Try one or more of the following:

■ Adjust the backup load on the server. Change the Limit jobs per policy attribute for one or more of the policies that the server backs up. For example, you can decrease Limit jobs per


policy to reduce the load on a server on a specific subnetwork. Reconfigure policies or schedules to use storage units on other servers. Use bandwidth limiting on one or more clients.

■ Adjust the backup load on the server during specific time periods only. Reconfigure schedules to use storage units on the servers that can handle the load (if you use media servers).

■ Adjust the backup load on the clients. Change the Maximum jobs per client global attribute. An increase to Maximum jobs per client can increase the number of concurrent jobs that any one client can process and therefore increase the load.

■ Reduce the time to back up clients. Increase the number of jobs that clients can perform concurrently, or use multiplexing. Increase the number of jobs that the server can perform concurrently for the policies that back up the clients.

■ Give preference to a policy. Increase the Limit jobs per policy attribute value for the preferred policy relative to other policies. Alternatively, increase the priority for the policy.

■ Adjust the load between fast and slow networks. Increase the values of Limit jobs per policy and Maximum jobs per client for the policies and clients on a faster network. Decrease these values for slower networks. Another solution is to use bandwidth limiting.

■ Limit the backup load that one or more clients produce. Use bandwidth limiting to reduce the bandwidth that the clients use.

■ Maximize the use of devices. Use multiplexing. Also, allow as many concurrent jobs per storage unit, policy, and client as possible without causing server, client, or network performance issues.

■ Prevent backups from monopolizing devices. Limit the number of devices that NetBackup can use concurrently for each policy, or limit the number of drives per storage unit. Another approach is to exclude some of your devices from Media Manager control.

NetBackup client performance

This section lists some factors to consider when evaluating the NetBackup client component of the NetBackup data transfer path.

Consider the following to identify possible changes that may improve NetBackup performance:


■ Disk fragmentation. Fragmentation severely affects the data transfer rate from the disk. Fragmentation can be repaired using disk management utility software.

■ Number of disks. Add disks to the system to increase performance. If multiple processes attempt to log data simultaneously, divide the data among multiple physical disks.

■ Disk arrays. Convert to a system that is based on a Redundant Array of Inexpensive Disks (RAID). RAID devices generally offer greater throughput and (depending on the RAID level) improved reliability.

■ SAN client. For critical data that requires high bandwidth for backups, consider the SAN client feature.

Refer to the following documents for more information:

■ NetBackup Shared Storage Guide

■ "SAN Client Deployment, Best Practices and Performance Metrics" technote:
http://entsupport.symantec.com/docs/293110

■ The type of controller technology being used to drive the disk. Consider if a different system would yield better results.

■ Virus scanning. Virus scanning can severely affect the performance of the NetBackup client, especially for systems such as large Windows file servers. Consider disabling virus scans during backup or restore.

■ NetBackup notify scripts. The bpstart_notify.bat and bpend_notify.bat scripts are very useful in certain situations, such as shutting down a running application to back up its data. However, these scripts must be written with care to avoid unnecessarily lengthy delays at the start or end of the backup. If the scripts do not perform tasks essential to the backup, remove them.

■ NetBackup software location. If the data being backed up is on the same physical disk as the NetBackup installation, performance may be adversely affected, especially if NetBackup debug log files are generated. If logs are used, the extent of the degradation is greatly influenced by the logs’ verbose setting. If possible, install NetBackup on a separate physical disk to avoid disk drive contention.


■ Snapshots (hardware or software). If snapshots are taken before the backup of data, the time that is needed to take the snapshot can affect the performance.

■ Job tracker. If the Job Tracker is running on the client, NetBackup gathers an estimate of the data to be backed up before the backup. Gathering this estimate affects the startup time and the data throughput rate, because no data is written to the NetBackup server during this estimation. You may want to avoid running the NetBackup Client Job Tracker.

■ Determine the theoretical performance of the NetBackup client software. Use the NetBackup client command bpbkar (UNIX) or bpbkar32 (Windows) to determine how fast the NetBackup client can read the data to be backed up. You may be able to eliminate data read speed as a performance bottleneck.
See “Measuring performance independent of tape or disk output” on page 104.

NetBackup network performance

To improve the overall performance of NetBackup, consider the following network components and factors.

Network interface settings

Make sure your network connections are properly installed and configured.

Note the following:

■ Network interface cards (NICs) for NetBackup servers and clients must be set to full-duplex.

■ Both ends of each network cable (the NIC and the switch) must be set identically as to speed and mode. (Both NIC and switch must be at full duplex.) Otherwise, link-down conditions, excessive or late collisions, and errors result.

■ If auto-negotiate is used, make sure that both ends of the connection are set at the same mode and speed. The higher the speed, the better.

■ In addition to NICs and switches, all routers must be set to full duplex.

■ Using AUTOSENSE may cause network problems and performance issues.

■ Consult the operating system documentation for instructions on how to determine and change the NIC settings.


Network load

To evaluate remote backup performance, consider the following:

■ The amount of network traffic

■ The amount of time that network traffic is high

Small bursts of high network traffic for short durations can decrease the data throughput rate. However, if the network traffic remains high, the network is probably the bottleneck. Try to schedule backups when network traffic is low. If your network is loaded, you may want to implement a secondary network that is dedicated to backup and restore traffic.

Setting the network buffer size for the NetBackup media server

The NetBackup media server has a tunable parameter that you can use to adjust the size of the network buffer. This buffer either receives data from the network (a backup) or writes data to the network (a restore). The parameter sets the value for the network buffer size for backups and restores.

UNIX

The default value for this parameter is 32032.

Windows

The default value for this parameter is derived from the NetBackup data buffer size using the following formula:

For backup jobs: (<data_buffer_size> * 4) + 1024

For restore jobs: (<data_buffer_size> * 2) + 1024

For tape: because the default value for the NetBackup data buffer size is 65536 bytes, the formula results in a default NetBackup network buffer size of 263168 bytes for backups and 132096 bytes for restores.

For disk: because the default value for the NetBackup data buffer size is 262144 bytes, the formula results in a default NetBackup network buffer size of 1049600 bytes for backups and 525312 bytes for restores.
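As a quick sanity check, the Windows defaults above follow directly from the formulas. A minimal shell sketch (the variable names are illustrative, not NetBackup settings):

```shell
# Derive the default Windows network buffer sizes from the data buffer sizes
tape_data_buffer=65536     # default data buffer size for tape
disk_data_buffer=262144    # default data buffer size for disk

echo $(( tape_data_buffer * 4 + 1024 ))   # backup, tape: 263168
echo $(( tape_data_buffer * 2 + 1024 ))   # restore, tape: 132096
echo $(( disk_data_buffer * 4 + 1024 ))   # backup, disk: 1049600
echo $(( disk_data_buffer * 2 + 1024 ))   # restore, disk: 525312
```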

Tuning the NetBackup data transfer pathNetBackup network performance

118

Page 119: Netbackup Planning Guide 307083

To set the network buffer size

1 Create the following files:

UNIX
/usr/openv/netbackup/NET_BUFFER_SZ
/usr/openv/netbackup/NET_BUFFER_SZ_REST

Windows
install_path\NetBackup\NET_BUFFER_SZ
install_path\NetBackup\NET_BUFFER_SZ_REST

2 Note the following about the buffer files:

These files contain a single integer that specifies the network buffer size in bytes. For example, to use a network buffer size of 64 kilobytes, the file would contain 65536. If the files contain the integer 0 (zero), the default value for the network buffer size is used.

If the NET_BUFFER_SZ file exists, and the NET_BUFFER_SZ_REST file does not exist, NET_BUFFER_SZ specifies the network buffer size for backups and restores.

If the NET_BUFFER_SZ_REST file exists, its contents specify the network buffer size for restores.

If both files exist, the NET_BUFFER_SZ file specifies the network buffer size for backups. The NET_BUFFER_SZ_REST file specifies the network buffer size for restores.

Because local backup or restore jobs on the media server do not send data over the network, this parameter has no effect on those operations. It is used only by the NetBackup media server processes that read from or write to the network, specifically the bptm or bpdm processes. It is not used by any other NetBackup processes on a master server, media server, or client.
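For example, to set a 256 KB buffer for backups while keeping the default for restores, you could create the files as follows. This sketch writes to a temporary directory as a stand-in; on a real media server the files go in /usr/openv/netbackup (UNIX) or install_path\NetBackup (Windows):

```shell
NB_DIR=$(mktemp -d)                        # stand-in for /usr/openv/netbackup
echo 262144 > "$NB_DIR/NET_BUFFER_SZ"      # backups (and restores, if the _REST file is absent)
echo 0 > "$NB_DIR/NET_BUFFER_SZ_REST"      # 0 means: use the default size for restores
cat "$NB_DIR/NET_BUFFER_SZ"                # 262144
```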

Network buffer size in relation to other parameters

The network buffer size parameter is the counterpart on the media server to the communications buffer size parameter on the client. The network buffer sizes need not be the same on all of your NetBackup systems for NetBackup to function properly. However, if the media server’s network buffer size is the same as the client’s communications buffer size, network throughput may improve.

Similarly, the network buffer size does not have a direct relationship to the NetBackup data buffer size.


See “Shared memory (number and size of data buffers)” on page 124.

The two buffers are separately tunable parameters. However, setting the network buffer size to a substantially larger value than the data buffer has achieved the best performance in many NetBackup installations.

Increasing the network buffer size on AIX for synthetic full backups

If synthetic full backups on AIX are running slowly, increase the NET_BUFFER_SZ network buffer to 262144 (256 KB).

To increase the network buffer size on AIX

1 Create the following file:

/usr/openv/netbackup/NET_BUFFER_SZ

2 To change the default setting from 32032 to 262144, enter the number 262144 in the file.

This file is unformatted, and should contain only the size in bytes:

$ cat /usr/openv/netbackup/NET_BUFFER_SZ
262144
$

A change in this value can affect backup and restore operations on the media servers. Test backups and restores to ensure that the change you make does not negatively affect performance.

Setting the NetBackup client communications buffer size

The NetBackup client has a tunable parameter to adjust the size of the network communications buffer. This buffer writes data to the network for backups.

This client parameter is the counterpart to the network buffer size parameter on the media server. The network buffer sizes are not required to be the same on all of your NetBackup systems for NetBackup to function properly. However, if the media server’s network buffer size is the same as the client’s communications buffer size, you may achieve better performance.


To set the communications buffer size parameter on UNIX clients

◆ Create the /usr/openv/netbackup/NET_BUFFER_SZ file.

As with the media server, the file should contain a single integer that specifies the communications buffer size. Generally, performance is better when the value in the NET_BUFFER_SZ file on the client matches the value in the NET_BUFFER_SZ file on the media server.

The NET_BUFFER_SZ_REST file is not used on the client. The value in the NET_BUFFER_SZ file is used for both backups and restores.

To set the communications buffer size parameter on Windows clients

1 From Host Properties in the NetBackup Administration Console, expand Clients and open the Client Properties > Windows Client > Client Settings dialog for the client on which the parameter is to be changed.

2 Enter the new value in the Communications buffer field.

This parameter is specified in kilobytes. The default value is 32. An extra kilobyte is added internally for backup operations. Therefore, the default network buffer size for backups is 33792 bytes. In some NetBackup installations, this default value is too small. A value of 128 improves performance in these installations.

Because local backup jobs on the media server do not send data over the network, this parameter has no effect on those local operations. Only the NetBackup bpbkar32 process uses this parameter. It is not used by any other NetBackup processes on a master server, media server, or client.

3 If you modify the NetBackup buffer settings, test the performance of restores with the new settings.

Using socket communications (the NOSHM file)

When a master server or media server backs itself up, NetBackup uses shared memory to speed up the backup. In this case, NetBackup uses shared memory rather than socket communications to transport the data between processes. However, it may not be possible or desirable to use shared memory during a backup. In that case, you can use socket communications rather than shared memory to interchange the backup data.


To use socket communications

◆ Touch the following file:

UNIX
/usr/openv/netbackup/NOSHM

Windows
install_path\NetBackup\NOSHM

To touch a file means to change the file’s modification and access times. The file name should not contain any extension.

About the NOSHM file

Each time a backup runs, NetBackup checks for the existence of the NOSHM file. No services need to be stopped and started for it to take effect. You might use NOSHM, for example, when the NetBackup server hosts another application that uses a large amount of shared memory, such as Oracle.

NOSHM is also useful for testing: both as a workaround while solving a shared-memory issue, and to verify that an issue is caused by shared memory.

Note: NOSHM only affects backups when it is applied to a system with a directly-attached storage unit.

NOSHM forces a local backup to run as though it were a remote backup. A local backup is a backup of a client that has a directly-attached storage unit. An example is a client that happens to be a master server or media server. A remote backup passes the data across a network connection from the client to a master server’s or media server’s storage unit.

A local backup normally has one or more bpbkar processes that read from the disk and write into shared memory. A local backup also has a bptm process that reads from shared memory and writes to the tape. A remote backup has one or more bptm (child) processes that read from a socket connection to bpbkar and write into shared memory. A remote backup also has a bptm (parent) process that reads from shared memory and writes to the tape. NOSHM forces the remote backup model even when the client and the media server are the same system.

For a local backup without NOSHM, shared memory is used between bptm and bpbkar. Whether the backup is remote or local, and whether NOSHM exists or not, shared memory is always used between bptm (parent) and bptm (child).


Note: NOSHM does not affect the shared memory that bptm uses to buffer data that is written to tape. bptm uses shared memory for any backup, local or otherwise.

Using multiple interfaces

Distributing NetBackup traffic over several network interfaces can improve performance. Configure a unique hostname for the server for each network interface and set up bp.conf entries for the hostnames.

For example, suppose the server is configured with three network interfaces. Each of the network interfaces connects to one or more NetBackup clients. The following procedure allows NetBackup to use all three network interfaces.

To configure three network interfaces

1 In the server’s bp.conf file, add one entry for each network interface:

SERVER=server-neta

SERVER=server-netb

SERVER=server-netc

2 In each client’s bp.conf file, make the following entries:

SERVER=server-neta

SERVER=server-netb

SERVER=server-netc

A client can have an entry for a server that is not currently on the same network.

For more information on how to set network interfaces, refer to the NetBackup Administrator’s Guide, Volume I.
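The bp.conf edits in the procedure above can also be scripted. A hedged sketch that appends the three SERVER entries, written to a temporary copy here (the real file is /usr/openv/netbackup/bp.conf, and the server-neta/netb/netc hostnames are the example names from the procedure):

```shell
BPCONF=$(mktemp)    # stand-in for /usr/openv/netbackup/bp.conf

# One SERVER entry per network interface hostname
for host in server-neta server-netb server-netc; do
    echo "SERVER=$host" >> "$BPCONF"
done

grep -c '^SERVER=' "$BPCONF"    # 3
```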

NetBackup server performance

To improve NetBackup server performance, consider the following factors regarding the data transfer path:

■ Shared memory (number and size of data buffers)

■ Changing parent and child delay values

■ Using NetBackup wait and delay counters

■ Fragment size and NetBackup restores

■ Other restore performance issues


Shared memory (number and size of data buffers)

The NetBackup media server uses shared memory to buffer data between the network and the tape drive or disk drive. (Or it buffers data between the disk and tape if the NetBackup media server and client are the same system.) The number and size of these shared data buffers can be configured on the NetBackup media server.

The number and size of the tape and disk buffers may be changed so that NetBackup optimizes its use of shared memory. A different buffer size may result in better throughput for high-performance tape drives. These changes may also improve throughput for other types of drives.

Buffer settings are for media servers only and should not be used on a pure master server or client.

Note: Restores use the same buffer size that was used to back up the images being restored.

Default number of shared data buffers

Table 7-1 shows the default number of shared data buffers for various NetBackup operations.

Table 7-1: Default number of shared data buffers

NetBackup operation                          UNIX    Windows
Non-multiplexed backup                       30      30
Multiplexed backup                           12      12
Restore that uses non-multiplexed protocol   30      30
Restore that uses multiplexed protocol       12      12
Verify                                       30      30
Import                                       30      30
Duplicate                                    30      30
NDMP backup                                  30      30


Default size of shared data buffers

The default size of shared data buffers for various NetBackup operations is shown in Table 7-2. The default sizes are the same on UNIX and Windows.

Table 7-2: Default size of shared data buffers (UNIX and Windows)

NetBackup operation          Size of shared data buffers
Non-multiplexed backup       64K (tape), 256K (disk)
Multiplexed backup           64K (tape), 256K (disk)
Restore, verify, or import   same size as used for the backup
Duplicate                    read side: same size as used for the backup;
                             write side: 64K (tape), 256K (disk)
NDMP backup                  63K (tape), 256K (disk)

On Windows, a single tape I/O operation is performed for each shared data buffer. Therefore, this size must not exceed the maximum block size for the tape device or operating system. For Windows systems, the maximum block size is generally 64K, although in some cases customers use a larger value successfully. For this reason, the terms "tape block size" and "shared data buffer size" are synonymous in this context.

Amount of shared memory required by NetBackup

You can use this formula to calculate the amount of shared memory that NetBackup requires:

Shared memory required = (number_data_buffers * size_data_buffers) * number_tape_drives * max_multiplexing_setting

For example, assume that the number of shared data buffers is 16, and the size of the shared data buffers is 64 kilobytes. Also assume two tape drives and a maximum multiplexing setting of four. Following the formula, NetBackup requires 8 MB of shared memory:

(16 * 65536) * 2 * 4 = 8 MB
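The same arithmetic, written out in shell (the variable names are illustrative, not NetBackup settings):

```shell
num_buffers=16       # shared data buffers
buf_size=65536       # 64 KB per buffer
tape_drives=2
mpx=4                # maximum multiplexing setting

bytes=$(( num_buffers * buf_size * tape_drives * mpx ))
echo "$bytes"                       # 8388608
echo $(( bytes / 1024 / 1024 ))     # 8 (MB)
```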

Be careful when changing these settings.


See “Testing changes made to shared memory” on page 133.

How to change the number of shared data buffers

You can change the number of shared data buffers by creating the following file(s) on the media server. In the files, enter an integer number of shared data buffers.

See “Notes on number data buffers files” on page 127.

■ UNIX
For tape:

/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE

The value specified by NUMBER_DATA_BUFFERS determines the number of shared memory buffers for all types of backups if none of the following NUMBER_DATA_BUFFERS_xxxx files exists.

For disk:

/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_DISK

Note: NUMBER_DATA_BUFFERS_DISK requires NetBackup 6.5.1 or later.

For multiple copies (Inline Copy):

/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_MULTCOPY

For the FT media server:

/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_FT

■ Windows
For tape:

install_path\NetBackup\db\config\NUMBER_DATA_BUFFERS
install_path\NetBackup\db\config\NUMBER_DATA_BUFFERS_RESTORE

The value specified by NUMBER_DATA_BUFFERS determines the number of shared memory buffers for all types of backups if none of the following NUMBER_DATA_BUFFERS_xxxx files exists.

For disk:

install_path\NetBackup\db\config\NUMBER_DATA_BUFFERS_DISK

Note: NUMBER_DATA_BUFFERS_DISK requires NetBackup 6.5.1 or later.

For multiple copies (Inline Copy):


install_path\NetBackup\db\config\NUMBER_DATA_BUFFERS_MULTCOPY

For the FT media server: NUMBER_DATA_BUFFERS_FT is not yet supported on Windows.

Notes on number data buffers filesNote the following points:

■ The various number data buffers files must contain a single integer thatspecifies the number of shared data buffers NetBackup uses.

■ If the NUMBER_DATA_BUFFERS file exists, its contents determine the number ofshared data buffers to be used for multiplexed and non-multiplexed backups.

■ The other NUMBER_DATA_BUFFERS files (NUMBER_DATA_BUFFERS_DISK,NUMBER_DATA_BUFFERS_MULTCOPY, NUMBER_DATA_BUFFERS_FT) allow buffersettings for particular types of backups. The values specified in these filesoverride either the NetBackup default number or the value that is specified inNUMBER_DATA_BUFFERS.

For example, NUMBER_DATA_BUFFERS_DISK allows for a different value whenyou back up to disk instead of tape. If NUMBER_DATA_BUFFERS exists butNUMBER_DATA_BUFFERS_DISK does not, NUMBER_DATA_BUFFERS applies to tapeand disk backups. If both files exist, NUMBER_DATA_BUFFERS applies to tapebackups and NUMBER_DATA_BUFFERS_DISK applies to disk backups. If onlyNUMBER_DATA_BUFFERS_DISK is present, it applies to disk backups only.

Note: NUMBER_DATA_BUFFERS_DISK requires NetBackup 6.5.1 or later.

■ The NUMBER_DATA_BUFFERS file also applies to remote NDMP backups, but doesnot apply to local NDMP backups or to NDMP three-way backups.See “Note on shared memory and NetBackup for NDMP” on page 131.

■ The NUMBER_DATA_BUFFERS_RESTORE file is only used for restore from tape,not from disk. If the NUMBER_DATA_BUFFERS_RESTORE file exists, its contentsdetermine the number of shared data buffers for multiplexed restores fromtape.

■ The NetBackup daemons do not have to be restarted for the new buffer valuesto be used. Each time a new job starts, bptm checks the configuration file andadjusts its behavior.

■ For a recommendation for setting NUMBER_DATA_BUFFERS_FT, see "Recommended number of data buffers for SAN Client and FT media server" on page 133.
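The file-creation steps described above can be sketched for a UNIX media server as follows. The file names and the documented location are from this guide; the buffer counts are illustrative, and a scratch directory stands in for the real config directory so the sketch is safe to run anywhere.

```shell
# Sketch only: the real location on a UNIX media server is
# /usr/openv/netbackup/db/config; a scratch directory is used here.
CONFIG_DIR=$(mktemp -d)

# Each file holds a single integer: the number of shared data buffers.
echo 32 > "$CONFIG_DIR/NUMBER_DATA_BUFFERS"        # tape (and disk, if no _DISK file)
echo 64 > "$CONFIG_DIR/NUMBER_DATA_BUFFERS_DISK"   # overrides the value above for disk backups

# No daemon restart is needed; bptm rereads these files when each job starts.
cat "$CONFIG_DIR/NUMBER_DATA_BUFFERS"
```

On a live media server you would point CONFIG_DIR at the real configuration directory instead.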


How to change the size of shared data buffers

You can change the size of shared data buffers by creating the following file(s) on the media server. In the files, enter an integer size in bytes for the shared data buffer. The integer should be a multiple of 1024 (a multiple of 32 kilobytes is recommended).

See “Notes on size data buffer files” on page 129.

■ UNIX

For tape:

/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS

The value specified by SIZE_DATA_BUFFERS determines the shared memory buffer size for all types of backups if none of the following SIZE_DATA_BUFFERS_xxxx files exists.

For tape (NDMP storage units):

/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP

For disk:

/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK

The SIZE_DATA_BUFFERS_DISK file also affects NDMP to disk backups.

For multiple copies (Inline Copy):

/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_MULTCOPY

For the FT media server:

/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_FT

■ Windows

For tape:

install_path\NetBackup\db\config\SIZE_DATA_BUFFERS

The value specified by SIZE_DATA_BUFFERS determines the shared memory buffer size for all types of backups if none of the following SIZE_DATA_BUFFERS_xxxx files exists.

For tape (NDMP storage units):

install_path\NetBackup\db\config\SIZE_DATA_BUFFERS_NDMP

For disk:

install_path\NetBackup\db\config\SIZE_DATA_BUFFERS_DISK


The SIZE_DATA_BUFFERS_DISK file also affects NDMP to disk backups.

For multiple copies (Inline Copy):

install_path\NetBackup\db\config\SIZE_DATA_BUFFERS_MULTCOPY

For the FT media server:

SIZE_DATA_BUFFERS_FT is not yet supported on Windows.
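As a sketch, creating a size file on a UNIX media server with a check for the multiple-of-1024 rule could look like the following. The 256 KB value is illustrative, and a scratch directory stands in for the documented config location so the example is safe to run anywhere.

```shell
# Sketch only: set a 256 KB shared data buffer size for tape backups.
# The documented UNIX location is /usr/openv/netbackup/db/config.
CONFIG_DIR=$(mktemp -d)

SIZE=262144                            # 256 x 1024 bytes
if [ $((SIZE % 1024)) -ne 0 ]; then    # the value should be a multiple of 1024
    echo "warning: $SIZE is not a multiple of 1024" >&2
fi
echo "$SIZE" > "$CONFIG_DIR/SIZE_DATA_BUFFERS"
cat "$CONFIG_DIR/SIZE_DATA_BUFFERS"
```

Remember to test both backups and restores after any size change, as the notes below describe.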

Notes on size data buffer files

Note the following points:

■ The various size data buffers files contain a single integer that specifies the size of each shared data buffer in bytes. The integer should be a multiple of 1024 (a multiple of 32 kilobytes is recommended). See "Size values for shared data buffers" on page 130.

■ If the SIZE_DATA_BUFFERS file exists, its contents determine the size of shared data buffers to be used for multiplexed and non-multiplexed backups.

■ The other SIZE_DATA_BUFFERS files (SIZE_DATA_BUFFERS_DISK, SIZE_DATA_BUFFERS_MULTCOPY, SIZE_DATA_BUFFERS_FT) allow buffer settings for particular types of backups. The values specified in these files override either the NetBackup default size or the value that is specified in SIZE_DATA_BUFFERS.

For example, SIZE_DATA_BUFFERS_DISK allows for a different value when you back up to disk instead of tape. If SIZE_DATA_BUFFERS exists but SIZE_DATA_BUFFERS_DISK does not, SIZE_DATA_BUFFERS applies to all backups. If both files exist, SIZE_DATA_BUFFERS applies to tape backups and SIZE_DATA_BUFFERS_DISK applies to disk backups. If only SIZE_DATA_BUFFERS_DISK is present, it applies to disk backups only.

■ The SIZE_DATA_BUFFERS_DISK file also affects NDMP to disk backups.

■ Perform backup and restore testing if the shared data buffer size is changed. If NetBackup media servers are not running the same operating system, test restores on each media server that may be involved in a restore operation. If a UNIX media server writes a backup to tape with a shared data buffer of 256 kilobytes, a Windows media server may not be able to read that tape.

Warning: Test restore as well as backup operations, to avoid the potential for data loss.

See “Testing changes made to shared memory” on page 133.


Size values for shared data buffers

Table 7-3 lists appropriate values for the various SIZE_DATA_BUFFERS files. The integer represents the size of one tape or disk buffer in bytes. For example, to use a shared data buffer size of 64 kilobytes, the file would contain the integer 65536.

These values are multiples of 1024. If you enter a value that is not a multiple of 1024, NetBackup rounds it down to the nearest multiple of 1024. For example, if you enter a value of 262656, NetBackup uses the value of 262144.
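The rounding rule can be reproduced with integer arithmetic: truncating division by 1024 followed by multiplication gives the nearest lower multiple, matching the 262656 example above.

```shell
# Round a buffer size down to the nearest multiple of 1024, as NetBackup does.
VALUE=262656
ROUNDED=$(( VALUE / 1024 * 1024 ))   # integer division truncates the remainder
echo "$ROUNDED"                      # 262144, matching the example above
```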

The NetBackup daemons do not have to be restarted for the parameter values to be used. Each time a new job starts, bptm checks the configuration file and adjusts its behavior.

Analyze the buffer usage by checking the bptm debug log before and after altering the size of buffer parameters. Note that the bptm log applies to both tape and disk backups.

Table 7-3 Byte values for SIZE_DATA_BUFFERS_xxxx files

SIZE_DATA_BUFFER value in bytes    Kilobytes per data buffer
32768                              32
65536                              64
98304                              96
131072                             128
163840                             160
196608                             192
229376                             224
262144                             256

Important: the data buffer size equals the tape I/O size. Therefore the SIZE_DATA_BUFFERS value must not exceed the maximum tape I/O size that the tape drive or operating system supports. This value is usually 256 or 128 kilobytes. Check your operating system and hardware documentation for the maximum values. Take into consideration the total system resources and the entire network. The Maximum Transmission Unit (MTU) for the LAN network may also have to be changed. NetBackup expects the value for NET_BUFFER_SZ and SIZE_DATA_BUFFERS to be in bytes. For 32K, use 32768 (32 x 1024).


Note: Some Windows tape devices cannot write with block sizes higher than 65536 (64 kilobytes). Some Windows media servers cannot read backups on a UNIX media server with SIZE_DATA_BUFFERS set to more than 65536. The Windows media server would not be able to import or restore images from media that were written with SIZE_DATA_BUFFERS greater than 65536.

Note: The size of the shared data buffers for a restore is determined by the size of the shared data buffers in use at the time the backup was written. Restores do not use the SIZE_DATA_BUFFERS files.

Note on shared memory and NetBackup for NDMP

The following tables describe how NetBackup for NDMP uses shared memory.

Table 7-4 shows the effect of NUMBER_DATA_BUFFERS according to the type of NDMP backup.

Table 7-4 NetBackup for NDMP and number of data buffers

■ Local NDMP backup or three-way backup: NetBackup does not use shared memory (no data buffers). NUMBER_DATA_BUFFERS has no effect.

■ Remote NDMP backup: NetBackup uses shared memory. You can use NUMBER_DATA_BUFFERS to change the number of memory buffers.

Table 7-5 shows the effect of SIZE_DATA_BUFFERS_NDMP according to the type of NDMP backup.

Table 7-5 NetBackup for NDMP and size of data buffers

■ Local NDMP backup or three-way backup: NetBackup does not use shared memory (no data buffers). You can use SIZE_DATA_BUFFERS_NDMP to change the size of the records that are written to tape. Use SIZE_DATA_BUFFERS_DISK to change record size for NDMP disk backup.

■ Remote NDMP backup: NetBackup uses shared memory. You can use SIZE_DATA_BUFFERS_NDMP to change the size of the memory buffers and the size of the records that are written to tape. Use SIZE_DATA_BUFFERS_DISK to change buffer size and record size for NDMP disk backup.

The following is a brief description of NDMP three-way backup and remote NDMP backup:

■ In an NDMP three-way backup, the backup is written to an NDMP storage unit on a different NAS filer.

■ In remote NDMP backup, the backup is written to a NetBackup Media Manager-type storage device.

More information is available on these backup types.

See the NetBackup for NDMP Administrator’s Guide.

Recommended shared memory settings

The SIZE_DATA_BUFFERS setting for backup to tape is typically increased to 256 KB and NUMBER_DATA_BUFFERS is increased to 16. To configure NetBackup to use 16 x 256 KB data buffers, specify 262144 (256 x 1024) in SIZE_DATA_BUFFERS and 16 in NUMBER_DATA_BUFFERS.

Note that an increase in the size and number of the data buffers uses up more shared memory, which is a limited system resource. The total amount of shared memory that is used for each tape drive is:

(number_data_buffers * size_data_buffers) * number_tape_drives * max_multiplexing_setting

For two tape drives, each with a multiplexing setting of 4 and with 16 buffers of 256 KB, the total shared memory usage would be:

(16 * 262144) * 2 * 4 = 32768 KB (32 MB)

If large amounts of memory are to be allocated, the kernel may require additional tuning to provide enough shared memory for NetBackup.
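The formula above can be checked with shell arithmetic. The values below are the two-drive example from this section; substitute your own configuration figures to estimate shared memory needs before tuning the kernel.

```shell
# Shared memory used per tape drive formula, for the example configuration.
NUMBER_DATA_BUFFERS=16
SIZE_DATA_BUFFERS=262144        # bytes (256 KB)
NUMBER_TAPE_DRIVES=2
MAX_MULTIPLEXING_SETTING=4

TOTAL=$(( NUMBER_DATA_BUFFERS * SIZE_DATA_BUFFERS * NUMBER_TAPE_DRIVES * MAX_MULTIPLEXING_SETTING ))
echo "$(( TOTAL / 1024 )) KB ($(( TOTAL / 1024 / 1024 )) MB)"   # 32768 KB (32 MB)
```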

See “Kernel tuning (UNIX)” on page 195.


Note: AIX media servers do not need to tune shared memory because AIX uses dynamic memory allocation.

Make changes carefully, monitoring for performance changes with each modification. For example, an increase in the tape buffer size can cause some backups to run slower. Also, there have been cases with restore issues. After any changes, be sure to include restores as part of your validation testing.

Recommended number of data buffers for SAN Client and FT media server

For SAN Client Fibre Transport, the effective total number of data buffers is approximately twice the number of buffers specified for non-multiplexed backups. The reason is that the specified number of buffers are present on both the SAN Client and on the FT media server.

Note: It usually does not improve performance to increase memory buffers to a number that is significantly more than the SAN Client Fibre Transport default (16). Such an increase usually causes the majority of the buffers on either the client or server side to be empty.

Testing changes made to shared memory

After making any changes, it is vitally important to verify that the following tests complete successfully.

To test changes made to shared memory

1 Run a backup.

2 Restore the data from the backup.


3 Restore data from a backup that was created before the changes to the SIZE_DATA_BUFFERS_xxxx and NUMBER_DATA_BUFFERS_xxxx files.

4 Before and after altering the size or number of data buffers, examine the buffer usage information in the bptm debug log file.

The values in the log should match your buffer settings. The relevant bptm log entries are similar to the following:

12:02:55 [28551] <2> io_init: using 65536 data buffer size
12:02:55 [28551] <2> io_init: CINDEX 0, sched bytes for monitoring = 200
12:02:55 [28551] <2> io_init: using 8 data buffers

or

15:26:01 [21544] <2> mpx_setup_restore_shm: using 12 data buffers, buffer size is 65536

When you change these settings, take into consideration the total system resources and the entire network. The Maximum Transmission Unit (MTU) for the local area network (LAN) may also have to be changed.

Changing parent and child delay values

You can modify the parent and child delay values for a process.

To change the parent and child delay values

1 Create the following files:

UNIX

/usr/openv/netbackup/db/config/PARENT_DELAY

/usr/openv/netbackup/db/config/CHILD_DELAY

Windows

install_path\NetBackup\db\config\PARENT_DELAY

install_path\NetBackup\db\config\CHILD_DELAY

These files contain a single integer that specifies the value in milliseconds for the delay corresponding to the name of the file.

2 For example, for a parent delay of 50 milliseconds, enter 50 in the PARENT_DELAY file.

See “Using NetBackup wait and delay counters” on page 135.
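The procedure above could be sketched like this for a UNIX media server. The delay values are illustrative, and a scratch directory stands in for the documented config location so the sketch is safe to run anywhere.

```shell
# Sketch only: the documented UNIX location is /usr/openv/netbackup/db/config.
CONFIG_DIR=$(mktemp -d)

echo 50 > "$CONFIG_DIR/PARENT_DELAY"   # parent delay in milliseconds
echo 20 > "$CONFIG_DIR/CHILD_DELAY"    # child delay in milliseconds
cat "$CONFIG_DIR/PARENT_DELAY"         # 50
```

Keep in mind, as discussed later in this chapter, that these values are rarely changed in most installations.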


Using NetBackup wait and delay counters

During a backup or restore operation, the NetBackup media server uses a set of shared data buffers to isolate the process of communicating with the storage device (tape or disk) from the process of interacting with the client disk or network. Through the use of wait and delay counters, you can determine which process on the NetBackup media server has to wait more often: the data producer or the data consumer.

Achieving a good balance between the data producer and the data consumer processes is an important factor in achieving optimal performance from the NetBackup server component of the NetBackup data transfer path.

Figure 7-1 shows the producer-consumer relationship.

Figure 7-1 Producer-consumer relationship during a backup

[Figure 7-1 shows the NetBackup client sending data over the network to the bptm child process (the producer), which fills the shared buffers; the bptm parent process (the consumer) empties the buffers and writes to tape or disk.]

Understanding the two-part communication process

The two-part communication process varies depending on the following:

■ Whether the operation is a backup or restore

■ Whether the operation involves a local client or a remote client

Local clients

When the NetBackup media server and the NetBackup client are part of the same system, the NetBackup client is referred to as a local client.


Backup of local client: For a local client, the bpbkar (UNIX) or bpbkar32 (Windows) process reads data from the disk during backup and places it in shared buffers. The bptm process reads the data from the shared buffer and writes it to tape or disk.

Restore of local client: During a restore of a local client, the bptm process reads data from the tape or disk and places it in the shared buffers. The tar (UNIX) or tar32 (Windows) process reads the data from the shared buffers and writes it to disk.

Remote clients

When the NetBackup media server and the NetBackup client are part of two different systems, the NetBackup client is referred to as a remote client.

Backup of remote client: The bpbkar (UNIX) or bpbkar32 (Windows) process on the remote client reads data from the disk and writes it to the network. Then a child bptm process on the media server receives data from the network and places it in the shared buffers. The parent bptm process on the media server reads the data from the shared buffers and writes it to tape or disk.

Restore of remote client: During the restore of the remote client, the parent bptm process reads data from the tape or disk and places it into the shared buffers. The child bptm process reads the data from the shared buffers and writes it to the network. The tar (UNIX) or tar32 (Windows) process on the remote client receives the data from the network and writes it to disk.

Roles of processes during backup and restore

When a process attempts to use a shared data buffer, it first verifies that the next buffer in order is in a correct state. A data producer needs an empty buffer, while a data consumer needs a full buffer.

Table 7-6 provides a mapping of processes and their roles during backup and restore.

Table 7-6 Processes and roles

Operation         Data producer                          Data consumer
Local backup      bpbkar (UNIX) or bpbkar32 (Windows)    bptm
Remote backup     bptm (child)                           bptm (parent)
Local restore     bptm                                   tar (UNIX) or tar32 (Windows)
Remote restore    bptm (parent)                          bptm (child)

If the data consumer lacks a full buffer, it increments the wait and delay counters to indicate that it had to wait for a full buffer. After a delay, the data consumer checks again for a full buffer. If a full buffer is still not available, the data consumer increments the delay counter to indicate that it had to wait (delay) for a full buffer. The data consumer repeats the delay and full buffer check steps until a full buffer is available.

This sequence is summarized in the following algorithm:

while (Buffer_Is_Not_Full) {
    ++Wait_Counter;
    while (Buffer_Is_Not_Full) {
        ++Delay_Counter;
        delay (DELAY_DURATION);
    }
}

If the data producer lacks an empty buffer, it increments the wait and delay counters to indicate that it had to wait for an empty buffer. After a delay, the data producer checks again for an empty buffer. If an empty buffer is still not available, the data producer increments the delay counter to indicate that it had to delay again while waiting for an empty buffer. The data producer repeats the delay and empty buffer check steps until an empty buffer is available.

The algorithm for a data producer has a similar structure:

while (Buffer_Is_Not_Empty) {
    ++Wait_Counter;
    while (Buffer_Is_Not_Empty) {
        ++Delay_Counter;
        delay (DELAY_DURATION);
    }
}

Analysis of the wait and delay counter values indicates which process, producer or consumer, has had to wait most often and for how long.

Four wait and delay counter relationships exist, as follows:


Data Producer >> Data Consumer

The data producer has substantially larger wait and delay counter values than the data consumer. The data consumer is unable to receive data fast enough to keep the data producer busy.

Investigate means to improve the performance of the data consumer. For a backup, check if the data buffer size is appropriate for the tape or disk drive being used. If the data consumer still has a substantially large value in this case, try increasing the number of shared data buffers to improve performance.

Data Producer = Data Consumer (large value)

The data producer and the data consumer have very similar wait and delay counter values, but those values are relatively large. This situation may indicate that the data producer and data consumer regularly attempt to use the same shared data buffer. Try increasing the number of shared data buffers to improve performance.

See "Determining wait and delay counter values" on page 139.

Data Producer = Data Consumer (small value)

The data producer and the data consumer have very similar wait and delay counter values, but those values are relatively small. This situation indicates that there is a good balance between the data producer and data consumer. It should yield good performance from the NetBackup server component of the NetBackup data transfer path.

Data Producer << Data Consumer

The data producer has substantially smaller wait and delay counter values than the data consumer. The data producer is unable to deliver data fast enough to keep the data consumer busy.

Investigate ways to improve the performance of the data producer. For a restore operation, check if the data buffer size is appropriate for the tape or disk drive. If the data producer still has a relatively large value in this case, try increasing the number of shared data buffers to improve performance.

See "How to change the number of shared data buffers" on page 126.

Of primary concern is the relationship and the size of the values. Information on determining substantial versus trivial values appears on the following pages. The relationship of these values only provides a starting point in the analysis. Additional investigative work may be needed to positively identify the cause of a bottleneck within the NetBackup data transfer path.


Determining wait and delay counter values

Wait and delay counter values can be found by creating debug log files on the NetBackup media server.

Note: The debug log files introduce additional overhead and have a small effect on the overall performance of NetBackup. This effect is more noticeable for a high verbose level setting. Normally, you should not need to run with debug logging enabled on a production system.

To determine the wait and delay counter values for a local client backup

1 Activate debug logging by creating these two directories on the media server:

UNIX

/usr/openv/netbackup/logs/bpbkar

/usr/openv/netbackup/logs/bptm

Windows

install_path\NetBackup\logs\bpbkar

install_path\NetBackup\logs\bptm

2 Execute your backup.


3 Look at the log for the data producer (bpbkar on UNIX or bpbkar32 on Windows) process in:

UNIX

/usr/openv/netbackup/logs/bpbkar

Windows

install_path\NetBackup\logs\bpbkar

The line should be similar to the following, with a timestamp corresponding to the completion time of the backup:

... waited 224 times for empty buffer, delayed 254 times

In this example the wait counter value is 224 and the delay counter value is 254.

4 Look at the log for the data consumer (bptm) process in:

UNIX

/usr/openv/netbackup/logs/bptm

Windows

install_path\NetBackup\logs\bptm

The line should be similar to the following, with a timestamp corresponding to the completion time of the backup:

... waited for full buffer 1 times, delayed 22 times

In this example, the wait counter value is 1 and the delay counter value is 22.
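The counter values can be pulled out of a log line with a sed expression keyed to the message wording shown in these steps. The sample line below is the one from this procedure; the parsing pattern is an assumption based on that exact wording, so adjust it if your log format differs.

```shell
# Extract the wait and delay counters from a bptm-style debug log line.
LINE='... waited for full buffer 1 times, delayed 22 times'

WAIT=$(printf '%s\n' "$LINE"  | sed -n 's/.*waited for full buffer \([0-9][0-9]*\) times.*/\1/p')
DELAY=$(printf '%s\n' "$LINE" | sed -n 's/.*delayed \([0-9][0-9]*\) times.*/\1/p')

echo "wait=$WAIT delay=$DELAY"   # wait=1 delay=22
```

In practice you would feed the last matching line of the day's bptm or bpbkar log through the same sed expressions.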


To determine the wait and delay counter values for a remote client backup

1 Activate debug logging by creating this directory on the media server:

UNIX

/usr/openv/netbackup/logs/bptm

Windows

install_path\NetBackup\logs\bptm

2 Execute your backup.

3 Look at the log for the bptm process in:

UNIX

/usr/openv/netbackup/logs/bptm

Windows

install_path\NetBackup\logs\bptm

Delays that are associated with the data producer (bptm child) appear as follows:

... waited for empty buffer 22 times, delayed 151 times, ...

In this example, the wait counter value is 22 and the delay counter value is 151.

Delays that are associated with the data consumer (bptm parent) appear as:

... waited for full buffer 12 times, delayed 69 times

In this example the wait counter value is 12, and the delay counter value is 69.


To determine the wait and delay counter values for a local client restore

1 Activate logging by creating the two directories on the NetBackup media server:

UNIX

/usr/openv/netbackup/logs/bptm

/usr/openv/netbackup/logs/tar

Windows

install_path\NetBackup\logs\bptm

install_path\NetBackup\logs\tar

2 Execute your restore.

3 Look at the log for the data consumer (tar or tar32) in the tar log directory.

The line should be similar to the following, with a timestamp corresponding to the completion time of the restore:

... waited for full buffer 27 times, delayed 79 times

In this example, the wait counter value is 27, and the delay counter value is 79.

4 Look at the log for the data producer (bptm) in the bptm log directory.

The line should be similar to the following, with a timestamp corresponding to the completion time of the restore:

... waited for empty buffer 1 times, delayed 68 times

In this example, the wait counter value is 1 and the delay counter value is 68.


To determine the wait and delay counter values for a remote client restore

1 Activate debug logging by creating the following directory on the media server:

UNIX

/usr/openv/netbackup/logs/bptm

Windows

install_path\NetBackup\logs\bptm

2 Execute your restore.

3 Look at the log for bptm in the bptm log directory.

Delays that are associated with the data consumer (bptm child) appear as follows:

... waited for full buffer 36 times, delayed 139 times

In this example, the wait counter value is 36 and the delay counter value is 139.

Delays that are associated with the data producer (bptm parent) appear as follows:

... waited for empty buffer 95 times, delayed 513 times

In this example the wait counter value is 95 and the delay counter value is 513.

Note on log file creation

When you run multiple tests, you can rename the current log file. Renaming the file causes NetBackup to create a new log file, which prevents you from erroneously reading the wrong set of values.

Deleting the debug log file does not stop NetBackup from generating the debug logs. You must delete the entire directory. For example, to stop bptm from logging, you must delete the bptm subdirectory. NetBackup automatically generates debug logs at the specified verbose setting whenever the directory is detected.

Using wait and delay counter values to analyze issues

You can use the bptm debug log file to verify that the following tunable parameters have successfully been set to the desired values. You can use these parameters and the wait and delay counter values to analyze issues.


These additional values include the following:

Data buffer size: The size of each shared data buffer can be found on a line similar to:

... io_init: using 65536 data buffer size

Number of data buffers: The number of shared data buffers may be found on a line similar to:

... io_init: using 16 data buffers

Parent/child delay values: The values in use for the duration of the parent and child delays can be found on a line similar to:

... io_init: child delay = 10, parent delay = 15 (milliseconds)

NetBackup media server network buffer size: The network buffer size values on the media server can be found in lines similar to the following in the debug log files. The bptm child process reads from the receive network buffer during a remote backup:

...setting receive network buffer to 263168 bytes

The bptm child process writes to the network buffer during a remote restore:

...setting send network buffer to 131072 bytes

See "Setting the network buffer size for the NetBackup media server" on page 118.

Example of using wait and delay counter values

Suppose you wanted to analyze a local backup that has a 30-minute data transfer that is baselined at 5 MB per second. The backup involves a total data transfer of 9,000 MB. Because a local backup is involved, you can determine that bpbkar (UNIX) or bpbkar32 (Windows) is the data producer and bptm is the data consumer.

See “Roles of processes during backup and restore” on page 136.

Then you can determine the wait and delay values for bpbkar (or bpbkar32) and bptm by following the procedures that are described in the following:

See “Determining wait and delay counter values” on page 139.

For this example, suppose those values are the following:


Table 7-7 Examples for wait and delay

Process                                Wait      Delay
bpbkar (UNIX) or bpbkar32 (Windows)    29364     58033
bptm                                   95        105

These values reveal that bpbkar (or bpbkar32) is forced to wait by a bptm process that cannot move data out of the shared buffer fast enough.

Next, you can determine time lost due to delays by multiplying the delay counter value by the parent or child delay value, whichever applies.

In this example, the bpbkar (or bpbkar32) process uses the child delay value, while the bptm process uses the parent delay value. (The defaults for these values are 10 for child delay and 15 for parent delay.) The values are specified in milliseconds.

See “Changing parent and child delay values” on page 134.

You can use the following equations to determine the amount of time lost due to these delays:

Table 7-8 Example delays

Process                                Delay
bpbkar (UNIX) or bpbkar32 (Windows)    58033 delays X 0.020 seconds = 1160 seconds = 19 minutes 20 seconds
bptm                                   105 X 0.030 seconds = 3 seconds

Use these equations to determine if the delay for bpbkar (or bpbkar32) is significant. If this delay were removed, the resulting transfer time of 10:40 would indicate a throughput value of 14 MB per second, nearly a threefold increase. (10:40 = total transfer time of 30 minutes minus delay of 19 minutes and 20 seconds.) With this performance increase, you should investigate how the tape or disk performance can be improved.

The number of delays should be interpreted within the context of how much data was moved. As the amount of moved data increases, the significance threshold for counter values increases as well.

Again, for a total of 9,000 MB of data being transferred, assume a 64-KB buffer.


You can determine the total number of buffers to be transferred using the following equation:

Number_Kbytes = 9,000 X 1024 = 9,216,000 KB
Number_Slots  = 9,216,000 / 64 = 144,000

The wait counter value can now be expressed as a percentage of the total number of buffers transferred:

bpbkar (UNIX) or bpbkar32 (Windows)    = 29364 / 144,000 = 20.39%
bptm                                   = 95 / 144,000 = 0.07%

In the 20 percent of cases where bpbkar (or bpbkar32) needed an empty shared data buffer, bptm has not yet emptied the shared data buffer. A value of this size indicates a serious issue. You should investigate further as to why the data consumer (bptm) cannot keep up.

In contrast, the delays that bptm encounters are insignificant for the amount of data transferred.

You can also view the delay and wait counters as a ratio:

bpbkar (UNIX) or bpbkar32 (Windows)    = 58033 / 29364 = 1.98

In this example, on average bpbkar (or bpbkar32) had to delay twice for each wait condition that was encountered. If this ratio is large, increase the parent or child delay to avoid checking for a shared data buffer in the correct state too often. Conversely, if this ratio is close to 1, reduce the applicable delay value to check more often, which may increase your data throughput performance. Keep in mind that the parent and child delay values are rarely changed in most NetBackup installations.
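The arithmetic in this example can be reproduced with awk (shell integer arithmetic cannot express the fractional delay durations). All numbers are the ones given above; nothing here is NetBackup-specific.

```shell
# Reproduce the example's delay-time, percentage, and ratio calculations.
awk 'BEGIN {
    delay = 58033; wait = 29364    # bpbkar counters from Table 7-7
    child_delay = 0.020            # seconds per delay in this example
    lost = delay * child_delay     # total time lost to delays
    printf "time lost: %d min %d sec\n", lost / 60, lost % 60
    slots = 9000 * 1024 / 64       # 64 KB buffers for 9,000 MB of data
    printf "wait percentage: %.2f%%\n", 100 * wait / slots
    printf "delays per wait: %.2f\n", delay / wait
}'
```

This prints the 19 minutes 20 seconds, 20.39%, and 1.98 figures derived above.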

The preceding information explains how to determine if the values for wait and delay counters are substantial enough for concern. The wait and delay counters are related to the size of data transfer. A value of 1,000 may be extreme when only 1 megabyte of data is moved. The same value may indicate a well-tuned system when gigabytes of data are moved. The final analysis must determine how these counters affect performance.

Correcting issues uncovered by wait and delay counter values

You can correct issues by checking the following:

■ bptm-read waits

The bptm debug log contains messages such as the following:

...waited for full buffer 1681 times, delayed 12296 times

The first number is the number of times bptm waited for a full buffer: in other words, how many times the bptm write operations waited for data from the source. If the wait counter indicates a performance issue, a change in the number of buffers does not help. Multiplexing may help.

See "Determining wait and delay counter values" on page 139.

■ bptm-write waits

The bptm debug log contains messages such as the following:

...waited for empty buffer 1883 times, delayed 14645 times

The first number is the number of times bptm waited for an empty buffer: the number of times bptm encountered data from the source faster than the data can be written to tape or disk. If the wait counter indicates a performance issue, reduce the multiplexing factor. More buffers may help.

See "Determining wait and delay counter values" on page 139.

■ bptm delays

The bptm debug log contains messages such as the following:

...waited for empty buffer 1883 times, delayed 14645 times

The second number is the number of times bptm waited for an available buffer. If the delay counter indicates a performance issue, investigate. Each delay interval is 30 microseconds.

Estimating the impact of Inline Copy on backup performance

Inline Copy (multiple copies) takes one stream of data that the bptm buffers receive and writes the data to two or more destinations sequentially. The time to write to multiple devices is the same as the time required to write to one device multiplied by the number of devices. The overall write speed, therefore, is the write speed of a single device divided by the number of devices.

The write speed of a backup device is usually faster than the read speed of the source data. Therefore, switching to Inline Copy does not necessarily slow down the backup. The important figure is the write speed of the backup device: the native speed of the device multiplied by the compression ratio of the device hardware compression. For tape backups this compression ratio can be approximated by looking at how much data is held on a single tape (as reported by NetBackup). Compare that amount of data with the uncompressed capacity of a cartridge.

For example:

An LTO gen 3 cartridge has an uncompressed capacity of 400 GB. An LTO gen 3 drive has a native write speed of 80 MB per second. If a full cartridge contains 600 GB, the compression ratio is 600/400, or 1.5:1. Thus, the write speed of the drive is 1.5 * 80 = 120 MB per second.

If Inline Copy to two LTO gen 3 drives is used, the overall write speed is 120/2 = 60 MB per second.

If the backup normally runs at 45 MB per second (the read speed of the source data is 45 MB per second), Inline Copy does not affect the backup speed. If the backup normally runs at 90 MB per second, Inline Copy reduces the speed of the backup to 60 MB per second. The performance limit is moved from the read operation to the write operation.
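The arithmetic in this example can be expressed as a short sketch (effective_write_speed is a hypothetical helper using the LTO gen 3 figures quoted above):

```python
# Sketch: effective per-stream write speed under Inline Copy, per the
# example above: native drive speed times hardware compression ratio,
# divided by the number of copies written sequentially.
def effective_write_speed(native_mb_s, compression_ratio, num_copies):
    return native_mb_s * compression_ratio / num_copies

single = effective_write_speed(80, 1.5, 1)  # one LTO gen 3 drive: 120 MB/s
inline = effective_write_speed(80, 1.5, 2)  # Inline Copy to two drives: 60 MB/s

# The backup runs at whichever is slower: the source read speed or the
# effective write speed.
for read_speed in (45, 90):
    print(read_speed, "->", min(read_speed, inline))
```

For the 45 MB per second source the read side still limits the backup; for the 90 MB per second source, Inline Copy caps it at 60 MB per second, matching the example.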

Fragment size and NetBackup restores

This section describes how fragment size affects NetBackup restores for non-multiplexed and multiplexed images.

The fragment size affects where tape markers are placed and how many tape markers are used. (The default fragment size is 1 terabyte for tape storage units and 512 GB for disk.) As a rule, a larger fragment size results in faster backups, but may result in slower restores when recovering a small number of individual files.

The "Reduce fragment size to" setting on the Storage Unit dialog limits the largest fragment size of the image. By limiting the size of the fragment, the size of the largest read during a restore is minimized, which reduces restore time. The fragment size is especially important when restoring a small number of individual files rather than entire directories or file systems.

For many sites, a fragment size of approximately 10 GB results in good performance for backup and restore.


When choosing a fragment size, consider the following:

■ Larger fragment sizes usually favor backup performance, especially when backing up large amounts of data. Smaller fragments can slow down large backups. Each time a new fragment is created, the backup stream is interrupted.

■ Larger fragment sizes do not hinder performance when restoring large amounts of data. But when restoring a few individual files, larger fragments may slow down the restore.

■ Larger fragment sizes do not hinder performance when restoring from non-multiplexed backups. For multiplexed backups, larger fragments may slow down the restore. In multiplexed backups, blocks from several images can be mixed together within a single fragment. During a restore, NetBackup positions to the nearest fragment and starts reading the data from there, until it comes to the desired file. Splitting multiplexed backups into smaller fragments can improve restore performance.

■ During restores, newer, faster devices can handle large fragments well. Slower devices, especially if they do not use fast-locate block positioning, restore individual files faster if the fragment size is smaller. (In some cases, SCSI fast tape positioning can improve restore performance.)

Unless you have particular reasons for creating smaller fragments, larger fragment sizes are likely to yield better overall performance. For example, reasons for creating smaller fragments are the following: restoring a few individual files, restoring from multiplexed backups, or restoring from older equipment.

Restore of a non-multiplexed image

bptm positions to the media fragment and the actual tape block that contains the first file to be restored. If fast-locate is available, bptm uses that for the positioning. If fast-locate is not available, bptm uses MTFSF/MTFSR (forward space file mark/forward space record) to do the positioning.

The first file is then restored.

After that, for every subsequent file to be restored, bptm determines where that file is relative to the current position. It may be faster for bptm to position to that spot rather than to read all the data in between (if fast-locate is available). In that case, bptm uses positioning to reach the next file instead of reading all the data in between.

If fast-locate is not available, bptm can read the data as quickly as it can position with MTFSR (forward space record).

Therefore, fragment sizes for non-multiplexed restores matter if fast-locate is NOT available. With smaller fragments, a restore reads less extraneous data. You can set the maximum fragment size for the storage unit on the Storage Unit dialog in the NetBackup Administration Console (Reduce fragment size to).

Restore of a multiplexed image

bptm positions to the media fragment that contains the first file to be restored. If fast-locate is available, bptm uses that for the positioning. If fast-locate is not available, bptm uses MTFSF (forward space file mark) for the positioning. The restore cannot use "fine-tune" positioning to reach the block that contains the first file, because of the randomness of how multiplexed images are written. The restore starts to read, throwing away all the data (for this client and other clients). It continues throwing away data until it reaches the block that contains the first file.

The first file is then restored.

From that point, the logic is the same as for non-multiplexed restores, with one exception. If the current position and the next file position are in the same fragment, the restore cannot use positioning. It cannot use positioning for the same reason that it cannot use "fine-tune" positioning to get to the first file.

If the next file position is in a subsequent fragment (or on different media), the restore uses positioning to reach that fragment. The restore does not read all the data in between.

Thus, smaller multiplexed fragments can be advantageous. The optimal fragment size depends on the site's data and situation. For multi-gigabyte images, it may be best to keep fragments to 1 gigabyte or less. The storage unit attribute that limits fragment size is based on the total amount of data in the fragment. It is not based on the total amount of data for any one client.

When multiplexed images are written, each time a client backup stream starts or ends, the result is a new fragment. A new fragment is also created when a checkpoint occurs for a backup that has checkpoint restart enabled. So not all fragments are of the maximum fragment size. End-of-media (EOM) also causes new fragment(s).

Some examples may help illustrate when smaller fragments do and do not help restores.

Example 1:

Assume you want to back up four streams to a multiplexed tape. Each stream is a single, 1 GB file. A default maximum fragment size of 1 TB has been specified. The resultant backup image logically looks like the following. 'TM' denotes a tape mark or file mark, which indicates the start of a fragment.

TM <4 gigabytes data> TM


To restore one of the 1 GB files, the restore positions to the TM. It then has to read all 4 GB to get the 1 GB file.

If you set the maximum fragment size to 1 GB:

TM <1 GB data> TM <1 GB data> TM <1 GB data> TM <1 GB data> TM

this size does not help: the restore still has to read all four fragments to pull out the 1 GB of the file being restored.

Example 2:

This example is the same as Example 1, but assume that four streams back up 1 GB of /home or C:\. With the maximum fragment size (Reduce fragment size) set to the default of 1 TB (and assuming that all streams perform about the same), you again end up with:

TM <4 GB data> TM

Restoring the following

/home/file1

or

C:\file1

/home/file2

or

C:\file2

from one of the streams, NetBackup must read as much of the 4 GB as necessary to restore all the data. But if you set Reduce fragment size to 1 GB, the image looks like the following:

TM <1 GB data> TM <1 GB data> TM <1 GB data> TM <1 GB data> TM

In this case, /home/file1 or C:\file1 starts in the second fragment. bptm positions to the second fragment to start the restore of /home/file1 or C:\file1. (1 GB of reading is saved so far.) After /home/file1 is done, if /home/file2 or C:\file2 is in the third or fourth fragment, the restore can position to the beginning of that fragment before it starts reading.

These examples illustrate that whether fragmentation benefits a restore depends on the following: what the data is, what is being restored, and where in the image the data is. In Example 2, reducing the fragment size from 1 GB to half a GB (512 MB) increases the chance that the restore can locate by skipping instead of reading, when restoring small amounts of an image.
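A simplified model can make the trade-off concrete. The sketch below assumes a non-multiplexed, sequential layout in which the restore positions to the start of the fragment that holds the file and then reads to the end of the file; positioning time is ignored, and bytes_read is a hypothetical helper:

```python
import math

# Sketch: GB read from tape to restore one file, assuming the restore can
# position (skip) to the start of the fragment containing the file and must
# then read sequentially through to the end of the file.
def bytes_read(file_offset_gb, file_size_gb, fragment_gb):
    fragment_start = math.floor(file_offset_gb / fragment_gb) * fragment_gb
    return (file_offset_gb + file_size_gb) - fragment_start

# Restore a 1 GB file that starts 1 GB into the image:
print(bytes_read(1, 1, 1000))  # 1 TB fragments: reads 2 GB from image start
print(bytes_read(1, 1, 1))     # 1 GB fragments: skips straight to the file, reads 1 GB
```

For multiplexed images (as in Example 1), skipping within a fragment is not possible, so all interleaved data in the fragment is read regardless of which client's file is wanted.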


Fragmentation and checkpoint restart

If the policy's Checkpoint Restart feature is enabled, NetBackup creates a new fragment at each checkpoint. It creates the fragment according to the Take checkpoints every setting. For more information on Checkpoint Restart, refer to the NetBackup Administrator's Guide, Volume I.

Other restore performance issues

Common reasons for restore performance issues are described in the following subsections.

NetBackup catalog performance

The disk subsystem where the NetBackup catalog resides has a large impact on the overall performance of NetBackup. To improve restore performance, configure this subsystem for fast reads. The NetBackup binary catalog format provides scalable and fast catalog access.

NUMBER_DATA_BUFFERS_RESTORE setting

This parameter can help keep other NetBackup processes busy while a multiplexed tape is positioned during a restore. An increase in this value causes NetBackup buffers to occupy more physical RAM. This parameter only applies to multiplexed restores.

See “Shared memory (number and size of data buffers)” on page 124.

Index performance issues

Refer to "Indexing the Catalog for Faster Access to Backups" in the NetBackup Administrator's Guide, Volume I.

Improving search performance for many small backups

You can improve search performance when you have many small backup images.


To improve search performance

1 Run the following command as root on the master server:

UNIX

/usr/openv/netbackup/bin/admincmd/bpimage -create_image_list -client client_name

Windows

install_directory\bin\admincmd\bpimage -create_image_list -client client_name

where client_name is the name of the client that has many small backups.

In the following directory:

UNIX

/usr/openv/netbackup/db/images/client_name

Windows

install_path\NetBackup\db\images\client_name

the bpimage command creates the following files:

IMAGE_LIST: List of images for this client

IMAGE_INFO: Information about the images for this client

IMAGE_FILES: The file information for small images

2 Do not edit these files. They contain offsets and byte counts that are used to find and read the image information.

These files increase the size of the client directory.

Restore performance in a mixed environment

If you encounter restore performance issues in a mixed environment (UNIX and Windows), consider reducing the TCP wait interval parameter, tcp_deferred_ack_interval. Under Solaris 8, the default value of this parameter is 100ms. (Root privileges are required to change this parameter.)

The current value of tcp_deferred_ack_interval can be obtained by executing the following command (this example is for Solaris):

/usr/sbin/ndd -get /dev/tcp tcp_deferred_ack_interval


The value of tcp_deferred_ack_interval can be changed by executing the command:

/usr/sbin/ndd -set /dev/tcp tcp_deferred_ack_interval value

where value is the number that provides the best performance for the system. This approach may have to be tried and tested: the best value may vary from system to system. A suggested starting value is 20. In any case, the value must not exceed 500ms, otherwise it may break TCP/IP.

With the optimum value for the system, you can set the value permanently in a script under the following directory:

/etc/rc2.d

The script is then executed each time the system starts.

Multiplexing set too high

If multiplexing is too high, needless tape searching may occur. The ideal setting is the minimum needed to stream the drives.

Restores from multiplexed database backups

NetBackup can run several restores at the same time from a single multiplexed tape, by means of the MPX_RESTORE_DELAY option. This option specifies how long in seconds the server waits for additional restore requests of files or raw partitions that are in a set of multiplexed images on the same tape. The restore requests received within this period are executed simultaneously. By default, the delay is 30 seconds.

This option may be useful if multiple stripes from a large database backup are multiplexed together on the same tape. If the MPX_RESTORE_DELAY option is changed, you do not need to stop and restart the NetBackup processes for the change to take effect.

When the request daemon on the master server (bprd) receives the first stream of a multiplexed restore request, it triggers the MPX_RESTORE_DELAY timer. The timer starts counting the configured amount of time. bprd watches and waits for related multiplexed jobs from the same client before it starts the overall job. If another associated stream is received within the timeout period, it is added to the total job and the timer is reset to the MPX_RESTORE_DELAY period. When the timeout is reached without bprd receiving an additional stream, the timeout window closes. All associated restore requests are sent to bptm, and a tape is mounted. If any further associated restore requests arrive, they are queued until the tape that is now "In Use" is returned to an idle state.


If MPX_RESTORE_DELAY is not high enough, NetBackup may need to mount and read the tape multiple times to collect all header information for the restore. Ideally, NetBackup would read a multiplexed tape and collect all the required header information with a single pass of the tape. A single pass minimizes the restore time.

Example of restore from multiplexed database backup (Oracle)

Suppose that MPX_RESTORE_DELAY is not set in the bp.conf file, so its value is the default of 30 seconds. Suppose also that you initiate a restore from an Oracle RMAN backup that was backed up using 4 channels or 4 streams. You also use the same number of channels to restore.

RMAN passes NetBackup a specific data request, telling NetBackup what information it needs to start and complete the restore. The first request is received by NetBackup in 29 seconds, which causes the MPX_RESTORE_DELAY timer to be reset. The next request is received by NetBackup 22 seconds later; again the timer is reset. The third request is received 25 seconds later, resetting the timer a third time. But the fourth request is received 31 seconds after the third. Since the fourth request was not received within the restore delay interval, NetBackup starts three of the four restores. Instead of reading from the tape once, NetBackup queues the fourth restore request until the previous three requests are completed. Note that all of the multiplexed images are on the same tape. NetBackup mounts, rewinds, and reads the entire tape again to collect the multiplexed images for the fourth restore request.

In addition to NetBackup's reading the tape twice, RMAN waits to receive all the necessary header information before it begins the restore.

If MPX_RESTORE_DELAY is longer than 30 seconds, NetBackup can receive all four restore requests within the restore delay window. It collects all the necessary header information with one pass of the tape. Oracle can start the restore after this one tape pass, for better restore performance.
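The timer behavior in this example can be simulated with a small sketch (batch_requests is a hypothetical helper; the arrival gaps are the 22-, 25-, and 31-second intervals from the example):

```python
# Sketch: group restore requests into batches the way the MPX_RESTORE_DELAY
# timer does. Each arrival within the window resets the timer; a gap longer
# than the delay closes the current batch, which then goes to bptm together.
def batch_requests(arrival_gaps, mpx_restore_delay):
    """arrival_gaps[i] is the seconds between request i and request i+1."""
    batches, current = [], [0]           # request 0 opens the first batch
    for i, gap in enumerate(arrival_gaps, start=1):
        if gap <= mpx_restore_delay:
            current.append(i)            # within the window: timer resets
        else:
            batches.append(current)      # window expired: batch is dispatched
            current = [i]
    batches.append(current)
    return batches

gaps = [22, 25, 31]  # seconds between the four RMAN requests in the example
print(batch_requests(gaps, 30))  # [[0, 1, 2], [3]] -> two tape passes
print(batch_requests(gaps, 35))  # [[0, 1, 2, 3]]   -> one tape pass
```

With the default 30-second delay the fourth request falls outside the window and forces a second tape pass; a slightly longer delay collects all four requests in one pass, matching the example.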

Set the MPX_RESTORE_DELAY with caution, because it can decrease performance if set too high. Suppose that the MPX_RESTORE_DELAY is set to 1800 seconds. When the final associated restore request arrives, NetBackup resets the request delay timer as it did with the previous requests. NetBackup must wait for the entire 1800-second interval before it can start the restore.

Therefore, try to set the value of MPX_RESTORE_DELAY so that it is neither too high nor too low.


NetBackup storage device performance

This section looks at storage device functionality in the NetBackup data transfer path. Changes in these areas may improve NetBackup performance.

Tape drive wear and tear is much less, and efficiency is greater, if the data stream matches the tape drive capacity and is sustained. Most tape drives have slower throughput than disk drives. Match the number of drives and the throughput per drive to the speed of the SCSI/FC connection, and follow the hardware vendors' recommendations.

The following factors affect tape drives:

■ Media positioning
When a backup or restore is performed, the storage device must position the tape so that the data is over the read and write head. The positioning can take a significant amount of time. When you conduct performance analysis with media that contains multiple images, allow for the time lag that occurs before the data transfer starts.

■ Tape streaming
If a tape device is used at its most efficient speed, it is "streaming" the data onto the tape. If a tape device is streaming, the media rarely has to stop and restart. Instead, the media constantly spins within the tape drive. If the tape device is not used at its most efficient speed, it may continually start and stop the media from spinning. This behavior is the opposite of tape streaming and usually results in poor data throughput.

■ Data compression
Most tape devices support some form of data compression within the tape device itself. Compressible data (such as text files) yields a higher data throughput rate than non-compressible data, if the tape device supports hardware data compression.
Tape devices typically come with two performance rates: maximum throughput and nominal throughput. Maximum throughput is based on how fast compressible data can be written to the tape drive when hardware compression is enabled in the drive. Nominal throughput refers to rates achievable with non-compressible data.

Note: NetBackup cannot set tape drive data compression. Follow the instructions that are provided with your OS and tape drive.

In general, tape drive data compression is preferable to client (software) compression. Client compression may be desirable for reducing the amount of data that is transmitted across the network for a remote client backup.


See “Tape versus client compression” on page 168.


Chapter 8: Tuning other NetBackup components

This chapter includes the following topics:

■ Multiplexing and multiple data streams

■ Resource allocation

■ Encryption

■ Compression

■ Using encryption and compression

■ NetBackup Java

■ Vault

■ Fast recovery with Bare Metal Restore

■ Backing up many small files

■ Adjusting the read buffer for raw partition backup

■ Adjusting the allocation size of the snapshot mount point volume for NetBackup for VMware

■ NetBackup Operations Manager (NOM)

Multiplexing and multiple data streams

Consider the following factors regarding multiplexing and multiple data streams.


When to use multiplexing and multiple data streams

Multiple data streams can reduce the time for large backups. The reduction is achieved by first splitting the data to be backed up into multiple streams. Then you use multiplexing, multiple drives, or a combination of the two for processing the streams concurrently. In addition, you can configure the backup so each physical device on the client is backed up by a separate data stream. Each data stream runs concurrently with streams from other devices, to reduce backup times.

Note: For best performance, use only one data stream to back up each physical device on the client. Multiple concurrent streams from a single physical device can adversely affect the time to back up the device: the drive heads must move back and forth between tracks that contain the files for the respective streams.

Multiplexing is not recommended for database backups, when restore speed is of paramount interest, or when your tape drives are slow.

Backing up across a network, unless the network bandwidth is very broad, can nullify the ability to stream. Typically, a single client can send enough data to saturate a single 100BaseT network connection. A gigabit network has the capacity to support network streaming for some clients. Multiple streams use more of the client's resources than a single stream. Symantec recommends testing to make sure of the following: that the client can handle the multiple data streams, and that the high rate of data transfer does not affect users.

Multiplexing and multiple data streams can be powerful tools to ensure that all tape drives are streaming. With NetBackup, both can be used at the same time. Be careful to distinguish between the two concepts, as follows.

Multiplexing writes multiple data streams to a single tape drive.

Figure 8-1 shows multiplexing.

Figure 8-1 Multiplexing diagram

server

clients

back up to tape

The multiple data streams feature writes multiple data streams, each to its own tape drive, unless multiplexing is used.


Figure 8-2 shows multiple data streams.

Figure 8-2 Multiple data streams diagram

server

back up to tape

Consider the following about multiplexing:

■ Experiment with different multiplexing factors to find the one that is minimally sufficient for streaming.
Find a setting at which the writes are enough to fill the maximum bandwidth of your drive: that setting is the optimal multiplexing factor. If you get 5 MB per second from each of the read streams, use a multiplexing factor of two to get the maximum throughput to a DLT7000 (that is, 10 MB per second).

■ Use a higher multiplexing factor for incremental backups.

■ Use a lower multiplexing factor for local backups.

■ Expect the duplication of a multiplexed tape to take longer if it is demultiplexed (unless "Preserve Multiplexing" is specified on the duplication). Without "Preserve Multiplexing," the duplication may take longer because multiple read passes of the source tape must be made. Using "Preserve Multiplexing," however, may affect the restore time (see next bullet).

■ When you duplicate a multiplexed backup, demultiplex it.
By demultiplexing the backups when they are duplicated, the time for recovery is significantly reduced.
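The sizing rule from the first bullet (enough read streams to fill the drive's bandwidth) can be sketched as a quick calculation. min_multiplexing_factor is a hypothetical helper, and the second call uses made-up example rates:

```python
import math

# Sketch: minimal multiplexing factor that keeps a drive streaming, per the
# rule of thumb above: enough read streams to fill the drive's bandwidth.
def min_multiplexing_factor(drive_mb_s, stream_mb_s):
    return math.ceil(drive_mb_s / stream_mb_s)

print(min_multiplexing_factor(10, 5))    # the DLT7000 example: factor of 2
print(min_multiplexing_factor(120, 45))  # hypothetical 120 MB/s drive, 45 MB/s streams
```

Anything above this minimum adds restore-time cost without improving backup throughput, which is why the guide recommends the minimally sufficient factor.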

Consider the following about multiple data streams:

■ Do not use multiple data streams on single mount points.
The multiple data streams feature takes advantage of the ability to stream data from several devices at the same time. Streaming from several devices permits backups to take advantage of Read Ahead on a spindle or set of spindles in RAID environments. The use of multiple data streams from a single mount point encourages head thrashing and may result in degraded performance.


Only conduct multistreamed backups against single mount points if they are striped (RAID 0). However, degraded performance is a likely result.

Effects of multiplexing and multistreaming on backup and restore

Note the following:

■ Multiplexing
To use multiplexing effectively, you must understand the implications of multiplexing on restore times. Multiplexing may decrease backup time for large numbers of clients over slow networks, but it does so at the cost of recovery time. Restores from multiplexed tapes must pass over all non-applicable data. This action increases restore times. When recovery is required, demultiplexing causes delays in the restore: NetBackup must search the tape to accomplish the restore.
Restores should be tested to determine the impact of multiplexing on restore performance. Also, a smaller maximum fragment size when multiplexing may help restore performance.
See “Fragment size and NetBackup restores” on page 148.
When you initially set up a new environment, keep the multiplexing factor low. A multiplexing factor of four or less does not highly affect the speed of restores, depending on the type of drive or system. If the backups do not finish within their assigned window, multiplexing can be increased to meet the window. However, a higher multiplexing factor provides diminishing returns as the number of multiplexing clients increases. The optimum multiplexing factor is the number of clients that are needed to keep the buffers full for a single tape drive.
Set the multiplexing factor to four and do not multistream. Run benchmarks in this environment. Then you can begin to change the values until both the backup and restore window parameters are met.

■ Multiple data streams
The NEW_STREAM directive is useful for fine-tuning streams so that no disk subsystem is under-utilized or over-utilized.

Resource allocation

The following adjustments can be made to improve resource allocation.

Improving the assignment of resources to queued jobs

The parameters described in this topic were introduced in NetBackup 6.5.2.


In certain situations, nbrb may take too long to process jobs that are waiting for drives. This delay may occur when many jobs are queued for resources and the jobs are completing faster than nbrb can re-use the released resources for new jobs.

The following configuration file contains parameters that may improve nbrb performance in this situation.

UNIX:

/usr/openv/var/global/nbrb.conf

Windows:

install_path\Veritas\NetBackup\var\global\nbrb.conf

The following parameters can be configured:

■ SECONDS_FOR_EVAL_LOOP_RELEASE

■ RESPECT_REQUEST_PRIORITY

■ DO_INTERMITTENT_UNLOADS

Example format for these parameters in nbrb.conf:

SECONDS_FOR_EVAL_LOOP_RELEASE = 180

RESPECT_REQUEST_PRIORITY = 0

DO_INTERMITTENT_UNLOADS = 1

The following table describes the nbrb.conf parameters.

Table 8-1 nbrb.conf parameters

SECONDS_FOR_EVAL_LOOP_RELEASE

In NetBackup 6.5.2 and 6.5.3, the default value is 0. In NetBackup 6.5.4, the default value is 180.

If the value is 0, nbrb reverts to normal default behavior, evaluating all queued job requests before releasing any drives that have been released by completing jobs. When set to a nonzero value, SECONDS_FOR_EVAL_LOOP_RELEASE sets the time interval after which nbrb breaks into its evaluation cycle. After the specified interval, nbrb releases drives that have been given up by completed jobs, making them available for use by other jobs.


Table 8-1 nbrb.conf parameters (continued)

RESPECT_REQUEST_PRIORITY

In NetBackup 6.5.2, 6.5.3, and 6.5.4, the default value is 0.

This option only has effect if SECONDS_FOR_EVAL_LOOP_RELEASE is set to a nonzero value.

If RESPECT_REQUEST_PRIORITY is set to 1, nbrb restarts its evaluation queue at the top of the prioritized job queue after resources have been released when the SECONDS_FOR_EVAL_LOOP_RELEASE interval has passed.

If RESPECT_REQUEST_PRIORITY is set to 0, nbrb continues evaluating jobs in the prioritized job queue at the point where the evaluation cycle was interrupted for drive releases due to the SECONDS_FOR_EVAL_LOOP_RELEASE interval. As a result, a job is likely to reuse a drive more quickly after the drive has been released. Some lower priority jobs, however, may get drives before higher priority jobs do.

DO_INTERMITTENT_UNLOADS

In NetBackup 6.5.2 and 6.5.3, the default value is 0. In NetBackup 6.5.4, the default value is 1.

This option only has effect if SECONDS_FOR_EVAL_LOOP_RELEASE is set to a nonzero value.

If DO_INTERMITTENT_UNLOADS is set to 1, when resources are released after the SECONDS_FOR_EVAL_LOOP_RELEASE interval, nbrb initiates unloads of drives that have exceeded the media unload delay. Drives become available more quickly to jobs that require different media servers or different media than the job that last used the drive. However, the loaded media/drive pair may not be available for jobs further down in the prioritized evaluation queue that could use the drive/media without unload.

Note the following:


■ If nbrb.conf does not exist or parameters are not set in the file, the parameters assume their default values as described in the table.

■ The addition or modification of the nbrb.conf file does not require stopping and restarting NetBackup processes. The processes read nbrb.conf at the start of every evaluation cycle, and changes of any type are implemented at that time.

Sharing reservations

Before NetBackup 6.5.4, for image duplication or synthetic backup, NetBackup reserves all the required source (read) media exclusively. No other job can use the media while it is reserved, even if the job needs the media for read only (such as for restore, bpimport, or verify). If two or more duplication or synthetic jobs that require a common set of source media are started at the same time, one job starts and the other job waits until the first job terminates.

To improve performance, NetBackup 6.5.x allows shared reservations. (Starting in 6.5.4, NetBackup shares reservations by default.) With shared reservations, multiple jobs can reserve the same media, though only one job can use it at a time. In other words, the second job does not have to wait for the first job to terminate. The second job can access the media as soon as the first job is done with it.

To enable the sharing of reservations

◆ Create the following file:

On UNIX

/usr/openv/netbackup/db/config/RB_USE_SHARED_RESERVATIONS

On Windows

install_path\Veritas\NetBackup\db\config\RB_USE_SHARED_RESERVATIONS

Disabling the sharing of reservations

In 6.5.4, shared reservations are enabled by default.

See “Sharing reservations” on page 165.

In most cases, sharing reservations results in better performance.

However, it may be helpful to disable sharing reservations in the following case:

■ Many duplication jobs are running (using a storage lifecycle policy, Vault, or bpduplicate), and

■ Many read media are shared between different duplication jobs


In this case, without shared reservations, one job runs and other jobs requiring the same media are queued because they cannot get a reservation. With shared reservations, the jobs can start simultaneously. However, with a limited set of resources (media/drive pair or disk drives), resources may bounce or "ping-pong" between different jobs as each job requests the resource.

For example, assume the following:

Two duplication jobs, job 1 and job 2, are duplicating backup images. Job 1 is duplicating images 1 through 5, and job 2 is duplicating images 6 through 9. The images are on the following media:

Table 8-2 Media required by jobs 1 and 2

Media used by job 1          Media used by job 2
Image 1 is on media A1       Image 6 is on media A2
Image 2 is on media A2       Image 7 is on media A2
Image 3 is on media A2       Image 8 is on media A2
Image 4 is on media A2       Image 9 is on media A3
Image 5 is on media A3

In this example, both jobs require access to media A2. Without shared reservations, if job 1 gets the reservation first, job 2 cannot start, because it needs to reserve media A2. A2 is already reserved by job 1. With shared reservations, both jobs can start at the same time.

Assume, however, that only a few drives are available for writing. Also assume that job 1 begins first and starts duplicating image 1. Then job 2 starts using media A2 to duplicate image 6. Media A2 in effect bounces between the two jobs: sometimes it is used by job 1 and sometimes by job 2. As a result, the overall performance of both jobs may degrade.

In NetBackup 6.5.4 and later, you can use the following procedure to disable the sharing of reservations.

To disable the sharing of reservations

◆ Create the following file:

On UNIX

/usr/openv/netbackup/db/config/RB_DO_NOT_USE_SHARED_RESERVATIONS

On Windows

install_path\Veritas\NetBackup\db\config\RB_DO_NOT_USE_SHARED_RESERVATIONS


Adjusting the resource monitoring interval

The RESOURCE_MONITOR_INTERVAL option described in this topic was introduced in NetBackup 6.5.2.

For resource monitoring, the NetBackup nbrb process periodically runs bptm -rptdrv on each media server by means of the master server. By default, bptm -rptdrv is run every 10 minutes. bptm -rptdrv serves two functions:

■ Determines master-media connectivity

■ Determines if any allocations are pending or if bptm is hung

In certain cases, it may be advantageous to monitor these resources less often. If you have many media servers, bptm -rptdrv can take CPU cycles from nbrb and nbjm. Also, starting bptm -rptdrv on the media server and reporting on the data requires network bandwidth. bptm -rptdrv also causes a lot of activity on the EMM server.

To adjust the resource monitoring interval

◆ In the bp.conf file on the NetBackup master server, set the following option:

RESOURCE_MONITOR_INTERVAL = number_of_seconds

The allowed values are from 600 to 3600 seconds (10 minutes to 1 hour).
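The edit can be scripted with a guard for the allowed range. This sketch writes to a temporary file; on a real master server, the file to edit is /usr/openv/netbackup/bp.conf:

```shell
# Append RESOURCE_MONITOR_INTERVAL, rejecting values outside 600-3600 seconds.
interval=1800
if [ "$interval" -lt 600 ] || [ "$interval" -gt 3600 ]; then
    echo "interval must be 600-3600 seconds" >&2
    exit 1
fi
BP_CONF="$(mktemp)"    # stands in for /usr/openv/netbackup/bp.conf
printf 'RESOURCE_MONITOR_INTERVAL = %d\n' "$interval" >> "$BP_CONF"
cat "$BP_CONF"
```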

Disabling on-demand unloads

The NetBackup EMM service may ask the resource broker (nbrb) to unload drives even though the media unload delay has not expired. This request is called an on-demand unload. If allocating resources for a request is not possible without unloading the drive, EMM may ask nbrb to unload the drive.

It may be helpful to disable on-demand unloads when a series of small related backup jobs is scheduled (such as multiple NetBackup database agent jobs).

To disable on-demand unloads

◆ Create the following file:

On UNIX

/usr/openv/netbackup/db/config/RB_DISABLE_REAL_UNLOADS_ON_DEMAND

On Windows

install_path\Veritas\NetBackup\db\config\RB_DISABLE_REAL_UNLOADS_ON_DEMAND


Encryption

When the NetBackup client encryption option is enabled, your backups may run slower. How much slower depends on the throttle point in your backup path. If the network is the bottleneck, encryption should not hinder performance. If the network is not the bottleneck, then encryption may slow down the backup.

If you multi-stream encrypted backups on a client with multiple CPUs, try to define one fewer stream than the number of CPUs. For example, if the client has four CPUs, define three or fewer streams for the backup.

This approach can minimize CPU contention.

Compression

Two types of compression can be used with NetBackup: client compression (configured in the NetBackup policy) and tape drive compression (handled by the device hardware).

How to enable compression

NetBackup client compression can be enabled by selecting the compression option in the NetBackup Policy Attributes window.

How tape drive compression is enabled depends on your operating system and the type of tape drive. Check with the operating system and drive vendors, or read their documentation to find out how to enable tape compression.

With UNIX device addressing, these options are frequently part of the device name. A single tape drive has multiple names, each with a different functionality built into the name. (Multiple names are accomplished with major and minor device numbers.) On Solaris, if you address /dev/rmt/2cbn, you get drive 2 hardware-compressed with the no-rewind option. If you address /dev/rmt/2n, you get drive 2 uncompressed with the no-rewind option. The choice of device name determines device behavior.
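As an illustration of the naming scheme, the following sketch decodes the two suffix letters discussed above ('c' for hardware compression, 'n' for no-rewind). It is a simplified illustration, not a complete map of Solaris tape device naming:

```shell
# Decode the compression and rewind behavior implied by a tape device name.
decode() {
    name=${1##*/}                  # strip the /dev/rmt/ prefix, e.g. "2cbn"
    case $name in
        *c*n) echo "compressed, no-rewind" ;;
        *n)   echo "uncompressed, no-rewind" ;;
        *c*)  echo "compressed, rewind" ;;
        *)    echo "uncompressed, rewind" ;;
    esac
}
decode /dev/rmt/2cbn    # compressed, no-rewind
decode /dev/rmt/2n      # uncompressed, no-rewind
```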

If the media server is UNIX, there is no compression when the backup is to a disk storage unit. The compression options in this case are limited to client compression. If the media server with the disk storage unit is Windows, and the directory that is used by the disk storage unit is compressed, then compression is applied on the disk write, as for any file written to that directory by any application.

Tape versus client compression

Note the following:


■ The decision to use data compression should be based on the compressibility of the data itself.

Note the following levels of compressibility, in descending order:

■ Plain text: usually the most compressible type of data.

■ Executable code: may compress somewhat, but not as much as plain text.

■ Already compressed data: often, no further compression is possible.

■ Encrypted data: may expand in size if compression is applied. See “Using encryption and compression” on page 170.

■ Tape drive compression is almost always preferable to client compression. Compression is CPU intensive, and tape drives have built-in hardware to perform compression.

■ Avoid using both tape compression and client compression. Compressing data that is already compressed can increase the amount of backed-up data.

■ Only in rare cases is it beneficial to use client (software) compression.

Those cases usually include the following characteristics:

■ The client data is highly compressible.

■ The client has abundant CPU resources.

■ You need to minimize the data that is sent across the network between the client and server.

In other cases, however, NetBackup client compression should be turned off, and the hardware should handle the compression.

■ Client compression reduces the amount of data that is sent over the network, but increases CPU usage on the client.

■ On UNIX, the NetBackup client configuration setting MEGABYTES_OF_MEMORY may help client performance. This option sets the amount of memory available for compression. Do not compress files that are already compressed. To exclude files with certain suffixes from compression, refer to the NetBackup configuration option COMPRESS_SUFFIX. Edit this setting through bpsetconfig. See the NetBackup Administrator’s Guide, Volume II.


Using encryption and compression

If a policy is enabled for both encryption and compression, the client first compresses the backup data and then encrypts it. When data is encrypted, it becomes randomized and is no longer compressible. Therefore, data compression must be performed before any data encryption.

NetBackup Java

For performance improvement, refer to the following sections in the NetBackup Administrator’s Guide for UNIX and Linux, Volume I: "Configuring the NetBackup-Java Administration Console," and the subsection "NetBackup-Java Performance Improvement Hints." In addition, the NetBackup Release Notes may contain information about NetBackup Java performance.

Vault

Refer to the "Best Practices" chapter of the NetBackup Vault Administrator’s Guide.

Fast recovery with Bare Metal Restore

Veritas Bare Metal Restore (BMR) provides a simplified, automated method by which to recover an entire system (including the operating system and applications). BMR automates the restore process to ensure rapid, error-free recovery. This process requires one Bare Metal Restore command and then a system boot. BMR guarantees integrity and consistency and is supported for both UNIX and Windows systems.

Note: BMR requires the True image restore option. This option has implications for the size of the NetBackup catalog.

See “Calculate the size of your NetBackup catalog” on page 27.

Backing up many small files

NetBackup may take longer to back up many small files than a single large file.

The following may improve performance when backing up many small files:


■ Use the FlashBackup (or FlashBackup-Windows) policy type. FlashBackup is a feature of NetBackup Snapshot Client. FlashBackup is described in the NetBackup Snapshot Client Administrator’s Guide. See “Notes on FlashBackup performance” on page 171.

■ On Windows, make sure virus scans are turned off (turning off scans may double performance).

■ Snap a mirror (such as with the FlashSnap method in Snapshot Client) and back that up as a raw partition. Unlike FlashBackup, this type of backup does not allow individual file restore.

Try the following to improve performance:

■ Turn off or reduce logging. The NetBackup logging facility has the potential to affect the performance of backup and recovery processing. Logging is usually enabled temporarily, to troubleshoot a NetBackup problem. The amount of logging and its verbosity level can affect performance.

■ Make sure the NetBackup buffer size is the same size on both the servers and clients.

■ Consider upgrading NIC drivers as new releases appear.

■ Run the following bpbkar throughput test on a Windows client:

C:\Veritas\Netbackup\bin\bpbkar32 -nocont path > NUL 2> log_file

For example:

C:\Veritas\Netbackup\bin\bpbkar32 -nocont c:\ > NUL 2> temp.f

■ When initially configuring the Windows server, optimize TCP/IP throughput as opposed to shared file access.

■ Always boost background performance on Windows versus foreground performance.

■ Turn off NetBackup Client Job Tracker if the client is a system server.

■ Regularly review the patch announcements for every server OS. Install patches that affect TCP/IP functions, such as correcting out-of-sequence delivery of packets.

Notes on FlashBackup performance

You can adjust FlashBackup performance in the following ways.


Using FlashBackup with a copy-on-write snapshot method

If you use the FlashBackup feature with a copy-on-write method such as nbu_snap, assign the snapshot cache device to a separate hard drive. A separate hard drive reduces disk contention and the potential for head thrashing.

Refer to the NetBackup Snapshot Client Administrator’s Guide for more information on FlashBackup configuration.

Adjusting the read buffer for FlashBackup and FlashBackup-Windows

If the storage unit write speed is fast, reading the client disk may become a bottleneck during a FlashBackup raw partition backup. By default, FlashBackup (on UNIX) reads the raw partition using fixed 128 KB buffers for full backups and 32 KB buffers for incrementals. FlashBackup-Windows, by default, reads the raw partition using fixed 32 KB buffers for full backups and for incrementals.

In most cases, the default read buffer size allows FlashBackup to stay ahead of the storage unit write speed. To minimize the number of I/O waits when reading client data, you can tune the FlashBackup read buffer size. Tuning this buffer allows NetBackup to read contiguous device blocks up to 1 MB per I/O wait, depending on the disk driver. The read buffer size can be adjusted separately for full backup and for incremental backup.

In general, a larger buffer yields a faster raw partition backup (but see the following note). In the case of VxVM striped volumes, the read buffer can be configured as a multiple of the striping block size: data can be read in parallel from the disks, speeding up raw partition backup.

Note: Resizing the read buffer for incremental backups can result in a faster backup in some cases, and a slower backup in others. Experimentation may be necessary to achieve the best setting.

The result of the resizing depends on the following factors:

■ The location of the data to be read

■ The size of the data to be read relative to the size of the read buffer

■ The read characteristics of the storage device and the I/O stack.


To adjust the FlashBackup read buffer for UNIX and Linux clients

1 Create the following touch file on each client:

/usr/openv/netbackup/FBU_READBLKS

2 Enter the values in the FBU_READBLKS file, as follows.

On the first line of the file, enter an integer value for the read buffer size in blocks for full backups and/or for incremental backups. The defaults are 256 blocks (131072 bytes, or 128 KB) for full backups and 64 blocks (32768 bytes, or 32 KB) for incremental backups. The number of blocks is equal to (KB size * 2), or (number of bytes / 512).

To change both values, separate them with a space.

For example:

512 128

This entry sets the full backup read buffer to 256 KB and the incremental read buffer to 64 KB.

You can use the second line of the file to set the tape record write size, also in blocks. The default is the same size as the read buffer. The first entry on the second line sets the full backup write buffer size. The second value sets the incremental backup write buffer size. To set the read buffer size and tape record write size to the same values, the file would read altogether as:

512 128

512 128
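The block arithmetic above can be sketched as follows. The sketch writes the example file to a temporary path; on a real client the file is /usr/openv/netbackup/FBU_READBLKS:

```shell
# blocks = KB * 2, because each block is 512 bytes.
full_kb=256; incr_kb=64
full_blocks=$((full_kb * 2))    # 512 blocks for the full-backup buffer
incr_blocks=$((incr_kb * 2))    # 128 blocks for the incremental buffer
FBU_FILE="$(mktemp)"            # stands in for /usr/openv/netbackup/FBU_READBLKS
# Line 1: read buffer sizes; line 2: tape record write sizes (same values here).
printf '%d %d\n%d %d\n' "$full_blocks" "$incr_blocks" \
                        "$full_blocks" "$incr_blocks" > "$FBU_FILE"
cat "$FBU_FILE"
```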

To adjust the FlashBackup-Windows read buffer for Windows clients (NetBackup 6.5.4)

1 Click Host Properties > Clients, right-click the client, and select Properties.

Click Windows Client > Client Settings.

2 For Raw partition read buffer size, specify the size of the read buffer.

A read buffer size larger than the 32 KB default may increase backup speed. Results vary from one system to another; experimentation may be required. A setting of 1024 may be a good starting point.

Note the following:

■ This setting applies to raw partition backups as well as to FlashBackup-Windows policies (including NetBackup for VMware).

■ This setting applies to full backups and to incremental backups.


Adjusting the read buffer for raw partition backup

This topic applies to NetBackup 6.5.4.

See “To adjust the FlashBackup-Windows read buffer for Windows clients (NetBackup 6.5.4)” on page 173.

Adjusting the allocation size of the snapshot mount point volume for NetBackup for VMware

This topic applies to NetBackup 6.5.4.

To increase the speed of full virtual machine backups, try increasing the file system allocation size of the volume that is used as the snapshot mount point on the VMware backup proxy server. A larger allocation size, for instance 64 KB, may result in faster backups. Results can vary from one system to another; experimentation may be required.

For further information on the snapshot mount point and the VMware backup proxy server, refer to the NetBackup for VMware Administrator's Guide.

For a different tuning suggestion for NetBackup for VMware, see the following:

See “Adjusting the read buffer for FlashBackup and FlashBackup-Windows” on page 172.

NetBackup Operations Manager (NOM)

The settings in this section can be modified to adjust NOM performance.

Adjusting the NOM server heap size

If the NOM server processes are consuming a lot of memory (which may happen with large NOM configurations), increase the NOM server heap size.

To increase the NOM server heap size from 512 MB to 2048 MB on Solaris servers

1 Open the /opt/VRTSnom/bin/nomsrvctl file.

2 To increase the heap size to 2048 MB, edit the MAX_HEAP parameter as follows:

MAX_HEAP=-Xmx2048m


3 Save the nomsrvctl file.

4 Stop and restart the NOM processes, as follows:

/opt/VRTSnom/bin/NOMAdmin -stop_service

/opt/VRTSnom/bin/NOMAdmin -start_service
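The edit in step 2 can also be made with sed. This sketch works on a sample file; the real path is /opt/VRTSnom/bin/nomsrvctl, and backing it up first is prudent:

```shell
# Raise the Java heap by rewriting the MAX_HEAP line in place.
f="$(mktemp)"                       # stands in for /opt/VRTSnom/bin/nomsrvctl
echo 'MAX_HEAP=-Xmx512m' > "$f"     # sample content with the default heap
sed -i.bak 's/^MAX_HEAP=.*/MAX_HEAP=-Xmx2048m/' "$f"
grep '^MAX_HEAP=' "$f"
```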

To increase the NOM server heap size from 512 MB to 2048 MB on Windows servers

1 Open the Registry Editor and go to the following location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VRTSnomSrvr\Parameters

2 Increase the JVM Option Number 0 value.

For example, enter -Xmx2048m to increase the heap size to 2048 MB.

3 Stop and restart the NOM services, as follows:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin.bat -stop_service

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin.bat -start_service

Adjusting the NOM web server heap size

If you notice poor performance in the NOM console and restarting the NOM Web service fixes the problem, increase the NOM web server heap size. The NOM web server heap size can be increased from 256 MB (the default) up to 2048 MB.

To change the heap size to 512 MB on Solaris servers

1 Run the webgui command to change the heap size.

To change the heap size to 512 MB, run the following:

/opt/VRTSweb/bin/webgui maxheap 512

2 Stop and restart the NOM processes, as follows:

/opt/VRTSnom/bin/NOMAdmin -stop_service

/opt/VRTSnom/bin/NOMAdmin -start_service


To change the heap size to 512 MB on Windows servers

1 Run webgui.exe.

For example:

install_path\VRTSweb\bin\webgui.exe maxheap 512

2 Stop and restart the NOM services, as follows:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin.bat -stop_service

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin.bat -start_service

Adjusting the Sybase cache size

The amount of memory available for database server cache is an important factor controlling NOM performance. Symantec recommends that you adjust the Sybase cache size after installing NOM. After you install NOM, the database size can grow rapidly as you add more master servers.

Sybase automatically adjusts the cache size for optimum performance. You can also set the cache size using the -c server option.


To set the cache size using the -c server option on Solaris servers

1 Open the /opt/VRTSnom/var/global/nomserver.conf file and change the value of the -c option.

For example, you can increase the Sybase cache size to 512 MB by changing the nomserver.conf file content. The following:

-n NOM_nom-sol6

-x tcpip(BROADCASTLISTENER=0;DOBROADCAST=NO;ServerPort=13786;)

-gp 8192 -ct+ -gd DBA -gk DBA -gl DBA -ti 60 -c 25M -ch 500M -cl 25M -ud -m

should be changed to:

-n NOM_nom-sol6

-x tcpip(BROADCASTLISTENER=0;DOBROADCAST=NO;ServerPort=13786;)

-gp 8192 -ct+ -gd DBA -gk DBA -gl DBA -ti 60 -c 512M -cs -ud -m

This example replaced -c 25M -ch 500M -cl 25M with -c 512M -cs in the nomserver.conf file to increase the cache size to 512 MB. In the same manner, to increase the cache size to 1 GB, replace -c 25M -ch 500M -cl 25M with -c 1G -cs.

The -ch and -cl server options set the maximum and the minimum cache size, respectively. The -cs option logs the cache size changes for the database server.

2 Save the nomserver.conf file.

3 Stop and restart the NOM processes, as follows:

/opt/VRTSnom/bin/NOMAdmin -stop_service

/opt/VRTSnom/bin/NOMAdmin -start_service

The logs for the cache size changes are stored in:

/opt/VRTSnom/logs/nomdbsrv.log.
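The substitution in step 1 can be scripted the same way. This sketch edits a sample line rather than the live /opt/VRTSnom/var/global/nomserver.conf:

```shell
# Swap the default cache options for a fixed 512 MB cache with change logging.
conf="$(mktemp)"    # stands in for /opt/VRTSnom/var/global/nomserver.conf
echo '-gp 8192 -ct+ -gd DBA -gk DBA -gl DBA -ti 60 -c 25M -ch 500M -cl 25M -ud -m' > "$conf"
sed -i.bak 's/-c 25M -ch 500M -cl 25M/-c 512M -cs/' "$conf"
cat "$conf"
```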


To set the cache size using the -c server option on Windows servers

1 Open the install_path\db\conf\server.conf file.

For example, to increase the cache size to 512 MB, add -c 512M -cs to the content of the server.conf file:

-n NOM_PUNENOM -x tcpip(BROADCASTLISTENER=0;DOBROADCAST=NO;ServerPort=13786) -o "install_path\db\log\server.log" -m

should be changed to

-n NOM_PUNENOM -x tcpip(BROADCASTLISTENER=0;DOBROADCAST=NO;ServerPort=13786) -o "install_path\db\log\server.log" -c 512M -cs -m

The -cs option logs the cache size changes for the database server.

In the same manner, to increase the cache size to 1 GB, you should add -c 1G -cs to the content of the server.conf file.

2 Stop and restart the NOM services, as follows:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin.bat -stop_service

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin.bat -start_service

The logs for the cache size changes are stored in install_path\db\log\server.log.

Saving NOM databases and database logs on separate hard disks

To improve NOM performance, NOM database files and the log files associated with the NOM databases should be stored on separate hard disks. You can store the NOM database files on one hard disk and the log files on another hard disk.

Symantec also recommends that you not store the database files on the hard disk that contains your operating system files.

Use the following procedures to move the NOM database and log files to a different hard disk. The first two procedures are for moving the NOM database files on Windows or Solaris. The last two procedures are for moving the database log files.


To move the NOM database to a different hard disk on Windows

1 Stop all the NOM services by entering the following command:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin -stop_service

2 Open the databases.conf file with a text editor from the following directory:

install_path\NetBackup Operations Manager\db\conf

This file has the following contents:

"install_path\NetBackup Operations Manager\db\data\vxpmdb.db"

This path specifies the default location of the NOM database.

3 To move the database to a different location such as E:\Database on a different hard disk, replace the contents of the file with the following:

"E:\Database\vxpmdb.db"

Make sure that you specify the path in double quotes. The directories in the specified path should not contain any special characters such as %, ~, !, @, $, &, >, or #. For example, do not specify a path such as E:\Database%.

4 Save the databases.conf file.

5 Copy the database files to the new location.

Copy vxpmdb.db and vxam.db from install_path\NetBackup Operations Manager\db\data to the new location, such as E:\Database.

Note: You must move both vxam.db and vxpmdb.db to the new location.

6 Restart all the NOM services:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin -start_service

To move the NOM database to a different hard disk on Solaris

1 Stop all NOM services by entering the following command:

/opt/VRTSnom/bin/NOMAdmin -stop_service

2 The default location of the NOM database on Solaris is /opt/VRTSnom/db/data. To move the database to a different location such as /usr/mydata, enter the following command:

mv /opt/VRTSnom/db/data /usr/mydata


3 Create a symbolic link to /usr/mydata in /opt/VRTSnom/db/data:

ln -s /usr/mydata /opt/VRTSnom/db/data

4 Restart all NOM services:

/opt/VRTSnom/bin/NOMAdmin -start_service
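Steps 2 and 3 together relocate the data and leave a symbolic link at the old path. Sketched here in a scratch area; on a real server the paths are /opt/VRTSnom/db/data and your chosen target, such as /usr/mydata:

```shell
# Move the database directory, then point the old path at the new location.
base="$(mktemp -d)"                 # scratch root standing in for /
mkdir -p "$base/opt/VRTSnom/db/data" "$base/usr"
mv "$base/opt/VRTSnom/db/data" "$base/usr/mydata"
ln -s "$base/usr/mydata" "$base/opt/VRTSnom/db/data"
ls -ld "$base/opt/VRTSnom/db/data"  # now a symlink to the relocated data
```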

To move the database log files to a different hard disk on Windows

1 Stop all NOM services:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin -stop_service

2 Navigate to the following location:

install_path\NetBackup Operations Manager\db\WIN32

Enter the following commands:

dblog -t directory_path\vxpmdb.log database_path\vxpmdb.db

dblog -t directory_path\vxam.log database_path\vxam.db

where directory_path is the path where you want to store the database logs and database_path is the path where your database is located.

These commands move the log files associated with the NOM databases to the new directory (directory_path). It is recommended to use vxpmdb.log and vxam.log as the names of the log files.

3 Restart all NOM services:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin.bat -start_service

To move the database log files to a different hard disk on Solaris

1 Stop all the NOM services:

/opt/VRTSnom/bin/NOMAdmin -stop_service

2 Set the path of the LD_LIBRARY_PATH variable in the following manner:

LD_LIBRARY_PATH=/opt/VRTSnom/db/lib:$LD_LIBRARY_PATH

export LD_LIBRARY_PATH


3 Navigate to the following location:

/opt/VRTSnom/db/bin

Enter the following commands:

./dblog -t directory_path/vxpmdb.log database_path/vxpmdb.db

./dblog -t directory_path/vxam.log database_path/vxam.db

where directory_path is the path where you want to store your database log file and database_path is the path where the NOM database is located.

These commands move the log files associated with the NOM databases to the new directory (directory_path). It is recommended to use vxpmdb.log and vxam.log as the names of the log files.

4 Restart all NOM services:

/opt/VRTSnom/bin/NOMAdmin -start_service

Defragmenting NOM databases

For optimum performance, defragment the NOM databases periodically and after a purge operation.

To defragment the NOM database, you must first export and then import the database. You must run the export and import commands consecutively (without any time gap) to avoid data loss.


To defragment the NOM primary and alerts databases in NOM 6.5

1 Start NOMAdmin:

Windows:

install_path\NetBackup Operations Manager\bin\admincmd\NOMAdmin

Solaris:

/opt/VRTSnom/bin/NOMAdmin

2 Enter the following commands:

NOMAdmin -export directory_name

NOMAdmin -import directory_name

The directory location (directory_name) must be the same in both commands.

3 For NOM 6.5.1, defragment the database:

NOMAdmin -defrag

Purge data periodically

You should purge the NOM data periodically. For optimum performance and scalability, Symantec recommends that you manage approximately a month of historical data. See the "Database maintenance utilities (NOMAdmin)" section in the NetBackup Operations Manager Guide for commands to purge the alerts and jobs data.

Note: The NOM databases should be defragmented after a purge operation.

NOM performance and floating-point calculations

The NetBackup Operations Manager (NOM) performs certain calculations that require floating-point math. If the server's processor does not have a dedicated floating-point unit, the calculations are handled in floating-point emulation.

Note: Emulation mode slows down NOM performance. Symantec does not recommend running NOM on any server that lacks a dedicated floating-point unit.


Tuning disk I/O performance

This chapter includes the following topics:

■ Hardware selection

■ Hardware performance hierarchy

■ Hardware configuration examples

■ Tuning software for better performance

Hardware selection

The critical factors in performance are not software-based. The critical factors are hardware selection and configuration. Hardware has roughly four times the weight that software has in determining performance.

Hardware performance hierarchy

The figure shows two disk arrays and a single non-disk device (tape, Ethernet connections, and so forth).

Figure 9-1 shows the key hardware elements that affect performance, and the interconnections (levels) between them.


Figure 9-1 Performance hierarchy diagram

[Diagram: five numbered levels of interconnection link storage to the host. Within each shelf, drives attach to the shelf adaptor (Level 1). Shelves attach to the RAID controllers of Array 1 and Array 2 (Level 2). The arrays, and a non-disk device such as tape or Ethernet, attach over Fibre Channel to PCI cards (Level 3). The PCI cards connect across PCI buses to PCI bridges, and the bridges to host memory (Levels 4 and 5).]

Performance hierarchy levels are described in later sections of this chapter.

In general, all data that goes to or comes from disk must pass through host memory.

Figure 9-2 includes a dashed line that shows the path that the data takes through a media server.


Figure 9-2 Data stream in NetBackup media server to arrays

[Diagram: the same hierarchy as Figure 9-1, reduced to one array and one non-disk device, with a dashed line tracing data movement through host memory: in from the tape/Ethernet device through a PCI card, PCI bus, and PCI bridge to host memory, then out through a PCI bridge, PCI bus, PCI card, and Fibre Channel to the RAID controller, shelf, and drives of Array 2.]

The data moves up through the Ethernet PCI card at the far right. The card sends the data across the PCI bus and through the PCI bridge into host memory. NetBackup then writes this data to the appropriate location. In a disk example, the data passes through one or more PCI bridges. It then passes over one or more PCI buses, through one or more PCI cards, across one or more Fibre Channels, and so on.

Sending data through more than one PCI card increases bandwidth by breaking up the data into large chunks and sending a group of chunks at the same time to multiple destinations. For example, a write of 1 MB can be split into 2 chunks that go to 2 different arrays at the same time. If the path to each array is x bandwidth, the aggregate bandwidth is approximately 2x.

Each level in the performance hierarchy diagram represents the transitions over which data flows. These transitions have bandwidth limits.

Between each level there are elements that can affect performance as well.

Performance hierarchy level 1

Level 1 is the interconnection within a typical disk array. Level 1 attaches individual disk drives to the adaptor on each disk shelf. A shelf is a physical entity placed into a rack. Shelves usually contain around 15 disk drives. If you use Fibre Channel drives, the level 1 interconnection is 1 or 2 Fibre Channel arbitrated loops (FC-AL). When Serial ATA (SATA) drives are used, the level 1 interconnect is the SATA interface.

Figure 9-3 shows the performance hierarchy level 1.


Figure 9-3 Performance hierarchy level 1

[Diagram: Level 1 detail. Within each of four shelves, the drives attach to that shelf's adaptor. The non-disk device (tape, Ethernet, or other) sits outside this level.]

Level 1 bandwidth potential is determined by the technology used.

For FC-AL, the arbitrated loop can be either 1-gigabit or 2-gigabit Fibre Channel. An arbitrated loop is a shared-access topology, which means that only 2 entities on the loop can communicate at one time. For example, one disk drive and the shelf adaptor can communicate. So even though a single disk drive might be capable of 2-gigabit bursts of data transfer, the bandwidth does not aggregate: multiple drives cannot communicate with the shelf adaptor at the same time to produce multiples of the individual drive bandwidth.

Performance hierarchy level 2

Level 2 is the interconnection external to the disk shelf. It attaches one or more shelves to the array RAID controller. This interconnection is usually FC-AL, even if the drives in the shelf are something other than Fibre Channel (SATA, for example). This shared-access topology allows only one pair of endpoints to communicate at any given time.

Figure 9-4 shows the performance hierarchy level 2.

Figure 9-4 Performance hierarchy level 2

[Figure: two arrays, each with a RAID controller and four shelves on the Level 2 interconnect; a tape, Ethernet, or other non-disk device is also shown]

Larger disk arrays have more than one internal FC-AL. Shelves may even support 2 FC-ALs, so that two paths exist between the RAID controller and every shelf, which provides for redundancy and load balancing.


Performance hierarchy level 3

Level 3 is the interconnection external to the disk array and host.

Figure 9-5 shows the performance hierarchy level 3.

Figure 9-5 Performance hierarchy level 3

[Figure: two arrays, each connected to the host over its own Fibre Channel link]

This diagram shows a single point-to-point connection between an array and the host. A real-world scenario typically includes a SAN fabric (with one or more Fibre Channel switches). The logical result is the same: in either case there is a data path between the array and the host.

When these paths are not arbitrated loops (for example, if they are fabric Fibre Channel), they do not have the shared-access topology limitations. That is, two arrays may be connected to a Fibre Channel switch, and the host may have a single Fibre Channel connection to the switch. The arrays can then communicate at the same time (the switch coordinates with the host Fibre Channel connection). However, this arrangement does not aggregate bandwidth, because the host is still limited to a single Fibre Channel connection.

Fibre Channel is generally 1 or 2 gigabit (in both arbitrated loop and fabric topologies). Faster speeds are available. A general rule of thumb, which allows for protocol overhead, is to divide the gigabit rate by 10 to get an approximate megabyte-per-second bandwidth. So, 1-gigabit Fibre Channel can theoretically achieve approximately 100 MB per second, and 2-gigabit Fibre Channel approximately 200 MB per second.
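The rule of thumb can be expressed as a small sketch of the arithmetic only:

```python
# Rule of thumb from the text: divide the link's gigabit rate by 10 to
# estimate usable MB/s after protocol overhead (1 Gb FC ~ 100 MB/s).

def fc_mb_per_sec(gigabits):
    megabits = gigabits * 1000  # 1 gigabit = 1000 megabits
    return megabits // 10       # divide by 10, not 8, to allow for overhead

print(fc_mb_per_sec(1), fc_mb_per_sec(2))  # 100 200
```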

Fibre Channel is also similar to traditional LANs in that a given interface can support multiple connection rates. That is, a 2-gigabit Fibre Channel port can also connect to devices that only support 1 gigabit.

Performance hierarchy level 4

Level 4 is the interconnection within a host for the attachment of PCI cards.


Figure 9-6 shows the performance hierarchy level 4.

Figure 9-6 Performance hierarchy level 4

[Figure: three PCI cards on three PCI buses, attached to the host through two PCI bridges]

A typical host supports 2 or more PCI buses, with each bus supporting 1 or more PCI cards. A bus has a topology similar to FC-AL: only 2 endpoints can communicate at the same time. That is, if 4 cards are on a PCI bus, only one of them can communicate with the host at a given instant. Multiple PCI buses are implemented to allow multiple data paths to communicate at the same time, for aggregate bandwidth gains.

PCI buses have 2 key factors that determine bandwidth potential: the width of the bus (32 bits or 64 bits) and the clock or cycle time of the bus (in MHz).

As a rule of thumb, a 32-bit bus can transfer 4 bytes per clock and a 64-bit bus can transfer 8 bytes per clock. Most modern PCI buses support both 64-bit and 32-bit cards.

PCI buses are available in the following clock rates:

■ 33 MHz

■ 66 MHz

■ 100 MHz (sometimes referred to as PCI-X)

■ 133 MHz (sometimes referred to as PCI-X)

PCI cards also come in different clock rate capabilities.

Backward compatibility is very common; for example, a bus that is rated at 100 MHz supports 100, 66, and 33 MHz cards.

Likewise, a 64-bit bus supports both 32-bit and 64-bit cards.

PCI buses can also be mixed. For example, a 100 MHz 64-bit bus can support any mix of clock and width values that are at or below those maximums.


Note: In a shared-access topology, a slow card can retard the performance of other, faster cards on the same bus, because the bus adjusts to the right clock and width for each transfer. One moment it can do a 100 MHz, 64-bit transfer to card #2. At another moment it can do a 33 MHz, 32-bit transfer to card #3. Since the transfer to card #3 is much slower, it takes longer to complete. The time that is lost might otherwise have been used for moving data faster with card #2. A PCI bus is also unidirectional: when it conducts a transfer in one direction, it cannot move data in the other direction, even for another card.

Real-world bandwidth is generally around 80% of the theoretical maximum (clock * width). Following are rough estimates of the bandwidths that can be expected:

64-bit / 33 MHz = approximately 200 MB per second

64-bit / 66 MHz = approximately 400 MB per second

64-bit / 100 MHz = approximately 600 MB per second

64-bit / 133 MHz = approximately 800 MB per second
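These estimates follow directly from the 80% rule of thumb. A quick sketch (rounding down to the nearest 100 MB per second mirrors the figures above; it is not a formula from this guide):

```python
# Estimate PCI bus throughput as ~80% of the theoretical clock * width,
# then round down to the nearest 100 MB/s, as the guide's figures do.

def pci_bus_mb_s(width_bits, clock_mhz):
    theoretical = (width_bits // 8) * clock_mhz  # bytes/clock * Mclocks/s
    return int(theoretical * 0.8) // 100 * 100

for clock in (33, 66, 100, 133):
    print(clock, pci_bus_mb_s(64, clock))
# 33 -> 200, 66 -> 400, 100 -> 600, 133 -> 800
```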

Performance hierarchy level 5

Level 5 is the interconnection within a host, between PCI bridge(s) and memory. This bandwidth is rarely a limiting factor in performance.

Figure 9-7 shows the Performance hierarchy level 5.

Figure 9-7 Performance hierarchy level 5

[Figure: host memory connected to two PCI bridges]

Notes on performance hierarchies

The hardware components between interconnection levels can also affect bandwidth, as follows:

■ A drive has sequential-access bandwidth and average latency times for seek and rotational delays.
Drives perform optimally when doing sequential I/O to disk. Non-sequential I/O forces movement of the disk head (that is, seek and rotational latency). This movement is a huge overhead compared to the amount of data transferred. The more non-sequential I/O is done, the slower the drive becomes.


Simultaneously reading or writing more than one stream results in a mix of short bursts of sequential I/O with seek and rotational latency in between. This situation significantly degrades overall throughput. Different drive types have different seek and rotational latency specifications, so the type of drive has a large effect on the amount of degradation. From best to worst, such drives are Fibre Channel, SCSI, and SATA, with SATA drives usually having twice the latency of Fibre Channel drives. However, SATA drives have about 80% of the sequential performance of Fibre Channel drives.

■ A RAID controller has cache memory of varying sizes. The controller also does the parity calculations for RAID-5. Better controllers perform this calculation (called "XOR") in hardware, which makes it faster. Without hardware-assisted calculation, the controller processor must perform it, and controller processors are not usually high performance.

■ A PCI card can be limited either by the speed of its port(s) or by the clock rate of the PCI bus.

■ A PCI bridge is usually not an issue because it is sized to handle whatever PCI buses are attached to it.

■ Memory can be a limit if there is intensive non-I/O activity in the system.

Note that the host processor(s) (CPUs) do not appear in the performance hierarchy diagram.

See Figure 9-1 on page 184.

While CPU performance contributes to all performance, it is not the bottleneck in most modern systems for I/O-intensive workloads: very little work is done at that level. The CPU must execute a read operation and a write operation, but those operations do not take up much bandwidth. An exception is when older gigabit Ethernet cards are involved, because the CPU has to do more of the overhead of network transfers.

Hardware configuration examples

These examples are not intended as recommendations for your site. They illustrate factors to consider when adjusting hardware for better NetBackup performance.

Example 1

A general hardware configuration can have dual 2-gigabit Fibre Channel ports on a single PCI card.

In such a case, the following is true:

■ Potential bandwidth is approximately 400 MB per second.


■ For maximum performance, the card must be plugged into at least a 66 MHz PCI slot.

■ No other cards on that bus should need to transfer data at the same time. That single card saturates the PCI bus.

■ Do not expect 2 cards (4 ports total) on the same bus to aggregate to 800 MB per second, unless the bus and cards are 133 MHz.

Example 2

The next example shows a pyramid of bandwidth potentials with aggregation capabilities at some points.

Suppose you have the following hardware:

■ 1x 66 MHz quad 1-gigabit Ethernet card

■ 4x 66 MHz 2-gigabit Fibre Channel cards

■ 4x disk arrays with 1-gigabit Fibre Channel ports

■ 1x Sun V880 server (2x 33 MHz PCI buses and 1x 66 MHz PCI bus)

In this case, the following is one way to assemble the hardware so that no constraints limit throughput:

■ The quad 1-gigabit Ethernet card can do approximately 400 MB per second of throughput at 66 MHz.

■ It requires at least a 66 MHz bus. A 33 MHz bus would limit throughput to approximately 200 MB per second.

■ It completely saturates the 66 MHz bus. Do not put any other cards on that bus that need significant I/O at the same time.

Since the disk arrays have only 1-gigabit Fibre Channel ports, the Fibre Channel cards degrade to 1 gigabit each.

Note the following:

■ Each card can therefore move approximately 100 MB per second. With four cards, the total is approximately 400 MB per second.

■ However, you do not have a single PCI bus that can support 400 MB per second. The 66 MHz bus is already taken by the Ethernet card.

■ The two 33 MHz buses can each support approximately 200 MB per second. Therefore, you can put 2 of the Fibre Channel cards on each of the 2 buses.

This configuration can move approximately 400 MB per second for backup or restore. Real-world results of a configuration like this show approximately 350 MB per second.
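The bus-assignment arithmetic in this example can be sketched as a simple model (the 400 and 200 MB per second bus limits come from the text above; the model itself is an illustration, not a sizing tool):

```python
# Each PCI bus can move at most its own limit, and no more than the sum
# of the rates of the cards placed on it. Totals mirror Example 2.

def bus_throughput(bus_limit_mb_s, card_rates_mb_s):
    return min(bus_limit_mb_s, sum(card_rates_mb_s))

buses = [
    bus_throughput(400, [400]),       # 66 MHz bus: quad GbE card
    bus_throughput(200, [100, 100]),  # 33 MHz bus: two 1-gigabit FC cards
    bus_throughput(200, [100, 100]),  # 33 MHz bus: two 1-gigabit FC cards
]
# Disk-side bandwidth matches the 400 MB/s network side:
print(sum(buses[1:]))  # 400
```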


Tuning software for better performance

The size of individual I/O operations should be scaled such that the overhead is relatively low compared to the amount of data moved. That means the I/O size for a bulk transfer operation (such as a backup) should be relatively large.

The optimum size of I/O operations depends on many factors and varies greatly with the hardware setup.

Figure 9-8 is a variation on the performance hierarchy diagram.

Figure 9-8 Example hierarchy with single shelf per array

[Figure: levels 1 through 5 with a single shelf per array — two arrays, each with a RAID controller, shelf adaptor, shelf, and drives, connected over Fibre Channel to PCI cards, PCI buses, and PCI bridges into host memory; a tape, Ethernet, or other non-disk device is also shown]

Note the following:

■ Each array has a single shelf.

■ Each shelf in the disk array has 9 drives because it uses a RAID-5 group of 8 + 1; that is, 8 data disks + 1 parity disk. The RAID controller in the array uses a stripe unit size when performing I/O to these drives. Suppose that you know the stripe unit size to be 64 KB. This


stripe unit size means that when writing a full stripe (8 + 1), the controller writes 64 KB to each drive. The amount of non-parity data is 8 * 64 KB, or 512 KB. So, internal to the array, the optimal I/O size is 512 KB. This means that I/O crossing Level 3 to the host PCI card should be performed at 512 KB.

■ The diagram shows two separate RAID arrays on two separate PCI buses. You want both to perform I/O transfers at the same time. If each is optimal at 512 KB, the two arrays together are optimal at 1 MB.
You can implement software RAID-0 to make the two independent arrays look like one logical device. RAID-0 is a plain stripe with no parity. Parity protects against drive failure, and this configuration already has RAID-5 parity to protect the drives inside the array.
The software RAID-0 is configured for a stripe unit size of 512 KB (the I/O size of each unit) and for a stripe width of 2 (1 for each of the arrays).
1 MB is the optimum I/O size for the volume (the RAID-0 entity on the host). A size of 1 MB is used throughout the rest of the I/O stack.

■ If possible, configure the file system that is mounted over the volume for 1 MB. The application that performs I/O to the file system also uses an I/O size of 1 MB. In NetBackup, I/O sizes are set in the configuration touch file:

.../db/config/SIZE_DATA_BUFFERS_DISK.

See “How to change the size of shared data buffers” on page 128.
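The stripe-size arithmetic above can be sketched as follows (the disk counts and stripe unit size are the ones used in this example):

```python
# RAID-5 (8+1) per array: a full stripe carries 8 data chunks of the
# stripe unit size; software RAID-0 across two such arrays doubles the
# optimal I/O size for the host-side volume.

def optimal_io_kb(data_disks, stripe_unit_kb, arrays):
    per_array_kb = data_disks * stripe_unit_kb  # parity disk carries no data
    return per_array_kb * arrays

print(optimal_io_kb(8, 64, 1))  # 512 KB inside one array
print(optimal_io_kb(8, 64, 2))  # 1024 KB (1 MB) for the RAID-0 volume
```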


OS-related tuning factors

This chapter includes the following topics:

■ Kernel tuning (UNIX)

■ Kernel parameters on Solaris 8 and 9

■ Kernel parameters on Solaris 10

■ Message queue and shared memory parameters on HP-UX

■ Changing kernel parameters on Linux

■ About data buffer size (Windows)

■ Adjusting data buffer size (Windows)

■ Other Windows issues

Kernel tuning (UNIX)

Several kernel tunable parameters can affect NetBackup performance on UNIX.

Note: A change to these parameters may affect other applications that use the same parameters. Sizeable changes may result in performance trade-offs. Usually, the best approach is to make small changes and monitor the results.

Kernel parameters on Solaris 8 and 9

The Solaris operating system dynamically builds the operating system kernel with each restart of the system.



The parameters in this section show minimum settings for a system that is dedicated to NetBackup software.

Note: The parameters that are described in this section can be used on Solaris 8, 9, and 10. However, many of them are obsolete in Solaris 10.

See “Kernel parameters on Solaris 10” on page 198.

Following are brief definitions of the message queue, semaphore, and shared memory parameters. The parameter definitions apply to a Solaris system.

The values for the following parameters can be set in the file /etc/system:

■ Message queues
set msgsys:msginfo_msgmax = maximum message size
set msgsys:msginfo_msgmnb = maximum length of a message queue in bytes. The length of the message queue is the sum of the lengths of all the messages in the queue.
set msgsys:msginfo_msgmni = number of message queue identifiers
set msgsys:msginfo_msgtql = maximum number of outstanding messages system-wide that are waiting to be read, across all message queues

■ Semaphores
set semsys:seminfo_semmap = number of entries in the semaphore map
set semsys:seminfo_semmni = maximum number of semaphore identifiers system-wide
set semsys:seminfo_semmns = number of semaphores system-wide
set semsys:seminfo_semmnu = maximum number of undo structures in the system
set semsys:seminfo_semmsl = maximum number of semaphores per ID
set semsys:seminfo_semopm = maximum number of operations per semop call
set semsys:seminfo_semume = maximum number of undo entries per process

■ Shared memory
set shmsys:shminfo_shmmin = minimum shared memory segment size
set shmsys:shminfo_shmmax = maximum shared memory segment size
set shmsys:shminfo_shmseg = maximum number of shared memory segments that can be attached to a given process at one time
set shmsys:shminfo_shmmni = maximum number of shared memory identifiers that the system supports

The ipcs -a command displays system resources and their allocation. It is a useful command when a process is hung or asleep, to see whether the process has available resources.


Example settings for kernel parameters

Following is an example of tuning the kernel parameters for NetBackup master servers and media servers on Solaris 8 or 9.

See “Kernel parameters on Solaris 10” on page 198.

The following values are the recommended minimums. If /etc/system already contains any of these entries, use the larger of the existing setting and the setting that is provided here. Before modifying /etc/system, use the command /usr/sbin/sysdef -i to view the current kernel parameters.

After you have changed the settings in /etc/system, restart the system to allow the changed settings to take effect. After the restart, the sysdef command displays the new settings.

* BEGIN NetBackup recommended minimum settings for a Solaris /etc/system file

*Message queues

set msgsys:msginfo_msgmnb=65536

set msgsys:msginfo_msgmni=256

set msgsys:msginfo_msgssz=16

set msgsys:msginfo_msgtql=512

*Semaphores

set semsys:seminfo_semmap=64

set semsys:seminfo_semmni=1024

set semsys:seminfo_semmns=1024

set semsys:seminfo_semmnu=1024

set semsys:seminfo_semmsl=300

set semsys:seminfo_semopm=32

set semsys:seminfo_semume=64

*Shared memory

set shmsys:shminfo_shmmax=one half of system memory

set shmsys:shminfo_shmmin=1

set shmsys:shminfo_shmmni=220

set shmsys:shminfo_shmseg=100

*END NetBackup recommended minimum settings
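The shmmax line above uses "one half of system memory" as a placeholder. A sketch of computing the literal byte value to substitute (the 8 GB total is an assumption for illustration only):

```python
# Compute the byte value to substitute for "one half of system memory"
# in the /etc/system shmmax entry. The 8 GB total is only an example.

def shmmax_setting(total_mem_bytes):
    return total_mem_bytes // 2

total = 8 * 1024**3  # assume an 8 GB media server
print("set shmsys:shminfo_shmmax=%d" % shmmax_setting(total))
```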


■ Socket parameters on Solaris 8 and 9
The TCP_TIME_WAIT_INTERVAL parameter sets the amount of time to wait after a TCP socket is closed before it can be used again; that is, it sets the time that a TCP connection remains in the kernel's table after the connection has been closed. The default value for most systems is 240000 milliseconds (4 minutes). If your server is slow because it handles many connections, check the current value of TCP_TIME_WAIT_INTERVAL and consider reducing it to 10000 (10 seconds). You can use the following command:

ndd -get /dev/tcp tcp_time_wait_interval

■ Force load parameters on Solaris 8 and 9
When system memory gets low, Solaris unloads unused drivers from memory and reloads drivers as needed. Tape drivers are a frequent candidate for unloading, since they tend to be less heavily used than disk drivers. Depending on the timing of these unload and reload events for the st (Sun), sg (Symantec), and Fibre Channel drivers, various issues may result. These issues can range from devices "disappearing" from a SCSI bus to system panics. Symantec recommends adding the following "forceload" statements to the /etc/system file. These statements prevent the st and sg drivers from being unloaded from memory:
forceload: drv/st
forceload: drv/sg
Other statements may be necessary for various Fibre Channel drivers, such as the following example for JNI:
forceload: drv/fcaw

Kernel parameters on Solaris 10

In Solaris 10, all System V IPC facilities are either automatically configured or can be controlled by resource controls. Facilities that can be shared are memory, message queues, and semaphores.

For information on tuning these system resources, see Chapter 6, "Resource Controls (Overview)," in the Sun System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

For further assistance with Solaris parameters, refer to the Solaris Tunable Parameters Reference Manual, available at:

http://docs.sun.com/app/docs/doc/819-2724?q=Solaris+Tunable+Parameters


The following sections of the Solaris Tunable Parameters Reference Manual may be of particular interest:

■ What’s New in Solaris System Tuning in the Solaris 10 Release?

■ System V Message Queues

■ System V Semaphores

■ System V Shared Memory

Recommendations on particular Solaris 10 parameters

Symantec recommends the following.

Change the shmmax setting

Use the following setting:

set shmsys:shminfo_shmmax=one half of system memory

Disabling tcp_fusion

Symantec recommends that tcp_fusion be disabled. With tcp_fusion enabled, NetBackup performance may be degraded and processes such as bptm may pause intermittently.

Use one of the following methods to disable tcp_fusion.

The first procedure does not require a system restart. Note, however, that tcp_fusion is re-enabled at the next restart. You must follow these steps each time the system is restarted.

The second procedure is simpler but requires the system to be restarted.

Caution: System operation could be disrupted if you do not follow the procedure carefully.

To use the modular debugger (mdb)

1 When no backup or restore jobs are active, run the following command:

echo 'do_tcp_fusion/W 0' | mdb -kw

2 The NetBackup processes must be restarted. Enter the following:

cd /usr/openv/netbackup/bin/goodies

./netbackup stop

./netbackup start


To use the /etc/system file

1 Add the following line in the /etc/system file.

set ip:do_tcp_fusion = 0

2 Restart the system to have the change take effect.

Parameters obsolete in Solaris 10

The following parameters are obsolete in Solaris 10. These parameters can be included in the Solaris /etc/system file, where they are used to initialize the default resource control values, but Sun does not recommend their use in Solaris 10.

semsys:seminfo_semmns

semsys:seminfo_semvmx

semsys:seminfo_semmnu

semsys:seminfo_semaem

semsys:seminfo_semume

Message queue and shared memory parameters on HP-UX

The kernel parameters that deal with message queues and shared memory can be mapped to work on an HP-UX system.

Table 10-1 is a list of HP kernel tuning parameter settings.

Table 10-1 Kernel tuning parameters for HP-UX

Name      Minimum value
mesg      1
msgmap    514
msgmax    8192
msgmnb    65536
msgssz    8
msgseg    8192
msgtql    512
msgmni    256
sema      1
semmap    semmni+2
semmni    300
semmns    300
semmnu    300
semume    64
semvmx    32767
shmem     1
shmmni    300
shmseg    120
shmmax    Calculate shmmax using the formula that is provided under "Recommended shared memory settings" on page 132. See also the note on shmmax following this table.

Note on shmmax

Note the following:

shmmax = NetBackup shared memory allocation = (SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS) * number of drives * MPX per drive

More information is available on SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS.

See “Recommended shared memory settings” on page 132.
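The formula above can be sketched as follows. All four input values in the example call are hypothetical, chosen only to illustrate the arithmetic; use your site's actual buffer settings, drive count, and multiplexing level:

```python
# shmmax = (SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS) * drives * MPX
# The inputs below are hypothetical, for illustration only.

def netbackup_shmmax(size_data_buffers, number_data_buffers, drives, mpx):
    return size_data_buffers * number_data_buffers * drives * mpx

# 256 KB buffers, 16 buffers per drive, 2 drives, multiplexing of 4:
print(netbackup_shmmax(262144, 16, 2, 4))  # 33554432 bytes (32 MB)
```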

Changing the HP-UX kernel parameters

To change the kernel parameters in Table 10-1, the easiest method is to use the System Administration Manager (SAM).


To change the kernel parameters

1 From SAM, select Kernel Configuration > Configurable Parameters.

2 Find the parameter to change and select Actions > Modify Configurable Parameter.

3 Key in the new value.

Repeat these steps for all the parameters you want to change.

4 When all the values have been changed, select Actions > Process New Kernel.

A warning states that a restart is required to move the values into place.

5 After the restart, the sysdef command can be used to confirm that the correct value is in place.

Caution: Any changes to the kernel require a restart to move the new kernel into place. Do not make changes to the parameters unless a system restart can be performed; otherwise, the changes are not saved.

Changing kernel parameters on Linux

To modify the Linux kernel tunable parameters, use sysctl. sysctl is used to view, set, and automate kernel settings in the /proc/sys directory. Most of these parameters can be changed online. To make your changes permanent, edit /etc/sysctl.conf. The kernel must have support for the procfs file system statically compiled in or dynamically loaded as a module.
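A minimal sketch of how sysctl names correspond to /proc/sys entries, which is useful when scripting these changes (the parameter names shown are examples, not settings this guide recommends):

```python
# sysctl names map onto /proc/sys paths by replacing dots with slashes,
# so a tunable can be read or written through the file system as well.

def proc_sys_path(sysctl_name):
    return "/proc/sys/" + sysctl_name.replace(".", "/")

print(proc_sys_path("kernel.shmmax"))      # /proc/sys/kernel/shmmax
print(proc_sys_path("net.core.rmem_max"))  # /proc/sys/net/core/rmem_max
```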

The default buffer size for tapes is 32K on Linux.

To change the default buffer size for tapes

1 Do one of the following:

■ Rebuild the kernel (make changes to st_options.h)

■ Or add a resize parameter to the startup of Linux. An example grub.conf entry is the following:

title Red Hat Linux (2.4.18-24.7.x)

root (hd0,0)

kernel /vmlinuz-2.4.18-24.7.x ro root=/dev/hda2


st=buffer_kbs:256,max_buffers:8

initrd /initrd-2.4.18-24.7.x.img

2 For further information on setting restart options for st, see /usr/src/linux*/drivers/scsi/README.st, subsection BOOT TIME.

About data buffer size (Windows)

The size limit for data buffers on Windows is 1024 KB. This size is calculated as a multiple of operating system pages (1 page = 4 KB); the maximum is 256 OS pages, specified as a value from 0 to 255 (the hex value 0xFF). A larger setting defaults to 64 KB, which is the default size for the scatter/gather list.

The setting of the maximum usable block size depends on the Host Bus Adapter (HBA) miniport driver, not on the tape driver or the OS. For example, the readme for the QLogic QLA2200 card contains the following:

* MaximumSGList

Windows includes enhanced scatter/gather list support for very large SCSI I/O transfers. Windows supports up to 256 scatter/gather segments of 4096 bytes each, for transfers up to 1048576 bytes.

Note: The OEMSETUP.INF file has been updated to automatically update the registry to support 65 scatter/gather segments. Normally, no additional changes are necessary: this setting typically results in the best overall performance.
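Assuming one 4 KB page per scatter/gather segment, as the QLogic readme above describes, the segment count translates into a maximum transfer size as follows (a sketch of the arithmetic, not vendor documentation):

```python
# Each scatter/gather segment covers one 4 KB operating system page,
# so the maximum transfer size is the segment count times 4 KB.

def max_transfer_kb(sg_segments):
    return sg_segments * 4  # 4 KB per OS page

print(max_transfer_kb(0x40))  # 256 KB
print(max_transfer_kb(256))   # 1024 KB, the 1 MB Windows ceiling
print(max_transfer_kb(65))    # 260 KB with the 65-segment default above
```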

Adjusting data buffer size (Windows)

You can adjust the data buffer size on Windows as follows.


To change the Windows data buffer size

1 Click Start > Run and open the REGEDT32 program.

2 Select HKEY_LOCAL_MACHINE and follow the tree structure down to the HBA driver as follows:

HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Services > HBA_driver > Parameters > Device

For QLogic, the HBA_driver is Ql2200.

The main definition is the so-called SGList (scatter/gather list). This parameter sets the number of pages that can be either scattered or gathered (that is, read or written) in one DMA transfer. For the QLA2200, you set the parameter MaximumSGList to 0xFF (or to 0x40 for 256 KB) and can then set 256 KB buffer sizes for NetBackup. Use extreme caution when you modify this registry value. Always contact the vendor of the SCSI or Fibre Channel card first to ascertain the maximum value that a particular card can support.

The same should be possible for other HBAs, especially fiber cards.

The default for JNI fiber cards using driver version 1.16 is 0x80 (512 KB, or 128 pages). The default for the Emulex LP8000 is 0x81 (513 KB, or 129 pages).

For this approach to work, the HBA has to install its own SCSI miniport driver. If it does not, transfers are limited to 64 KB, as for legacy cards such as old SCSI cards.

In conclusion, the built-in limit on Windows is 1024 KB, unless you use the default Microsoft miniport driver for legacy cards. The limitations are all to do with the HBA drivers and the limits of the physical devices that are attached to them.

For example, Quantum DLT7000 drives work best with 128 KB buffers and StorageTek 9840 drives with 256 KB buffers. If these values are increased too far, damage can result: the HBA, the tape drives, or any devices in between (such as fiber bridges and switches) can be damaged.

3 Double click MaximumSGList:REG_DWORD:0x21

4 Enter a value from 16 to 255 (0x10 hex to 0xFF).

A value of 255 (0xFF) enables the maximum 1 MB transfer size. A value higher than 255 reverts to the default of 64 KB transfers. The default value is 33 (0x21).

5 Click OK.

6 Exit the Registry Editor, then turn off and restart the system.


Other Windows issues

Consider the following issues.

Troubleshooting NetBackup’s configuration files on Windows

If you create a configuration file on Windows for NetBackup, the file name must match the file name that NetBackup expects. Make sure the file name does not have an extension, such as .txt. (On UNIX systems, such files are called touch files.)

If you create a NOexpire file to prevent the expiration of backups, the file does not work if the file’s name is NOexpire.txt.

Note also that the file must use a supported type of encoding, such as ANSI. Unicode encoding is not supported; if the file is in Unicode, it does not work.

To check the encoding type, open the file using a tool that displays the current encoding, such as Notepad. Select File > Save As and check the options in the Encoding field. ANSI encoding works properly.
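As a quick scripted check that such a touch file is not Unicode-encoded, the following sketch looks for a Unicode byte-order mark. Note this test is only a heuristic, an assumption on my part: a file without a BOM can still be Unicode, so Notepad's Save As dialog remains the authoritative check.

```python
# Flag a touch file whose bytes start with a Unicode byte-order mark;
# NetBackup expects plain ANSI text in these files.
import codecs

BOMS = (codecs.BOM_UTF8, codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)

def looks_unicode(path):
    with open(path, "rb") as f:
        head = f.read(4)
    return any(head.startswith(bom) for bom in BOMS)
```

For example, `looks_unicode("NOexpire")` returning True would indicate the file was saved with a Unicode encoding and should be re-saved as ANSI.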

Disable antivirus software when backing up Windows files

Antivirus applications scan all files that are backed up by NetBackup, which loads down the client’s CPU and slows its backups. Consider disabling antivirus software.

Working around antivirus scanning without disabling it

As an alternative, you can leave antivirus scanning enabled and work around it.

To work around antivirus scanning without disabling it

◆ In the Backup, Archive, and Restore interface, on the General tab of the NetBackup Client Properties dialog, clear the check box next to Perform incrementals based on archive bit.


Additional resources

This appendix includes the following topics:

■ Performance tuning information at Vision online

■ Performance monitoring utilities

■ Freeware tools for bottleneck detection

■ Mailing list resources

Performance tuning information at Vision online

For additional information on NetBackup tuning, go to:

http://www.symantec.com/stn/vision/index.jsp

Performance monitoring utilities

Information is available from the following sources:

■ Storage Mountain, previously called Backup Central, is a resource for all backup-related issues.
http://www.storagemountain.com

■ The following article discusses how and why to design a scalable data installation:
"High-Availability SANs," Richard Lyford, FC Focus Magazine, April 30, 2002.

Freeware tools for bottleneck detection

Information is available from the following sources:

■ Iperf, for measuring TCP and UDP bandwidth.


http://dast.nlanr.net/Projects/Iperf1.1.1/index.htm

■ Bonnie, for measuring the performance of UNIX file system operations.
http://www.textuality.com/bonnie

■ Bonnie++, which extends the capabilities of Bonnie.
http://www.coker.com.au/bonnie++/readme.html

■ Tiobench, for testing I/O performance with multiple running threads.
http://sourceforge.net/projects/tiobench/

Mailing list resources

Information is available from the following sources:

■ Veritas NetBackup news groups.
http://forums.veritas.com
Search on the keyword "NetBackup" to find threads relevant to NetBackup.

■ The email list Veritas-bu discusses backup-related products such as NetBackup. Archives for Veritas-bu are located at:
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Index

Symbols
/dev/null 105
/dev/rmt/2cbn 168
/dev/rmt/2n 168
/etc/rc2.d 154
/etc/system 196
/proc/sys (Linux) 202
/usr/openv/netbackup 122
/usr/openv/netbackup/bin/admincmd/bpimage 153
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS 128
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK 128
/usr/openv/netbackup/db/error 104
/usr/openv/netbackup/db/images 153
/usr/openv/netbackup/db/media 75
/usr/openv/netbackup/logs 105
/usr/sbin 77
/usr/sbin/devfsadm 77
/usr/sbin/modinfo 77
/usr/sbin/modload 77
/usr/sbin/modunload 77
/usr/sbin/ndd 153
/usr/sbin/tapes 77
10000BaseT 26
1000BaseT 26
100BaseT 26
100BaseT cards 42

A
Activity Monitor 100
adjusting
  backup load 114
  error threshold 75
  network communications buffer 118
  read buffer size 172
All Log Entries report 99, 101, 103
ALL_LOCAL_DRIVES 41, 68
alphabetical order
  storage units 61
ANSI encoding 205
antivirus software 205
arbitrated loop 186
archive bit 205
archiving catalog 66
array RAID controller 186
arrays 116
ATA 25
ATM card 43
auto-negotiate 117
AUTOSENSE 117
available_media report 80
available_media script 74, 80
Avg. Disk Queue Length counter 109

B
backup
  catalog 68
  disk or tape 80
  environment
    dedicated or shared 79
  large catalog 65
  load adjusting 114
  load leveling 114
  monopolizing devices 115
  user-directed 97
  window 113
Backup Central 207
Backup Tries parameter 75
balancing load 114
bandwidth 189
bandwidth limiting 114
Bare Metal Restore (BMR) 170
best practices 85, 88, 91
Bonnie 208
Bonnie++ 208
bottlenecks 99, 118
  freeware tools for detecting 207
bp.conf file 75, 123
bpbkar 117, 136, 139
bpbkar log 104–105
bpbkar32 117, 136
bpdm log 104
bpdm_dev_null 106


bpend_notify.bat 116
bpimage 153
bpmount -i 68
bprd 56, 60
bpsetconfig 169
bpstart_notify.bat 116
bptm 136, 139, 143, 147, 149
bptm log 75, 104
buffers 118, 202
  and FlashBackup 172
  changing number of 126
  changing size 128
  changing Windows buffers 204
  default number of 124
  default size of 125
  for network communications 120
  shared 124
  tape 124
  testing 133
  wait and delay 135
bus 73

C
cache device (snapshot) 172
calculate
  actual data transfer rate required 23
  length of backups 23
  network transfer rate 26
  number of robotic tape slots needed 34
  number of tapes needed 32
  shared memory 125
  space needed for NBDB database 31–32
cartridges
  storing 88
catalog 152
  archiving 66
  backup requirements 114
  backups
    guidelines 64–65
  backups not finishing 65
  compression 65–66
  large backups 65
  managing 65
Checkpoint Restart 152
child delay values 134
CHILD_DELAY file 134
cleaning
  robots 87
  tape drives 85
  tapes 80
client
  compression 169
  tuning performance 115
  variables 98
Client Job Tracker 171
clock or cycle time 188
Committed Bytes 108
common system resources 106
communications
  buffer 120–121
  process 135
Communications Buffer Size parameter 119–120
COMPRESS_SUFFIX option 169
compression 109, 168
  and encryption 170
  catalog 65–66
  how to enable 168
  tape vs client 169
configuration files (Windows) 205
configuration guidelines 67
CONNECT_OPTIONS 58
controller 109
copy-on-write snapshot 172
counters 135
  algorithm 137
  determining values of 139
  in Windows performance 107
  wait and delay 135
CPU 104, 107
  and performance 190
  load
    monitoring 104
  utilization 57–58
CPUs needed per media server component 42
critical policies 68
cumulative-incremental backup 21
custom reports
  available media 80
cycle time 188

D
daily_messages log 69
data buffer
  overview 124
  size 118
data compression 156
Data Consumer 138


data path through server 184
data producer 137–138
data recovery
  planning for 88–89
data stream and tape efficiency 156
data throughput 98
  statistics 99
data transfer path 99, 112
  basic tuning 113
data transfer rate
  for drive controllers 26
  for tape drives 23
  required 23
data variables 98
database
  backups 160
  restores 154
databases
  list of pre-6.0 databases 32
DB2 restores 154
de-multiplexing 113
Deactivate command 96
dedicated backup servers 114
dedicated private networks 113
delay
  buffer 135
  in starting jobs 53
  values
    parent/child 134
designing
  master server 38
  media server 42
Detailed Status tab 101
devfsadmd daemon 77
device
  names 168
  reconfiguration 77
devlinks 77
disable TapeAlert 87
disaster recovery 88–89
  testing 59
disk
  full 109
  increase performance 109
  load
    monitoring 108
  speed
    measuring 104
  staging 63
  versus tape 80
Disk Queue Length counter 109
disk speed
  measuring 106
Disk Time counter 109
disk-based storage 80
diskperf command 108
disks
  adding 116
down drive 75–76
drive controllers 26
drive_error_threshold 75–76
drives
  number per network connection 73
drvconfig 77

E
email list (Veritas-bu) 208
EMM 56, 67, 74, 78–79, 86
EMM database
  derived from pre-6.0 databases 32
EMM server
  calculating space needed for 31–32
EMM transaction log 32
encoding
  file 205
encryption 168
  and compression 170
  and multi-streaming 168
error logs 75, 100
error threshold value 74
Ethernet connection 183
evaluating components 104, 106
evaluating performance
  Activity Monitor 100
  All Log Entries report 101
  encryption 168
  NetBackup clients 115
  NetBackup servers 123
  network 97, 117
  overview 95
exclude lists 67

F
factors
  in choosing disk vs tape 80
  in job scheduling 56


failover
  storage unit groups 62
fast-locate 150
FBU_READBLKS 173
FC-AL 185, 188
fibre channel 185, 187
  arbitrated loop 185
file encoding 205
file ID on vxlogview 69
file system space 63
files
  backing up many small 170
  Windows configuration 205
firewall settings 58
FlashBackup 171–172
FlashBackup read buffer 173
Flexible Disk Option 85
floating-point math and NOM 182
force load parameters (Solaris) 198
forward space filemark 149
fragment size 148, 150
  considerations in choosing 149
fragmentation 116
  level 109
freeware tools 207
freeze media 74–76
frequency-based tape cleaning 86
frozen volume 74
FT media server
  recommended number of buffers 133
  shared memory 126–129
full backup 80
full backup and tape vs disk 81
full duplex 117

G
Gigabit Fibre Channel 25
globDB 32
goodies directory 80
groups of storage units 61

H
hardware
  components and performance 189
  configuration examples 190
  elements affecting performance 183
  performance considerations 189
heap size
  adjusting
    NOM server 174
    NOM web server 175
hierarchy
  disk 183
host memory 184
host name resolution 98
hot catalog backup 64, 68

I
I/O operations
  scaling 192
IMAGE_FILES 153
IMAGE_INFO 153
IMAGE_LIST 153
improving performance
  see tuning 111
include lists 67
increase disk performance 109
incremental backups 80, 113, 161
index performance 152
info on tuning 207
Inline Copy
  shared memory 126, 128–129
insufficient memory 108
interfaces
  multiple 123
ipcs -a command 196
Iperf 207
iSCSI 25
iSCSI bus 73

J
Java interface 44, 170
job
  delays 55–56
  scheduling 53, 56
    limiting factors 56
Job Tracker 117
jobs queued 54–55
JVM Option Number 0 (NOM) 175

K
kernel tuning
  Linux 202
  Solaris 195


L
larger buffer (FlashBackup) 172
largest fragment size 148
latency 189
legacy logs 69
leveling load among backup components 114
library-based tape cleaning 88
Limit jobs per policy attribute 54, 115
limiting fragment size 148
link down 117
Linux
  kernel tunable parameters 202
load
  leveling 114
  monitoring 107
  parameters (Solaris) 198
local backups 161
Log Sense page 86
logging 97
logs 69, 75, 100, 139, 171
  managing 68
  viewing 68
ltidevs 32
LTO drives 33, 43

M
mailing lists 208
managing
  logs 68
  the catalog 65
Manual Backup command 97
master server
  CPU utilization 57
  designing 38
  determining number of 40
  splitting 66
MAX_HEAP setting (NOM) 174
Maximum concurrent write drives 54
Maximum jobs per client 54
Maximum Jobs Per Client attribute 115
Maximum streams per drive 54
maximum throughput rate 156
Maximum Transmission Unit (MTU) 134
MaximumSGList 203–204
measuring
  disk read speed 104, 106
  NetBackup performance 95
media
  catalog 74
  error threshold 75
  not available 74
  pools 79
  positioning 156
  threshold for errors 74
media errors database 32
Media List report 74
media manager
  drive selection 78
Media multiplexing setting 54
media server 43
  designing 42
  factors in sizing 44
  not available 56
  number needed 43
  number supported by a master 41
media_error_threshold 75–76
mediaDB 32
MEGABYTES_OF_MEMORY 169
memory 184, 189–190
  amount required 43, 125
  insufficient 108
  monitoring use of 104, 107
  shared 124
merging master servers 66
message queue 196
message queue parameters
  HP-UX 200
migration 85
Mode Select page 87
Mode Sense 87
modload command 77
modunload command 77
monitoring
  data variables 98
MPX_RESTORE_DELAY option 154
MTFSF/MTFSR 149
multi-streaming 159
  NEW_STREAM directive 162
  when to use 160
multiple copies
  shared memory 126, 128–129
multiple drives
  storage unit 54
multiple interfaces 123
multiple small files
  backing up 170


multiplexed backups
  and fragment size 149
  database backups 154
multiplexed image
  restoring 150
multiplexing 81, 113, 159
  and memory required 43
  effects of 162
  schedule 54
  set too high 154
  when to use 160

N
namespace.chksum 32
naming conventions 91
  policies 91
  storage units 91
NBDB database 31–32
NBDB.log 65
nbemmcmd command 75
nbjm and job delays 56
nbpem 60
nbpem and job delays 53
nbu_snap 172
ndd 153
NET_BUFFER_SZ 119–121, 130
NET_BUFFER_SZ_REST 119
NetBackup
  catalog 152
  job scheduling 53
  news groups 208
  restores 148
  scheduler 96
NetBackup Client Job Tracker 171
NetBackup Java console 170
NetBackup Operations Manager
  see NOM 45
NetBackup Relational Database 67
NetBackup relational database files 65
NetBackup Vault 170
network
  bandwidth limiting 114
  buffer size 118
  communications buffer 120
  connection options 57
  connections 117
  interface cards (NICs) 117
  load 118
  multiple interfaces 123
  performance 97
  private
    dedicated 113
  tape drives and 73
  traffic 118
  transfer rate 26
  tuning 117
  tuning and servers 113
  variables 97
Network Buffer Size parameter 120, 144
NEW_STREAM directive 162
news groups 208
no media available 74
no-rewind option 168
NO_TAPEALERT touch file 87
NOexpire touch file 60, 205
NOM 43, 58, 80
  adjusting
    server heap size 174
    Sybase cache size 176
    web server heap size 175
  and floating-point math 182
  database 45
  defragmenting databases 181
  for monitoring jobs 58
  purging data 182
  sizing 45
  store database and logs on separate disk 178
nominal throughput rate 156
nomserver.conf file 177
nomsrvctl file (NOM) 174
non-multiplexed restores 150
none pool 80
NOSHM file 121
Notepad
  checking file encoding 205
notify scripts 116
NUMBER_DATA_BUFFERS 126, 132, 134, 201
NUMBER_DATA_BUFFERS_DISK 126
NUMBER_DATA_BUFFERS_FT 126–127, 133
NUMBER_DATA_BUFFERS_MULTCOPY 126
NUMBER_DATA_BUFFERS_RESTORE 127, 152

O
OEMSETUP.INF file 203
offload work to additional master 66
on-demand tape cleaning
  see TapeAlert 86


online (hot) catalog backup 68
Oracle 155
  restores 154
order of using storage units 61
out-of-sequence delivery of packets 171

P
packets 171
Page Faults 108
parent/child delay values 134
PARENT_DELAY file 134
patches 171
PCI bridge 185, 189–190
PCI bus 185, 188–189
PCI card 185, 190
performance
  and CPU 190
  and hardware issues 189
  see also tuning 111
  strategies and considerations 112
performance evaluation 95
  Activity Monitor 100
  All Log Entries report 101
  monitoring CPU 107
  monitoring disk load 108
  monitoring memory use 104, 107
  system components 104, 106
PhysicalDisk object 109
policies
  critical 68
  guidelines 67
  naming conventions 91
Policy Update Interval 54
poolDB 32
pooling conventions 79
position error 75
Process Queue Length 107
Processor Time 107

Q
queued jobs 54–55

R
RAID 109, 116
  controller 186, 190
rate of data transfer 21
raw partition backup 172
read buffer size
  adjusting 172
  and FlashBackup 172
reconfigure devices 77
recovering data
  planning for 88–89
recovery time 80
Reduce fragment size setting 148
REGEDT32 204
registry 203
reload st driver without rebooting 77
report 103
  All Log Entries 101
  media 80
resizing read buffer (FlashBackup) 172
restore
  and network 153
  in mixed environment 153
  multiplexed image 150
  of database 154
  performance of 152
RESTORE_RETRIES for restores 75
retention period 80
RMAN 155
robot
  cleaning 87
robotic_def 32
routers 117
ruleDB 32

S
SAN Client 113
  recommended number of buffers 133
SAN client 83, 116
SAN fabric 187
SAN Media Server 113
sar command 104
SATA 25
scatter/gather list 204
schedule naming
  best practices 91
scheduling 53, 96
  delays 53
  disaster recovery 59
  limiting factors 56
scratch pool 79
SCSI bus 73
SCSI/FC connection 156
SDLT drives 33, 43


search performance 152
semaphore (Solaris) 196
Serial ATA (SATA) 185
server
  data path through 184
  splitting master from EMM 67
  tuning 123
  variables 96
SGList 204
shared data buffers 124
  changing number of 126
  changing size 128
  default number of 124
  default size of 125
shared memory 121, 124
  amount required 125
  parameters
    HP-UX 200
  recommended settings 132
  Solaris parameters 196
  testing 133
shared-access topology 186, 189
shelf 185
SIZE_DATA_BUFFERS 130, 132, 134, 201
SIZE_DATA_BUFFERS_DISK 128
SIZE_DATA_BUFFERS_FT 128–129
SIZE_DATA_BUFFERS_MULTCOPY 128–129
small files
  backup of 170
SMART diagnostic standard 87
snap mirror 171
snapshot cache device 172
Snapshot Client 171
snapshots 117
socket
  communications 121
  parameters (Solaris) 198
software
  compression (client) 169
  tuning 192
Solaris
  kernel tuning 195
splitting master server 66
SSOhosts 32
staging
  disk 63, 80
Start Window 96
State Details in Activity Monitor 54
STK drives 33
storage device performance 156
Storage Mountain 207
storage unit 61, 115
  groups 61
  naming conventions 91
  not available 55
Storage Unit dialog 148
storage_units database 32
storing tape cartridges 88
streaming (tape drive) 81, 156
striped volumes (VxVM) 172
striping
  block size 172
  volumes on disks 113
stunit_groups 32
suspended volume 74
switches 187
Sybase cache size
  adjusting 176
synthetic backups 120
System Administration Manager (SAM) 201
system resources 106
system variables
  controlling 96

T
Take checkpoints setting 152
tape
  block size 125
  buffers 124
  cartridges
    storing 88
  cleaning 80, 86
  compression 169
  efficiency 156
  full
  frozen. See suspended
  number of tapes needed for backups 32
  position error 75
  streaming 81, 156
  versus disk 80
tape connectivity 74
tape drive 156
  cleaning 85
  number per network connection 73
  technologies 85
  technology needed 22
  transfer rates 23
  types 43


tape library
  number of tape slots needed 34
tape-based storage 80
TapeAlert 86
tar 137
tar32 137
TCP/IP 171
tcp_deferred_ack_interval 153
testing conditions 96
threshold
  error
    adjusting 75
  for media errors 74
throughput 99
time to data 81
Tiobench 208
tools (freeware) 207
topology (hardware) 188
touch files 60
  encoding 205
traffic on network 118
transaction log 32
transaction log file 65
transfer rate
  drive controllers 26
  for backups 21
  network 26
  required 23
  tape drives 23
True Image Restore option 170
tuning
  additional info 207
  basic suggestions 113
  buffer sizes 118, 120
  client performance 115
  data transfer path
    overview 112
  device performance 156
  FlashBackup read buffer 172
  Linux kernel 202
  network performance 117
  restore performance 148, 153
  search performance 152
  server performance 123
  software 192
  Solaris kernel 195

U
Ultra-3 SCSI 25
Ultra320 SCSI 26
Unicode encoding 205
unified logging
  viewing 68
user-directed backup 97

V
Vault 170
verbosity level 171
Veritas-bu email list 208
viewing logs 68
virus scans 116, 171
Vision Online 207
vmstat 104
volDB 32
volume
  frozen 74
  pools 79
  suspended 74
vxlogview 68
  file ID 69
VxVM striped volumes 172

W
wait/delay counters 135, 139
  analyzing problems 143
  correcting problems 147
  for local client backup 139
  for local client restore 142
  for remote client backup 141
  for remote client restore 143
wear of tape drives 156
webgui command 175
Wide Ultra 2 SCSI 25
wild cards in file lists 67
Windows Performance Monitor 106
Working Set
  in memory 108
