

ibm.com/redbooks

IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices

Vasfi Gucer
Rick Jones
Natalia Jojic
Dave Patrick

Alan Bain

A first look at IBM Tivoli Workload Scheduler Version 8.2

Learn all major new functions and practical tips

Experiment with real-life scenarios

Front cover


IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices

October 2003

International Technical Support Organization

SG24-6628-00


© Copyright International Business Machines Corporation 2003. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

First Edition (October 2003)

This edition applies to IBM Tivoli Workload Scheduler, Version 8.2.

Note: Before using this information and the product it supports, read the information in “Notices” on page ix.


Contents

Notices . . . . . ix
Trademarks . . . . . x

Preface . . . . . xi
The team that wrote this redbook . . . . . xi
Become a published author . . . . . xiii
Comments welcome . . . . . xiii

Chapter 1. IBM Tivoli Workload Scheduler 8.2 new features . . . . . 1
1.1 New and easy installation process . . . . . 2
1.2 IBM Tivoli Data Warehouse support . . . . . 2
1.2.1 How does the IBM Tivoli Data Warehouse integration work? . . . . . 5
1.2.2 What kind of reports can you get? . . . . . 6
1.3 Serviceability enhancements . . . . . 7
1.4 Job return code processing . . . . . 9
1.5 New options for handling time constraints . . . . . 10
1.6 New options in the localopts file . . . . . 10
1.6.1 nm tcp timeout . . . . . 11
1.7 Job events processing enhancements . . . . . 11
1.8 Networking and security enhancements . . . . . 12
1.8.1 Full firewall support . . . . . 12
1.8.2 Centralized security mechanism . . . . . 12
1.8.3 SSL encryption and authentication support . . . . . 13
1.9 IBM Tivoli Workload Scheduler for Applications . . . . . 14

Chapter 2. Job Scheduling Console enhancements . . . . . 17
2.1 New JSC look and feel . . . . . 18
2.1.1 Action list . . . . . 18
2.1.2 Work with engines . . . . . 20
2.1.3 Explorer view . . . . . 22
2.1.4 Task Assistant . . . . . 23
2.2 Other JSC enhancements . . . . . 24
2.2.1 Progressive message numbering . . . . . 24
2.2.2 Hyperbolic Viewer . . . . . 25
2.2.3 Column layout customization . . . . . 25
2.2.4 Late job handling . . . . . 26
2.2.5 Job return code mapping . . . . . 28

Chapter 3. Installation . . . . . 31
3.1 Installation overview . . . . . 32
3.1.1 CD layout . . . . . 32
3.1.2 Major modifications . . . . . 34
3.2 Installation roadmap . . . . . 35
3.3 Installing a Master Domain Manager on UNIX . . . . . 35
3.4 Adding a new feature . . . . . 56
3.5 Promoting an agent . . . . . 78
3.6 Upgrading to Version 8.2 from a previous release . . . . . 92
3.7 Installing the Job Scheduling Console . . . . . 109
3.7.1 Starting the Job Scheduling Console . . . . . 118
3.7.2 Applying Job Scheduling Console fix pack . . . . . 120
3.8 Installing using the twsinst script . . . . . 130
3.9 Silent install using ISMP . . . . . 132
3.10 Installing Perl5 on Windows . . . . . 133
3.11 Troubleshooting installation problems . . . . . 135
3.11.1 Installation process log files . . . . . 135
3.11.2 Common installation problems . . . . . 136
3.12 Uninstalling Tivoli Workload Scheduler 8.2 . . . . . 137
3.12.1 Launch the uninstaller . . . . . 137
3.12.2 Using the uninstaller . . . . . 138
3.12.3 Tidying up the TWShome directory . . . . . 142
3.12.4 Uninstalling JSS and IBM Tivoli Workload Scheduler Connector . . . . . 143
3.13 Troubleshooting uninstall problems . . . . . 147
3.13.1 Uninstall manually . . . . . 147
3.14 Useful commands . . . . . 149

Chapter 4. Return code management . . . . . 151
4.1 Return code management overview . . . . . 152
4.2 Adding a return code expression to a job definition . . . . . 152
4.3 Defining a return code condition . . . . . 153
4.4 Monitoring return codes . . . . . 155
4.5 Conman enhancement . . . . . 157
4.5.1 Retcod example . . . . . 157
4.6 Jobinfo enhancement . . . . . 160
4.6.1 Jobinfo example . . . . . 161

Chapter 5. Security enhancements . . . . . 171
5.1 Working across firewalls . . . . . 172
5.2 Strong authentication and encryption using Secure Socket Layer protocol (SSL) . . . . . 176
5.2.1 The SSL protocol internals . . . . . 177
5.2.2 Planning for SSL support in Tivoli Workload Scheduler 8.2 . . . . . 179
5.2.3 Creating your own Certificate Authority . . . . . 181

5.2.4 Creating private keys and certificates . . . . . 188
5.2.5 Setting SSL local options . . . . . 197
5.2.6 Configuring SSL attributes . . . . . 199
5.2.7 Trying out your SSL configuration . . . . . 201
5.3 Centralized user security definitions . . . . . 204
5.3.1 Configuring centralized security . . . . . 205
5.3.2 Configuring the JSC to work across a firewall . . . . . 206

Chapter 6. Late job handling . . . . . 207
6.1 Terminology changes . . . . . 208
6.2 Configuration . . . . . 208
6.3 Latest Start Time . . . . . 209
6.4 Suppress option . . . . . 210
6.5 Continue option . . . . . 212
6.6 Cancel option . . . . . 215
6.7 Termination Deadline . . . . . 218

Chapter 7. IBM Tivoli Enterprise Console integration . . . . . 221
7.1 IBM Tivoli Enterprise Console . . . . . 222
7.1.1 IBM Tivoli Enterprise Console components . . . . . 222
7.2 Integration using the Tivoli Plus Module . . . . . 223
7.2.1 How the integration works . . . . . 225
7.3 Installing and customizing the integration software . . . . . 226
7.3.1 Setting up the IBM Tivoli Enterprise Console . . . . . 227
7.3.2 The Plus Module configuration issues . . . . . 229
7.3.3 Recommendations . . . . . 230
7.3.4 Implementation considerations . . . . . 231
7.4 Our environment – scenarios . . . . . 232
7.4.1 Events and rules . . . . . 235
7.5 ITWS/TEC/AlarmPoint operation . . . . . 236
7.5.1 Full IBM Tivoli Workload Scheduler Event Configuration listing . . . . . 240

Chapter 8. Disaster recovery with IBM Tivoli Workload Scheduler . . . . . 243
8.1 Introduction . . . . . 244
8.2 Backup Master configuration . . . . . 244
8.3 On-site disaster recovery . . . . . 245
8.4 Short-term switch of Master Domain Manager . . . . . 246
8.4.1 Using the JSC . . . . . 246
8.4.2 Using the command line . . . . . 249
8.5 Long-term switch of Master Domain Manager . . . . . 249
8.6 Off-site disaster recovery . . . . . 252
8.6.1 Cold start . . . . . 252
8.6.2 Warm start . . . . . 253
8.6.3 Independent solution . . . . . 254
8.6.4 Integrated solution . . . . . 255
8.6.5 Hot start . . . . . 255

Chapter 9. Best practices . . . . . 257
9.1 Planning . . . . . 258
9.1.1 Choosing platforms . . . . . 258
9.1.2 Hardware considerations . . . . . 258
9.1.3 Processes . . . . . 259
9.1.4 Disk space . . . . . 260
9.1.5 Inodes . . . . . 260
9.1.6 Mailman server processes or Domain Managers . . . . . 260
9.1.7 Considerations when designing an IBM Tivoli Workload Scheduler network . . . . . 267
9.1.8 Standard Agents . . . . . 272
9.1.9 High availability . . . . . 272
9.1.10 Central repositories for important files . . . . . 273
9.2 Deployment . . . . . 276
9.2.1 Installing a large IBM Tivoli Workload Scheduler environment . . . . . 277
9.2.2 Change control . . . . . 277
9.2.3 Patching on a regular basis . . . . . 277
9.2.4 IP addresses and name resolution . . . . . 278
9.2.5 Message file sizes . . . . . 279
9.2.6 Implementing the Jnextday process . . . . . 282
9.2.7 Ad hoc job/schedule submissions . . . . . 285
9.2.8 Mailman and writer . . . . . 286
9.2.9 Monitoring . . . . . 286
9.2.10 Security . . . . . 288
9.2.11 Using the LIST authority . . . . . 291
9.2.12 Interconnected TMRs and IBM Tivoli Workload Scheduler . . . . . 292
9.3 Tuning localopts . . . . . 293
9.3.1 File system synchronization level . . . . . 296
9.3.2 Mailman cache . . . . . 296
9.3.3 Sinfonia file compression . . . . . 297
9.3.4 Customizing timeouts . . . . . 297
9.3.5 Parameters that affect the file dependency resolution . . . . . 299
9.3.6 Parameter that affects the termination deadline . . . . . 300
9.4 Scheduling best practices . . . . . 300
9.4.1 Benefits of using a naming convention . . . . . 301
9.4.2 Resources . . . . . 302
9.4.3 Ad hoc submissions . . . . . 303
9.4.4 File status testing . . . . . 304
9.4.5 Time zones . . . . . 305
9.5 Optimizing Job Scheduling Console performance . . . . . 307

9.5.1 Remote terminal sessions and JSC . . . . . 307
9.5.2 Applying the latest fixes . . . . . 307
9.5.3 Resource requirements . . . . . 307
9.5.4 Setting the refresh rate . . . . . 307
9.5.5 Setting the buffer size . . . . . 309
9.5.6 Iconize the JSC windows to force the garbage collector to work . . . . . 309
9.5.7 Number of open editors . . . . . 309
9.5.8 Number of open windows . . . . . 309
9.5.9 Applying filters and propagating to JSC users . . . . . 309
9.5.10 Java tuning . . . . . 313
9.5.11 Startup script . . . . . 314
9.6 IBM Tivoli Workload Scheduler internals . . . . . 315
9.6.1 IBM Tivoli Workload Scheduler directory structure . . . . . 315
9.6.2 IBM Tivoli Workload Scheduler process tree . . . . . 317
9.6.3 Interprocess communication and link initialization . . . . . 318
9.6.4 IBM Tivoli Workload Scheduler Connector . . . . . 320
9.6.5 Retrieval of FTA joblog . . . . . 322
9.6.6 Netman services and their functions . . . . . 325
9.7 Regular maintenance . . . . . 326
9.7.1 Cleaning up IBM Tivoli Workload Scheduler directories . . . . . 326
9.7.2 Backup considerations . . . . . 336
9.7.3 Rebuilding IBM Tivoli Workload Scheduler databases . . . . . 338
9.7.4 Creating IBM Tivoli Workload Scheduler Database objects . . . . . 339
9.8 Basic fault finding and troubleshooting . . . . . 339
9.8.1 FTAs not linking to the Master Domain Manager . . . . . 340
9.8.2 Batchman not up or will not stay up (batchman down) . . . . . 342
9.8.3 Jobs not running . . . . . 343
9.8.4 Jnextday is hung or still in EXEC state . . . . . 344
9.8.5 Jnextday in ABEND state . . . . . 345
9.8.6 FTA still not linked after Jnextday . . . . . 345
9.8.7 Troubleshooting tools . . . . . 346
9.9 Finding answers . . . . . 348

Appendix A. Code samples . . . . . 349
README file for JSC Fix Pack 01 . . . . . 350
Sample freshInstall.txt . . . . . 359
Customized freshInstall.txt . . . . . 365
maestro_plus rule set . . . . . 372
Script for performing long-term switch . . . . . 382

Appendix B. Additional material . . . . . 387
Locating the Web material . . . . . 387
Using the Web material . . . . . 387
System requirements for downloading the Web material . . . . . 388
How to use the Web material . . . . . 388

Abbreviations and acronyms . . . . . 389

Related publications . . . . . 391
IBM Redbooks . . . . . 391
Other publications . . . . . 391
Online resources . . . . . 392
How to get IBM Redbooks . . . . . 392

Index . . . . . 393

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

Redbooks (logo)™, ibm.com®, pSeries™, xSeries®, z/OS®, AIX 5L™, AIX®, AS/400®, DB2®, IBM®, Maestro™, NetView®, OS/400®, PowerPC®, Redbooks™, Sequent®, Tivoli Enterprise™, Tivoli Enterprise Console®, Tivoli®, TME®, WebSphere®

The following terms are trademarks of other companies:

AlarmPoint is a trademark of Invoq Systems, Inc. in the United States, other countries, or both.

Intel and Intel Inside (logos) are trademarks of Intel Corporation in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.


Preface

IBM Tivoli Workload Scheduler, Version 8.2 is IBM's strategic scheduling product that runs on many different platforms, including the mainframe. This IBM Redbook covers the new features of Tivoli Workload Scheduler 8.2, focusing specifically on the Tivoli Workload Scheduler 8.2 Distributed product. In addition to new features and real-life scenarios, you will find a whole chapter on best practices (mostly version independent) with many tips for fine-tuning your scheduling environment. For this reason, you will benefit from this redbook even if you are using a back-level version of Tivoli Workload Scheduler.

Some of the topics that are covered in this book are:

- Return code management
- Late job handling
- Security enhancements
- Disaster recovery
- Job Scheduling Console enhancements
- IBM Tivoli® Enterprise™ Console integration
- Tips and best practices

Customers and Tivoli professionals who are responsible for installing, administering, maintaining or using Tivoli Workload Scheduler 8.2 will find this redbook a major reference.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Vasfi Gucer is an IBM Certified Consultant IT Specialist working at the ITSO Austin Center. He worked with IBM Turkey for 10 years, and has been with the ITSO since January 1999. He has more than 10 years of experience in systems management, networking hardware and distributed platform software. He has worked on various Tivoli customer projects as a Systems Architect in Turkey and the U.S. Vasfi is a Certified Tivoli Consultant.

Rick Jones is currently an L2 Senior Software Engineer for IBM Tivoli Workload Scheduler in IBM UK. He has worked for IBM for five years supporting IBM Tivoli Workload Scheduler. He has been using, administering and supporting various flavors of UNIX® for the past 20 years, with considerable shell script and Perl experience.

Natalia Jojic is a Certified Tivoli Consultant and instructor based in London, UK. She is currently working for Elyzium Limited, a UK-based IBM Premier Partner. She is primarily engaged in client-based consulting on delivering Tivoli Enterprise Management solutions, design and development of Tivoli integration modules with other, non-Tivoli products (such as AlarmPoint). Natalia has a Bachelor of Engineering degree in Computer Systems Engineering from City University, London, UK.

Dave Patrick is a Certified IBM Tivoli Workload Scheduler consultant and instructor currently working for Elyzium Limited, a UK-based IBM Premier Partner. He has 17 years of IT experience, including working as an NCR mainframe operator and programmer, a UNIX Administrator, and in the last seven years as an ITWS consultant, five as an instructor of Versions 6.1 to 8.2. During his time as an instructor, he has taught most of the courses held in the UK, as well as courses at IBM locations in Italy, Belgium, Sweden and South Africa. Though he is primarily an instructor, he has been involved in course development, and has also architected and installed a large number of IBM Tivoli Workload Scheduler implementations.

Alan Bain is an IT Specialist in IBM Tivoli Services in Stoke Poges (UK). Over the past four years, Alan has also worked as a Technical Training Consultant with IBM Tivoli Education and a Pre-Sales Systems Engineer with the IBM Tivoli Advanced Technology Group. He has extensive customer experience and in his current services role works on many short-term and long-term IBM Tivoli Workload Scheduler and IBM Tivoli Storage Manager engagements all over the UK, including a TNG-to-ITWS conversion project completed in six months.

Thanks to the following people for their contributions to this project:

Jackie Biggs, Brenda Garner, Linda Robinson
IBM USA

Geoff Pusey
IBM UK

Fabio Barillari, Lucio Bortolotti, Maria Pia Cagnetta, Antonio Di Cocco, Riccardo Colella, Pietro Iannucci, Antonello Izzi, Valeria Perticara
IBM Italy

Peter May
Inform & Enlighten Ltd


The team would like to express special thanks to Warren Gill from IBM USA and Michael A Lowry from IBM Sweden.

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!

We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

- Use the online Contact us review redbook form found at:

  ibm.com/redbooks

- Send your comments in an Internet note to:

  [email protected]

- Mail your comments to:

  IBM® Corporation, International Technical Support Organization
  Dept. JN9B Building 003 Internal Zip 2834
  11400 Burnet Road
  Austin, Texas 78758-3493


Chapter 1. IBM Tivoli Workload Scheduler 8.2 new features

In this chapter we discuss the new functionality and enhancements in the 8.2 release of IBM Tivoli Workload Scheduler. Details of most of these enhancements are covered in subsequent chapters of this redbook. Because this redbook focuses on the IBM Tivoli Workload Scheduler Distributed product, only enhancements to the Distributed product are discussed.

The following topics are covered in this chapter:

- “New and easy installation process” on page 2
- “IBM Tivoli Data Warehouse support” on page 2
- “Serviceability enhancements” on page 7
- “Job return code processing” on page 9
- “New options for handling time constraints” on page 10
- “New options in the localopts file” on page 10
- “Job events processing enhancements” on page 11
- “Networking and security enhancements” on page 12
- “IBM Tivoli Workload Scheduler for Applications” on page 14


1.1 New and easy installation process

An InstallShield wizard and a command line interface are provided to guide you in performing the following installation procedures:

- Install a new instance of IBM Tivoli Workload Scheduler.
- Upgrade IBM Tivoli Workload Scheduler from the previous version.
- Add a new feature to the existing IBM Tivoli Workload Scheduler installation.
- Reconfigure an existing installation.
- Install a patch.
- Remove or uninstall IBM Tivoli Workload Scheduler or only specified features.

A clearer installation process is now available on UNIX platforms, and the installation process is now uniform across all platforms.

For details of the new installation procedures, refer to Chapter 3, “Installation” on page 31.

Important: The wizard and command line interface are available only for Tier 1 platforms. Refer to the IBM Tivoli Workload Scheduler Release Notes for information about supported platforms.

1.2 IBM Tivoli Data Warehouse support

With this version, IBM Tivoli Workload Scheduler can save scheduling data into the IBM Tivoli Data Warehouse schema, allowing customers to collect historical data from many Tivoli applications in one central place, correlate it, and enable enterprise-level reporting on information from those applications.

IBM Tivoli Data Warehouse enables you to access the underlying data about your network devices and connections, desktops, applications and software, and the activities in managing your infrastructure. Having all this information in a data warehouse enables you to look at your IT costs, performance, and other trends of specific interest across your enterprise.

IBM Tivoli Data Warehouse provides the infrastructure for the following:

- Schema generation of the central data warehouse
- Extract, transform, and load (ETL) processes through the IBM DB2® Data Warehouse Center tool
- Report interfaces

It is also flexible and extensible enough to allow you to integrate application data of your own.

As shown in Figure 1-1 on page 4, Tivoli Enterprise Data Warehouse consists of a central data warehouse where historical data from management applications is aggregated and correlated for use by reporting and third-party online analytical processing (OLAP) tools, as well as planning, trending, analysis, accounting, and data mining tools.


Figure 1-1 Tivoli Data Warehouse integration


1.2.1 How does the IBM Tivoli Data Warehouse integration work?

The IBM Tivoli Workload Scheduler Production Plan is physically mapped into a binary file (named Symphony) that contains the scheduling activities to be performed in the next 24 hours. When a new Production Plan is created, a new Symphony file is created, all the uncompleted activities from the previous Plan are carried forward into the new Symphony file, and the old Symphony file is archived in the TWShome/schedlog directory.

The IBM Tivoli Workload Scheduler Enablement pack processes the archived Symphony files, which contain the history of all the jobs that were executed during past Production Plans. These files are processed by the Tivoli Workload Scheduler archiver process to fill a set of flat files (cpus, jobs, Scheds) that are then imported into staging DB2 tables (TWS_WORKSTATION_P, TWS_JOB_P, TWS_JOBSTREAM_P in the AWS schema). Both the archiver process and the import command are called from a Perl script, provided with the Tivoli Workload Scheduler warehouse pack, that can be scheduled through Tivoli Workload Scheduler itself on the Tivoli Workload Scheduler Master Domain Manager.

The archiver process extracts the scheduling history data from the archived Symphony files and dumps it into the flat files, while the import process reads those flat files and uploads the data into the DB2 tables. Because the Tivoli Workload Scheduler Master Domain Manager and the Tivoli Data Warehouse control server usually reside on two different machines, a DB2 client must be installed on the Tivoli Workload Scheduler Master Domain Manager so that the import process can upload data to the central data warehouse database.

The Perl script that runs the archiver process and import command is called tws_launch_archive.pl.
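Because the script is meant to be scheduled by Tivoli Workload Scheduler itself, a job and job stream along the lines of the following sketch could be defined on the Master Domain Manager. This is an illustration only: the workstation name (MASTER), the script path, the logon user, and the run time are our assumptions, not values taken from the warehouse pack documentation.

$JOBS
MASTER#TWS_TO_TDW
 SCRIPTNAME "/usr/local/tws/maestro/scripts/tws_launch_archive.pl"
 STREAMLOGON tws
 DESCRIPTION "Load archived Symphony history into the Tivoli Data Warehouse staging tables"
 RECOVERY STOP

SCHEDULE MASTER#TDW_LOAD
ON EVERYDAY
AT 0700
:
MASTER#TWS_TO_TDW
END

If the load is scheduled to run after Jnextday, the most recently archived Symphony file is picked up by each run.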

Figure 1-2 on page 6 shows the details of the archiver process.

Note: Both IBM Tivoli Data Warehouse and one copy of IBM DB2 software (to be used with IBM Tivoli Data Warehouse) are shipped with each Tivoli application at no charge. This is also true for IBM Tivoli Workload Scheduler Version 8.2. You do not need to buy it separately.


Figure 1-2 Archiver process

1.2.2 What kind of reports can you get?

The current version of IBM Tivoli Data Warehouse is 1.1, and it comes with a Web-based reporting interface that can be used to get reports.

The integration with IBM Tivoli Workload Scheduler provides the following predefined reports:

- Jobs with the highest number of unsuccessful runs
- Workstations with the highest number of unsuccessful runs
- Run states statistics for all jobs
- Jobs with the highest average duration time
- Workstations with the highest CPU utilization
- Run times statistics for all jobs


Note: IBM Tivoli Data Warehouse V1.2 (new version of IBM Tivoli Data Warehouse, which is expected to be available in 1Q 2004) is planned to be shipped with Crystal Reports. It will replace the current reporting interface.


Figure 1-3 shows the jobs with the highest number of unsuccessful runs.

Figure 1-3 Jobs with the highest number of unsuccessful runs

1.3 Serviceability enhancements

IBM Tivoli Workload Scheduler Version 8.2 improves the tracing information provided by the product and adds new tools that increase its serviceability. Chief among these is the extended Autotrace feature, a built-in flight-recorder-style trace mechanism that logs all activities performed by the IBM Tivoli Workload Scheduler processes. In case of product failure or unexpected behavior, this feature can be extremely effective in finding the cause of the problem and providing a quick solution.

The tracing system is completely transparent and does not have any impact on file system performance because it is fully handled in memory. It is automatically started by the StartUp command, so no further action is required.

Tip: You are not limited to these reports. Since the data is in a DB2 database, you can create your own reports as well.


In case of problems, you are asked to create a trace snap file by issuing a few simple commands. The trace snap file is then inspected by the Tivoli support team, which uses the logged information as a basis for efficient problem determination. The Autotrace feature, already available in Version 8.1, has been extended to run on additional platforms. Configuration options are available in the TWShome/trace directory.

Example 1-1 shows the files related to Autotrace. Note that tracing configurations and options can be defined at the Master level. Customizable logging options can be configured for clearer reporting and quicker problem resolution.

Example 1-1 Files related to Autotrace (Master level)

<aix-inv1b:tws> cd $MAESTROHOME
<aix-inv1b:tws> pwd
/usr/local/tws/maestro
<aix-inv1b:tws> cd trace
<aix-inv1b:tws> ls
atctl       ffdc        init_trace  product
config      ffdc_out    libatrc.a

Example 1-2 shows the Autotrace configuration file.

Example 1-2 Autotrace configuration file

<aix-inv1b:tws> more config
# AutoTrace configuration file
#
# This file is used to customize trace configurations.
# It is installed (encoded into AutoTrace control channel)
# by `atctl init' (sync) or `atctl config replace'.
# Processes only examine the installed configuration when
# they start; existing processes ignore any changes.
# The installed configuration may be displayed using `atctl config'.
#
# The file format is a relaxed stanza. Stanzas may be defined
# in any order desired. There are four keywords that introduce
# a new stanza; most take an argument to restrict their effect:
#   default
#   product $id       # 32-bit value or a product name
#   process $name     # process name as invoked
#   channel $number   # 1..255, inclusive

Note: On Windows® NT, the netman service is started automatically when a computer is restarted. The StartUp command can be used to restart the service if it is stopped for any reason.

On UNIX, the StartUp command is usually installed in the /etc/rc file, so that netman is started each time a computer is rebooted. StartUp can be used to restart netman if it is stopped for any reason.
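As an illustration, a typical boot-time entry might look like the following sketch; the TWShome path (/usr/local/tws/maestro) and the tws user match the examples used in this redbook, but your installation may differ:

# Fragment added to /etc/rc (or an rc script) so that netman starts at boot.
# Run StartUp as the Tivoli Workload Scheduler user, not as root.
if [ -x /usr/local/tws/maestro/StartUp ]; then
        su - tws -c "/usr/local/tws/maestro/StartUp"
fi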

For more details on the Autotrace facility refer to Tivoli Workload Scheduler Version 8.2, Error Message and Troubleshooting, SC32-1275.

1.4 Job return code processing

New with this version, you can define the success condition for a job using a very simple logical expression syntax. This provides:

- More granularity in defining the job success and fail policy.
- More flexibility in controlling the job execution flow based on the execution results.

The job return code is now saved in the Plan and is visible from the Job Scheduling Console and from conman. If a job is a recovery job, the jobinfo utility can return information about the original return code, for example:

jobinfo RSTRT_RETCOD
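Because jobinfo can be called from within a job's script, a recovery job can branch on the failed job's return code. The following shell sketch is ours: the cleanup script and the threshold values are invented for illustration, and only the jobinfo RSTRT_RETCOD call itself comes from the product.

#!/bin/sh
# Recovery job sketch: act on the return code of the original (restarted) job
ORIG_RC=`jobinfo RSTRT_RETCOD`
echo "Original job ended with return code $ORIG_RC"
if [ "$ORIG_RC" -ge 6 ] && [ "$ORIG_RC" -le 10 ]; then
    # Hypothetical cleanup for the "retryable" range of return codes
    /opt/batch/cleanup.sh
else
    exit 1
fi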

The conman showjobs command has been enhanced to retrieve the return code information for a given job. For example:

conman “sj <jobselect>; keys retcode”

If you want to use the new return code functionality, you have to add the RCCONDSUCC keyword in the job definition (you can also define the return code from the JSC in the New Job Definition window).

For example:

RCCONDSUCC “RC = 2 OR (RC >= 6 AND RC <= 10)”

This expression means that if the job's return code is equal to 2, or has a value between 6 and 10 inclusive, the job's execution is considered successful; in all other cases it is considered to have ended in error.
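Putting this together, a complete job definition using the new keyword might look like the following sketch. The workstation, job name, script path, and logon user are invented for illustration, and the keyword ordering should be verified against the Reference Guide; only the RCCONDSUCC expression itself reflects the syntax described above.

$JOBS
MASTER#NIGHTLY_EXTRACT
 SCRIPTNAME "/opt/batch/nightly_extract.sh"
 STREAMLOGON tws
 DESCRIPTION "Nightly extract; return codes 2 and 6 through 10 are acceptable"
 RCCONDSUCC "RC = 2 OR (RC >= 6 AND RC <= 10)"
 RECOVERY STOP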

The default behavior (if you do not code the return code) is:

- If the return code is equal to 0, the job is considered successful (SUCC).
- If the return code is different from 0, the job is considered an error (ABEND).

Return code functionality is covered in detail in Chapter 4, “Return code management” on page 151.

1.5 New options for handling time constraints

Jobs and job streams are constrained by the start times and deadlines specified for them, and also by any dependencies they have on the completion of other jobs or job streams. To aid you in determining the status of jobs and job streams, the following enhancements have been implemented:

- Messages related to job and job stream errors and delays are displayed by the Job Scheduling GUI and logged to the message log.

  – A message is issued for jobs and job streams that have reached the start time specified for them, but cannot start because of pending dependencies on other jobs or job streams.

  – A message is issued when the deadline time is reached for a job or job stream that has not yet started.

- A query facility enables you to query jobs and job streams using the Job Scheduling GUI or command-line interface for the following information:

  – Jobs or job streams whose start times have been reached, but have not yet started

  – Jobs or job streams whose deadlines have been reached, but have not yet started

  – Jobs or job streams whose deadlines have been reached, but have not yet completed running

- New options enable you to start jobs or job streams whose deadlines have been reached but have not yet started.

1.6 New options in the localopts file

In IBM Tivoli Workload Scheduler Version 8.2, a number of new options have been implemented for use in the localopts file. Example 1-3 shows a sample of an IBM Tivoli Workload Scheduler Version 8.2 localopts file. All SSL-related options are discussed in Chapter 5, “Security enhancements” on page 171.


Example 1-3 Abbreviated local options file

# SSL Attributes
#
nm SSL port =31113
SSL key =/usr/local/tws/maestro/ssl/MASTERkey.pem
SSL certificate =/usr/local/tws/maestro/ssl/MASTERcert.pem
SSL key pwd =/usr/local/tws/maestro/ssl/MASTERpwd.sth
SSL CA certificate =/usr/local/tws/maestro/ssl/cacert.pem
#SSL certificate chain =/usr/local/tws/maestro/ssl/TWSCertificateChain.crt
SSL random seed =/usr/local/tws/maestro/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =cpu
#SSL auth string =tws

1.6.1 nm tcp timeout

This attribute of the netman process specifies the maximum number of seconds that mailman and conman will wait for the completion of a request on a linked workstation that is not responding. The default is 600 seconds. The local options file is located in the TWShome directory.
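For example, to lower the timeout to five minutes, the entry in TWShome/localopts could be set as shown below. The option name comes from the section title above; the 300-second value and the comment are ours.

# Netman attributes
nm tcp timeout =300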

1.7 Job events processing enhancements

This version adds new positional variables to job events 101 to 118, which are written to the Event.log file, processed by the IBM Tivoli Workload Scheduler Plus module, and displayed by IBM Tivoli Enterprise Console®. These variables include:

- Job estimated start time
- Job effective start time
- Job estimated duration
- Job deadline time

Also in this version, event status is monitored at Symphony file generation time for key jobs. The SYMEVNTS parameter in BmEvents.conf allows you to get a picture of the key jobs that are in ready, hold, and exec status. This information is reported to IBM Tivoli Enterprise Console by the IBM Tivoli Workload Scheduler Plus module.
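As an illustration, a BmEvents.conf fragment on the Master might look like the following sketch. SYMEVNTS is the parameter introduced above; OPTIONS and FILE are commonly used BmEvents.conf parameters, and the log file path is our assumption, so verify the exact settings against the configuration file shipped with your installation.

# Fragment of TWShome/BmEvents.conf (sketch)
OPTIONS=MASTER
SYMEVNTS=YES
FILE=/usr/local/tws/maestro/event.log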

Job events processing enhancements are covered in more detail in Chapter 7, “IBM Tivoli Enterprise Console integration” on page 221.


1.8 Networking and security enhancements

Three major networking and security enhancements have been introduced in this version: full firewall support, centralized security, and SSL encryption and authentication.

1.8.1 Full firewall support

In previous versions of IBM Tivoli Workload Scheduler, running the commands to start or stop a workstation or to get the standard list required opening a direct TCP/IP connection between the originator and the destination nodes. In a firewall environment, this forces users to break the firewall to open a direct communication path between the Master and each Fault Tolerant Agent in the network.

Tivoli Workload Scheduler Version 8.2 features a new configurable attribute, behindfirewall, in the workstation's definition in the database. You can set this attribute to ON to indicate that a firewall exists between that particular workstation and its Domain Manager, and that the link between the Domain Manager and the workstation (which can be another Domain Manager itself) is the only allowed link between the respective domains.

Also, for all the workstations having this attribute set to ON, the commands to start or stop the workstation or to get the standard list will be transmitted through the domain hierarchy instead of opening a direct connection between the Master (or Domain Manager) and the workstation.
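As an illustration, the attribute appears in the workstation (cpu) definition. The following composer sketch is hedged: the workstation name, node, domain, and port are invented, so treat it as an example of where the attribute goes rather than a definitive definition.

CPUNAME LONDON_FTA1
 DESCRIPTION "FTA behind a firewall"
 OS UNIX
 NODE fta1.example.com
 TCPADDR 31111
 DOMAIN LONDON
 FOR MAESTRO
  TYPE FTA
  AUTOLINK ON
  BEHINDFIREWALL ON
  FULLSTATUS OFF
 END

With this setting, only the link between the agent and its Domain Manager needs to be allowed through the firewall.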

For more information on this, refer to 5.1, “Working across firewalls” on page 172.

1.8.2 Centralized security mechanism

A new global option makes it possible to change the security model in the Tivoli Workload Scheduler network. If you use this option, the Security files for the FTAs in the network can be created or modified only on the Master, and the Tivoli Workload Scheduler administrator needs to create, update, and distribute the Security files for all the agents.

Setting this global option on also triggers a security mechanism that uses an encrypted, randomly generated Security file checksum and a Symphony file run number to identify and trust the Tivoli Workload Scheduler network corresponding to that Master.

The main goal of the centralized security model is to take away from the root (or administrator) user of an FTA the means of deleting the Security file and re-creating it with the maximum authorization, thus gaining the capability to issue Tivoli Workload Scheduler commands that affect other machines within the network. In Example 1-4, centralized security is enabled in the globalopts file. Consequently, only the Master Domain Manager administrator has the ability to modify local Fault Tolerant Agent Security files. The globalopts file is located in the TWShome/mozart directory.

Example 1-4 Abbreviated global options file

# Set automatically grant logon as batch=yes on the master Workstation to enable this
# user right on NT machines for a job's streamlogon user.
#
automatically grant logon as batch =no
#
# For open networking systems bmmsgbase and bmmsgdelta must equal 1000.
#
bmmsgbase =1000
bmmsgdelta =1000
#
#--------------------------------------------------------------------------
# Entries introduced in TWS-7.0
Timezone enable =yes
Database audit level =1
Plan audit level =1
#----------------------------------------------------------------------------
# Entries introduced in TWS-8.2
centralized security =yes
enable list security check =no

For more information on centralized security, refer to 5.3, “Centralized user security definitions” on page 204.

Tip: If you prefer to use the traditional security model, you can still do so by not activating the global option.

1.8.3 SSL encryption and authentication support

In IBM Tivoli Workload Scheduler Version 8.2, network communication between IBM Tivoli Workload Scheduler systems can be configured to use SSL. This provides a secure, authenticated, and encrypted connection between components running in secure and non-secure domains.

SSL uses digital certificates to authenticate the identity of a workstation. The IBM Tivoli Workload Scheduler administrator must plan how authentication will be used within the network:

- Use one certificate for the entire Tivoli Workload Scheduler network.


- Use a separate certificate for each domain.
- Use a separate certificate for each workstation.

SSL support is automatically installed with IBM Tivoli Workload Scheduler Version 8.2. To activate SSL support, the administrator must perform the following actions (a sample workstation definition follows the list):

- Set up a private key, a certificate, and a trusted Certificate Authority (CA) list.
- Set the SSL local options.
- Add the SSL attributes to the workstation's definition in the database.
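The SSL local options are the ones shown in Example 1-3. For the workstation definition, the following composer sketch illustrates the idea; the workstation name, node, domain, and port numbers are invented, and the SECUREADDR and SECURITYLEVEL keywords reflect our reading of the 8.2 workstation definition syntax, so verify them against the product reference before use.

CPUNAME ACCT_FTA2
 OS UNIX
 NODE fta2.example.com
 TCPADDR 31111
 SECUREADDR 31113
 DOMAIN MASTERDM
 FOR MAESTRO
  TYPE FTA
  AUTOLINK ON
  SECURITYLEVEL ON
  FULLSTATUS OFF
 END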

For detailed information on SSL encryption and authentication support in IBM Tivoli Workload Scheduler refer to 5.2, “Strong authentication and encryption using Secure Socket Layer protocol (SSL)” on page 176.

1.9 IBM Tivoli Workload Scheduler for Applications

Along with the release of IBM Tivoli Workload Scheduler Version 8.2, IBM released new versions of the IBM Tivoli Workload Scheduler Application Extensions, bundled in a product called IBM Tivoli Workload Scheduler for Applications. This new packaging includes the following access methods:

- Oracle E-Business Suite Access Method (MCMAGENT): Schedule and control Oracle E-Business Suite 11.0 or 11.i jobs, and Oracle Applications 10.7 jobs.
- PeopleSoft Access Method (psagent): Schedule and control PeopleSoft 7.x and 8.x jobs.
- R/3 Access Method (r3batch): Schedule and control R/3 3.1g or higher standard jobs and R/3 Business Information Warehouse jobs. Provides National Language Support (NLS), SAP BC-XBP 2.0 interface support, R/3 logon groups, and job editing/creation from the Job Scheduling Console.
- z/OS® Access Method (mvsca7, mvsjes, mvsopc): Schedule and control JES2, JES3, OPC 3.0 or higher, and CA-7 jobs.

A new GUI installation process is also available to simplify the IBM Tivoli Workload Scheduler for Applications 8.2 installation process.

To create and set up an Extended Agent workstation, you should go through the following steps. This is common to all access methods.

- Perform all the post-installation steps required by each method.
- Create one or more Extended Agent workstation definitions.


- Create the options file for those Extended Agent workstation definitions.
- Install one or more access methods through:
  – InstallShield Multiplatform (ISMP) wizard installer
  – IBM Tivoli Configuration Manager 4.2
  – TAR-based installation for Tier 2 platforms

For more information, refer to IBM Tivoli Workload Scheduler for Applications User Guide, SC32-1278.

Important: The PeopleSoft access method requires some pre-installation steps before running the installer.

A template for running a silent installation is also available with this product and is located in the response_file directory. Refer to the IBM Tivoli Workload Scheduler Release Notes, SC32-1277 for further information.


Chapter 2. Job Scheduling Console enhancements

This chapter describes the latest enhancements added to the Job Scheduling Console (JSC) V1.3. The JSC has a new look and many added areas of functionality required for today’s scheduling market needs.

The Job Scheduling Console now has an easier to understand interface, with a Tivoli-compliant look and feel, making it even easier to navigate and update database entries.

The following topics are covered in this chapter:

� “New JSC look and feel” on page 18
� “Other JSC enhancements” on page 24


2.1 New JSC look and feel

The Job Scheduling Console now comprises a number of views, as shown in Figure 2-1:

� Actions list or Actions Tree view
� Work with engines or Engine view
� Windows Explorer view

In addition to these views, there is also a Task Assistant, which is the help feature of the JSC.

Figure 2-1 JSC 1.3 views

2.1.1 Actions list

This new Actions list pane, as shown in Figure 2-2 on page 19, can be used to perform actions on a specific IBM Tivoli Workload Scheduler engine, such as defining a new scheduling object. This new feature makes navigating scheduling engines very easy.

The first level is the IBM Tivoli Workload Scheduler object type, while the second level is the FTA workstation name in which the object will be created. The second-level selection can be dependent on the IBM Tivoli Workload Scheduler engine (or connector) that you currently select in the Engine view.

This Action list view is often referred to as the portfolio or the Action Tree view, which lists the actions that are available to you.

Figure 2-2 Actions list pane

The Actions list pane can be displayed or hidden (toggled on/off) by selecting View -> Show -> Portfolio from the menu bar as shown in Figure 2-3 on page 20.


Figure 2-3 Toggling Action list pane on/off

2.1.2 Work with engines

The Work with engines pane, as shown in Figure 2-4 on page 21, is a tree view that displays your scheduler engines. If you expand any of the objects that represent the scheduler engines, you see the lists and groups available for that particular engine. You can right-click any list item to display a pop-up menu that allows you to manage the list properties.


Figure 2-4 Work with engine pane

There are two types of views that are provided with the JSC: the database view and the plan view.

� The database view shows the objects in the IBM Tivoli Workload Scheduler database.

� The plan view shows the instances of objects in the IBM Tivoli Workload Scheduler plan.

The Common Default Plan lists will connect to all available connectors and produce the necessary output.

You can also define your own group and lists.

A great benefit of creating custom lists is that end users need only work with their assigned agents and parts of the enterprise they are responsible for. Organizing lists across domains and parts of the business leads to a more efficient use of the JSC, since the only data that needs to be loaded is the appropriate list data. This helps with JSC refresh rates and creates less network traffic.


2.1.3 Explorer view

The Explorer view shown in Figure 2-5 provides an integrated view of job stream instances and job instances, together with their related jobs and dependencies, if any. A tree view is in the left pane and a table in the right pane.

Figure 2-5 Explorer pane

If you select the tree root in the left pane, all job stream instances are displayed in table form in the right pane. In both panes, a pop-up menu is available for managing job stream instances.

If you select a job stream instance in the left pane, it expands to display the All Jobs folder and the Dependencies folder. To view all jobs contained in the job stream instance, select the All Jobs folder. In both the left and right pane, a pop-up menu is available for managing jobs.


To view all the dependencies contained in the job stream instance, select the Dependencies folder. The contents of the folder are displayed in table form in the right pane.

When the list is refreshed, it is re-created based on the data returned by the engine. Any job stream instances with their related jobs and dependencies that were expanded are collapsed to be updated. You have to navigate to the job stream instance that interests you and expand the jobs and dependencies folder again.

2.1.4 Task Assistant

Clicking the question mark in the top-right corner of the IBM Tivoli Job Scheduling Console will open the Task Assistant as shown in Figure 2-6. This feature is very useful for new users and has online help, with descriptions for every major function within the JSC.

Figure 2-6 Task Assistant

You can open the Task Assistant using one of the following methods:

� The ? button on the upper right corner of the window
� The Help menu
� The F1 key when the cursor is positioned on the action for which you need help


2.2 Other JSC enhancements

Apart from the main display changes, IBM Tivoli Job Scheduling Console Version 1.3 also features the following enhancements:

� Progressive message numbering
� Hyperbolic Viewer for job stream instances and job instances
� Column layout customization
� Late jobs handling for the distributed environment
� Job return code mapping for the distributed environment
� SSL support for the distributed environment
� Support for firewalls in the IBM Tivoli Workload Scheduler network
� Work across firewalls for the Job Scheduling Console and Tivoli Management Framework

The last three of these enhancements are discussed in Chapter 5, “Security enhancements” on page 171.

2.2.1 Progressive message numbering

As shown in Figure 2-7, JSC messages are now identified using progressive numbering.

Figure 2-7 Progressive message numbering

Every message is identified using progressive numbering of the form GJSXXXXY, where XXXX is a progressive number and Y indicates the message type.

There are three types of messages:

� Error messages: Y=E
� Warning messages: Y=W
� Information messages: Y=I


2.2.2 Hyperbolic Viewer

The Hyperbolic Viewer displays an integrated graphical view of job streams and job instances with their related jobs, job streams and other dependencies, as shown in Figure 2-8. This view can be very useful for displaying complicated queries that involve many job stream instances. You can access the Hyperbolic Viewer by selecting View -> Hyperbolic Viewer from the JSC menu. It is only used for the Scheduled Jobs and Job Streams Plan Lists.

Figure 2-8 Hyperbolic Viewer

2.2.3 Column layout customization

When creating a query list, you can now decide which columns will be displayed. A new pane in the list properties window displays check boxes for all columns available in the query, as shown in Figure 2-9 on page 26. By selecting or clearing the check boxes, you can create a list containing only the columns you have selected, or modify an existing list. To access the Column Definition window when creating a custom list, click the Column Definition tab in the Properties window.

Figure 2-9 Column Definition window

2.2.4 Late job handling

Jobs and job streams are constrained by the start times and deadlines specified for them, and also by any dependencies they have on the completion of other jobs or job streams. To aid you in determining the status of jobs and job streams, the following enhancements have been implemented:

� Messages related to job and job stream errors and delays are displayed and logged to the IBM Tivoli Workload Scheduler message and event logs.


� A query facility enables you to query jobs and job streams for the following information:

– Jobs or job streams whose start times have been reached, but have not yet started

– Jobs or job streams whose deadlines have been reached, but have not yet started

– Jobs or job streams whose deadlines have been reached, but have not yet completed running

– Start jobs or job streams whose deadlines have been reached but have not yet started

Figure 2-10 on page 28 shows how to input the latest start and termination deadlines.


Figure 2-10 Latest start time and termination deadline
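
The same constraints can also be expressed in the command-line scheduling language. The job stream fragment below is a sketch only; it assumes the Version 8.2 at, until, and deadline keywords and uses invented workstation, job stream, and job names, so verify the exact syntax against the reference manual:

SCHEDULE MASTER#PAYROLL
ON WEEKDAYS
AT 1800 UNTIL 2100 DEADLINE 2300
:
MASTER#PAYCALC
 AT 1815
 DEADLINE 2200
END

Here until acts as the latest start time and deadline as the termination deadline used for the late-job queries described above.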

2.2.5 Job return code mapping

You can now define a logical expression to represent which return codes are to be considered successful, as shown in Figure 2-11 on page 29. The ability to define the job as successful or abended allows more flexibility in controlling the job execution flow depending on the result of the job execution.

Figure 2-11 Return code mapping
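
The equivalent in a command-line job definition is a success-condition expression. The fragment below is illustrative only; it assumes the Version 8.2 rccondsucc keyword and uses hypothetical workstation, script, and user names, so confirm the exact syntax in the reference manual:

$JOBS
FTA1#NIGHTLY_BACKUP
 SCRIPTNAME "/opt/scripts/backup.sh"
 STREAMLOGON tws
 DESCRIPTION "Return codes 0, 2 and 4 are treated as success"
 RCCONDSUCC "(RC=0) OR (RC=2) OR (RC=4)"
 RECOVERY STOP

Any other return code leaves the job in the ABEND state, so downstream dependencies are not released.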

For more information on these JSC enhancements and other JSC features, refer to Tivoli Workload Scheduler Job Scheduling Console User’s Guide, SH19-4552. You can also find some suggestions about optimizing JSC performance in 9.5, “Optimizing Job Scheduling Console performance” on page 307.


Chapter 3. Installation

This chapter provides step-by-step installation instructions for Tivoli Workload Scheduler 8.2 and the Job Scheduling Console (JSC), including the setup of the Tivoli Management Framework. A number of common scenarios are provided, including:

� “Installing a Master Domain Manager on UNIX” on page 35
� “Adding a new feature” on page 56
� “Promoting an agent” on page 78
� “Upgrading to Version 8.2 from a previous release” on page 92
� “Installing the Job Scheduling Console” on page 109
� “Installing using the twsinst script” on page 130
� “Silent install using ISMP” on page 132
� “Troubleshooting installation problems” on page 135
� “Uninstalling Tivoli Workload Scheduler 8.2” on page 137


3.1 Installation overview

The following are the products used in the test environment:

� AIX® 5L™ Version 5.1
� Red Hat Linux Enterprise Server 2.1
� Windows 2003 Service Pack 3
� Tivoli Workload Scheduler Version 8.2
� Tivoli Workload Scheduler Version 8.1
� Tivoli Workload Scheduler Version 7.0
� Job Scheduling Services 1.2
� IBM Tivoli Workload Scheduler Connector 8.2
� Tivoli Management Framework 4.1

3.1.1 CD layout

The following CDs are required to start the installation process:

� Tivoli Workload Scheduler 8.2 Installation Disk 1

This CD-ROM includes install images for most of the Tier 1 platforms, TMF JSS, and IBM Tivoli Workload Scheduler Connector. Table 3-1 is a complete contents list.

Table 3-1 Tivoli Workload Scheduler 8.2 Installation Disk 1

File or directory / Description
AIX/ (bin/, catalog/, CLI/, codeset/, SETUP.bin, Tivoli_TWS_AIX.SPB, twsinst) / AIX ISMP and twsinst installation files
HPUX/ / HP-UX ISMP and twsinst installation files
SOLARIS/ / Sun Solaris ISMP and twsinst installation files
WINDOWS/ / Windows ISMP installation files
RESPONSE_FILE/ / Silent installation templates
TWS_CONN/ / TMF JSS and TWS Connector installation files
TWSPLUS/ / TWS Plus Module files
SETUP.bin / Setup program
SETUP.jar / Java™ class archive used by SETUP.bin
Tivoli_TWS_LP.SPB / Software Package Block (SPB) archive
TWS_size.txt / Component sizes in megabytes
media.inf / CD-ROM information

� Tivoli Workload Scheduler 8.2 Installation Disk 2

This CD-ROM includes ISMP installation images for the remaining Tier 1 platforms, and install images for the Tier 2 platforms. Table 3-2 is a complete contents list.

Table 3-2 Tivoli Workload Scheduler 8.2 Installation Disk 2

File or directory / Description
LINUX_I386/ / Linux (Intel®) ISMP installation files
LINUX_S390/ / Linux (zSeries) ISMP installation files
DYNIX/ / IBM Sequent® MAESTRO.TAR archive
IRIX/ / SGI Irix MAESTRO.TAR archive
LINUX_PPC/ / Linux (PowerPC®) MAESTRO.TAR archive
OSF/ / HP/Compaq Tru64 MAESTRO.TAR archive
RESPONSE_FILE/ / Silent installation templates
TWS_CONN/ / TMF JSS and TWS Connector installation files
TWSPLUS/ / TWS Plus Module files
As400/ / AS/400® Limited Fault Tolerant Agent installation archive
Add-On/WINDOWS/Perl5 / Perl 5.8.0 for Windows
tedw_apps_etl/ / IBM Tivoli Enterprise Data Warehouse integration files
SETUP.bin / Setup program
SETUP.jar / Java class archive used by SETUP.bin
Tivoli_TWS_LP.SPB / Software Package Block (SPB) archive
TWS_size.txt / Component sizes in megabytes
media.inf / CD-ROM information


3.1.2 Major modifications

The installation causes the following major modifications to the installed directory structure and modified files:

1. The TWShome/../unison directory has been relocated inside the main IBM Tivoli Workload Scheduler installation directory. The contents have been primarily relocated as follows:

a. All binaries have been moved to the TWShome/bin directory.

b. The netman related files, NetConf and NetReq.msg, have been relocated to the TWShome/network directory.

c. The TWShome/../unison/network directory has been relocated to TWShome/network.

2. The components file, normally found in /usr/unison, is no longer used for Tivoli Workload Scheduler Version 8.2 installations on Tier 1 platforms supporting the ISMP installation method. Instead, a new TWSRegistry.dat file is now used. Unlike the components file, the TWSRegistry.dat file is only used to register information regarding the installed agent; it is not referred to at run time. (A quick way to inspect the registered information is shown after this list.)

The TWSRegistry file is located:

– On UNIX in the directory:

/etc/TWS

– On Windows in the directory:

%SystemRoot%\system32

3. The IBM Tivoli Workload Scheduler installation depends on the workstation type you are installing. For example if you install a Fault Tolerant Agent, you will not find the Jnextday script in the TWShome directory. This only gets installed if the workstation type is a Master or a Backup Master.
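
Regarding the TWSRegistry.dat file mentioned in item 2, a quick, read-only way to see what was registered for an installed agent on UNIX is simply to display the file; the exact keys vary by release, so treat the output as informational:

# cat /etc/TWS/TWSRegistry.dat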



3.2 Installation roadmap

The basic flow of the InstallShield MultiPlatform installation process is illustrated diagrammatically in Figure 3-1.

Figure 3-1 Installation flow

3.3 Installing a Master Domain Manager on UNIX

The manual Tivoli Workload Scheduler Version 8.2, SC32-1273 contains detailed instructions for upgrading to, installing, and adding features to IBM Tivoli Workload Scheduler Version 8.2. In this section we give a step-by-step guide to installing a Master Domain Manager, including the IBM Tivoli Workload Scheduler engine, IBM Tivoli Workload Scheduler Connector and Tivoli Framework, in a single operation on an AIX 5L Version 5.1 system.

We recommend that you create a separate file system to protect the root file system and also to prevent other applications from inadvertently filling up the file system IBM Tivoli Workload Scheduler is installed in.

A file system size of 500 MB should be enough for IBM Tivoli Workload Scheduler Domain Managers and Masters, including the Tivoli Framework, but exact space requirements will vary considerably from one installation to another depending on the number and types of jobs run plus the amount of time logs are retained. Note that without the Tivoli Framework installed, a file system of 250-300 MB should be adequate.
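
On AIX, a dedicated file system for the instance can be created before launching the setup program. The commands below are a sketch only; they assume a JFS file system in the rootvg volume group mounted at /usr/local/tws, with the size given in 512-byte blocks (roughly 500 MB), so adjust the volume group, size, and mount point to your standards:

# crfs -v jfs -g rootvg -m /usr/local/tws -a size=1024000
# mount /usr/local/tws
# df -k /usr/local/tws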

1. Log in as root and create a TWSuser and group. We used the user tws and the group tivoli.

a. Create a group for the TWSuser:

# smitty group

Select Add a Group and type tivoli in the Group Name field. Leave the other options as default.

b. Create the TWSuser:

# smitty user

i. Select Add a User and type tws for the User Name, tivoli for the Primary Group, and the location of the TWSuser’s home directory (for example, /usr/local/tws/maestro) for the Home Directory.

ii. Set an initial password for the user tws:

# passwd tws

iii. Telnet to localhost, then log in as the user tws and change the password. This change is necessary because AIX by default requires you to change the user’s password at the first login:

# telnet localhost

iv. Customize the user’s .profile to set up the user’s environment correctly for IBM Tivoli Workload Scheduler by adding the following lines using your favorite text editor, for example vi:

# source tws environment
. ${HOME}/tws_env.sh

# set conman default to displaying expanded objects
MAESTRO_OUTPUT_STYLE=LONG
export PATH MAESTRO_OUTPUT_STYLE

Note: User names containing spaces are not permitted.

Tip: Confirm that the TWShome directory (TWSuser’s home directory), has been created before launching the setup program. The smitty user should create the TWShome directory for you, but other methods of creating users may not.


v. Now that the TWSuser has been set up correctly, you can close the telnet session by logging out, and return to root.

2. Mount the appropriate CD-ROM (for AIX this is Tivoli Workload Scheduler 8.2 Installation Disk 1) as follows:

# mount -r -V cdrfs /dev/cd0 /cdrom

3. Create a temporary directory for the setup program to copy some images to the local file system to allow unmounting of CDs during the installation. Although any existing temporary directory will do, the setup program does not clean up after itself, and using a dedicated directory greatly simplifies the cleanup process:

# mkdir /usr/local/tmp/TWS

4. Change directory to the top-level directory of the CD-ROM and launch the installation:

# cd /cdrom

# ./SETUP.bin -is:tempdir /usr/local/tmp/TWS

5. Select the appropriate language and click OK to continue, as shown in Figure 3-2 on page 38.

Note: This mount command is AIX specific. See Table 3-5 on page 149 for equivalent commands for other platforms.

Note: If the mount point /cdrom does not exist, create the directory /cdrom (mkdir /cdrom) or substitute all references to /cdrom with an alternate mount point that does exist.

Note: There may be some delay while the install images are copied to the local file system before any output is generated. This is particularly true when a slow CD-ROM drive is used.


Figure 3-2 Language selection

6. The welcome window lists the actions available. Click Next to continue with the installation, as shown in Figure 3-3.

Figure 3-3 Installation overview

7. Having read the terms and conditions, select I accept the terms in the license agreement, then click Next to continue as shown in Figure 3-4 on page 39.


Figure 3-4 Software License Agreement

8. The Install a new Tivoli Workload Scheduler Agent option is selected by default. Click Next to continue as shown in Figure 3-5 on page 40.


Figure 3-5 Installation operation

9. Specify the TWSuser name created in step b on page 36, then click Next to continue, as shown in Figure 3-6 on page 41.


Figure 3-6 Specifying the TWSuser

10.Tivoli Workload Scheduler 8.2 is installed into the TWSuser’s home directory. Review the path, then click Next to continue as shown in Figure 3-7 on page 42.


Figure 3-7 Destination directory

11.Select the Custom install option and click Next as shown in Figure 3-8 on page 43.


Figure 3-8 Type of installation

12.Select Master Domain Manager and click Next as shown in Figure 3-9 on page 44.


Figure 3-9 Type of agent to install

13.Type in the following information and click Next as shown in Figure 3-10 on page 45:

a. The company name as you would like it to appear in program headers and reports.

b. The Tivoli Workload Scheduler 8.2 name for this workstation.

c. The TCP port number used by the instance being installed. It must be a value in the range 1-65535. The default is 31111.

Note: Spaces are permitted, provided that the name is not enclosed in double quotation marks.

Note: This name cannot exceed 16 characters, cannot contain spaces, and is not case sensitive.


Figure 3-10 Workstation configuration information

14.Check Connector - Install and Configure then click Next as shown in Figure 3-11 on page 46.


Figure 3-11 Optional features

15.Type the name that identifies the instance in the Job Scheduling Console window. The name must be unique within the scheduler network. We used the convention hostname_TWSuser, as shown in Figure 3-12 on page 47.


Figure 3-12 Connector Instance Name

16.Select any additional languages to install and click Next as shown in Figure 3-13 on page 48. We did not select any additional languages to install at this stage, since this requires the Tivoli Management Framework 4.1 Language CD-ROM be available in addition to the Tivoli Framework 4.1 Installation CD-ROM during the install phase.


Figure 3-13 Additional languages

17.Specify the directory where you would like the Tivoli Management Framework installed and click Next to continue, as shown in Figure 3-14 on page 49.


Figure 3-14 Framework destination directory

18.Review the installation settings and click Next as shown in Figure 3-15 on page 50.


Figure 3-15 Installation settings

19.A progress bar indicates that the installation has started, as shown in Figure 3-16 on page 51.


Figure 3-16 Progress bar

20.On the Locate the Installation Image window, you are prompted for the location of the Tivoli Management Framework images. You will first need to change out of the CD-ROM root before you can unmount the Tivoli Workload Scheduler 8.2 Installation CD-ROM:

# cd /
# umount /cdrom

21.Replace the Tivoli Workload Scheduler 8.2 Installation CD-ROM with the Tivoli Framework 4.1 Installation CD-ROM and mount the CD-ROM:

# mount -r -V cdrfs /dev/cd0 /cdrom

22.Once the CD-ROM is mounted, select /cdrom in the Locate Installation Image window and click OK, as shown in Figure 3-17 on page 52.

Tip: The output from the setup program may have obscured your command-line prompt. Press Enter to get a prompt back in order to enter the change directory and umount commands.


Figure 3-17 Locate the Tivoli Framework installation image

23.The progress bar will indicate that the installation is continuing. Once the Tivoli Management Framework installation has completed, a Tivoli Desktop will launch, as shown in Figure 3-18 on page 53.


Figure 3-18 Tivoli Desktop

24.A pop-up window prompting for the Tivoli Workload Scheduler 8.2 Engine CD should appear shortly after the Tivoli Desktop, as shown in Figure 3-19 on page 54. Before acknowledging the prompt, you will need to unmount the Tivoli Framework CD:

# umount /cdrom


Replace the Tivoli Framework CD with the Tivoli Workload Scheduler 8.2 Installation Disk 1 CD, then mount the CD and change directory to the root directory of the CD:

# mount -r -V cdrfs /dev/cd0 /cdrom
# cd /cdrom

Figure 3-19 Insert CD

25.Once the installation is complete you will get a final summary window. Click Finish to exit the setup program, as shown in Figure 3-20.

Figure 3-20 Installation complete


26.You have now finished with the Tivoli Workload Scheduler 8.2 Installation CD-ROM and can unmount it:

# cd /
# umount /cdrom

27.This would be a good time to clean up the temporary disk space used by the setup program:

# cd /usr/local/tmp
# rm -rf TWS

28.Finally there are a few more steps to complete the setup and start Tivoli Workload Scheduler 8.2:

a. Log in as the TWSuser.

b. Run the composer command to add the FINAL schedule definition to the database by running the following command:

$ composer add Sfinal

c. Run the Jnextday script:

$ ./Jnextday

d. Give Tivoli Workload Scheduler 8.2 a few minutes to start up, then check the status by running the command:

$ conman status

If Tivoli Workload Scheduler 8.2 started correctly, you will see the status:

Batchman LIVES

Tip: The output from the setup program may have obscured your command-line prompt. After exiting the setup program, press Enter to get a prompt back.

Tip: You need to be in the TWShome directory and have the TWSuser’s environment correctly configured for this command to be successful.

Tip: This is the only time that the Jnextday script should be run in this way. After this initial run, Tivoli Workload Scheduler 8.2 will schedule the Jnextday job to run daily. If it is necessary to run Jnextday before its scheduled time, for example while testing in a development environment, release all the dependencies on the FINAL schedule using conman or the Job Scheduling Console.


e. After installation, the default job limit is set to zero. In order for jobs with a priority lower than GO (101) to run, this limit needs to be raised:

$ conman “limit;10”

3.4 Adding a new feature

You can install the following optional components or features that were not installed during a previous Tivoli Workload Scheduler 8.2 installation using the ISMP installation program:

Tivoli Plus Module Integrates Tivoli Workload Scheduler 8.2 with Tivoli Management Framework, Tivoli Enterprise Console, and Distributed Monitoring. The Tivoli Management Framework Version 3.7.1 or 4.1 is a prerequisite for this component. If a version earlier than 3.7.1 is found, this feature cannot be installed. If an installation is not detected, Version 4.1 is automatically installed.

TWS Connector The Job Scheduling Console communicates with the Tivoli Workload Scheduler 8.2 system through the Connector. It translates instructions entered through the Console into scheduler commands. The Tivoli Management Framework 3.7.1 or 4.1 is a prerequisite for this component. If a version earlier than 3.7.1 is found, this feature cannot be installed. If an installation is not detected, Version 4.1 is automatically installed.

Language Packs The English language pack and the language locale of the operating system are installed by default. The installation program enables users to select any of the supported languages.

Before performing an upgrade, be sure that all Tivoli Workload Scheduler 8.2 processes and services are stopped. If you have any jobs that are running currently, they should be allowed to complete or you should stop the related processes manually.

To install the IBM Tivoli Workload Scheduler Connector optional feature on an existing Tivoli Workload Scheduler 8.2 FTA that does not have the Tivoli Management Framework installed, use the following steps:

1. From the Job Scheduling Console, stop the target workstation. Otherwise from the command line on the MDM while logged on as the TWSuser, use the following command:

$ conman “stop workstation”


2. From the Job Scheduling Console, unlink the target workstation. From the command line on the MDM, use the following command:

$ conman “unlink workstation”

3. Log on to the target workstation as root (UNIX/Linux), or the local Administrator (Windows).

4. From the command line (DOS prompt on Windows), stop the netman process as follows:

– On UNIX:

$ su - TWSuser -c “conman shut\;wait”

– On Windows:

C:\> cd \win32app\maestro
C:\win32app\maestro> .\Shutdown.cmd

5. To verify whether there are processes still running, complete the following steps:

– On UNIX, run the command:

$ ps -u TWSuser

– On Windows, run the command:

C:\win32app\maestro> unsupported\listproc.exe

Verify that the following processes are not running:

netman, mailman, batchman, writer, jobman, JOBMAN (UNIX only), stageman, JOBMON (Windows only), tokensrv (Windows only).
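
On UNIX, one hedged way to check for all of these leftover processes in a single command is shown below; tws is an example TWSuser name:

$ ps -u tws | egrep -i 'netman|mailman|batchman|writer|jobman|stageman'

If the command produces no output, no scheduler processes remain for that user.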

6. Insert the Tivoli Workload Scheduler 8.2 Installation CD-ROM (CD 1 for UNIX and Windows, CD 2 for Linux).

7. Run the setup program for the operating system on which you are upgrading:

– On UNIX/Linux, while logged on as root, mount the CD-ROM and change directory to the root directory of the CD-ROM:

# mount -r -V cdrfs /dev/cd0 /cdrom
# cd /cdrom
# ./SETUP.bin [-is:tempdir temporary_directory]

Tip: If you are adding a feature to an installation that includes the Connector, be sure that you stop the connector processes also.

Note: The following mount command is AIX specific. See Table 3-5 on page 149 for equivalent commands for other platforms.


– On Windows, launch the SETUP.exe file in the WINDOWS folder on the CD-ROM as shown in Figure 3-21.

Figure 3-21 Run SETUP.exe

8. Select the installation wizard language and click OK to continue, as shown in Figure 3-22.

Figure 3-22 Select language window

9. The welcome window lists the actions available. Click Next to continue, as shown in Figure 3-23 on page 59.

Tip: If you run the SETUP.bin in the root of the Tivoli Workload Scheduler 8.2 CD-ROM, the files necessary to install the Tivoli Workload Scheduler 8.2 engine are copied to the local hard disk and the installation launched from the hard disk. Since the setup program does not remove the files that it copies to the hard disk, creating a temporary directory specifically for the setup program to use will simplify the cleanup of these files following the add feature.

If you are upgrading the Tivoli Workload Scheduler 8.2 engine only, this is unnecessary and the SETUP.bin in the appropriate platform directory can be launched directly from the CD-ROM, thereby reducing the amount of temporary disk space required.


Figure 3-23 Welcome window

10.Having read the terms and conditions, select I accept the terms in the license agreement, then click Next to continue as shown in Figure 3-24 on page 60.


Figure 3-24 Software License Agreement

11.From the drop-down list, select the existing installation to be upgraded, as shown in Figure 3-25 on page 61.


Figure 3-25 Discovery window

12.The Add a feature to the selected instance radio button is selected by default. Click Next to continue as shown in Figure 3-26 on page 62.


Figure 3-26 Add a feature to the selected instance

13.Review the TWSuser information then click Next to continue, as shown in Figure 3-27 on page 63.


Figure 3-27 User window

14.Review the installation directory and click Next to continue, as shown in Figure 3-28 on page 64.


Figure 3-28 Location window

15.Review the CPU information and click Next to continue, as shown in Figure 3-29 on page 65.


Figure 3-29 CPU definition window

16.Check Connector - Install and Configure then click Next as shown in Figure 3-30 on page 66.


Figure 3-30 Optional features window

17.Type the name that identifies the instance in the Job Scheduling Console window, then click Next to continue. The name must be unique within the scheduler network. We used the convention hostname_TWSuser as shown in Figure 3-31 on page 67.


Figure 3-31 Connector Instance Name

18.Select any additional languages to install and click Next as shown in Figure 3-32 on page 68. We did not select any additional languages to install at this stage, since this requires the Tivoli Management Framework 4.1 Language CD-ROM to be available in addition to Tivoli Management Framework 4.1 Installation CD-ROM during the add feature stage.


Figure 3-32 Additional languages window

19.Review and modify if required the Tivoli Management Framework installation directory, then click Next to continue as shown in Figure 3-33 on page 69.

Note: The remaining fields are optional and apply only to Windows. Unless you intend to deploy Tivoli Management Framework programs or Managed Nodes in your Tivoli Management Framework environment, leave them empty.


Figure 3-33 Tivoli Management Framework installation window

20.Review the installation settings and click Next to start adding the feature as shown in Figure 3-34 on page 70.


Figure 3-34 Summary window

21.A progress bar indicates that the installation has started as shown in Figure 3-35 on page 71.


Figure 3-35 Progress bar

22.On the Locate the Installation Image window, you will be prompted for the location of the Tivoli Management Framework CD-ROM.

On UNIX, unmount the Tivoli Workload Scheduler 8.2 CD-ROM:

# cd /
# umount /cdrom

23.Replace the Tivoli Workload Scheduler 8.2 Installation CD-ROM with the Tivoli Management Framework CD-ROM.

24.On UNIX, mount the Tivoli Management Framework Installation CD-ROM:

# mount -r -V cdrfs /dev/cd0 /cdrom

25.Select the root directory of the CD-ROM on the Locate Installation Image window and click OK as shown in Figure 3-36 on page 72.

Note: On UNIX you will need to change directory out of the Tivoli Workload Scheduler Installation CD-ROM before you will be able to unmount it. If you find that the command-line prompt has been obscured by output from the setup program, just press Enter to get a prompt back.


Figure 3-36 Locate the Installation Image window

26.The progress bar will indicate that the installation is continuing. On UNIX, once the Tivoli Management Framework installation has completed, a Tivoli Desktop will launch as shown in Figure 3-18 on page 53.

27.A pop-up window prompting for the Tivoli Workload Scheduler 8.2 Installation CD-ROM, as shown in Figure 3-37 should appear shortly after the Tivoli Management Framework installation completes.

Figure 3-37 Insert CD-ROM pop-up window

On UNIX, unmount the Tivoli Workload Scheduler 8.2 Installation CD-ROM:

# umount /cdrom


28.Replace the Tivoli Management Framework Installation CD-ROM with the Tivoli Workload Scheduler 8.2 Installation CD-ROM you removed previously.

29.On UNIX, mount the Tivoli Workload Scheduler 8.2 Installation CD-ROM:

# mount -r -V cdrfs /dev/cd0 /cdrom

30.Click OK to continue with the installation of the Tivoli Management Framework, Job Scheduling Services and IBM Tivoli Workload Scheduler Connector.

31.On UNIX, this is the final step and the installation of the IBM Tivoli Workload Scheduler Connector will be followed by a final summary window. Click Finish to exit the setup program as shown in Figure 3-43 on page 77. The remaining steps below should be ignored.

On Windows, you will be prompted to reboot now, or later as shown in Figure 3-38. The default is to reboot now. Click Next to reboot now.

Figure 3-38 Reboot window

Note: The following mount command is AIX specific, see Table 3-5 on page 149 for equivalent commands for other platforms.


32.After the Windows system reboots, log back in as the same local Administrator, and the add feature will continue by re-prompting for the installation language as shown in Figure 3-39. Select the required language and click OK to continue.

Figure 3-39 Language selection window

33.A progress bar indicates that the add feature has resumed, as shown in Figure 3-40.

Figure 3-40 Progress bar

34.A pop-up window prompting for the locations of the TMF_JSS.IND file may appear. Be sure that the Tivoli Workload Scheduler 8.2 Installation CD 1 is


installed, then select the TWS_CONN directory and click OK, as shown in Figure 3-41.

Figure 3-41 Locate the Installation Image pop-up

35.A progress bar will indicate that the add feature has resumed as shown in Figure 3-42 on page 76.


Figure 3-42 Progress bar

36.Once the add feature completes you will get a final summary window. Click Finish to exit the setup program, as shown in Figure 3-43 on page 77.


Figure 3-43 Add feature complete window

37.On UNIX unmount the CD-ROM:

# cd /
# umount /cdrom

38.Remove the Tivoli Workload Scheduler 8.2 Installation CD-ROM.

39.Log in as the TWSuser.

40.Restart the netman process as follows:

– On UNIX:

$ ./StartUp

– On Windows:

C:\win32app\maestro> .\StartUp.cmd

41.From the Job Scheduling Console, link to the target workstation. Otherwise from the command line on the MDM, use the following command:

$ conman “link workstation”


42.From the Job Scheduling Console, start the target workstation. From the command line on the MDM, use the following command:

$ conman “start workstation”

3.5 Promoting an agent

You can reconfigure an existing Tivoli Workload Scheduler 8.2 agent to a different type of agent as follows:

� Promote a Standard Agent to a Master Domain Manager or Backup Master Domain Manager

� Promote a Standard Agent to a Fault Tolerant Agent

� Promote a Fault Tolerant Agent to a Master Domain Manager

For example, in order to promote an existing Fault Tolerant Agent with the Tivoli Management Framework and Connector already installed you would use the following steps:

1. From the Job Scheduling Console, stop the target workstation. Otherwise from the command line on the MDM while logged on as the TWSuser, use the following command:

$ conman “stop workstation”

2. From the Job Scheduling Console, unlink the target workstation. From the command line on the MDM, use the following command:

$ conman “unlink workstation”

3. Log on to the target workstation as root (UNIX), or the local Administrator (Windows).

4. From the command line (command prompt on Windows), stop the netman process as follows:

– On UNIX:

$ su - TWSuser -c “conman shut\;wait”

– On Windows:

C:\> cd TWShome
C:\win32app\maestro> .\Shutdown.cmd

Note: You cannot promote an X-agent to a Fault Tolerant Agent.


5. Stop the connector processes as follows:

– On UNIX:

$ su - TWSuser -c “TWShome/bin/wmaeutil.sh ALL -stop”

– On Windows:

C:\win32app\maestro> wmaeutil.cmd ALL -stop

6. To verify whether there are processes still running, complete the following steps:

– On UNIX, run the command:

$ ps -u TWSuser

– On Windows, run the command:

C:\win32app\maestro> unsupported\listproc.exe

Verify that the following processes are not running:

netman, mailman, batchman, writer, jobman, JOBMAN (UNIX only), stageman, JOBMON (Windows only), tokensrv (Windows only), maestro_engine, maestro_plan, maestro_database.

7. Insert the Tivoli Workload Scheduler 8.2 Installation Disk (CD 1 for UNIX and Windows, CD 2 for Linux).

8. Run the setup program for the operating system on which you are upgrading:

– On UNIX/Linux, while logged on as root, mount the CD-ROM and change directory to the appropriate platform directory:

# mount -r -V cdrfs /dev/cd0 /cdrom
# cd /cdrom/PLATFORM
# ./SETUP.bin [-is:tempdir temporary_directory]

– On Windows, launch the SETUP.exe file in the WINDOWS folder on the CD-ROM as shown in Figure 3-44 on page 80.

Note: The following mount command is AIX specific. See Table 3-5 on page 149 for equivalent commands for other platforms.

Tip: If you run the SETUP.bin in the root of the Tivoli Workload Scheduler 8.2 CD-ROM the files necessary to install the Tivoli Workload Scheduler 8.2 engine are copied to the local hard disk and the installation launched from the hard disk. If you are upgrading the Tivoli Workload Scheduler 8.2 engine only, this is unnecessary and the SETUP.bin in the appropriate platform directory can be launched directly from the CD-ROM, thereby reducing the amount of temporary disk space required.


Figure 3-44 Run SETUP.exe

9. Select the installation wizard language and click OK to continue, as shown in Figure 3-45.

Figure 3-45 Select installation language window

10.The welcome window lists the actions available. Click Next to continue the upgrade, as shown in Figure 3-46 on page 81.


Figure 3-46 Welcome window

11.Having read the terms and conditions, select I accept the terms in the license agreement, then click Next to continue as shown in Figure 3-47 on page 82.


Figure 3-47 Software License Agreement

12.Select the existing installation to be upgraded from the drop-down list, as shown in Figure 3-48 on page 83.


Figure 3-48 Discovery window

13.The Add a feature to the selected instance radio button is selected by default. Select Promote the selected instance, then click Next to continue as shown in Figure 3-49 on page 84.


Figure 3-49 Promote the selected instance

14.Review the TWSuser information, then click Next to continue, as shown in Figure 3-50 on page 85.


Figure 3-50 User window

15.Review the installation directory and click Next to continue, as shown in Figure 3-51 on page 86.


Figure 3-51 Location window

16.Following the discovery of the existing Tivoli Workload Scheduler 8.2 components, select Master Domain Manager and click Next as shown in Figure 3-52 on page 87.


Figure 3-52 Type of agent window

17.Confirm the workstation name required for the new Master Domain Manager, then click Next to continue, as shown in Figure 3-53 on page 88.


Figure 3-53 CPU definition window

18.Review the installation settings and click Next to start promoting the agent, as shown in Figure 3-54 on page 89.


Figure 3-54 Summary window

19.A progress bar indicates that the installation has started, as shown in Figure 3-55 on page 90.


Figure 3-55 Progress bar

20.Once the installation completes you will get a final summary window. Click Finish to exit the setup program, as shown in Figure 3-56 on page 91.


Figure 3-56 Installation complete window

21.On UNIX, unmount the CD-ROM:

# cd /
# umount /cdrom

22.Remove the Tivoli Workload Scheduler 8.2 Installation CD-ROM.

23.Log in as the TWSuser.

24.Restart the netman process as follows:

– On UNIX:

$ ./StartUp

– On Windows:

C:\win32app\maestro> .\StartUp.cmd

25.From the Job Scheduling Console, link to the target workstation. Otherwise from the command line on the MDM, use the following command:

$ conman “link workstation”


26.From the Job Scheduling Console, start the target workstation. From the command line on the MDM, use the following command:

$ conman “start workstation”

3.6 Upgrading to Version 8.2 from a previous release

Before performing an upgrade, be sure that all Tivoli Workload Scheduler processes and services are stopped. If you have any jobs that are currently running, they should be allowed to complete, or you should stop the related processes manually.

The upgrade procedure on Tier 1 platforms backs up the entire Tivoli Workload Scheduler Version 7.0 or 8.1 installation to the TWShome_backup_TWSuser directory.

Some configuration files such as localopts, globalopts, etc. are preserved by the upgrade, whereas others such as jobmanrc (jobmanrc.cmd on Windows) are not. Should you have locally customized files that are not preserved by the upgrade, then the customized files can be located in the TWShome_backup_TWSuser directory and merged with the Tivoli Workload Scheduler 8.2 files.
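
After the upgrade, a simple way to spot local customizations that were not carried forward is to compare the backed-up copy of a file with the newly installed one; the example below uses the TWShome and TWSuser placeholders used throughout this chapter and the jobmanrc file mentioned above:

$ diff TWShome_backup_TWSuser/jobmanrc TWShome/jobmanrc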

As an added precaution, be sure that you have a verified system backup including all Tivoli Workload Scheduler 8.2 related files before proceeding with the upgrade.

Tip: The backup files are moved to the same file system where you originally installed the previous release. A check is made to ensure that there is enough space on the file system. Otherwise, the upgrade will not proceed.

If you do not have the required disk space to perform the upgrade, back up the mozart database and all your customized configuration files, and install a new instance of Tivoli Workload Scheduler 8.2, then transfer the saved files to the new installation.

Restriction: The backup will fail on UNIX/Linux if the TWShome directory is a file system mount point.


Follow these steps:

1. From the Job Scheduling Console, stop the target workstation. Otherwise from the command line on the MDM while logged on as the TWSuser, use the following command:

$ conman “stop workstation”

2. From the Job Scheduling Console, unlink the target workstation. From the command line on the MDM, use the following command:

$ conman “unlink workstation”

3. Log on to the target workstation as root (UNIX) or the local Administrator (Windows).

4. From the command line (DOS prompt on Windows), stop the netman process as follows:

– On UNIX:

$ su - TWSuser -c “conman shut\;wait”

– On Windows:

C:\> cd \win32app\maestro
C:\win32app\maestro> .\Shutdown.cmd

5. To verify whether there are processes still running, complete the following steps:

– On UNIX, run the command:

$ ps -u TWSuser

– On Windows, run the command:

C:\win32app\maestro> unsupported\listproc.exe

Verify that the following processes are not running:

netman, mailman, batchman, writer, jobman, JOBMAN (UNIX only), stageman, JOBMON (Windows only), tokensrv (Windows only)

Also, be sure that no system programs are accessing the TWShome directory or anything below it, including the command prompt and Windows Explorer. If any of these files are in use, the backup of the existing instance will fail.

Tip: If you are upgrading an installation that includes the Connector, be sure that you also stop the connector processes.


6. Insert the Tivoli Workload Scheduler 8.2 Installation CD-ROM (CD 1 for UNIX and Windows, CD 2 for Linux).

7. Run the setup program for the operating system on which you are upgrading:

– On UNIX/Linux, while logged on as root, mount the CD-ROM, change the directory to the appropriate platform directory, and run the setup program:

# mount -r -V cdrfs /dev/cd0 /cdrom
# cd /cdrom/PLATFORM
# ./SETUP.bin [-is:tempdir temporary_directory]

– On Windows, launch the SETUP.exe file in the WINDOWS folder on the CD-ROM as shown in Figure 3-57.

Figure 3-57 Run SETUP.exe

Note: The setup program will not detect the components file if it has been relocated from the /usr/unison directory using the UNISON_COMPONENT_FILE environment variable. In order for the setup program to successfully discover the existing instance, the relocated components file will need to be copied into /usr/unison before proceeding with the upgrade.

Note: The following mount command is AIX specific. See Table 3-5 on page 149 for equivalent commands for other platforms.

Tip: If you run the SETUP.bin in the root of the Tivoli Workload Scheduler 8.2 CD-ROM, the files necessary to install the Tivoli Workload Scheduler 8.2 engine are copied to the local hard disk and the installation launched from the hard disk. If you are upgrading the Tivoli Workload Scheduler 8.2 engine only, this is unnecessary and the SETUP.bin in the appropriate platform directory can be launched directly from the CD-ROM reducing the amount of temporary disk space required.


8. Select the installation wizard language and click OK to continue, as shown in Figure 3-58.

Figure 3-58 Select language window

9. The welcome window lists the actions available. Click Next to continue the upgrade, as shown in Figure 3-59.

Figure 3-59 Welcome window

10.Having read the terms and conditions, select I accept the terms in the license agreement, then click Next to continue as shown in Figure 3-60.


Figure 3-60 Software License Agreement

11.Select the existing installation to be upgraded from the drop-down list, as shown in Figure 3-61 on page 97.


Figure 3-61 Discovery window

12.The Upgrade the selected instance radio button is selected by default. Click Next to continue as shown in Figure 3-62 on page 98.


Figure 3-62 Upgrade the selected instance

13.The upgrade actions window gives an overview of the upgrade process. Click Next to continue, as shown in Figure 3-63 on page 99.


Figure 3-63 Upgrade overview

14.Review the TWSuser information and on Windows enter and confirm the TWSuser’s password, then click Next to continue, as shown in Figure 3-64 on page 100.


Figure 3-64 User window

15.Review the installation directory and click Next to continue, as shown in Figure 3-65 on page 101.


Figure 3-65 Location window

16.Select the type of agent being upgraded, then click Next to continue, as shown in Figure 3-66 on page 102.

Note: The type of agent selected must match the type of agent being upgraded.


Figure 3-66 Type of agent to upgrade

17.Review the CPU information and click Next to continue, as shown in Figure 3-67 on page 103.


Figure 3-67 CPU definition window

18.If the Tivoli Management Framework is found, but it is the wrong version, it has no IBM Tivoli Workload Scheduler Connector or it is otherwise incomplete, you will see a warning window as shown in Figure 3-68 on page 104. Click Next to skip the IBM Tivoli Workload Scheduler Connector upgrade at this time.


Figure 3-68 Tivoli Management Framework discovery failure

19.Review the installation settings and click Next to start the upgrade, as shown in Figure 3-69 on page 105.

Tip: These installation settings are read from the TWShome/localopts file. If they are incorrect, it is possible to click Back, then edit the localopts file and once the settings are correct return to this window by clicking Next. When doing this operation on Windows, take care not to leave the editor or command prompt running.
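
If you prefer to check the values from the shell before re-running the wizard, the workstation name and netman port can be read straight from localopts. This is a sketch only; thiscpu and nm port are standard localopts entries, but your file may carry the values shown in the summary window under additional names:

$ cd TWShome
$ grep -E '^(thiscpu|nm port)' localopts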


Figure 3-69 Summary window

20.A progress bar indicates that the installation has started as shown in Figure 3-70 on page 106.


Figure 3-70 Progress bar

21.Once the installation is complete, you will get a final summary window. Click Finish to exit the setup program, as shown in Figure 3-71 on page 107.

Note: On Windows 2000 only, in the case of an unsuccessful installation as shown in Figure 3-72 on page 108, check the log file indicated before you exit the setup program. After the setup program exits, the log file may be removed.


Figure 3-71 Upgrade completed successfully


Figure 3-72 Unsuccessful upgrade

22.Confirm that the netman process has started:

– On UNIX:

# ps -u tws

– On Windows:

C:\> C:\win32app\maestro\unsupported\listproc.exe

23. From the Job Scheduling Console, link to the target workstation. Alternatively, from the command line on the MDM while logged on as the TWSuser, use the following command (a concrete example is sketched after this procedure):

$ conman "link workstation"

24. From the Job Scheduling Console, start the target workstation. Alternatively, from the command line on the MDM, use the following command:

$ conman "start workstation"


25.On UNIX, unmount the CD-ROM:

# cd /
# umount /cdrom

26.Remove the Tivoli Workload Scheduler 8.2 Installation Disk.
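
For steps 23 and 24, a concrete invocation might look like the following sketch, assuming the upgraded workstation is defined as CHATHAM (a hypothetical name here; substitute your own workstation name) and that you are logged on to the MDM as the TWSuser:

$ conman "link CHATHAM"
$ conman "start CHATHAM"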

3.7 Installing the Job Scheduling Console

The Job Scheduling Console can be installed on any workstation that has a TCP/IP connection. However, to use the Job Scheduling Console Version 1.3, you should have the following components installed within your Tivoli Workload Scheduler 8.2 network:

� Tivoli Management Framework 3.7.1 or 4.1

� Tivoli Job Scheduling Services 1.3

� IBM Tivoli Workload Scheduler Connector 8.2

Installing IBM Tivoli Workload Scheduler Connector Components was covered in 3.3, “Installing a Master Domain Manager on UNIX” on page 35 and 3.4, “Adding a new feature” on page 56. Although the IBM Tivoli Workload Scheduler Connector must be running to use the Console, you can install the Job Scheduling Console before the IBM Tivoli Workload Scheduler Connector.

You can install the Job Scheduling Console using any of the following installation mechanisms:

� Using an installation wizard that guides the user through the installation steps.

� Using a response file that provides input to the installation program without user intervention.

� Using Software Distribution to distribute the Job Scheduling Console files.

Here we will give an example of the first of these methods, using the installation wizard interactively. The installation program can perform a number of actions:

� Fresh install

� Adding new languages to an existing installation

� Repairing an existing installation

However, the steps below assume that you are performing a fresh install:

1. Insert the Job Scheduling Console CD 1 in the CD-ROM drive.


2. Run the setup program for the appropriate platform:

– On UNIX, while logged in as root, mount the CD-ROM:

# mount -r -V cdrfs /dev/cd0 /cdrom
# cd /cdrom
# ./setup.bin [-is:tempdir temporary_directory]

– On Windows, launch the SETUP.exe file in the WINDOWS folder on CD-ROM as shown in Figure 3-73.

Figure 3-73 Run SETUP.exe

3. Select the installation wizard language and click OK to continue, as shown in Figure 3-74.

Figure 3-74 Select language window

4. The welcome window lists the actions available. Click Next to continue as shown in Figure 3-75 on page 111.

Note: The following mount command is AIX specific. See Table 3-5 on page 149 for equivalent commands for other platforms.


Figure 3-75 Welcome window

5. Having read the terms and conditions, select I accept the terms in the license agreement, then click Next to continue as shown in Figure 3-76 on page 112.


Figure 3-76 Software License Agreement

6. Select the required installation directory, then click Next to continue as shown in Figure 3-77 on page 113.


Figure 3-77 Location window

7. Select the type of installation required:

Typical   English and the language of the locale are installed.

Custom    Allows you to choose the languages you want to install.

Full      All languages are automatically installed.

We chose Typical, which is selected by default and is likely to be the most appropriate choice for the majority of users. Having made your choice, click Next to continue as shown in Figure 3-78 on page 114.


Figure 3-78 Installation type window

8. Select the required locations for the program icons, then click Next to continue as shown in Figure 3-79 on page 115.

Note: The options available will vary depending upon the target platform.


Figure 3-79 Icon location window (Windows variant)

9. Review the installation settings and click Next to start the upgrade, as shown in Figure 3-80 on page 116.


Figure 3-80 Summary window

10.A progress bar indicates that the installation has started as shown in Figure 3-81 on page 117.


Figure 3-81 Progress bar

11.Once the installation completes you get a final summary window. Click Finish to exit the setup program, as shown in Figure 3-82 on page 118.


Figure 3-82 Installation complete window

3.7.1 Starting the Job Scheduling Console

If icons and/or shortcuts were created during the installation, you can use these to launch the Job Scheduling Console. Alternatively, the Job Scheduling Console can be launched from the command line using a platform-specific script found in the bin/java subdirectory of the installation directory as shown in Table 3-3.

Table 3-3 JSC platform-specific startup scripts

Platform      Script
AIX           AIXconsole.sh
HP-UX         HPconsole.sh
Linux         LINUXconsole.sh
Windows       NTconsole.cmd
SUN Solaris   SUNconsole.sh
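
For example, on AIX the console could be started from the command line as follows. This is a minimal sketch that assumes the Job Scheduling Console was installed in /opt/JSConsole; substitute your actual installation directory:

$ cd /opt/JSConsole/bin/java
$ ./AIXconsole.sh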

A Job Scheduling Console logon window will be displayed. Enter the login name associated with a Tivoli Management Framework Administrator configured within the IBM Tivoli Workload Scheduler Security file (by default this will be the TWSuser), the user's password, and the host name of the machine running the IBM Tivoli Workload Scheduler Connector, and click OK, as shown in Figure 3-83.




Figure 3-83 Logon window

The Job Scheduling Console main window is displayed as shown in Figure 3-84 on page 120.


Figure 3-84 JSC main window

3.7.2 Applying Job Scheduling Console fix pack

Having determined that the Job Scheduling Console has installed correctly, you should now apply the latest available fix pack. Job Scheduling Console, Version 1.3 fix packs can be downloaded via anonymous FTP from:

ftp://ftp.software.ibm.com/software/tivoli_support/patches/patches_1.3/

or via HTTP from:

http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches_1.3/

Download the fix pack README file plus the fix pack image for each required platform.

Having downloaded the necessary files, you should spend some time reviewing the fix pack README file. The README will give you an overview of the defects fixed, known limitations and dependencies, plus installation instructions. This file is found in “README file for JSC Fix Pack 01” on page 350.


Having ensured that the Job Scheduling Console is not running and you have taken the necessary backups, use the following steps to apply the fix pack:

1. Extract the fix pack image (.tar file on UNIX/Linux or .ZIP file on Windows) into a temporary directory.

2. Run the setup program extracted from the archive in the previous step:

– On UNIX, run the following command as root:

# setup.bin [-is:tempdir temporary_directory]

– On Windows, launch the SETUP.exe file as shown in Figure 3-85.

Figure 3-85 Run SETUP.exe

3. The welcome window lists the actions available. Click Next to continue with the discovery phase, as shown in Figure 3-86 on page 122.


Figure 3-86 Welcome window

4. During the discovery phase, the setup program will search for the existing JSC 1.3 instance and display the results as shown in Figure 3-87 on page 123. Having confirmed that the discovered path is correct, click Next to continue.

Note: An error will be displayed if no instance of JSC 1.3 is found.


Figure 3-87 Discovery window

5. The first time you apply the fix pack the only option available is Apply - fix pack nn. Select this option, then click Next to continue as shown in Figure 3-88 on page 124.


Figure 3-88 Installation action window

6. Review the installation settings and click Next to start the fix pack application, as shown in Figure 3-89 on page 125.


Figure 3-89 Summary window

7. A progress bar indicates that the fix pack application has started as shown in Figure 3-90 on page 126.


Figure 3-90 Progress bar

8. Once the fix pack application completes, you get a final summary window. Click Finish to exit the setup program as shown in Figure 3-91 on page 127.


Figure 3-91 Installation complete window

9. At this stage the JSC fix pack is installed in a non-permanent mode with a backup of the previous version stored on your workstation. Once you have tested the JSC and are happy that it is working correctly you can make the fix pack application permanent and free up the disk space occupied by the previous version by committing the fix pack. To commit the fix pack, follow the remaining steps in this section.

10.Relaunch the setup program as described in Figure 3-86 on page 122.

11.Click Next to skip over the welcome window and begin the discovery phase.

12.Having confirmed that the correct path has been discovered, click Next to continue.

13.Select Commit - fix pack nn then click Next to continue as shown in Figure 3-92 on page 128.

Note: Should you have problems with the fix pack version of the JSC and need to revert to the existing version, then follow the steps below, but select the Rollback action instead of the Commit action.


Figure 3-92 Installation action window

14.Review the installation settings and click Next to commit the fix pack, as shown in Figure 3-93 on page 129.


Figure 3-93 Summary window

15.A progress bar indicates that the installation has started.

16.Once the commit completes, you get a final summary window. Click Finish to exit the setup program, as shown in Figure 3-94 on page 130.


Figure 3-94 Installation complete window

3.8 Installing using the twsinst script

With Tivoli Workload Scheduler 8.2 on Tier 1 platforms excluding Linux, you can install an instance, uninstall the product, upgrade from a previous release, or promote an instance from the command line using the twsinst script.

The general prerequisites, authorization roles required, and descriptions of the command-line arguments required by twsinst can be found in the manual Tivoli Workload Scheduler Version 8.2, SC32-1273.

By way of an example, in order to install a Backup Master Domain Manager from the command line using twsinst, you would use the following steps:

1. Log on as the user root.

2. Create a group for the TWSuser, such as tivoli.

3. Create the TWSuser, for example tws82, making sure that you specify an appropriate HOME directory for this user, since this is the directory Tivoli Workload Scheduler 8.2 will be installed into.


Customize the user's .profile to set up the environment correctly for the TWSuser (an example .profile fragment is sketched after this procedure). See step b on page 36 for further details.

4. Insert the Tivoli Workload Scheduler 8.2 Installation CD 1.

5. Mount the CD-ROM.

6. Change directory to the appropriate platform directory below the root directory on the CD-ROM.

7. Run the twsinst program:

# ./twsinst -new -uname tws82 -cputype bkm_agent -thiscpu BACKUP -master MASTER -port 31182 -company IBM

The resulting output from this command can be seen in Example 3-1.

Example 3-1 twsinst output

# ./twsinst -new -uname tws82 -cputype bkm_agent -thiscpu BACKUP -master MASTER -port 31182 -company IBM

Licensed Materials Property of IBM
TWS-WSH
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
TWS for UNIX/TWSINST 8.2
Revision: 1.23

AWSFAB033I Installation completed successfully.
AWSFAB045I For more details see the /tmp/TWS_SOLARIS_tws82^8.2.log log file.
#

8. Following the successful completion of installation, the netman process will have been started. Confirm that netman is present by running the following command:

# ps -u tws82

9. Unmount and remove the Tivoli Workload Scheduler 8.2 Installation CD-ROM.

Note: The TWShome directory must exist before attempting to run twsinst. Therefore if the directory was not created automatically when the TWSuser was created, manually create the directory before proceeding.
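
The .profile customization mentioned earlier in this procedure might look like the following sketch. The values shown assume a TWShome of /home/tws82 and are illustrative only; adapt them to your own environment and shell:

# additions to the TWSuser's .profile (illustrative values)
PATH=/home/tws82:/home/tws82/bin:$PATH
export PATH
# TWS_TISDIR is commonly set to TWShome so that the message catalogs are found
TWS_TISDIR=/home/tws82
export TWS_TISDIR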


3.9 Silent install using ISMP

A common use of a response file is to run the wizard in silent mode. This enables you to specify all wizard installation fields without running the wizard in graphical mode.

The silent installation is performed when the wizard is run with the -options command line switch.

A response file contains:

� Values that can be used to configure the installation program.

� Instructions for each section to help guide you through customizing the options.

To install in silent mode, use the following steps:

1. Log on to the target workstation as root (UNIX/Linux) or the local Administrator (Windows).

2. Insert the Tivoli Workload Scheduler 8.2 Installation CD-ROM (CD 1 for UNIX and Windows, CD 2 for Linux).

3. On UNIX, mount the CD-ROM:

# mount -r -V cdrfs /dev/cd0 /cdrom

4. Select the appropriate response file template from the RESPONSE_FILE directory on the installation CD-ROM and copy it to a temporary directory where it can be edited:

– On UNIX:

# cp /cdrom/RESPONSE_FILE/freshInstall.txt /tmp

– On Windows:

C:\> copy D:\RESPONSE_FILE\freshInstall.txt C:\tmp

The following sample response files are available:

– freshInstall.txt
– migrationInstall.txt
– updateInstall.txt

The remainder of this example presumes a fresh install is being performed, using the file freshInstall.txt. A copy of the freshInstall.txt file is found in “Sample freshInstall.txt” on page 359.

Note: The following mount command is AIX specific. See Table 3-5 on page 149 for equivalent commands for other platforms.


5. Edit the copy of the freshInstall.txt file made in the last step, and customize the values of the required keywords (search for the lines starting with -W).

a. The file contains both UNIX-specific and Windows-specific keywords. The required keywords are customized by default for Windows.

b. Enable the optional keywords by removing the leading ### characters from the line (search for ### to find the keywords you can set).

c. Specify a value for a keyword by replacing the characters within double quotation marks, such as “value”.

6. Save the changes made to the file. A copy of the customized file we created can be found in “Customized freshInstall.txt” on page 365.

7. Enter the following command:

– On UNIX:

# cd /cdrom
# ./setup.bin -options /tmp/freshInstall.txt

– On Windows:

D:\WINDOWS\> SETUP.exe -options C:\tmp\freshInstall.txt
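
As an illustration of the customization performed in the edit step above, a fragment of an edited response file might look like the following. The keyword names shown here are hypothetical; use the actual -W keywords found in your copy of freshInstall.txt:

### Lines starting with -W set installation values (hypothetical keyword names)
-W twsUser.name="tws82"
-W twsUser.password="secret"
### Optional keywords are enabled by removing the leading ### characters
-W twsLocation.directory="/usr/local/tws/tws82"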

Note that the installation program runs in the background and will take a while to analyze the response file before there is any visible sign that the installation is running.

Note: The sample response files are UNIX format files that do not contain carriage return characters at the end of each line. This means that on Windows you need to edit these files using a UNIX file format aware text editor, such as WordPad (Start -> Programs -> Accessories -> WordPad). Notepad and other incompatible editors will see the entire contents of the file as a single line of text.

Tip: If there was a previous instance of Tivoli Workload Scheduler 8.2 installed in the same location and the TWShome directory had not been cleaned up as recommended in 3.12.3, “Tidying up the TWShome directory” on page 142, the existing localopts file will take precedence over the settings defined in the response file.

3.10 Installing Perl5 on Windows

The diagnostic tool Metronome.pl and the Tivoli Workload Scheduler 8.2 warehouse enablement archiver script tws_launch_archiver.pl are both Perl scripts. Although a Perl5 interpreter is included with most modern UNIX releases and all Linux releases, Perl5 is not found out of the box on Windows platforms. However, a copy of Perl5 for Windows is included on the Tivoli Workload Scheduler 8.2 Installation CD 2 under the Add-On\WINDOWS directory.

We highly recommend installing Perl5 on all machines running Tivoli Workload Scheduler 8.2 where it is not already installed. Perl makes an excellent scripting language for writing generic scripts that run on both UNIX and Windows platforms, and has many benefits over .bat command files and Visual Basic scripts.

To install Perl5 on Windows, use the following steps:

1. Log in as the TWSuser.

2. Insert the Tivoli Workload Scheduler 8.2 Installation CD 2.

3. Open a command prompt by selecting Start -> Programs -> Accessories -> Command Prompt.

4. Copy the CD-ROM \Add-On\Windows\Perl5 folder and everything below it to a location such as C:\win32app\Perl5 on your hard disk, using the following commands:

C:\win32app\tws> D:
D:\> cd Add-On\WINDOWS
D:\Add-On\WINDOWS> xcopy Perl5 C:\win32app\Perl5 /E /I /H /K

Alternatively the directory tree can be copied from the Desktop using Windows Explorer or similar tool.

5. Then run the following commands, substituting the appropriate path if you copied Perl5 to a location other than C:\win32app\Perl5:

C:\win32app\tws> assoc .pl=PerlScript
C:\win32app\tws> ftype PerlScript=C:\win32app\Perl5\bin\perl.exe %1 %*

6. Using a text editor of your choice, create a text file hello.pl as shown in Example 3-2.

Example 3-2 A simple perl script

#!perl
#
# a simple perl script

print "Hello, world!\n";

exit 0;

Important: In the general availability code of IBM Tivoli Workload Scheduler, a Perl library that is required to run Metronome on Windows is missing. This library is shipped with IBM Tivoli Workload Scheduler Fix Pack 01, so you need to install this fix pack first in order to run Metronome on a Windows platform.

7. Test the Perl interpreter by running your script as follows:

C:\win32app\tws> hello.pl

Note that if the directory containing the script hello.pl is not included in your default search PATH, it will be necessary to use an explicit command such as:

C:\win32app\tws> C:\win32app\tws\scripts\hello.pl
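
If you prefer not to type the explicit path, you can add the directory to the PATH for the current command prompt session. This sketch assumes the script lives in C:\win32app\tws\scripts as in the example above:

C:\> set PATH=%PATH%;C:\win32app\tws\scripts
C:\> hello.pl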

3.11 Troubleshooting installation problems

This section includes details of the installation process log files and common install and/or upgrade-related problems.

3.11.1 Installation process log files

Details of the installation process are logged in the following files:

TWSIsmp.log Written to by the ISMP install program.

TWSInstall.log Written to by execute programs defined in the Software Package Blocks. This log contains details on the success or failure of each installation step.

TWS_platform_TWSuser^8.2.log
Written to by the Software Distribution process and lists the files copied to the target machine.

tivoli.sinstall Written to by Tivoli Management Framework TMR server installation process.

tivoli.cinstall Written to by the Tivoli Management Framework winstall process.

These log files are created in the following system temporary directories:

� On UNIX in $TMPDIR if defined otherwise in /tmp.

� On Windows in %TMPDIR%.

Note: We found that on Windows 2000, %TMPDIR% was set by default to a folder where files were removed automatically as soon as the application completed. Therefore it is advisable that you find and review or copy to another location the log files on Windows 2000 and possibly other Windows platforms before exiting the setup program.


Should you encounter difficulties getting the setup program to launch correctly, rerun the setup program (setup.bin on UNIX/Linux or SETUP.exe on Windows) with the following option:

-is:log log_file

One possible problem is that the default temporary directory does not contain enough free disk space to run the setup program correctly, in which case an alternate temporary directory can be specified using the option:

-is:tempdir temporary_directory

For example, on UNIX you might run the setup program specifying both of the above options as follows:

# setup.bin -is:tempdir /var/tmp/TWS -is:log /var/tmp/ismp.log

3.11.2 Common installation problems

The following are possible reasons for an installation/upgrade failure:

� On Windows, if the Control Panel -> Services application is open while the installation program is creating the IBM Tivoli Workload Scheduler services, the installation/upgrade process fails.

� The IBM Tivoli Workload Scheduler installation and upgrade procedure have to be launched after you have done the following:

– Stopped all IBM Tivoli Workload Scheduler processes (and IBM Tivoli Workload Scheduler Connector processes, if installed). If any IBM Tivoli Workload Scheduler processes are left running, files are locked. Then both the backup processes and the upgrade itself are likely to fail.

– Stopped all IBM Tivoli Workload Scheduler processes for other existing IBM Tivoli Workload Scheduler instances (if there is more than one instance of IBM Tivoli Workload Scheduler installed on the machine where the install/upgrade is being performed).

� On Windows, before starting an upgrade, verify that you are not locking the TWShome directory with Windows Explorer, a command prompt, or similar.

� On UNIX/Linux, if the TWShome directory is a file system mount point, the upgrade will fail because the backup process attempts to relocate the existing TWShome directory by renaming it. In this case the only option is to save the existing configuration files, plus any required logs then perform a fresh install and manually migrate the config and saved logs.


3.12 Uninstalling Tivoli Workload Scheduler 8.2

Uninstalling Tivoli Workload Scheduler 8.2 will only remove programs and files originally installed by the installation program. Files added after the install, including configuration files such as localopts, will be left in the TWShome directory. Also, any files or programs in use at the time of the uninstall will not be removed. Therefore, before you proceed with the uninstall be sure that all Tivoli Workload Scheduler 8.2 processes and services are stopped and that there are no active or pending jobs.
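
One way to quiesce an agent before launching the uninstaller is sketched below. This is an illustrative sequence run as the TWSuser on UNIX, not the only valid approach; the Connector (if installed) must also be stopped, as described in 3.12.4:

$ conman "unlink @;noask"    # unlink from the rest of the network
$ conman "stop;wait"         # stop batchman and the other production processes
$ conman "shut;wait"         # stop netman as well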

3.12.1 Launch the uninstaller

Here are the details to launch the uninstaller.

On UNIX

To launch the uninstaller on UNIX/Linux, use the following steps:

1. Log in as root.

2. Change directory to the _uninst directory below TWShome:

# cd TWShome/_uninst

3. Run the uninstall program:

# ./uninstaller.bin

On Windows

Launch the uninstaller on Windows from the Add/Remove Programs window, using the following steps:

1. Log on as a user with Local Administrator rights.

2. Launch the Add/Remove Program using Start -> Settings -> Control Panel -> Add/Remove Programs.

Tip: You need to be consistent with the installation method you choose. For example if you chose to use the ISMP-based installation for installing IBM Tivoli Workload Scheduler, later when you need to install the fix packs, you should also use ISMP. Using any other method in this case, such as twsinst or IBM Tivoli Configuration Manager, might produce unpredictable results.

Note: The Tivoli Workload Scheduler 8.2 uninstall program does not remove the IBM Tivoli Workload Scheduler Connector, IBM Tivoli Workload Scheduler Plus Module, or Tivoli Management Framework.


3. Select the correct Tivoli Workload Scheduler 8.2 instance and click Change/Remove as shown in Figure 3-95.

Figure 3-95 Control Panel

3.12.2 Using the uninstaller

Once the uninstaller has been launched, the steps are the same on UNIX/Linux and Windows, as follows:

1. Select the desired language and click OK to continue, as shown in Figure 3-96.

Figure 3-96 Language selection pop-up


2. The welcome window lists the actions available. Click Next to continue with the uninstall as shown in Figure 3-97.

Figure 3-97 Welcome window

3. Review the uninstall settings, then click Next if you wish to continue with the uninstall, as shown in Figure 3-98 on page 140.


Figure 3-98 Uninstall settings window

4. An information window indicates that the uninstall has started, as shown in Figure 3-99 on page 141.


Figure 3-99 Information window

5. Once the uninstall is complete, you will get a final summary window. Click Finish to exit the setup program, as shown in Figure 3-100 on page 142.


Figure 3-100 Installation complete window

6. Finally, tidy up the TWShome directory as detailed in the next section.

3.12.3 Tidying up the TWShome directory

If you no longer require the configuration files, logs, and so on, which are not removed by the uninstaller, then remove the TWShome directory and any subdirectories as follows.

On UNIX

From the command line, change directory to the directory above the TWShome directory, then recursively delete the TWShome directory:

# cd TWShome/..
# rm -rf TWShome

On Windows

Remove the TWShome directory and any subdirectories using Windows Explorer or a similar tool.


3.12.4 Uninstalling JSS and IBM Tivoli Workload Scheduler Connector

To uninstall the Tivoli Job Scheduling Services (JSS) and IBM Tivoli Workload Scheduler Connector, use the following procedure:

1. Log in as root (UNIX) or the local Administrator (Windows).

2. From the command line (command prompt on Windows), stop the Connectors as follows:

– On UNIX:

# su - TWSuser -c "`maestro`/bin/wmaeutil.sh ALL -stop"

– On Windows:

C:\> TWShome\bin\wmaeutil.cmd ALL -stop

3. Be sure that the Tivoli Management Framework environment is configured:

– On UNIX:

# . /etc/Tivoli/setup_env.sh

– On Windows:

C:\>%windir%\System32\drivers\etc\Tivoli\setup_env.cmd

4. Confirm that the Tivoli Management Framework environment is configured, and that both the JSS and IBM Tivoli Workload Scheduler Connector are installed. Run:

# wuninst -list

This command will return a list of Tivoli Management Framework products that can be uninstalled as shown in Example 3-3. We are interested in the following products:

– TMF_JSS
– TWSConnector

Example 3-3 Uninstallable products

# wuninst -list
Creating Log File (/tmp/wuninst.log)...
------------------------------------------------
 Uninstallable Products installed:
------------------------------------------------
TMF_JSS
TWSConnector

Note: For Windows, you need also to start the bash environment with the bash command.


wuninst complete.

5. First uninstall the IBM Tivoli Workload Scheduler Connector using the following command:

# wuninst TWSConnector node -rmfiles

Where node is the host name of the box where the IBM Tivoli Workload Scheduler Connector is installed, as known by the Tivoli Management Framework. If you are unsure of the correct node name to use, run the following command and check for the name in the hostname(s) column:

# odadmin odlist

The wuninst command will prompt for confirmation before proceeding with the uninstall, as shown in Example 3-4.

Example 3-4 Uninstalling the IBM Tivoli Workload Scheduler Connector

# wuninst TWSConnector sunu10a -rmfiles
Creating Log File (/tmp/wuninst.log)...
This command is about to remove TWSConnector from the entire TMR.
Are you sure you want to continue? (y=yes, n=no) ?y
Removing TWSConnector from the entire TMR.... ( this could take a few minutes )
Creating Task...
Running Task... ( this could take a few minutes )
############################################################################
Task Name: uninstall_task
Task Endpoint: sunu10a (ManagedNode)
Return Code: 0
------Standard Output------
Creating Log File (/tmp/twsclean.log)...
Removing TWSConnector...
Removing TWSConnector installation info...
---->Removing TWSConnector from .installed...
Removing TWSConnector local instances...
Removing MaestroEngine instance 1165185077.1.779#Maestro::Engine# gently
Removing MaestroDatabase instance 1165185077.1.780#Maestro::Database# gently
Removing MaestroPlan instance 1165185077.1.781#Maestro::Plan# gently
Removing TWSConnector methods and instances...
Checking TWSConnector instances...
Removing class MaestroEngine
  Removing resource from Policy Regions
  Deleting Class Object gently
  Unregister from TNR
Removing class MaestroDatabase
  Removing resource from Policy Regions
  Deleting Class Object gently
  Unregister from TNR
Removing class MaestroPlan
  Removing resource from Policy Regions
  Deleting Class Object gently
  Unregister from TNR
Removing TWSConnector files...
-->Removing Files...
---->Removing /usr/local/Tivoli/bin/solaris2/Maestro
---->Removing cli programs
Removing TWSConnector from ProductLocations...
Removing TWSConnector ProductInfo...
---->Removing TWSConnector from Installation object
---->Checking for wuninst ended on Managed Nodes
Uninstall of TWSConnector complete.
------Standard Error Output------
############################################################################

Cleaning up...
wuninst complete.
Please run wchkdb -u

6. After uninstalling the IBM Tivoli Workload Scheduler Connector, uninstall the Job Scheduling Services using the command:

# wuninst TMF_JSS node -rmfiles

The output from this command is shown in Example 3-5.

Example 3-5 Uninstalling JSS

# wuninst TMF_JSS sunu10a -rmfiles
Creating Log File (/tmp/wuninst.log)...
This command is about to remove TMF_JSS from the entire TMR.
Are you sure you want to continue? (y=yes, n=no) ?y
Removing TMF_JSS from the entire TMR.... ( this could take a few minutes )
Creating Task...
Running Task... ( this could take a few minutes )
############################################################################
Task Name: uninstall_task
Task Endpoint: sunu10a (ManagedNode)
Return Code: 0
------Standard Output------
Creating Log File (/tmp/jssclean.log)...
Looking for dependent Products...
Removing TMF_JSS...
Removing TMF_JSS installation info...
---->Removing TMF_JSS from .installed...
Removing TMF_JSS local instances...
Removing TMF_JSS methods and instances...
Checking TMF_JSS instances...
Removing class SchedulerEngine
  Removing resource from Policy Regions
  Deleting Class Object gently
  Unregister from TNR
Removing class SchedulerDatabase
  Removing resource from Policy Regions
  Deleting Class Object gently
  Unregister from TNR
Removing class SchedulerPlan
  Removing resource from Policy Regions
  Deleting Class Object gently
  Unregister from TNR
Removing class imp_TMF_JSS::InstanceManager
  Removing resource from Policy Regions
  Deleting Class Object gently
  Unregister from TNR
Removing TMF_JSS files...
-->Removing Files...
---->Removing /usr/local/Tivoli/bin/solaris2/TAS/JSS
---->Removing cli programs
Removing TMF_JSS from ProductLocations...
Removing TMF_JSS ProductInfo...
---->Removing TMF_JSS from Installation object
---->Checking for wuninst ended on Managed Nodes
Uninstall of TMF_JSS complete.
------Standard Error Output------
############################################################################

Cleaning up...
wuninst complete.
Please run wchkdb -u

7. Finally, check the Tivoli Management Framework database for inconsistencies as suggested:

# wchkdb -u

The output from this command is shown in Example 3-6.

Example 3-6 Check database for inconsistencies

# wchkdb -u

wchkdb: Preparing object lists:
wchkdb: Checking object database:
.....................................................................................................................................................................................................................................................................................
wchkdb: Done checking object database.


3.13 Troubleshooting uninstall problems

To determine the installation method used, and identify the appropriate uninstall method, work through the methods in Table 3-4 from the top, stopping at the first match in the How to determine column.

Table 3-4 How to uninstall

Installation method     How to determine                                 Uninstall method
ISMP                    TWShome/_uninst/uninstall.[bin|exe] exists       On UNIX, run uninstall.bin; on Windows, use Add/Remove Programs
twsinst                 TWShome/twsinst script exists                    twsinst -uninst
Configuration Manager   neither twsinst nor uninstall.[bin|exe] exist    wremovesp

Note: It is important that you use the uninstallation program for the method that you had used for installing the product. For example, if you had used the ISMP installation to install IBM Tivoli Workload Scheduler, you should not use another method such as twsinst to uninstall the product. This might cause unpredictable results.

3.13.1 Uninstall manually

Should it be necessary to uninstall manually, use the following steps:

1. Stop all the IBM Tivoli Workload Scheduler processes including Connector processes where appropriate.

2. Delete the TWShome directory and everything below.

3. Open the TWSRegistry.dat file and delete all the rows containing the TWSuser string.

The TWSRegistry.dat file is to be found in the following locations:

On UNIX /etc/TWS/TWSRegistry.dat

On Windows %SystemRoot%\system32\TWSRegistry.dat

An example of a TWSRegistry.dat file can be seen in Example 3-7.

Example 3-7 Sample TWSRegistry.dat file

/Tivoli/Workload_Scheduler/tws_DN_objectClass=OU
/Tivoli/Workload_Scheduler/tws_DN_PackageName=TWS_NT_tws.8.2
/Tivoli/Workload_Scheduler/tws_DN_MajorVersion=8
/Tivoli/Workload_Scheduler/tws_DN_MinorVersion=2
/Tivoli/Workload_Scheduler/tws_DN_PatchVersion=
/Tivoli/Workload_Scheduler/tws_DN_FeatureList=TBSM
/Tivoli/Workload_Scheduler/tws_DN_ProductID=TWS_ENGINE
/Tivoli/Workload_Scheduler/tws_DN_ou=tws
/Tivoli/Workload_Scheduler/tws_DN_InstallationPath=c:\win32app\maestro
/Tivoli/Workload_Scheduler/tws_DN_UserOwner=tws
/Tivoli/Workload_Scheduler/tws_DN_MaintenanceVersion=
/Tivoli/Workload_Scheduler/tws_DN_Agent=FTA
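
For step 3 on UNIX, the rows can be removed with a text editor, or with a short command sequence like the following sketch, which assumes the TWSuser is tws82:

# keep a backup, then keep only the rows that do not mention the TWSuser
cp /etc/TWS/TWSRegistry.dat /etc/TWS/TWSRegistry.dat.bak
grep -v "tws82" /etc/TWS/TWSRegistry.dat.bak > /etc/TWS/TWSRegistry.dat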

4. On Windows only, using regedit.exe, from the HKEY_LOCAL_MACHINE stanza find all keys containing the string TWSuser by clicking Edit -> Find, and delete all found except for the Legacy entries.

5. If ISMP was used initially to install IBM Tivoli Workload Scheduler, delete the entries from the vpd.properties file containing the string TWSuser.

The vpd.properties file can be found in the following locations:

On AIX /usr/lib/objrepos/vpd.properties

On Linux /root/vpd.properties

On HP-UX If you are unable to locate a vpd.properties file using the find command in a command window, use the SAM application to remove IBM Tivoli Workload Scheduler.

On Solaris ISMP does not use a vpd.properties file on Solaris, but instead uses the native Solaris product registry. To clean the product registry, use Admintool.

On Windows %SystemRoot%\vpd.properties

An example of a vpd.properties file can be seen in Example 3-8.

Example 3-8 Sample vpd.properties file

b489900b86554fc4c244436498230110| | | | | |1=Tivoli Workload Scheduler 8.2 Win32 Registry uninstall key|Win 32 Registry key | | | | |c:\win32app\maestro|0|0|1|1557aa482df5e5013543616468e30178|8|2| |0| |1|0|false| |true|3|b489900b86554fc4c244436498230110| | | | | |1
bd56edd61d38ac70ecee7e780e217cac| | | | | |1=Tivoli Workload Scheduler 8.2 CLI for Windows2|Check for WINDOWS SPB condition| | | | |c:\win32app\maestro\uninstcli|0|0|1|1557aa482df5e5013543616468e30178|8|2| |0| |1|0|false| |true|3|bd56edd61d38ac70ecee7e780e217cac| | | | | |1
71b18919afff0e159eaff44ce28d5430| | | | | |1=Tivoli Workload Scheduler 8.2 CLI for Windows|CLI for Windows| | | | |c:\win32app\maestro\uninstcli|0|0|1|1557aa482df5e5013543616468e30178|8|2| |0| |1|0|false| |true|3|71b18919afff0e159eaff44ce28d5430| | | | | |1
558bf31abca195e9eeefc137d3c5eb4b| | | | | |1=Tivoli Workload Scheduler Engine 8.2 for Windows|SPB Feature for Windows| | | | |c:\win32app\maestro|0|0|1|1557aa482df5e5013543616468e30178|8|2| |0| |1|0|false| |true|3|558bf31abca195e9eeefc137d3c5eb4b| | | | | |1
4abdda266077f8b0e1070e859c1ed5cd| | | | | |1=Tivoli Workload Scheduler 8.2 Lap Component|IBM Lap Component| | | | |c:\win32app\maestro|0|0|1|1557aa482df5e5013543616468e30178|8|2| |0| |1|0|false| |true|3|4abdda266077f8b0e1070e859c1ed5cd| | | | | |1
1557aa482df5e5013543616468e30178|8|2| |0| |1=TWS|Tivoli Workload Scheduler| |IBM Tivoli Systems Inc.| |8.2|c:\win32app\maestro|0|0|1|1557aa482df5e5013543616468e30178|8|2| |0| |1|0|false|"$J(install_dir)/_uninst" "uninstall.jar" "uninstall.dat" ""|true|3|1557aa482df5e5013543616468e30178|8|2| |0| |1
1b76c910f25e05c418e5477ba952483f| | | | | |1=Tivoli Workload Scheduler Engine 8.2 component for Windows|SPB Component for Windows| | | | |c:\win32app\maestro|0|0|1|558bf31abca195e9eeefc137d3c5eb4b| | | | | |1|0|false| |true|3|1b76c910f25e05c418e5477ba952483f| | | | | |1

Note: Tivoli Workload Scheduler 8.2 is not the only application that writes into the vpd.properties file. Other applications such as WebSphere® also use this file.

3.14 Useful commands

A number of example commands used during this chapter are AIX specific. If you are using a different platform, you may find the alternatives listed in Table 3-5 helpful.

Table 3-5 Useful commands

Description         Platform   Command
Mount a CD-ROM      AIX        mount -r -V cdrfs /dev/cd0 /cdrom
                    HP-UX      mount /cdrom
                    Linux      mount /mnt/cdrom
                    Solaris    mount /cdrom
Unmount a CD-ROM    AIX        umount /cdrom
                    HP-UX      umount /cdrom
                    Linux      umount /mnt/cdrom
                    Solaris    eject cdrom
Create a user       AIX        smitty user
                    HP-UX      useradd
                    Linux      useradd
                    Solaris    useradd [-u uid] -g TWSgroup -d TWShome -c "comment" TWSuser
Create a group      AIX        smitty group
                    HP-UX      groupadd TWSgroup
                    Linux      groupadd TWSgroup
                    Solaris    groupadd TWSgroup


Chapter 4. Return code management

This chapter discusses the job return code management enhancements included in Tivoli Workload Scheduler 8.2 and contains the following:

� “Return code management overview” on page 152
� “Adding a return code expression to a job definition” on page 152
� “Defining a return code condition” on page 153
� “Monitoring return codes” on page 155
� “Conman enhancement” on page 157
� “Jobinfo enhancement” on page 160


4.1 Return code management overview

Tivoli Workload Scheduler Version 8.1 and earlier versions supported very basic logic to handle the return code that a job completed with. Either the job returned a 0 (zero) and was considered successful (SUCC), or returned anything other than a 0 and was considered unsuccessful (ABEND).

Tivoli Workload Scheduler 8.2 introduces a simple logical expression that defines the specific return codes that should be considered successful. The return code condition expression is saved within the Plan and is visible from the Job Scheduling Console and conman.

If no specific return code condition is defined for a job, the default action is as in previous versions: a return code of 0 is considered successful and anything else unsuccessful.

4.2 Adding a return code expression to a job definition

A return code condition can be added to an existing or a new job definition using either composer or the Job Scheduling Console.

A new keyword RCCONDSUCC has been added to the job statement, which is used to specify the return code condition as shown in Example 4-1. In this example, the job DBSELOAD will be considered successful if it completes with any of the return codes 0, 5, 6, 7, 8, 9 or 10.

Example 4-1 Job definition including RCCONDSUCC keyword

$JOBS

CHATHAM#DBSELOAD
 DOCOMMAND "`maestro`/scripts/populate.sh"
 STREAMLOGON "^TWSUSER^"
 DESCRIPTION "populate database redbook example job"
 RECOVERY STOP AFTER CHATHAM#RECOVERY
 RCCONDSUCC "(RC = 0) OR ((RC > 4) AND (RC < 11))"

Using the Job Scheduling Console, the return code condition is defined in the Return Code Mapping field on the Task tab within the Job Definition window, as shown in Figure 4-1 on page 153.


Figure 4-1 Job definition window

4.3 Defining a return code condition

The return code condition can be up to 256 characters in length, and can contain any combination of comparison and boolean expressions.

Comparison expression

A comparison expression has the syntax:

[(] RC operator operand [)]

Where the comparison operator is one of those listed in Table 4-1 on page 154 and the operand is an integer between -2147483647 and 2147483647.


Table 4-1 Comparison operators

Operator   Description                Example
<          Less than                  RC < 11
<=         Less than or equal to      RC <= 2
>          Greater than               RC > 4
>=         Greater than or equal to   RC >= 3
=          Equal to                   RC = 0
!=         Not equal to               RC != 5
<>         Not equal to               RC <> 128

Boolean expression

Specifies a logical combination of comparison expressions. The syntax is:

comparison_expression operator comparison_expression

Where the logical operator is one of those listed in Table 4-2.

Table 4-2 Logical operators

Operator   Example              Description
AND        RC > 4 AND RC < 11   Successful if return code is 5, 6, 7, 8, 9 or 10
OR         RC = 0 OR RC = 2     Successful if return code is 0 or 2
NOT        NOT RC = 128         Successful if return code is something other than 128

Note that the expression is evaluated from left to right. Parentheses can be used to assign priority to the expression evaluation.

The return code condition is stored in the JCL field (SCRIPTNAME or DOCOMMAND) of the job definition. When a return code condition is defined, the length of this field is reduced by the length of the return code condition (including spaces and parentheses) plus 10 characters. The maximum length of the JCL field is limited to 4096 characters. Therefore, given the return code condition "(RC = 0) OR ((RC > 4) AND (RC < 11))", the maximum JCL length is reduced from 4096 to 4050.

Tip: Be aware that it is possible to define a return code condition that will never evaluate to true, such as (RC = 0 AND RC = 1), which will result in the job always being unsuccessful (ABEND).



4.4 Monitoring return codes

Using conman, the return code returned by a completed job can be viewed using the showjob command. Example 4-2 shows the job DBSELOAD with a return code of 7 and a state of SUCC.

Example 4-2 Conman showjob output

$ conman sj chatham#DAILY_DB_LOAD
TWS for UNIX (AIX)/CONMAN 8.2 (1.36.1.7)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user ''.
Locale LANG set to "en_US"
Schedule (Exp) 09/30/03 (#126) on MASTER. Batchman LIVES. Limit: 10, Fence: 0, Audit Level: 1
sj chatham#DAILY_DB_LOAD
                                             (Est)   (Est)
CPU      Schedule   Job      State  Pr Start Elapse  Dependencies   Return Code
CHATHAM #DAILY_DB_LOAD **************************************** SUCC 10 22:11 00:04
         DATASPLT            SUCC   10 22:11 00:01   #J17922          0
         DATAMRGE            ABEND  10 22:12 00:01   #J17924          1
         CHCKMRGE            SUCC   10 22:12 00:01   #J17926          0
         DATACLNS            SUCC   10 22:12 00:01   #J17932          0
         DATARMRG            SUCC   10 22:13 00:01   #J18704          0
         DBSELOAD            SUCC   10 22:13 00:01   #J18706          7
         DATAREPT            SUCC   10 22:13 00:01   #J18712          0
         DATARTRN            SUCC   10 22:14 00:01   #J18714          0
$

Within the Job Scheduling Console, the return code a job completed with can be found in the Return Code column of the All Scheduled Jobs window. Figure 4-2 shows the job DBSELOAD with a return code of 5 and a status of Successful.

Figure 4-2 Job status window

Note: In this example the default column order has been changed for clarity.


4.5 Conman enhancement

The conman showjobs function has been enhanced to retrieve the return code information for a given job. The default showjobs output has been adjusted to include an additional column for the return code. An example of this can be seen in Example 4-2 on page 155.

Also, a new argument retcod, when used in conjunction with the keys argument, will give the return code for a specified job, as shown in Example 4-3.

Example 4-3 Displaying return codes using conman showjob command

$ conman sj chatham#daily_db_load.dbseload\;keys\;retcod
TWS for UNIX (AIX)/CONMAN 8.2 (1.36.1.7)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user ''.
Locale LANG set to "en_US"
Schedule (Exp) 10/16/03 (#150) on MASTER. Batchman LIVES. Limit: 10, Fence: 0, Audit Level: 1
sj chatham#daily_db_load.dbseload;keys;retcod
8
$

The retcod feature may not at first appear overly useful, but when integrated into a script, it can become quite powerful.

4.5.1 Retcod example

In this example we have a job stream that flows as shown in the diagram in Figure 4-3 on page 158.

Note: On UNIX/Linux boxes, the semi-colon character (;) is a command separator and needs to be quoted either by surrounding the arguments in double quotes (“) or preceding the semi-colon with a back slash (\) as in Example 4-3.


Figure 4-3 Job stream flow diagram

We then took this flow diagram and created a job stream with the necessary follows dependencies to achieve this job flow. The resulting job stream can be seen in Example 4-4.

Example 4-4 Job stream containing branch job

SCHEDULE CHATHAM#DAILY_DB_WORK
* daily database process redbook example schedule
ON WORKDAYS
FOLLOWS DAILY_DB_LOAD.DBSELOAD
:
JOB_21
JOB_22
JOB_23 FOLLOWS JOB_21, JOB_22
JOB_24 FOLLOWS JOB_23
JOB_25 FOLLOWS JOB_23
END

We now complicate this scenario by wanting to only run either JOB_24 or JOB_25 depending on the outcome of jobs JOB_21 and JOB_22. If JOB_21 and JOB_22 both exit with return codes less than 3, we will run job JOB_25.



Otherwise we want to run job JOB_24. This we achieved by having the branch job, JOB_23, check the return codes that JOB_21 and JOB_22 completed with and then cancel the job that is not required to run, as shown in Figure 4-4.

Figure 4-4 The branch job cancels

The branch job JOB_23 runs the script branch.sh, which uses the conman showjobs retcod command to obtain the return code for each of JOB_21 and JOB_22, then uses a conman canceljob command to cancel the job not required. The complete script can be seen in Example 4-5.

Example 4-5 Script branch.sh

:
# @(#)branch.sh 1.1 2003/10/17
#
#

echo "branch.sh 1.1 2003/10/17"

# return codes
: ${OK=0} ${FAIL=1}

# set UNISON_DIR to $HOME if unset, this enables the script to be tested
# from the command line as the <TWSuser>
if [ ! "${UNISON_DIR}" ]
then
    UNISON_DIR="${HOME}"
fi

# get return codes from predecessor jobs
JOB_21_RC=`conman -gui sj ${UNISON_SCHED}.JOB_21\;keys\;retcod 2>&1 | egrep "^\^d" | cut -c3-`
JOB_22_RC=`conman -gui sj ${UNISON_SCHED}.JOB_22\;keys\;retcod 2>&1 | egrep "^\^d" | cut -c3-`

# get CPU and SCHEDULE from environment
SCHED="${UNISON_CPU}#${UNISON_SCHED}"

# if both jobs 21 and 22 exit with a return code less than 3 run job 25
# else run job 24
if [ ${JOB_21_RC} -lt 3 ] && [ ${JOB_22_RC} -lt 3 ]
then
    echo "INFO: cancelling job JOB_24"
    conman "cj ${SCHED}.JOB_24;noask"
else
    echo "INFO: cancelling job JOB_25"
    conman "cj ${SCHED}.JOB_25;noask"
fi

# give the cancel a chance to get processed
sleep 10

# all done
exit ${OK}

For a complete description of conman and all the available options, see Chapter 4, “Conman Reference” in the Tivoli Workload Scheduler Version 8.2, Reference Guide, SC32-1274.

4.6 Jobinfo enhancement

The jobinfo command is a utility that can be called from a job script to obtain various information about the current job, including:

� If the job was scheduled or submitted as a docommand construct
� The job's priority level
� If the job is the result of a conman rerun command
� If the job is being run as a recovery job

This utility has been extended in Tivoli Workload Scheduler 8.2 to include the rstrt_retcode option to enable a recovery job to determine the return code of the parent job, and is run as follows:

$ jobinfo rstrt_retcode

When combined with a return code condition, jobinfo rstrt_retcode can be used to direct the recovery job to take different actions depending on the parent job's return code.


4.6.1 Jobinfo example

A parent job is known to exit with a return code in the range 0..10, whereby 0, 5, 6, 7, 8, 9 and 10 are deemed to be successful, and 1, 2, 3 and 4 are deemed to be unsuccessful.

The parent job is then defined with the following return code condition:

(RC = 0) OR ((RC > 4) AND (RC < 11))

and with a recovery job, as shown in Example 4-6. Note that the job is defined with the recovery action RERUN. This enables the recovery job to take some corrective action, after which the parent job will attempt to run again.

Example 4-6 Parent job definition

$JOBS

CHATHAM#DBSELOAD
 DOCOMMAND "/usr/local/tws/maestro/scripts/populate.sh"
 STREAMLOGON "^TWSUSER^"
 DESCRIPTION "populate database redbook example job"
 RECOVERY RERUN AFTER MASTER#RECOVERY
 RCCONDSUCC "(RC = 0) OR ((RC > 4) AND (RC < 11))"

The recovery job itself is defined as shown in Example 4-7.

Example 4-7 Recovery job definition

$JOBS

CHATHAM#RECOVERY
 DOCOMMAND "^TWSHOME^/scripts/recovery.sh"
 STREAMLOGON "^TWSUSER^"
 DESCRIPTION "populate database recovery redbook example job"
 RECOVERY STOP

In the event of the parent job running unsuccessfully, we want the recovery job to take the actions described in Table 4-3 depending upon the parent job’s return code.

Table 4-3 Recovery job actions

Parent Job Return Code   Action
1                        Abend the job stream.
2                        Rerun the parent job having taken no other corrective action.
3                        Submit a fix script, then rerun the parent job.
4                        Change an IBM Tivoli Workload Scheduler parameter, then rerun the parent job.


The recovery job runs a script (recovery.sh), which uses the jobinfo utility to obtain information about itself first, then the return code the parent job completed with.

In the first case, where the recovery job is obtaining information about itself, we set the variable RSTRT_FLAG with the value returned by the jobinfo rstrt_flag option:

RSTRT_FLAG=`jobinfo rstrt_flag 2>/dev/null`

This will set the variable to the value YES if this script is running as a recovery job, or NO otherwise. We later test this variable and abend the job should it not be running as a recovery job.

In the second case, we use the jobinfo rstrt_retcode option to set the variable RSTRT_RETCODE to the parent job’s return code:

RSTRT_RETCODE=`jobinfo rstrt_retcode 2>/dev/null`

We then use a case statement based on the value of RSTRT_RETCODE to define the appropriate actions. You will note that there is no option 1 within the case statement. Option 1 will match the default action at the bottom of the case, as will any unexpected values, which is to have the recovery script abend by returning an unsuccessful exit status as shown in Figure 4-5 on page 163.
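
The case statement itself is not reproduced in this excerpt, but its shape is sketched below. The fix-script submission for option 3 is site specific, so it is shown only as a placeholder comment; the parms call for option 4 matches the command described later in this section:

case "${RSTRT_RETCODE}" in
    2)  # option 2: no corrective action, just let the parent job rerun
        ;;
    3)  # option 3: submit the fix script on the Master with a dependency on
        # the DATALOAD resource (site-specific conman sbd/sbj command here),
        # then give it time to acquire the resource before the parent reruns
        sleep 30
        ;;
    4)  # option 4: change the parameter, then let the parent job rerun
        parms -c DBVLEVEL 9
        ;;
    *)  # option 1 and any unexpected value: abend the recovery job
        exit ${FAIL}
        ;;
esac

# all recovery actions completed, allow the parent job to rerun
exit ${OK}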


Note: We used a modified jobmanrc, which set the PATH variable within the jobs environment to include the TWShome/bin directory. If you do not do something similar, it will be necessary to call jobinfo using an explicit path, for example:

RSTRT_FLAG=`/usr/local/tws/maestro/bin/jobinfo rstrt_flag 2>/dev/null`

Note: The version of jobinfo included in IBM Tivoli Workload Scheduler 8.2 Fix Pack 01 returns the wrong result when the rstrt_flag option is used within a recovery job. We worked around this problem by replacing the Fix Pack 01 version of jobinfo with the version originally included with IBM Tivoli Workload Scheduler 8.2.



Figure 4-5 Job DBSELOAD completes with return code 1

In the case of option 2, where we want to rerun the parent job without taking any other action, the recovery job simply needs to exit with a successful exit status as shown in Figure 4-6 on page 164.


Figure 4-6 Job DBSELOAD completes with return code 2

In the case of option 3, where we want to submit a fix script and then rerun the parent job, we use a fairly complicated conman sbd (submit docommand) command, which submits a conman sbj (submit job) on the Master. This means we do not need access to the jobs database on the local workstation. The recovery job then waits for a short time (30 seconds in the example) before completing with a successful exit status so that the parent job can rerun.

We do not want the parent job to rerun before the fix script has completed, and simply sleeping for 30 seconds is not a reliable way to ensure this. However, the parent job has a dependency on the resource DATALOAD. By including a dependency on the same resource when submitting the fix script job, we only need to wait long enough for the fix script to be submitted and acquire the resource. The parent job will then wait for the fix script to complete and release the resource before it reruns, as shown in Figure 4-7 on page 165.
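To make the moving parts of that submission easier to follow, here is an annotated reading of the conman command used in the example; the command text itself is taken from Example 4-8, and the commentary is ours:

# Annotated reading of the return-code-3 action (command text from Example 4-8):
#
#   conman "sbd MASTER#'conman sbj ...'"   - submit a docommand on MASTER that itself runs
#                                            conman sbj, so the local FTA does not need
#                                            access to the jobs database
#   ${UNISON_CPU}#DATAFIX                  - the predefined fix job to submit
#   into=${UNISON_CPU}#${UNISON_SCHED}     - place it in the job stream containing the parent job
#   needs=1 dataload                       - require the same DATALOAD resource as the parent job
#   alias=${UNISON_SCHED}_sbj_datafix      - the name under which the submission appears on MASTER
#   priority=20;noask                      - further submit options carried over from the example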


Figure 4-7 Job DBSELOAD completes with return code 3

Finally, in the case of option 4, where we want to rerun the parent job after changing the value of an IBM Tivoli Workload Scheduler parameter, the recovery job uses the parms utility to set the parameter DBVLEVEL to the value 9:

parms -c DBVLEVEL 9

Tip: A more robust approach would be to run further conman commands following the sleep and check that the fix script job had indeed been submitted correctly and had acquired the resource before exiting. The recovery job would then exit with a successful exit status if all was well, or with an unsuccessful exit status if there was a problem.
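A minimal sketch of such a check is shown below. It reuses the names from this example (the DATAFIX job submitted into the parent's job stream) and simply greps the conman showjobs output for an active or successful state; the states tested and the job selection string are assumptions that you should adapt to your own plan.

# Hedged sketch: after the sleep, confirm the fix job is actually in the plan and
# running (or already finished) before the recovery job exits.
sleep 30
if conman "sj ${UNISON_CPU}#${UNISON_SCHED}.DATAFIX" 2>/dev/null \
        | grep -E "EXEC|SUCC" >/dev/null
then
    echo "INFO: fix job DATAFIX is running or has completed"
    rc=${OK}
else
    echo "ERROR: fix job DATAFIX was not found in an active state"
    rc=${FAIL}
fi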


Having set the IBM Tivoli Workload Scheduler parameter, the recovery script will exit with a successful exit status as shown in Figure 4-8.

Figure 4-8 Job DBSELOAD completes with return code 4

Finally, Figure 4-9 on page 167 shows the job stream status following the job DBSELOAD completing with a successful return code, in this case 8, but the effect would have been the same had the return code been any of 0, 5, 6, 7, 8, 9 or 10.

Note: We used a modified jobmanrc, which set the PATH variable within the job's environment to include the TWShome/bin directory. If you do not do something similar, it will be necessary to call parms using an explicit path, for example:

/usr/local/tws/maestro/bin/parms -c DBVLEVEL 9


Figure 4-9 Job DBSELOAD completes with a successful return code

The complete script run by the job RECOVERY is shown in Example 4-8.

Example 4-8 Script recovery.sh

:
# @(#)recovery.sh 1.1 2003/10/16
#
# IBM Redbook TWS 8.2 New Features - sample recovery job

echo "recovery.sh 1.1 2003/10/16"

# return codes
: ${OK=0} ${FAIL=1}

# set UNISON_DIR to $HOME if unset, this enables the script to be tested
# from the command line as the <TWSuser>
if [ ! "${UNISON_DIR}" ]
then
    UNISON_DIR="${HOME}"
fi

# run jobinfo to get some details
RSTRT_FLAG=`jobinfo rstrt_flag 2>/dev/null`
RSTRT_RETCODE=`jobinfo rstrt_retcode 2>/dev/null`

# confirm running as a recovery job
if [ "${RSTRT_FLAG}" = "YES" ]
then
    echo "INFO: recovery job for ${UNISON_JOB}"
else
    echo "ERROR: not running as a recovery job"
    exit ${FAIL}
fi

# figure out what to do next based on the exit code of the predecessor
case ${RSTRT_RETCODE} in
    2)  echo "INFO: action rerun parent job"
        rc=${OK}
        ;;
    3)  echo "INFO: action submit fix script, and rerun parent job"
        conman "sbd MASTER#'conman sbj ${UNISON_CPU}#DATAFIX\;into=${UNISON_CPU}#${UNISON_SCHED}\;needs=1 dataload;priority=20;noask';alias=${UNISON_SCHED}_sbj_datafix"
        echo "INFO: need to ensure that fix script runs before dataload reruns"
        echo "WARNING: sleeping for 30 seconds is not the best way to do this!"
        sleep 30
        rc=${OK}
        ;;
    4)  echo "INFO: action change database validate level and rerun parent job"
        parms -c DBVLEVEL 9
        rc=${OK}
        ;;
    *)  echo "WARNING: No action defined for RC = ${RSTRT_RETCODE}"
        rc=${FAIL}
        ;;
esac

# check rc is defined
if [ ! "${rc}" ]
then
    echo "WARNING: exit code not defined, defaulting to FAIL"
    rc=${FAIL}
fi

# all done
exit ${rc}


For a complete description of jobinfo and all the available options, see Chapter 5, “Utility Commands” in the Tivoli Workload Scheduler Version 8.2, Reference Guide, SC32-1274. Details of the parms utility can also be found in the same chapter.


Chapter 5. Security enhancements

This chapter discusses the security enhancements included in Tivoli Workload Scheduler 8.2 and contains the following:

- "Working across firewalls" on page 172

- "Strong authentication and encryption using Secure Socket Layer protocol (SSL)" on page 176

- "Centralized user security definitions" on page 204


5.1 Working across firewalls

In releases of Tivoli Workload Scheduler prior to 8.2, communication between workstations was not always routed through the domain hierarchy. Some operations, such as starting and stopping workstations or retrieving job output, used a direct connection from the Master or Domain Manager to the target workstation. For this communication to succeed, access had to be enabled through the firewall for each Tivoli Workload Scheduler workstation behind the firewall.

With the configuration shown in Figure 5-1, it would be necessary to open connections from the Master to each of the workstations in the domain SECUREDM.

Figure 5-1 Connections through firewall without behindfirewall configured

In this case, the firewall rules required would have been as shown in Example 5-1 on page 173.


Example 5-1 Before sample firewall rules

src 9.3.4.47/32   dest 10.2.3.190/32 port 31111 proto tcp permit
src 10.2.3.190/32 dest 9.3.4.47/32   port 31111 proto tcp permit
src 9.3.4.47/32   dest 10.2.3.184/32 port 31111 proto tcp permit
src 10.2.3.184/32 dest 9.3.4.47/32   port 31111 proto tcp permit
src 9.3.4.47/32   dest 10.2.3.189/32 port 31111 proto tcp permit
src 10.2.3.189/32 dest 9.3.4.47/32   port 31111 proto tcp permit

With just a small number of workstations involved, as is the case here, this is not a major concern. But as the number of workstations increases, particularly if workstations are added or removed frequently and strict change control procedures govern changes to the firewall rules, maintaining these rules can become a major problem.

With Tivoli Workload Scheduler 8.2, it is now possible to configure workstations so that the start commands, stop commands, and the retrieving of job output follow the domain hierarchy where firewalls exist in the network.

It is important to understand, in the design phase of a Tivoli Workload Scheduler 8.2 network, where the firewalls are positioned and which Fault Tolerant Agents and Domain Managers sit behind each firewall. Once this has been established, set the behindfirewall attribute to ON in the workstation definition for each Domain Manager and Fault Tolerant Agent behind the firewall, as shown in Example 5-2, or select Behind Firewall in the Job Scheduling Console workstation definition window, as shown in Figure 5-2 on page 174.

Example 5-2 Workstation definition for a Domain Manager behind a firewall

cpuname TWS6
os UNIX
node tws6.itsc.austin.ibm.com
tcpaddr 31111
domain SECUREDM
TIMEZONE CST
for maestro
 type FTA
 autolink on
 fullstatus on
 resolvedep on
 behindfirewall on
end


Figure 5-2 Behind Firewall workstation definition

In the example network shown in Figure 5-1 on page 172, we would set the behindfirewall attribute to ON for the workstations TWS1, TWS5 and TWS6, and to OFF for the remaining workstations. Once Jnextday has run and the modified workstation definitions have taken effect, the firewall configuration can be modified to permit a connection for the Domain Manager only, as shown in Example 5-3.
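If you prefer the composer command line to the Job Scheduling Console for this change, a sketch of one way to do it follows; the file name cpus.txt is arbitrary, and you should verify the create/replace syntax against your composer version before relying on it:

$ composer create cpus.txt from cpu=@     # extract the workstation definitions to a file
$ vi cpus.txt                             # add "behindfirewall on" to TWS1, TWS5 and TWS6
$ composer replace cpus.txt               # write the modified definitions back to the database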

Example 5-3 After sample firewall rule

src 9.3.4.47/32   dest 10.2.3.190/32 port 31111 proto tcp permit
src 10.2.3.190/32 dest 9.3.4.47/32   port 31111 proto tcp permit


With this configuration, stop commands, start commands, and the retrieval of job stdlist files for workstations TWS1 and TWS5 will follow the domain hierarchy from the Master down through the Domain Manager TWS6, as shown in Figure 5-3.

Figure 5-3 Connections through firewall with behindfirewall configured

A new Tivoli Workload Scheduler 8.2 service, ROUTER (service 2007 in the Netconf file), has been created to provide this firewall support.

The ROUTER process runs on Domain Managers and acts as a bridge, forwarding requests (stop and start commands, or job log retrieval) to the next node in the hierarchical path, which can be a subordinate Domain Manager or a Fault Tolerant Agent.


5.2 Strong authentication and encryption using Secure Socket Layer protocol (SSL)

Tivoli Workload Scheduler 8.2 provides a secure, authenticated, and encrypted connection mechanism between components running in both non-secure and secure environments. This mechanism is based on the Secure Socket Layer (SSL) protocol and uses the OpenSSL Toolkit, which is automatically installed as part of the Tivoli Workload Scheduler 8.2 engine.

Important: In order to optimize the initialization time of a Tivoli Workload Scheduler 8.2 network during the creation of the daily plan, a new stop; progressive; wait conman command has been implemented. This command provides a hierarchical stop where each Domain Manager stops the workstations in its domain. Using this command reduces the stopping time of the workstations, which is especially important when all workstations need to be stopped before sending the new Symphony file.

To use this feature, replace the stop @!@; wait line within the Jnextday script with stop; progressive; wait.

For example, if we issue a stop; progressive; wait command on the Master Domain Manager in Figure 5-3 on page 175:

1. Master Domain Manager will send a stop command to CHATHAM and TWS6 and stop itself.

2. CHATHAM will send a stop command to FTAs TWS11 and TWS12 and stop itself.

3. TWS6 will send a stop command to FTAs TWS1 and TWS5 and stop itself.

Note that if you use this feature, there is a slight chance that some workstations might not be stopped when their Domain Managers are ready to send the Symphony file, but in that case the Domain Manager will wait for five minutes and then retry sending the Symphony file.

If you use the firewall support option, we recommend that you also use the progressive stop feature. This is not mandatory, but when used it will improve the performance of the Jnextday process. Note that the default behavior of Jnextday is not to use the progressive stop.
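A minimal sketch of the Jnextday change follows; the TWShome path matches the examples in this book, and the exact spacing and quoting of the stop line can differ between releases, so check your own copy of the script before editing it:

$ cd /usr/local/tws/maestro            # TWShome in this redbook's examples
$ cp Jnextday Jnextday.pre-progressive # keep a backup of the original script
$ vi Jnextday                          # change the line containing:  stop @!@; wait
                                       # to:                          stop; progressive; wait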


Although SSL support is installed by default, you or your Tivoli Workload Scheduler 8.2 administrator can decide whether or not to implement it across your network. If SSL support is not implemented, Tivoli Workload Scheduler 8.2 will operate in the same way as previous versions, specifically:

- Authentication is based on an IP-checking mechanism that can be subject to IP spoofing.

- Encryption is limited to users' passwords only. All other information (such as job invocations, job output) is sent in clear text.

5.2.1 The SSL protocol internals

The SSL protocol was developed by Netscape and is based on a private and public key methodology. It is one of the most widely used security standards for Internet communications. The SSL protocol runs above TCP/IP and below high-level protocols such as HTTP, IMAP, or the TWS protocol, as shown in Figure 5-4.

Figure 5-4 SSL protocol

The SSL protocol has three basic properties:

- The connection is private. Encryption is used after an initial handshake to define a secret key. Symmetric cryptography is used for data encryption (for example, DES and RC4).

- The peer's identity can be authenticated using asymmetric, or public-key, cryptography (for example, RSA).

- The connection is reliable. Message transport includes a message integrity check that uses a keyed MAC. Secure hash functions, such as SHA and MD5, are used for MAC computations.

To authenticate a peer's identity, the SSL protocol uses X.509 certificates, called digital certificates.

Note: For more information on the OpenSSL Toolkit, refer to the OpenSSL organization’s Web site at:

http://www.openssl.org/


Digital certificates are, in essence, electronic ID cards that are issued by trusted parties and enable a user to verify both the sender and the recipient of the certificate through the use of public-key cryptography.

Public-key cryptography uses two different cryptographic keys: a private key and a public key. Public-key cryptography is also known as asymmetric cryptography, because you can encrypt information with one key and decrypt it with the complement key from a given public-private key pair. Public-private key pairs are simply long strings of data that act as keys to a user’s encryption scheme. The user keeps the private key in a secure place (for example, encrypted on a computer’s hard drive) and provides the public key to anyone with whom the user wants to communicate. The private key is used to digitally sign all secure communications sent from the user, while the public key is used by the recipient to verify the sender’s signature.

Public-key cryptography is built on trust; the recipient of a public key needs to have confidence that the key really belongs to the sender and not to an impostor.

Digital certificates provide that confidence. For this reason, the IBM Tivoli Workload Scheduler workstations that share an SSL session must have locally installed repositories for the X.509 certificates that will be exchanged during the SSL session establishment phase to authenticate the session.

A digital certificate is issued by a trusted authority, also called a Certificate Authority (CA). A signed digital certificate contains:

- The owner's Distinguished Name
- The owner's public key
- The Certificate Authority's (issuer's) Distinguished Name
- The signature of the Certificate Authority over these fields

A certificate request that is sent to a Certificate Authority for signing contains:

- The owner's (requester's) Distinguished Name
- The owner's public key
- The owner's own signature over these fields

A root CA’s digital certificate is an example of a self-signed digital certificate.

Users can also create their own self-signed digital certificates for testing purposes.
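For instance, a throwaway self-signed certificate and key for test purposes can be produced with a single command of the following form; the file names are arbitrary, and the command prompts interactively for the Distinguished Name fields:

$ openssl req -x509 -newkey rsa:2048 -keyout testkey.pem -out testcert.pem -days 365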

The following example describes, in a simplified way, how digital certificates are used in establishing an SSL session. In this scenario, Appl1 is a client process that opens an SSL connection with the server application Appl2:

1. Client Appl1 asks to open an SSL session with server Appl2.


2. Appl2 starts the SSL handshake protocol. It encrypts the information using its private key and sends its certificate with the matching public key to Appl1.

3. Appl1 receives the certificate from Appl2 and verifies that it is signed by a trusted Certificate Authority. If the certificate is signed by a trusted CA, Appl1 can optionally extract some information (such as the Distinguished Name) stored in the certificate and perform additional authentication checks on Appl2.

4. At this point, the server process has been authenticated, and the client process starts its part of the authentication process; that is, Appl1 encrypts the information using its private key and sends the certificate with its public key to Appl2.

5. Appl2 receives the certificate from Appl1 and verifies that it is signed by a trusted Certificate Authority.

6. If the certificate is signed by a trusted CA, Appl2 can optionally extract some information (such as the Distinguished Name) stored in the certificate and perform additional authentication checks on Appl1.

5.2.2 Planning for SSL support in Tivoli Workload Scheduler 8.2

To implement SSL support for a network, the IBM Tivoli Workload Scheduler administrator must plan in advance how the workstations will authenticate each other. The administrator can opt to configure the Tivoli Workload Scheduler 8.2 network so that all the workstations that open SSL sessions authenticate in the same way, or configure different authentication levels for each workstation. The authentication levels selected will affect the way the digital certificates are created and installed.

The following authentication methods are available:

caonly Two workstations trust each other if each receives from the other a certificate that is signed by a trusted Certificate Authority.

Tip: When planning for SSL implementation, take into account that it is not a good idea to use SSL on all nodes of a Tivoli Workload Scheduler network, because of the potential performance penalty, especially during the link phase of the FTAs. It is better to use SSL only on the workstations in your DMZ that serve as an entry point from a nonsecure zone to the secure zone.


string Two workstations trust each other if, after receiving a certificate with the signature of a trusted CA, each performs a further check by extracting the Common Name from the certificate and comparing it with a string defined by the Tivoli Workload Scheduler 8.2 administrator in the localopts file attribute SSL auth string.

cpu Two workstations trust each other if, after receiving a certificate with the signature of a trusted CA, each performs a further check by extracting the Common Name from the certificate and comparing it with the name of the workstation.

By selecting the appropriate authentication methods from the list above, the Tivoli Workload Scheduler 8.2 administrator can choose to implement one or a mix of the following:

- Use the same certificate for the entire network
- Use a certificate for each domain
- Use a certificate for each CPU

The Tivoli Workload Scheduler 8.2 administrator has the choice of requesting certificates from one of the many commercial Certificate Authorities (such as Baltimore, VeriSign, and so on) or creating his own Certificate Authority to create and sign the necessary certificates.

In the example given in 5.2.3, "Creating your own Certificate Authority" on page 181, we have chosen to create our own Certificate Authority, enable SSL between the Master Domain Manager (MASTER) and one of the Domain Managers (TWS6), use a different certificate for each of these workstations, and have the certificate's Common Name match the workstation name. If your certificates are going to be signed by a commercial CA, you can skip the next section and go straight to 5.2.4, "Creating private keys and certificates" on page 188.

Important: Note that this provides the lowest level of security: for example, even a certificate issued by a trusted Certificate Authority for a banking application will be considered valid by IBM Tivoli Workload Scheduler, although it was issued for the banking application and has nothing to do with scheduling.


5.2.3 Creating your own Certificate Authority

Although the CA functionality of the OpenSSL command-line tool was included only as an example, it provides all of the functionality necessary to set up a minimal CA that is more than adequate for Tivoli Workload Scheduler 8.2's requirements.

In this section we will go through the steps necessary to create your own CA:

- Creating an environment for your CA
- Creating an OpenSSL configuration file
- Creating a self-signed root certificate

Creating an environment for your CA
First, select a machine to be your CA. In this example, we have assumed that you have selected a machine with Tivoli Workload Scheduler 8.2 installed. However, any machine with a current version of OpenSSL would work as your CA, although some of the steps below may need to be modified slightly due to differing locations and compiled-in defaults. We chose to make the Linux Enterprise Server 2.1 Domain Manager TWS6 our Certificate Authority, and put the CA-related files in a subdirectory of the TWShome directory:

TWShome/ca

These are the steps we followed:

1. Log in as the TWSuser.

2. From the TWShome directory, create the directory named "ca" plus two other directories: one named "private" to hold a copy of the CA certificate's private key, and the other named "certs" to hold copies of the certificates issued:

$ mkdir ca
$ cd ca
$ mkdir certs private

Note: A template for the configuration file we used, plus a script create_ca.sh that performs the steps required to create your own CA, as detailed in the next section, are available for download via FTP from:

ftp://www.redbooks.ibm.com/redbooks/SG246628

For instructions on downloading these files, refer to Appendix B, “Additional material” on page 387.

Note: Work with your security administrator when configuring SSL for IBM Tivoli Workload Scheduler.


3. Most of the CA files can safely be made available to anyone on the system. The one exception, however, is the CA certificate’s private key, which should be accessible only to those authorized to issue certificates for this CA. To restrict access, we simply restricted the permissions on the private directory:

$ chmod g-rwx,o-rwx private

However, you may want to consider burning the CA’s private key onto a CD-ROM, and only mounting the CD-ROM on the private directory when the key is required, keeping it locked away in a fireproof safe or other secure location at all other times.

4. It is important that no two certificates issued by a CA have the same serial number. Therefore we created a file named serial and initialized it with the serial number 01:

$ echo "01" >serial

5. Next, we created an empty file index.txt, which will be used to hold a simple database of certificates issued:

$ touch index.txt

Creating an OpenSSL configuration file
You need to create a configuration file that will be used by the OpenSSL command-line tool to obtain information about how to issue certificates. You can use the OpenSSL defaults, but using a configuration file saves some work in issuing commands to OpenSSL.

Example 5-4 on page 183 shows the configuration file for our CA. The first set of keys informs OpenSSL about the placement of the files and directories that it needs to use. The keys, default_crl_days, default_days, and default_md, correspond to the command-line crldays, days and md options, and are explained below:

default_crl_days Specifies the number of days between Certificate Revocation Lists (CRLs). Since Tivoli Workload Scheduler 8.2 ignores CRLs, the value specified here is of little importance unless you use this CA to sign certificates for products besides Tivoli Workload Scheduler 8.2.

Note: OpenSSL requires the serial number to be a hexadecimal number consisting of at least two digits.

Note: If you use the command-line options, they will override the corresponding options specified in the configuration file.


default_days Specifies the number of days for which an issued certificate will be valid.

default_md Specifies the message digest algorithm to use when signing issued certificates and CRLs.

The policy key gives the name of the default policy that will be used. A policy definition reflects the fields in a certificate's Distinguished Name. The x509_extensions key names a section that lists the extensions to be added to each certificate issued by this CA. In our example, we only included the basicConstraints extension and set it to false, which effectively eliminates the use of certificate chains.

Example 5-4 shows the configuration file.

Example 5-4 A simple CA configuration definition

[ ca ]
default_ca = twsca

[ twsca ]
dir = /opt/tws/ca
certificate = $dir/cacert.pem
database = $dir/index.txt
new_certs_dir = $dir/certs
private_key = $dir/private/cakey.pem
serial = $dir/serial

default_crl_days = 7
default_days = 365
default_md = md5

policy = twsca_policy
x509_extensions = certificate_extensions

[ twsca_policy ]
commonName = supplied
stateOrProvinceName = supplied
countryName = supplied
emailAddress = supplied
organizationName = supplied
organizationalUnitName = supplied

[ certificate_extensions ]
basicConstraints = CA:false

Note: Tivoli Workload Scheduler 8.2 requires X.509v3 certificates.


Now that we have created a configuration file, we need to tell OpenSSL where to find it. We can use the environment variable OPENSSL_CONF for this purpose.

To this end, we created a script, ca_env.sh, by running the following command:

$ cat <<EOT >ca_env.sh
> :
> # set environment for OpenSSL Certificate Authority
> #
>
> OPENSSL_CONF=\${HOME}/ca/openssl.cnf
> export OPENSSL_CONF
> EOT

Alternatively, you can simply create the ca_env.sh script using your preferred text editor, for example vi.

The resulting ca_env.sh script file is shown in Example 5-5.

Example 5-5 Script ca_env.sh

:
# set environment for OpenSSL Certificate Authority
#

OPENSSL_CONF=${HOME}/ca/openssl.cnf
export OPENSSL_CONF

We can now source the script as and when required, thereby setting the required environment variable, using the following command:

$ . ca_env.sh

Creating a self-signed root certificate
We need to create a self-signed root certificate before starting to issue certificates with our CA. In order to do that, we first add the following fields to the configuration file that we created in "Creating an OpenSSL configuration file" on page 182. Our additions are shown in Example 5-6.

Example 5-6 Configuration file additions for generating a self-signed root certificate

[ req ]
default_bits = 2048
default_keyfile = /opt/tws/ca/private/cakey.pem
default_md = md5
prompt = no
distinguished_name = root_ca_distinguished_name
x509_extensions = root_ca_extensions

[ root_ca_distinguished_name ]
commonName = ITSO/Austin CA
stateOrProvinceName = Texas
countryName = US
emailAddress = [email protected]
organizationName = ITSO/Austin Certificate Authority

[ root_ca_extensions ]
basicConstraints = CA:true

Note: By default, OpenSSL uses a system-wide configuration file, and the location of this file is dependent on the installation and platform.

The fields used in this example are explained as follows:

default_bits Instructs OpenSSL to generate a private key for the certificate with the specified length (the default is 512 bits; in the example we have used 2048, which provides more protection).

default_keyfile Specifies the location to write the generated private key.

default_md Specifies the message digest algorithm to use to sign the key. MD5 was chosen in the example.

prompt Instructs OpenSSL how to get the information for the certificate’s Distinguished Name. A value of No means it should get the name from the distinguished_name field instead of prompting.

distinguished_name Specifies the fields that make up the Distinguished Name, used with prompt=no. In our example this is specified as root_ca_distinguished_name. Note that the value of these fields are specified in the root_ca_distinguished_name section of Example 5-6 on page 184.

x509_extensions Specifies the extensions that are included in the certificate. In our example this is specified as root_ca_extensions. Note that the value of these fields are specified in the root_ca_extensions section.

After finishing this configuration step, you can generate your self-signed root certificate. To do this, from the root directory of the CA, which is /opt/tws/ca in our scenario, first source the ca_env.sh script to set the required environment variable:

$ . ca_env.sh


Then execute the following command:

$ openssl req -x509 -newkey rsa -days 365 -out cacert.pem -outform PEM

OpenSSL will prompt you to enter a pass phrase (twice) to encrypt your private key.

Example 5-7 shows the results of the above command.

Example 5-7 Output from generating a self-signed root certificate

$ openssl req -x509 -newkey rsa -days 365 -out cacert.pem -outform PEM
Generating a 2048 bit RSA private key
...........................................................................+++
......................+++
writing new private key to '/opt/tws/ca/private/cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
$

To view the resulting certificate, run the following command:

$ openssl x509 -in cacert.pem -text -noout

Tip: On some systems it may be necessary to seed the random number generator prior to successfully running this command. If this is the case, you will encounter an error similar to the following:

unable to load 'random state'
This means that the random number generator has not been seeded
with much random data.
Generating a 1024 bit RSA private key
25752:error:24064064:random number generator:SSLEAY_RAND_BYTES:PRNG not seeded:md_rand.c:503:
You need to read the OpenSSL FAQ, http://www.openssl.org/support/faq.html
25752:error:04069003:rsa routines:RSA_generate_key:BN lib:rsa_gen.c:182:

Should this occur, run the following command:

$ openssl rand -out TWS.rnd -rand ../bin/openssl 8192

Then rerun the previous command:

$ openssl req -x509 -newkey rsa -days 365 -out cacert.pem -outform PEM

Important: This pass phrase is important and should not be compromised or forgotten.


The output from this command can be seen in Example 5-8. Note, however, that your certificate will not be identical, since your public and private key will be different from ours.

Example 5-8 View CA certificate

$ openssl x509 -in cacert.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: md5WithRSAEncryption
        Issuer: CN=ITSO/Austin CA, ST=Texas, C=US/[email protected], O=ITSO/Austin Certificate Authority
        Validity
            Not Before: Oct 9 10:30:53 2003 GMT
            Not After : Oct 8 10:30:53 2004 GMT
        Subject: CN=ITSO/Austin CA, ST=Texas, C=US/[email protected], O=ITSO/Austin Certificate Authority
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (2048 bit)
                Modulus (2048 bit):
                    00:b7:72:20:90:73:be:98:bb:1e:f9:32:34:f2:8c:
                    a5:38:f9:52:6f:36:b9:43:07:94:96:c3:a6:03:d3:
                    c6:8c:c9:39:ee:9e:70:fe:8b:3e:fe:cf:81:ca:c6:
                    86:f2:6a:a1:bc:87:d4:a5:44:8e:5e:27:75:aa:bb:
                    72:e0:ea:21:47:17:da:c7:86:7d:96:dd:fa:95:5b:
                    62:4e:35:48:24:39:8d:67:66:d3:a6:6f:fd:27:ff:
                    3a:e3:c6:05:44:25:07:3b:98:b3:33:a4:8c:5c:36:
                    72:77:6f:ce:b5:c4:52:e3:25:8c:67:03:df:15:a5:
                    7a:cd:9f:00:87:4a:88:91:c4:6c:d7:a4:7a:c6:d4:
                    5f:1f:6d:d3:6d:a5:0c:8d:55:77:d2:b8:d5:0e:ac:
                    29:82:ea:d1:e4:1d:93:52:3a:23:35:c0:e2:d3:7d:
                    14:3d:63:8c:ea:e7:8d:70:9b:69:d9:4c:c3:5e:68:
                    65:f9:94:9b:92:b4:d8:dd:cd:29:30:b2:e3:a9:6f:
                    08:5a:76:db:75:e4:4a:58:f9:a7:9d:04:e0:7a:c0:
                    bd:0f:7c:14:5d:a5:2e:28:b3:9c:68:a3:c5:10:97:
                    ea:b2:3b:65:d9:c8:75:dd:93:69:f2:94:e0:68:bb:
                    23:1c:74:7a:e9:2d:ec:35:19:4e:54:ac:c4:df:cb:
                    e1:2b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:TRUE
    Signature Algorithm: md5WithRSAEncryption
        03:d7:cd:5b:63:cc:23:28:f2:ed:46:0e:70:78:2c:d5:c2:f1:
        c0:61:ef:99:82:b2:80:51:c0:d6:0d:ab:14:75:19:87:a2:06:
        46:fa:b7:6b:27:7f:20:64:6e:48:c3:d0:c3:f8:a0:a5:56:ce:
        3f:40:34:16:aa:ac:16:ec:24:cd:ba:16:de:8e:d8:b2:18:65:
        06:f8:e5:11:e3:30:68:e1:d7:b8:da:ae:e6:10:16:c7:75:10:
        9d:47:04:76:6e:ae:7e:c9:bf:ba:ee:04:da:9f:95:89:0f:81:
        85:1d:64:b6:7b:74:2f:19:1a:16:56:7e:fd:c2:6e:bd:6b:06:
        25:7b:af:df:73:50:97:11:79:7d:87:8d:37:1c:a5:44:34:6c:
        e1:05:0b:03:42:d3:57:79:ef:5e:50:74:c2:65:ee:d3:e5:26:
        27:93:42:22:6c:f1:15:92:a5:3f:70:c3:80:a1:03:7e:71:e2:
        45:07:c4:02:51:cc:5f:73:43:3d:7a:20:bf:a1:48:1c:ad:6c:
        94:e0:26:75:e7:10:9a:4c:64:a3:ee:d1:f5:62:23:33:ce:77:
        a6:0a:e5:5a:4a:3c:4c:d5:e1:1e:9c:a9:cb:e2:da:cf:ae:7b:
        94:13:a9:22:71:5a:30:57:be:51:4b:fb:a2:f5:42:5a:25:03:
        15:79:dc:55
$

5.2.4 Creating private keys and certificates

The following steps explain how to create one key and one certificate. You can decide whether to use one key and certificate pair for the entire network, one for each domain, or one for each workstation. The steps below assume that you will be creating a key and certificate pair specifically for each workstation participating in an SSL connection, which in our example are the three workstations MASTER, BACKUP and TWS6. The workstation BACKUP requires a key and certificate pair so that, in the event of a switchmgr occurring, BACKUP is able to link successfully to TWS6.

Repeat the following steps on each workstation in turn, substituting the appropriate workstation name as required:

1. Log on as the TWSuser.

2. Create a directory under TWShome to contain the SSL configuration files. The default directory specified in the localopts file is TWShome/ssl, which is as good as any:

$ mkdir ssl

3. Change directory to the SSL directory:

$ cd ssl

4. Create a script ssl_env.sh as seen in Example 5-9, which we will use to configure the environment variable OPENSSL_CONF to specify the location of the required OpenSSL configuration file:

Example 5-9 ssl_env.sh script

$ cat <<EOT >ssl_env.sh
> :
> # set environment for OpenSSL
> #
>
> OPENSSL_CONF=\${HOME}/bin/openssl.cnf
> export OPENSSL_CONF
> EOT
$

Alternatively, you can simply create the ssl_env.sh script using your preferred text editor, for example vi.

The resulting ssl_env.sh script file is shown in Example 5-10.

Example 5-10 Script ssl_env.sh

:
# set environment for OpenSSL
#

OPENSSL_CONF=${HOME}/bin/openssl.cnf
export OPENSSL_CONF

5. Now source the ssl_env.sh script to set the OPENSSL_CONF environment variable in the current environment as follows:

$ . ssl_env.sh

6. The command to generate a certificate is similar to the command we used to create our self-signed root certificate. We use the command-line tool’s req command, but we’ll need to specify some extra parameters as follows:

$ openssl req -newkey rsa:1024 -keyout CPUkey.pem -keyform PEM -out CPUreq.pem -outform PEM


The operation is much more interactive, prompting for the information that goes into the certificate request's Distinguished Name. Example 5-11 shows the output from generating a certificate request.

Example 5-11 Generating a certificate request

$ openssl req -newkey rsa:1024 -keyout CPUkey.pem -keyform PEM -out CPUreq.pem -outform PEM
Generating a 1024 bit RSA private key
............++++++
..................................++++++
writing new private key to 'CPUkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Texas
Locality Name (eg, city) []:Austin
Organization Name (eg, company) [Internet Widgits Pty Ltd]:IBM

Tip: On some systems, it may be necessary to seed the random number generator prior to successfully running this command. If this is the case, you will encounter an error similar to the following:

unable to load 'random state'
This means that the random number generator has not been seeded
with much random data.
Generating a 1024 bit RSA private key
25752:error:24064064:random number generator:SSLEAY_RAND_BYTES:PRNG not seeded:md_rand.c:503:
You need to read the OpenSSL FAQ, http://www.openssl.org/support/faq.html
25752:error:04069003:rsa routines:RSA_generate_key:BN lib:rsa_gen.c:182:

Should this occur, then run the following command:

$ openssl rand -out TWS.rnd -rand ../bin/openssl 8192

Then rerun the previous command:

$ openssl req -newkey rsa:1024 -keyout CPUkey.pem -keyform PEM -out CPUreq.pem -outform PEM


Organizational Unit Name (eg, section) []:ITSO/Austin
Common Name (eg, YOUR name) []:TWS6
Email Address []:[email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:once upon a time
An optional company name []:
$

The result of this command is the creation of two files: CPUreq.pem and CPUkey.pem. The former, CPUreq.pem, contains the certificate request as shown in Example 5-12, and CPUkey.pem contains the private key that matches the public key embedded in the certificate request. As part of the process to generate a certificate request, a new key pair was also generated. The first passphrase that is prompted for is the passphrase used to encrypt the private key. The challenge phrase is stored in the certificate request, and is otherwise ignored by OpenSSL. Some CAs may make use of it, however.

Example 5-12 The resulting certificate request

$ openssl req -in CPUreq.pem -text -noout
Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=US, ST=Texas, L=Austin, O=IBM, OU=ITSO/Austin, CN=TWS6/[email protected]
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    00:e1:09:89:65:94:06:b6:dc:5c:aa:e7:53:60:1d:
                    6a:84:a6:67:cf:8a:c2:31:3a:b4:c3:e4:92:74:26:
                    7c:ad:44:ca:1a:b7:5b:4f:41:68:6b:a3:d4:0c:06:
                    d9:83:33:55:3a:80:4a:50:9f:e3:41:f8:0a:35:7a:
                    d1:f5:28:c3:18:73:55:75:ab:72:ef:25:db:16:37:
                    c7:cd:2b:ca:32:a9:f9:bc:cc:ba:df:de:ca:7c:71:
                    cd:0b:a3:72:66:0e:99:ee:29:34:4d:b1:cd:04:2e:
                    00:72:ab:f8:c9:54:8b:a5:4e:5b:1d:30:66:cf:6d:
                    cc:48:7f:48:c2:d6:71:1f:b3
                Exponent: 65537 (0x10001)
        Attributes:
            challengePassword :once upon a time
    Signature Algorithm: md5WithRSAEncryption
        ad:2c:e1:6d:94:78:0a:43:99:16:f8:f7:5f:38:f5:00:bd:fa:
        5d:25:5e:df:8c:48:97:2f:10:33:56:cd:4b:d3:75:22:57:5b:
        53:9b:b3:c6:48:a9:60:c1:a5:40:a7:1e:40:71:29:87:87:a1:
        56:8b:e8:64:a6:91:58:20:8b:0c:09:2f:77:18:9a:8e:f4:94:
        58:6d:ef:1d:d4:7f:cb:11:89:57:d5:90:8b:90:d8:e8:c8:2b:
        75:8e:f0:06:b0:4a:03:7f:66:ab:25:96:30:05:5b:a4:87:43:
        0f:98:a5:48:ae:ac:ea:b8:2b:ca:c8:e9:3b:26:ae:94:2d:50:
        b0:3e
$

Note: Remember the passphrase used. You will require it again later.

7. With a certificate request now in hand, we can use our CA to issue a certificate. If you are not using your own CA but a commercial CA, then you should send the certificate request, the file CPUreq.pem, to your CA by their preferred means and then proceed with the instructions in “Configuring Tivoli Workload Scheduler 8.2” on page 195 once you have received your signed certificate.

8. For the sake of convenience in this example, the certificate request that we will be using, CPUreq.pem, should be copied into the CA’s root directory. Make sure that the OPENSSL_CONF variable is set to the CA’s configuration file:

$ . ca_env.sh

9. Issue the following command to generate a certificate:

$ openssl ca -in CPUreq.pem

OpenSSL will first confirm the configuration file in use, then request a passphrase. In this case it is the passphrase for the CA private key that is required.

Once the passphrase has been supplied, the Distinguished Name details are displayed and you will be prompted to confirm that the certificate should be signed. You should check that the displayed information is correct before you approve certification of the certificate request.

Once the certificate request has been certified, you will be prompted to confirm if the certificate should be written to the CA database or not. We would recommend that all certified certificates be recorded in the CA database.

Finally, the certificate signed with the CA’s private key is displayed, as shown in Example 5-13 on page 193.

Note: When checking the information contained within the certificate request, be sure that the commonName exactly matches the uppercase workstation name if using SSL auth mode cpu, or the specified string if using SSL auth mode string.
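One way to make this check, both on the request before you send it off and on the certificate you receive back, is with the openssl subject options; 03.pem is the issued certificate file name used later in this example:

$ openssl req -in CPUreq.pem -noout -subject     # Common Name in the certificate request
$ openssl x509 -in 03.pem -noout -subject        # Common Name in the issued certificate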


Example 5-13 Issuing a certificate from a certificate request

$ openssl ca -in CPUreq.pem
Using configuration from /opt/tws/ca/openssl.cnf
Enter pass phrase for /opt/tws/ca/private/cakey.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'US'
stateOrProvinceName   :PRINTABLE:'Texas'
localityName          :PRINTABLE:'Austin'
organizationName      :PRINTABLE:'IBM'
organizationalUnitName:PRINTABLE:'ITSO/Austin'
commonName            :PRINTABLE:'TWS6'
emailAddress          :IA5STRING:'[email protected]'
Certificate is to be certified until Jul 29 15:29:19 2004 GMT (365 days)
Sign the certificate? [y/n]:y

1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 3 (0x3)
        Signature Algorithm: md5WithRSAEncryption
        Issuer: CN=ITSO CA, ST=Texas, C=US/[email protected], O=Root Certificate Authority
        Validity
            Not Before: Jul 30 15:29:19 2003 GMT
            Not After : Jul 29 15:29:19 2004 GMT
        Subject: CN=TWS6, ST=Texas, C=US/[email protected], O=IBM, OU=ITSO/Austin
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    00:e1:09:89:65:94:06:b6:dc:5c:aa:e7:53:60:1d:
                    6a:84:a6:67:cf:8a:c2:31:3a:b4:c3:e4:92:74:26:
                    7c:ad:44:ca:1a:b7:5b:4f:41:68:6b:a3:d4:0c:06:
                    d9:83:33:55:3a:80:4a:50:9f:e3:41:f8:0a:35:7a:
                    d1:f5:28:c3:18:73:55:75:ab:72:ef:25:db:16:37:
                    c7:cd:2b:ca:32:a9:f9:bc:cc:ba:df:de:ca:7c:71:
                    cd:0b:a3:72:66:0e:99:ee:29:34:4d:b1:cd:04:2e:
                    00:72:ab:f8:c9:54:8b:a5:4e:5b:1d:30:66:cf:6d:
                    cc:48:7f:48:c2:d6:71:1f:b3
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
    Signature Algorithm: md5WithRSAEncryption
        23:ae:8a:8d:56:45:15:b5:04:df:5c:79:b2:d3:3f:e2:36:9a:
        9d:87:98:76:87:5a:4a:8a:6a:33:44:cb:b2:5e:f7:b6:d9:d7:
        fa:1a:99:96:23:5c:74:0e:22:e7:88:d2:51:8b:36:1a:bb:3d:
        9d:24:6c:dc:18:96:72:5b:29:a7:81:65:04:8c:33:54:b8:0f:
        d7:3c:c0:52:81:77:38:19:05:8f:97:bd:8b:bd:01:2d:b0:20:
        7f:fa:d1:be:fa:20:62:48:93:1f:79:f1:6e:5c:35:1f:74:ba:
        01:9e:3b:dd:1c:ab:99:d1:cc:30:ed:fb:f0:af:93:ce:a2:df:
        2b:17:17:07:27:47:b5:d6:3c:d7:80:43:88:7f:34:68:c6:8d:
        1c:42:d9:6d:81:01:12:a6:5b:bb:e0:13:57:29:69:c2:d2:4a:
        58:d4:fd:eb:50:8a:49:59:8a:b8:ce:5b:11:8c:4d:32:a7:e1:
        e9:4a:0c:20:04:ec:5c:c5:f9:0f:7f:e0:14:d4:28:83:72:86:
        58:2f:15:b2:3e:ce:58:6d:49:bd:42:95:80:e6:10:47:d0:ea:
        56:d2:02:9b:21:ad:a2:84:34:a3:a5:15:1a:7f:4b:ad:62:e1:
        89:10:fd:36:3a:dd:2f:db:b5:3b:a9:c5:ab:31:7e:0a:a3:7f:
        66:70:52:e0
-----BEGIN CERTIFICATE-----
MIIDATCCAemgAwIBAgIBAzANBgkqhkiG9w0BAQQFADB6MRAwDgYDVQQDEwdJVFNPIENBMQ4wDAYDVQQIEwVUZXhhczELMAkGA1UEBhMCVVMxJDAiBgkqhkiG9w0BCQEWFXJpY2tfam9uZXNAdWsuaWJtLmNvbTEjMCEGA1UEChMaUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMDMwNzMwMTUyOTE5WhcNMDQwNzI5MTUyOTE5WjCBgjENMAsGA1UEAxMEVFdTNjEOMAwGA1UECBMFVGV4YXMxCzAJBgNVBAYTAlVTMTAwLgYJKoZIhvcNAQkBFiF0d3NAYWl4LWludjFiLml0c2MuYXVzdGluLmlibS5jb20xDDAKBgNVBAoTA0lCTTEUMBIGA1UECxMLSVRTTy9BdXN0aW4wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAOEJiWWUBrbcXKrnU2AdaoSmZ8+KwjE6tMPkknQmfK1Eyhq3W09BaGuj1AwG2YMzVTqASlCf40H4CjV60fUowxhzVXWrcu8l2xY3x80ryjKp+bzMut/eynxxzQujcmYOme4pNE2xzQQuAHKr+MlUi6VOWx0wZs9tzEh/SMLWcR+zAgMBAAGjDTALMAkGA1UdEwQCMAAwDQYJKoZIhvcNAQEEBQADggEBACOuio1WRRW1BN9cebLTP+I2mp2HmHaHWkqKajNEy7Je97bZ1/oamZYjXHQOIueI0lGLNhq7PZ0kbNwYlnJbKaeBZQSMM1S4D9c8wFKBdzgZBY+XvYu9AS2wIH/60b76IGJIkx958W5cNR90ugGeO90cq5nRzDDt+/Cvk86i3ysXFwcnR7XWPNeAQ4h/NGjGjRxC2W2BARKmW7vgE1cpacLSSljU/etQiklZirjOWxGMTTKn4elKDCAE7FzF+Q9/4BTUKINyhlgvFbI+zlhtSb1ClYDmEEfQ6lbSApshraKENKOlFRp/S61i4YkQ/TY63S/btTupxasxfgqjf2ZwUuA=
-----END CERTIFICATE-----
Data Base Updated
$

The generated certificate is written in PEM format to a file in the certs directory. The filename consists of the certificate number followed by the extension .pem.

10. You should now make a copy of the highest numbered certificate in the certs directory and return it to the certificate requestor.
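For example, if the newly issued certificate is certs/03.pem and the requesting workstation is TWS6, the copy could be made along the following lines; the user name, host name and destination directory are illustrative only:

$ ls -t certs | head -1                                 # newest certificate, 03.pem in this example
$ scp certs/03.pem cacert.pem tws@tws6:/opt/tws/ssl/    # send the signed certificate and the CA certificate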

Figure 5-5 on page 195 summarizes the steps that we have performed to create a self-signed digital certificate.


Figure 5-5 Summary of the steps for creating a self-signed digital certificate

Configuring Tivoli Workload Scheduler 8.2
Now that you have your signed certificate, Tivoli Workload Scheduler 8.2 needs to be configured to use it. There are a number of steps involved, including a change to the workstation definition that will come into effect following the next Jnextday. Therefore, a little planning is required at this stage.

Use the following steps to configure Tivoli Workload Scheduler 8.2:

1. Tivoli Workload Scheduler 8.2 needs to know the passphrase associated with the certificate. This is the passphrase that was entered when generating the certificate request to create the private key in step 6 on page 189. Write the passphrase used into a file called CPUpwd.txt without any appended control characters such as a line feed.

Note: You will also require a copy of the CA’s certificate. In our example, this is the file cacert.pem in the TWShome/ca, and a copy of this file should be distributed to the Tivoli Workload Scheduler 8.2 workstations along with their signed certificate files, for example 03.pem.

Note: If you are introducing an SSL connection between two workstations, you must configure both sides of the connection before the workstations will be able to link successfully.


The behavior of echo varies from one platform to another, and often from one shell to another, but one of the following variants should give the required output:

$ echo -n "pass phrase" >CPUpwd.txt

or:

$ echo "pass phrase\c" >CPUpwd.txt

or:

$ echo -e "pass phrase\c" >CPUpwd.txt

2. There are a number of ways to confirm that the file contains only the passphrase itself. One is to simply run ls -l and confirm that the file size exactly matches the number of characters in the passphrase. Possibly a better way is to use the od utility, which will display the file contents including any control characters. For example, if you used the passphrase "pass phrase", then the file in Example 5-14 is correct, but Example 5-15 is incorrect because the file contains a line feed character, denoted by the \n at the end of the line.

Example 5-14 Good passphrase correctly written to file

$ od -c CPUpwd.txt
0000000   p   a   s   s       p   h   r   a   s   e
0000013
$

Example 5-15 Bad passphrase, file also contains a line-feed character

$ od -c CPUpwd.txt
0000000   p   a   s   s       p   h   r   a   s   e  \n
0000014
$

3. To encode the passphrase using base64, run the following command:

$ openssl base64 -in CPUpwd.txt -out CPUpwd.sth

4. The file containing the plain text passphrase can now be deleted, since this file is no longer required:

$ rm CPUpwd.txt

Note: It is very important that this file contains only the passphrase itself, and no additional characters including a carriage return or line feed character. Once this file is encoded, if the passphrase does not exactly match, Tivoli Workload Scheduler 8.2 will not be able to access the private key and the workstation will fail to link.
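As a portable alternative to the echo variants in step 1, printf writes the passphrase without a trailing newline on essentially every platform; the passphrase shown is only a placeholder:

$ printf '%s' "pass phrase" >CPUpwd.txt
$ od -c CPUpwd.txt          # should show the passphrase with no trailing \n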


5. By this stage, you should have a number of files in your TWShome/ssl directory, which need to be defined in the TWShome/localopts file. These files are summarized in Table 5-1.

Table 5-1 Files specified in localopts

localopts attribute      filename
SSL key                  CPUkey.pem
SSL certificate          serial_number.pem
SSL key pwd              CPUpwd.sth
SSL CA certificate       cacert.pem

5.2.5 Setting SSL local options

The next step is to configure the SSL attributes in the TWShome/localopts file.

A number of new attributes have been added to the TWShome/localopts file, although not all are required in all cases. A complete list of these attributes can be seen in Example 5-16.

Example 5-16 Default localopts SSL attributes

# SSL Attributes
#
nm SSL port =0
SSL key =/opt/tws/ssl/TWS.key
SSL certificate =/opt/tws/ssl/TWS.crt
SSL key pwd =/opt/tws/ssl/TWS.sth
SSL CA certificate =/opt/tws/ssl/TWSTrustedCA.crt
SSL certificate chain =/opt/tws/ssl/TWSCertificateChain.crt
SSL random seed =/opt/tws/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =caonly
SSL auth string =tws


Note: If the random number generator needed seeding, you will also need to include the SSL random seed localopts attribute and specify the seed file location there.

Note: This needs to be repeated on each of the workstations involved in the SSL communications.


As shown in Table 5-1 on page 197, there are a number of file locations that need to be specified here, plus some other attributes that define the authentication mode, the port to use, and so on. A full description of all these attributes and their purpose can be found in Chapter 9 of the manual Tivoli Workload Scheduler Version 8.2, SC32-1273. With the exception of the TWShome path names (due to different installation locations on the machines in question) and the certificate file name, the localopts SSL configuration we used on all three workstations (TWS6, MASTER and BACKUP) was identical. The configuration for TWS6 is shown in Example 5-17.

Example 5-17 Customized SSL localopts attributes

# SSL Attributes
#
nm SSL port =31113
SSL key =/opt/tws/ssl/CPUkey.pem
SSL certificate =/opt/tws/ssl/03.pem
SSL key pwd =/opt/tws/ssl/CPUpwd.sth
SSL CA certificate =/opt/tws/ssl/cacert.pem
#SSL certificate chain =/opt/tws/ssl/TWSCertificateChain.crt
#SSL random seed =/opt/tws/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =cpu
#SSL auth string =tws

Note: SSL connections are disabled if the nm SSL port localopts attribute is set to 0. Therefore, you need to select a free port number to use and specify that port when editing the other localopts attributes. We used port 31113.

Note: If you found it necessary to seed the random number generator before you were able to successfully issue openssl commands, you will also need to set the SSL random seed attribute to the location of the TWS.rnd file, as shown in Example 5-18.

Example 5-18 Customized SSL localopts attributes including SSL random seed

# SSL Attributes
#
nm SSL port =31113
SSL key =/opt/tws/ssl/CPUkey.pem
SSL certificate =/opt/tws/ssl/03.pem
SSL key pwd =/opt/tws/ssl/CPUpwd.sth
SSL CA certificate =/opt/tws/ssl/cacert.pem
#SSL certificate chain =/opt/tws/ssl/TWSCertificateChain.crt
SSL random seed =/opt/tws/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =cpu
#SSL auth string =tws

5.2.6 Configuring SSL attributes

Using the composer command line or the Job Scheduling Console, we need to update the workstation definition in the database, to configure the attributes secureaddr and securitylevel appropriately.

- secureaddr defines the port used to listen for incoming SSL connections. This value must match the value specified in the nm SSL port localopts attribute for the workstation in question. It must also be different from the nm port localopts attribute that defines the port used for normal communications.

- securitylevel specifies the type of SSL authentication for the workstation and must have one of the values enabled, on, or force. For a complete description of these attributes, see Chapter 9 of the manual Tivoli Workload Scheduler Version 8.2, SC32-1273.

In our example the Domain Manager TWS6 will require this attribute to be set to On, while the Master Domain Manager MASTER and Backup Domain Manager BACKUP should have this attribute set to Enabled. By using Enabled for MASTER and BACKUP, we are specifying that these workstations should use SSL when communicating with other workstations that are configured to use SSL, for example the Domain Manager TWS6, but also use non-SSL connections when communicating with workstations not configured to use SSL, for example the Domain Manager CHATHAM.

Domain Manager TWS6 has the securitylevel attribute set to On instead of Force, since it only uses SSL to communicate with its Domain Manager (be that MASTER or BACKUP), and not with the FTAs below it. Had we instead used Force, it would have been necessary to create certificates for the FTAs below TWS6, namely TWS1 and TWS5, and to enable SSL on those workstations as well.

Example 5-19 shows the workstation definitions for TWS6 and MASTER. BACKUP is fundamentally identical to MASTER.

Example 5-19 Workstation definitions

cpuname MASTER
description "MASTER CPU"
os UNIX
node aix-inv1b
tcpaddr 31111
secureaddr 31113
domain MASTERDM
TIMEZONE CST
for maestro
 type FTA
 autolink on
 fullstatus on
 resolvedep on
 securitylevel enabled
 behindfirewall off
end

cpuname TWS6
os UNIX
node tws6.itsc.austin.ibm.com
tcpaddr 31111
secureaddr 31113
domain SECUREDM
TIMEZONE CST
for maestro
 type FTA
 autolink on
 fullstatus on
 resolvedep on
 securitylevel on
 behindfirewall on
end

Note: If securitylevel is specified but secureaddr is missing, the port 31113 is used as the default value.

Using the Job Scheduling Console, these attributes correspond to the SSL Communication and Secure Port attributes of the General tab within the workstation definition window, as shown in Figure 5-6 on page 201.


Figure 5-6 Setting SSL attributes using the Job Scheduling Console

5.2.7 Trying out your SSL configuration

Following the customization of your TWShome/localopts files, you will need to stop and restart all TWS processes, including netman, in order for the new local options to be activated. Also, the modified workstation definition will only come into effect following the next occurrence of Jnextday.


Following these two events, if all has gone well, your workstations will link up just as they did prior to the SSL configuration. You should, however, see conman reporting an additional flag (S), as shown in the TWS6 entry in Example 5-20, or, from the Job Scheduling Console, see the new configuration reported in the SSL Communication and Secure Port columns, as shown in Figure 5-7 on page 203.

Example 5-20 conman showcpu SSL flag

$ conman "sc;l"
TWS for UNIX (AIX)/CONMAN 8.2 (1.36.1.7)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user ''.
Locale LANG set to "en_US"
Schedule (Exp) 07/30/03 (#46) on MASTER. Batchman LIVES. Limit: 10, Fence: 0, Audit Level: 0
sc;l
CPUID      HOST       FLAGS   ADDR   NODE
MASTER     MASTER     AF T    31111  aix-inv1b
BACKUP     BACKUP     AFAT    31182  yarmouth.itsc.austin.ibm.com
TWS6       TWS6       SAF T   31111  tws6.itsc.austin.ibm.com
CHATHAM    CHATHAM    AF T    31111  chatham.itsc.austin.ibm.com
$

Note: If your connection is through a firewall, you will need to change the firewall rules to permit the nm SSL port to pass through in addition to the nm port.

When a workstation links, an initial connection is made by the local mailman process to the remote netman, using the nm port port, instructing it to launch writer. The local mailman then establishes an SSL connection to the newly launched remote writer process using the nm SSL port.
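As an illustration, using the same src/dest/port notation as Example 5-22 later in this chapter, a link that crosses a firewall would therefore need rules covering both ports; the addresses below are placeholders for the two workstations involved:

src <local-mailman-host>/32 dest <remote-netman-host>/32 port 31111    # nm port, initial link request
src <local-mailman-host>/32 dest <remote-netman-host>/32 port 31113    # nm SSL port, SSL connection to writer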


Figure 5-7 JSC workstation SSL status

Should the workstations fail to link, check the Tivoli Workload Scheduler 8.2 logs on the workstations on both sides of the link. SSL-related errors are described in Chapter 13 of the Tivoli Workload Scheduler Version 8.2, Error Message and Troubleshooting, SC32-1275.

For example, we encountered the error:

AWSDEB045E Error during creation of SSL context. Reason: Error:06065064:digital envelope routines: EVP_DecryptFinal: bad decrypt.

The Tivoli Workload Scheduler Version 8.2, Error Message and Troubleshooting, SC32-1275 guide describes this error as follows:

AWSDEB045E Error during creation of SSL local context: Reason: 1

Explanation: This error occurs when one of the TWS network processes (mailman, writer netman or scribner) is unable to initialize the local SSL context. The problem could be related to either an invalid SSL setting specified within the localopts file or to an initialization error of the SSL.

Operator Response: To solve this problem you should check all the values of the options specified within the localopts file of the node where the error is logged (see Chapter 9 of the Tivoli Workload Scheduler Version 8.2, SC32-1273 manual for more details). One of the most common errors is related to the password. If the source password file contains some spurious characters, the resulting encrypted file will not be useful and the initialization of the SSL will fail.

This did lead us to the problem in this case, namely a line feed in the file used to encode the password. However, the Error:06065064:digital envelope routines: EVP_DecryptFinal: bad decrypt part of the message is a standard OpenSSL error message, and can occur in many products using OpenSSL to provide SSL facilities. Therefore if Tivoli Workload Scheduler Version 8.2, Error Message and Troubleshooting, SC32-1275 does not immediately answer the question, a search for all or part of this string using Google (http://www.google.com) or other search engine should provide many helpful hits to guide you to the problem.
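One simple way to avoid introducing such a stray character is to create the password source file with printf rather than echo, since printf does not append a newline. The file name below is only an example:

$ printf "MySSLkeyPassword" > pwdfile.txt    # no trailing line feed is written
# echo "MySSLkeyPassword" > pwdfile.txt would append a line feed, which can produce
# exactly the bad decrypt error described above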

5.3 Centralized user security definitions

IBM Tivoli Workload Scheduler 8.2 determines a user's capabilities by comparing the user name or Tivoli Management Framework Administrator name with the object definitions contained within the TWShome/Security file.

Traditionally, an individual Security file is maintained locally on each Tivoli Workload Scheduler 8.2 workstation. Tivoli Workload Scheduler 8.2 supports this traditional local Security file model, but introduces a new centralized Security file model.

With local Security files, each workstation (Fault Tolerant Agent, Standard Agent or Domain Manager) has its own local Security file, which may be identical to other Security files in the network or unique to the workstation. Furthermore, by default the TWSuser on the workstation can modify the local Security file if desired. Although it is possible to configure the Security file to prevent this, it is not possible to prevent the root user (or Administrator on Windows) from replacing the Security file with one containing any rights desired.

With a centralized Security file, a single controlled Security file is created and distributed manually across the TWS network. Although each workstation still has its own local copy of this Security file, if this file is tampered with locally, all rights to the conman interface, plus composer and TWS Connector if installed, are revoked on that workstation.

Note: SSL-related messages are written to the old stdlist/yyyy.mm.dd/TWSuser log, not the new stdlist/logs/yyyymmdd_TWSMERGE.log log, when the localopts attribute merge stdlists = yes, which is the default setting. This is a known defect and will be fixed in a future fix pack.


5.3.1 Configuring centralized security

Perform the following steps to configure centralized security:

1. Modify the TWShome/mozart/globalopts file on the Master Domain Manager setting the centralized security attribute to yes as shown in Example 5-21.

Example 5-21 globalopts configured for centralized security

# TWS globalopts file on the TWS Master defines attributes
# of the TWS network.
...

#----------------------------------------------------------------------------
# Entries introduced in TWS-8.2
centralized security =yes
enable list security check =no

2. Once the globalopts option is activated, the next Jnextday will load security-related information into the Symphony file. This information includes the fact that centralized security is enabled, plus the checksum of the Master Domain Manager's Security file. Once the new day's Symphony file has been distributed, all workstations will have the same security information.

In a network with centralized security enabled, two workstations will not be able to establish a connection if one has centralized security turned off in its Symphony file, or if their Security file information does not match. However, a workstation will always accept incoming connections from its Domain Manager, even if the Security file information sent from the Domain Manager does not match the information in the workstation’s Symphony file. This is enforced to allow the Tivoli Workload Scheduler 8.2 administrator to change the Security file on the Master Domain Manager without having to distribute the new Security file out to the entire network before running Jnextday.

Every time a command is run on the workstation of a network with centralized security, either with conman or from the Job Scheduling Console, the security routines verify that the security information in the Symphony file matches the information in the local Security file. If they do not, no (or reduced) access is granted to Tivoli Workload Scheduler 8.2 objects and commands and a security violation message is logged.

Note: Tivoli Workload Scheduler 8.2 will run quite happily without the correct Security file on each FTA. However, running conman commands locally such as conman shut to stop netman will fail.


If the workstation’s Security file is deleted and recreated, its security information will not match the information in the Security file of the Master Domain Manager (that is recorded in the Symphony file). In addition, a mechanism associated with the creation process of the Symphony file prevents the security information in the Symphony file from being tampered with.
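For reference, the Security file itself is normally maintained with the dumpsec and makesec utilities; a sketch of one possible sequence for updating it on the Master Domain Manager and copying it out to the Backup Master follows (the host name BACKUP, the use of scp and the target path are only examples):

$ dumpsec > Security.txt       # dump the current security definitions to a text file
$ vi Security.txt              # edit the user definitions as required
$ makesec Security.txt         # compile the new TWShome/Security file
$ scp Security BACKUP:<TWShome>/Security    # copy the compiled file to the Backup Master (and any other agents)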

5.3.2 Configuring the JSC to work across a firewall

The Job Scheduling Console, Version 1.3 provides a method for connecting to a Tivoli Workload Scheduler 8.2 engine positioned behind a firewall, using known TCP/IP ports.

For example, suppose the JSC is running on the Windows 2000 machine 3d054-1. In order to connect to the Master Domain Manager, which is positioned behind a firewall, the following steps are necessary:

1. On MASTER, enable single-port Bulk Data Transfer (BDT), using the odadmin single_port_bdt command as follows:

# /etc/Tivoli/setup_env.sh
# odadmin single_port_bdt TRUE all

2. By default the Bulk Data Transfer service uses port 9401. If a different port is required, this can be specified using the odadmin set_bdt_port command as follows:

# odadmin set_bdt_port 31194

3. Modify the firewall configuration to permit the necessary connections for the JSC to access the TWS Connector, as shown in Example 5-22.

Example 5-22 JSC sample firewall rules

src 9.3.5.8/32 dest 9.3.4.47/32 port 94
src 9.3.5.8/32 dest 9.3.4.47/32 port 31194

Note: If the Master Domain Manager's Security file is modified, you must be sure that the modified file is copied to the Backup Master Domain Manager at the very least.

Note: We were only able to get single_port_bdt to function correctly when the TWS Connector was installed on an AIX platform. On the other platforms tested, we were unable to retrieve Job stdlist files if single_port_bdt was set to TRUE.


Chapter 6. Late job handling

In this chapter we will discuss the new late job handling features of IBM Tivoli Workload Scheduler 8.2. These features bring the distributed IBM Tivoli Workload Scheduler product closer to some of the functionality of IBM Tivoli Workload Scheduler for z/OS with regards to time-critical work and greater scheduling flexibility surrounding jobs that have passed their latest start times.

The following sections are included in this chapter:

� "Terminology changes" on page 208
� "Configuration" on page 208
� "Latest Start Time" on page 209
� "Suppress option" on page 210
� "Continue option" on page 212
� "Cancel option" on page 215
� "Termination Deadline" on page 218


6.1 Terminology changes

Since the introduction of the JSC and the merge of the Maestro and OPC scheduling products, the terminology used for the GUI has differed from that of the command line. This particularly relates to the term "Deadline", since this is used in the GUIs of Versions 7.0 and 8.1 for one purpose and in Version 8.2 for a different purpose. Therefore, we will first clarify the changes in command-line and GUI terminology that have occurred over the last four releases of IBM Tivoli Workload Scheduler (maestro). These changes are most apparent in the latest release.

Table 6-1 summarizes these terminology changes.

Table 6-1 Terminology changes

Version   Time before which job     Time after which job           Time when job will be considered
          can start                 must not start                 as LATE if it has not completed
          GUI       CLI             GUI                    CLI     GUI                       CLI
6.1       AT        AT              UNTIL                  UNTIL   -                         -
7.0       Start     AT              Deadline               UNTIL   -                         -
8.1       Start     AT              Deadline               UNTIL   -                         -
8.2       Start     AT              Latest Start Time      UNTIL   Termination Deadline      DEADLINE

6.2 Configuration

In order for the late job handling functionality to work, the localopts file on the agents will need to be configured. The bm check until and bm check deadline options specify the number of seconds between checks for jobs that have either passed their Latest Start Time (UNTIL) or Termination Deadline (DEADLINE) time. The bm check deadline option may not already be specified in the file, so it may need to be added. The default value for bm check until is 300 seconds. The bm check deadline option should be set to the same value. These default settings enable the batchman process to run efficiently and not severely impact system processing. These values should not be changed. Example 6-1 on page 209 shows these entries in the localopts file.



Example 6-1 Part of the localopts file

# TWS localopts file defines attributes of this Workstation.
~~~~~~~~~~~~
#--------------------------------------------------------------------
# Attributes of this Workstation for TWS batchman process:
bm check file =120
bm check status =300
bm look =15
bm read =10
bm stats =off
bm verbose =off
bm check until =300
bm check deadline=300

6.3 Latest Start Time

The functionality of the Latest Start Time (UNTIL) has been extended to provide greater scheduling flexibility. When specifying a Latest Start Time, the user must determine an action to be followed if this time passes and the job has not started. These options are Suppress, Continue and Cancel. This introduces some new terminology (Table 6-2) into the Job Dependencies section of the job stream definition.

Table 6-2 New terminology

Within the JSC GUI, as seen in Figure 6-1 on page 210, the Suppress option is highlighted as the default action. The ONUNTIL option should be specified after the UNTIL option, when building job priorities within a job stream using the CLI. If no action is defined, then ONUNTIL SUPPR is assumed, since it is the default action.

GUI CLI

Suppress ONUNTIL SUPPR

Continue ONUNTIL CONT

Cancel ONUNTIL CANC


Figure 6-1 Latest Start Time in JSC GUI

6.4 Suppress option

Suppress is the default option. This is automatically selected within the GUI and will be followed if no option is specified in the command line, as seen in Example 6-2.

Example 6-2 Example job schedule definition

SCHEDULE MASTER#TESTJS1_SUPP
ON REQUEST
MASTER#ACCJOB01
MASTER#ACCJOB02 FOLLOWS ACCJOB01 UNTIL 1650 ONUNTIL SUPPR
MASTER#ACCJOB03 FOLLOWS ACCJOB02 DEADLINE 1655
MASTER#ACCJOB04 FOLLOWS ACCJOB03
END

The action followed with this option is the same as the current behavior. If a job has not started by the time specified as the Latest Start Time, then the launch of the job will be suppressed. All subsequent jobs that depend on the job in question will not be launched either.

Each of the jobs in the example has a run time of five minutes. As the first job ACCJOB01 has not completed by 16:50, ACCJOB02, which has a dependency on it, cannot start before its Latest Start Time. As shown in Figure 6-2, the JSC GUI indicates that the time has passed by placing a message in the Information column.

Figure 6-2 JSC GUI

The command line (Example 6-3) shows that the job has been suppressed.

Example 6-3 Command line showing that the job has been suppressed

MASTER #TESTJS1_SUPP ******** READY  10 10/10 (00:04)
    ACCJOB01               SUCC  10 10/10 00:06     #J1932
    ACCJOB02               HOLD  10 (10/10)(00:04)  [Until]; [Suppressed]
    ACCJOB03               HOLD  10 (10/10)(00:05)  [Late] ; ACCJOB02
    ACCJOB04               HOLD  10 (10/10)(00:04)  ACCJOB03


Event number 105 is also written to the bmevents file, which can then be passed to the IBM Tivoli Enterprise Console.

6.5 Continue option

The Continue option is used to notify users that a job has not started by a given time, rather than that it has not finished by a given time (the latter is done by using Termination Deadline).

To use this option, select the Continue radio button when specifying a Latest Start Time for a job using the JSC GUI as seen in Figure 6-3 on page 213.


Figure 6-3 Continue option in JSC GUI

Also, you can use the ONUNTIL CONT option when using an UNTIL time on the command line (Example 6-4 on page 213).

Example 6-4 Continue option in the command line

SCHEDULE MASTER#TESTJS1_CONT
ON REQUEST
MASTER#ACCJOB01
MASTER#ACCJOB02 FOLLOWS ACCJOB01 UNTIL 1650 ONUNTIL CONT
MASTER#ACCJOB03 FOLLOWS ACCJOB02 DEADLINE 1655
MASTER#ACCJOB04 FOLLOWS ACCJOB03
END

The action followed with this option is that the job will run when all of its dependencies are met, as though no time limitation had been associated with it. It should be used where the user wishes the jobs to run regardless of the time, but wants to be informed if a job has not started by a predetermined time. Nothing is displayed in the JSC GUI (Figure 6-4), although the information is shown on the command line, and an event, 121, is sent to the bmevents file to be passed on to the IBM Tivoli Enterprise Console. In order to use this option effectively, users need to be monitoring either from the command line or using the Plus Module and IBM Tivoli Enterprise Console.
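For example, the command-line view shown in Example 6-5 can be obtained with a conman showjobs query along the following lines (the job stream name is the one used in this scenario; check the exact selection syntax against the reference manual):

$ conman "sj MASTER#TESTJS1_CONT.@"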

Figure 6-4 JSC GUI

Each of the jobs in the example has a run time of five minutes. Despite a Latest Start Time being set on ACCJOB02, it starts when ACCJOB01, which it depends on, has completed.

The command line shows that the time has passed and that the Continue option has been set.

Example 6-5 Command line shows that Continue option has been set

MASTER #TESTJS1_CONT ******** SUCC   10 10/10 00:21
    ACCJOB01               SUCC  10 10/10 00:06     #J1931
    ACCJOB02               SUCC  10 10/10 00:06     #J1934; [Until]; [Continue]
    ACCJOB03               SUCC  10 10/10 00:06     #J1936; [Late]
    ACCJOB04               SUCC  10 10/10 00:06     #J1937


Event number 121 is also written to the bmevents file, which can then be passed to the IBM Tivoli Enterprise Console.

6.6 Cancel option

The cancel option has the greatest effect on the order in which jobs run, and its use should be carefully considered.

To use this option, select the Cancel radio button when specifying a Latest Start Time for a job using the JSC GUI, as seen in Figure 6-5 on page 216.


Figure 6-5 Cancel option in JSC GUI

You can also use the ONUNTIL CANC option when using an UNTIL time on the command line (Example 6-6).

Example 6-6 Cancel option in the command line

SCHEDULE MASTER#TESTJS1_CANC
ON REQUEST
MASTER#ACCJOB01
MASTER#ACCJOB02 FOLLOWS ACCJOB01 UNTIL 1650 ONUNTIL CANC
MASTER#ACCJOB03 FOLLOWS ACCJOB02 DEADLINE 1655
MASTER#ACCJOB04 FOLLOWS ACCJOB03
END

The action followed with this option is that the job that has a Latest Start Time will be cancelled automatically if it has not started by the time specified, effectively removing it from the plan, in the same way as if the job had been cancelled manually. Any jobs or streams that are dependent on the job being cancelled will then run once all their other dependencies have been met, as though they had never waited for the job being cancelled in the first place.

Figure 6-6 shows this in the JSC GUI.

Figure 6-6 JSC GUI

Each of the jobs in the example has a run time of five minutes. As the first job ACCJOB01 has not completed by 16:50, ACCJOB02, which has a dependency on it, cannot start before its Latest Start Time. Job ACCJOB02 is therefore cancelled. The GUI indicates that the job has been cancelled by placing a message in the Information column.

The command line shows that the time has passed, the job has been cancelled, and that the dependent job ACCJOB03 starts because it has no other dependencies.


Example 6-7 Command line shows that job has been cancelled

MASTER #TESTJS1_CANC ******** SUCC   10 10/10 00:14
    ACCJOB01               SUCC  10 10/10 00:06     #J1930
    ACCJOB02               CANCL 10       (00:04)   [Cancelled]
    ACCJOB03               SUCC  10 10/10 00:06     #J1933; [Late]
    ACCJOB04               SUCC  10 10/10 00:06     #J1935

Event number 122 is also written to the bmevents file, which can then be passed to the IBM Tivoli Enterprise Console.

6.7 Termination Deadline

This new option brings the functionality of the IBM Tivoli Workload Scheduler Distributed product closer to that of IBM Tivoli Workload Scheduler for z/OS.

With this option, users can determine a time when a job would be considered late if it has not completed by that time. This allows users to be notified in advance of any potential overruns that may affect subsequent processes and/or the completion of a batch cycle.

To use this option, select the Specify time option and enter the time in the Time Restrictions section of the Job Priorities, when adding it to a job stream using the GUI (Figure 6-7 on page 219). The normal elapsed time, based on previous runs of the job, will be displayed at the base of the window, which may help in determining the time to enter. Of course, if this job has not run in the past, the timings must be derived from testing performed outside of IBM Tivoli Workload Scheduler.


Figure 6-7 Termination Deadline option in JSC GUI

On the command line, this time is added by using the DEADLINE option.

Example 6-8 Deadline option in the command line

SCHEDULE MASTER#TESTJS1_CONT
ON REQUEST
MASTER#ACCJOB01
MASTER#ACCJOB02 FOLLOWS ACCJOB01 UNTIL 1650 ONUNTIL CONT
MASTER#ACCJOB03 FOLLOWS ACCJOB02 DEADLINE 1655
MASTER#ACCJOB04 FOLLOWS ACCJOB03
END

The action followed with this option is that the job will be considered late if it has not completed by the time specified. The job will continue to run to completion. This is shown in Figure 6-8.

Figure 6-8 JSC GUI

Each of the jobs in the example has a run time of five minutes. As the job ACCJOB03 has not completed by its Termination Deadline time, 16:55, it is considered as being late. The GUI indicates the fact that the job is late by placing a message in the Information column. The message displayed is "Deadline has passed, Userjob.". This message is not as informative as it could be, as it does not actually state that the job is running late. However, if it is known that this message indicates that the job is late, then it is usable.

The command line shows more clearly that the job is running late.

Example 6-9 Command line shows that job is running late

MASTER #TESTJS1_CONT ******** SUCC   10 10/10 00:21
    ACCJOB01               SUCC  10 10/10 00:06     #J1931
    ACCJOB02               SUCC  10 10/10 00:06     #J1934; [Until]; [Continue]
    ACCJOB03               SUCC  10 10/10 00:06     #J1936; [Late]
    ACCJOB04               SUCC  10 10/10 00:06     #J1937

Event number 120 is also written to the bmevents file, which can then be passed to the IBM Tivoli Enterprise Console.


Chapter 7. IBM Tivoli Enterprise Console integration

This chapter explains the integration of Tivoli Workload Scheduler in a mixed operating system environment with Tivoli Enterprise Console. We first give a brief overview of Tivoli Enterprise Console and then describe the integration with Tivoli Workload Scheduler and AlarmPoint®, a third-party application, in a case scenario.

To take into consideration the variety of operating system environments, we are using IBM Tivoli Workload Scheduler on UNIX and Windows and IBM Tivoli Enterprise Console on UNIX (Solaris). However, they all run on multiple platforms and hence the implementation is platform-independent.

We cover the following topics in this chapter:

� "IBM Tivoli Enterprise Console" on page 222
� "Integration using the Tivoli Plus Module" on page 223
� "Installing and customizing the integration software" on page 226
� "Our environment – scenarios" on page 232
� "ITWS/TEC/AlarmPoint operation" on page 236


7.1 IBM Tivoli Enterprise Console

The IBM Tivoli Enterprise Console is a powerful, rule-based event management application that provides network, application, system and database monitoring. IBM Tivoli Enterprise Console collects, processes, and displays events gathered from various applications and systems within the managed environment. It can also provide automatic responses to events, so minimal user interaction is required.

Even though Tivoli Workload Scheduler shows whether any jobs or job streams fail or abend and whether any of the agents are unlinked or down, it is not always possible to get to the root of the problem, nor is it possible to monitor everything using IBM Tivoli Workload Scheduler alone. This is why IBM Tivoli Enterprise Console is a great tool to use for monitoring all possible aspects of IBM Tivoli Workload Scheduler and the entire system environment, starting from the entire network (using IBM Tivoli NetView®) and down to the smallest application processes and application internals.

We will see in this chapter how it is possible to monitor this and correlate events from the system environment and from IBM Tivoli Workload Scheduler. We will also describe the possibility of using both IBM Tivoli Enterprise Console and IBM Tivoli Workload Scheduler to notify the relevant people who may take an appropriate action, using a third-party notification server – AlarmPoint.

7.1.1 IBM Tivoli Enterprise Console components

Before discussing the entire integration environment and the use of IBM Tivoli Enterprise Console, we have to cover some basic terminology. These are the main components of the Tivoli Enterprise Console:

� Event adapters

Event adapters are processes that run on remote or local systems and monitor their resources (such as a database application, physical disk, memory, etc.). Adapters detect reportable events, format them, and send them to the IBM Tivoli Enterprise Console server. The event server then further processes the event. Adapters can monitor remote systems in two ways: they can either receive events from any source that actively produces them, or they can monitor ASCII log files if a source actively updates them with messages.

� IBM Tivoli Enterprise Console event server

IBM Tivoli Enterprise Console Server is the central server that handles all events in the management system environment. IBM Tivoli Enterprise Console Server does the event processing, evaluating each event against a set of predefined rules and deciding what actions, if any, to take on it.


� Event consoles

Event consoles are graphical user interfaces that allow users to see relevant events in different levels of details. Users can also respond to events, for example close or acknowledge them or initiate a task, depending on their administration roles.

� Events

Events are units of information detected by event adapters and sent to the event server.

� Event definitions

These are definitions of event types together with their properties. They are stored in BAROC files (Basic Recorder of Objects in C). Events and their properties have to be defined and then compiled into the rule base.

� Rules

A set of preprogrammed logic steps, based on Prolog, which process events that come through to the event server. Rules process the events and take action based on the attributes of those events. Just like event definitions, rules also have to be compiled into the rule base before they can be applied to incoming events.

When an event adapter detects an event generated from a source, it formats the event and sends it to the event server. The event server processes the event and automatically performs any predefined tasks. An adapter can be configured to discard certain types of events before they are even sent to IBM Tivoli Enterprise Console, so no unnecessary processing by IBM Tivoli Enterprise Console is needed.
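For example, noisy event classes can be discarded at the adapter by adding Filter lines to its configuration file, in the same form as the Filter entries shown later in Example 7-2 and Example 7-3. The class name below is purely illustrative and would have to match a class actually defined in the BAROC files you load:

# adapter configuration file (excerpt) - discard job-launched events before they are sent
Filter:Class=TWS_Job_Launched;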

This type of event configuration and predefining the event server rules are the topics we are covering in this chapter. We will be taking IBM Tivoli Workload Scheduler as a source of our events.

7.2 Integration using the Tivoli Plus Module

The integration between Tivoli Enterprise Console and Tivoli Workload Scheduler is shipped with the IBM Tivoli Workload Scheduler product on the installation media. It provides systems management of IBM Tivoli Workload Scheduler as an application by passing meaningful events to IBM Tivoli Enterprise Console.

Tivoli Plus Module comes with a set of predefined message formats, event classes, rules and actions that can be taken if those situations arise. The Plus Module is installed under the existing Tivoli Framework. The integration provides the following:

� Tivoli Enterprise Event Management. The predefined rules include event reporting of some of the following:

– All abended or failed jobs and job streams.

– Unanswered prompts.

– Job potentially missing the deadline warnings (Late Jobs).

– IBM Tivoli Workload Scheduler agents unlinked.

– IBM Tivoli Workload Scheduler agents down.

– Notifying the system administrator of the events received and the action taken.

� Tivoli Distributed Monitoring. This gives more in-depth IBM Tivoli Workload Scheduler application status monitoring such as:

– IBM Tivoli Workload Scheduler disk space availability (stdlist, schedlog, and free space).

– IBM Tivoli Workload Scheduler swap space availability.

– IBM Tivoli Workload Scheduler page outs.

– IBM Tivoli Workload Scheduler application status (batchman, netman, mailman, jobman).

– IBM Tivoli Workload Scheduler host status

Tivoli Distributed Monitoring (DM) for IBM Tivoli Workload Scheduler uses DM Version 3.7 and requires Distributed Monitoring Universal Monitors 3.7 to be installed. The current Tivoli Monitoring version is 5.1.1, called IBM Tivoli Monitoring, which uses different methods of monitoring, Resource Models. TWS Distributed Monitors 3.7 can be imported into the IBM Tivoli Monitoring Resource Models using the IBM Tivoli Monitoring Workbench, but they are not covered in this book, as standard IBM Tivoli Monitoring would give more sophisticated application monitoring with Web-based views.

For more information on IBM Tivoli Monitoring and how to import TWS Distributed Monitors 3.7 to Resource Models, refer to the redbook IBM Tivoli Monitoring Version 5.1.1 Creating Resource Models and Providers, SG24-6900.

Plus Module Integration includes a set of Tivoli tasks that can be used for the following:

� Tivoli tasks for configuring the event server. These tasks compile and import IBM Tivoli Enterprise Console rules and classes either into an already existing rule base or into a newly created rule base containing events for IBM Tivoli Workload Scheduler.

� Tivoli tasks for configuring IBM Tivoli Enterprise Console adapters. These include configuring adapters in the Tivoli Framework environment (TME® adapters) and the adapters that are outside the Tivoli systems management environment (non-TME adapters).

� A set of IBM Tivoli Workload Scheduler Report tasks. Report tasks produce formatted IBM Tivoli Workload Scheduler standard reports, which are generally available in IBM Tivoli Workload Scheduler (rep1-4b, rep7, rep8, rep11, reptr, etc.).

� Tivoli tasks that allow manipulation of IBM Tivoli Workload Scheduler, such as stopping/starting and linking/unlinking of IBM Tivoli Workload Scheduler agents.

In real environment situations, most of these tasks are never used, since these features are available via the Job Scheduling Console. The tasks that are usually used are only for IBM Tivoli Enterprise Console configuration, although this can also be done manually.

7.2.1 How the integration works

IBM Tivoli Workload Scheduler uses the configuration file (BmEvents.conf), which needs to be configured to send specific IBM Tivoli Workload Scheduler events to the IBM Tivoli Enterprise Console. This file can also be configured to send SNMP traps (for integration with products that use SNMP events, such as NetView, for example). Events in the configuration file come in the form of numbers, where each number is mapped to a specific class of IBM Tivoli Workload Scheduler event. BmEvents.conf also specifies the name of the application log file that IBM Tivoli Workload Scheduler writes into (event.log). This file is monitored by IBM Tivoli Enterprise Console adapters, which then forward the events to the event server. When IBM Tivoli Enterprise Console receives events from IBM Tivoli Workload Scheduler, it evaluates them against a set of rules, processes them, and takes the appropriate action, if needed.

Notification tool used: AlarmPoint

In our environment, we have also added the integration with a third-party critical notification tool, AlarmPoint, to notify the relevant administrators, management, or end users when certain situations arise.

AlarmPoint from Invoq Systems is an interactive alerting system that provides notification, escalation and event resolution via any voice or wireless device. AlarmPoint accepts alarm information from a management system (for example, IBM Tivoli Enterprise Console, IBM Tivoli NetView) or any other network, performance, capacity, security, or help desk system. It translates each alarm to voice and text messages, locates the right person in real time, and provides the critical information and suggested actions for solving the issue. Once found, the user can take any action over a phone or wireless device. The product is built on relational databases and provides a Web interface for all users of the application.

For more information on AlarmPoint, refer to Invoq Systems’ Web site at:

http://www.invoqsystems.com

Figure 7-1 shows the integration between IBM Tivoli Workload Scheduler, event server, and AlarmPoint.

Figure 7-1 Integration between the products

7.3 Installing and customizing the integration software

The IBM Tivoli Workload Scheduler Plus Module is installed through the Tivoli Framework, using standard Tivoli framework product install options. We will show how to set up and configure the Plus Module to enable Tivoli Enterprise Console to process IBM Tivoli Workload Scheduler events and act upon them. Installation and customization steps are covered in the IBM Tivoli Workload Scheduler Version 8.2 Plus Module User's Guide, SC32-1276. Refer to this manual for detailed installation steps.

After the installation, a new icon appears on the Tivoli Desktop, called TivoliPlus. This icon contains the TWS Plus for Tivoli collection. The collection contains a number of configuration tasks that can be used for set up and configuration of IBM Tivoli Enterprise Console and IBM Tivoli Workload Scheduler as shown in Figure 7-2 on page 227.

Note: Before installing the Plus Module, make sure you do the full Tivoli database backup!


The IBM Tivoli Workload Scheduler Plus Module is installed under the $BINDIR/../generic_UNIX/TME/PLUS/TWS directory, where $BINDIR is the Tivoli framework binary directory. The installation places a set of configuration scripts, IBM Tivoli Enterprise Console event, rule and format files under this directory.

Figure 7-2 TWS Plus for Tivoli collection

7.3.1 Setting up the IBM Tivoli Enterprise Console

To set up IBM Tivoli Enterprise Console to receive IBM Tivoli Workload Scheduler events, you need to run the Setup EventServer for TWS task under the TWS Plus Module collection (Figure 7-3 on page 228). This task calls the config_tec.sh script, which detects the version of IBM Tivoli Enterprise Console installed and compiles the rules accordingly. There are two options within this task: to create a new IBM Tivoli Enterprise Console rule base or to use the existing one. The following options need to be specified:

� New rule base name
� Rule base to clone
� Path to the new rule base


Figure 7-3 Event server setup

After the task is configured, the IBM Tivoli Enterprise Console server is re-started and the new rule base becomes active.

The next step is to configure log adapters. There are two tasks for this configuration:

� Configuring TME Adapters: For adapters within the Tivoli managed environment

� Configuring Non-TME Adapters: For adapters outside the Tivoli managed environment.

We have used TME Adapters for our environment. In this case, IBM Tivoli Workload Scheduler Master has a Tivoli endpoint and a Tivoli Enterprise Console adapter running. The Configure TME Logfile Adapter task does the following:

� Stops the adapter from running
� Adds the maestro format file into the existing one and compiles it
� Makes modifications to the adapter configuration file
� Re-starts the adapter

The adapter configuration file contains configuration options and filters for the adapter. It is read by the adapter at startup. The file points out the event server to send events to and any ASCII files we would like to monitor. In our case, the configuration task adds the TWShome/event.log file to be monitored.

Format files define the formats of messages that are displayed in IBM Tivoli Enterprise Console and match them to event classes. There are two different IBM Tivoli Workload Scheduler format files:

� maestront.fmt for the Windows platform
� maestro.fmt for all other operating system platforms

We have used Windows IBM Tivoli Enterprise Console adapters for the Windows 2000 platform, rather than Windows NT® adapters. They are different, and for the Windows 2000 platform we recommend that you use tecad_win adapters rather than tecad_nt. The Plus Module configuration scripts are written for configuring Windows NT adapters. We had to modify those to include Windows 2000 adapters in order for them to work. Modifications were made to the script the task uses, called config_tme_logadapter.sh.

The adapter configuration task sets up the BmEvents.conf file. This file exists under the TWShome/config directory. On UNIX, this file can also be found in TWShome/OV and TWShome/unsupported/OpC directories. The configuration tasks can be run separately on selected subscribers. If run by default, it will run on all clients that are subscribed to the IBM Tivoli Workload Scheduler Client list profiles.

The TWS/TEC integration configuration file, BmEvents.conf, also needs to be configured to write into the log file (TWShome/event.log in our case), and the adapters need to be configured to read this file (the LogSources option in the adapter configuration file).
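In short, the two files must point at the same log file. A minimal sketch, using the path from our environment:

# TWShome/BmEvents.conf - batchman appends reportable events here
FILE=/export/home/maestro/event.log

# adapter configuration file (for example, tecad_logfile.conf) - the adapter reads the same file
LogSources=/export/home/maestro/event.log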

After the adapter configuration, the IBM Tivoli Workload Scheduler engine needs to be restarted for events to start being written to the log file.

7.3.2 The Plus Module configuration issues

When using the Plus Module configuration tasks, we have found a number of issues where tasks were failing to complete the configuration.

Note: Tivoli has changed IBM Tivoli Workload Scheduler event classes. They are now different for IBM Tivoli Workload Scheduler in UNIX/Linux and Windows environments. IBM Tivoli Workload Scheduler classes for UNIX/Linux are of the form TWS_Base, while for the Windows platform, they are of the form TWS_NT_Base.


Issues with IBM Tivoli Enterprise Console configuration task

� Configuration task fails when creating a new event console definition for IBM Tivoli Workload Scheduler.

� Configuration task fails when creating TWSPlus event group.

� Configuration task loads three rules into the rule base, when only one is needed (or possibly two, if using the Tivoli Distributed Monitoring)

Workaround: We have completed the task by creating the TWS Event Console and the group manually.

Issues with configuring the TME Logfile Adapter on UNIX

� The configuration script looks for the BmEvents.conf file in the root directory, instead of the IBM Tivoli Workload Scheduler home directory.

Workaround: You need to copy this file manually to the root directory in order for it to work.

� The UNIX logfile adapter core dumps as soon as it starts receiving IBM Tivoli Workload Scheduler events.

Fix: Apply IBM Tivoli Enterprise Console 3.8 Fix Pack 01 to fix this issue.

Issues with configuring the TME Logfile Adapter on Windows

� The TME Logfile Adapter on Windows uses Windows NT adapters by default. If using Windows 2000, this would not work.

Workaround: Either configure those manually, or change the configuration scripts.

TME Logfile Adapter on Windows looks for Tivoli endpoint installation under the $SystemRoot/Tivoli/lcf directory (that is, c:\winnt\tivoli\lcf) and fails if Tivoli endpoint is installed somewhere else. Note that this is not the Tivoli endpoint default location, so it is almost certain that this task would not work.

7.3.3 Recommendations

Because of the various issues found when configuring the integration between IBM Tivoli Workload Scheduler and IBM Tivoli Enterprise Console, we can recommend the following. Instead of installing the Plus Module in a live environment, you can install it on a test machine and copy the BAROC, rule and format files across. Use a standard Tivoli ACP (Adapter Configuration Profile) to download the format file onto the IBM Tivoli Workload Scheduler Master machine and configure this adapter to monitor the IBM Tivoli Workload Scheduler event.log file.


For IBM Tivoli Enterprise Console configuration, use only one rule file, depending on the implementation. For example, use maestro_plus.rls if IBM Tivoli Workload Scheduler monitored Master is on a UNIX/Linux platform and use maestront_plus.rls if IBM Tivoli Workload Scheduler Master is on a Windows platform. When using the Tivoli Distributed Monitoring option, load the maestro_mon.rls rule file. If modification to the rules is needed, modify them, then compile and load into the existing or new rule base. If there is a need to create a new event group and a console, you can use the wconsole command to do this manually.
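If you would rather work outside the Plus Module tasks altogether, the rule base can also be maintained with the wrb command. The sequence below is only a sketch: the rule base name TWSRB is a placeholder, the BAROC file name depends on the files shipped with your Plus Module level, and the exact wrb options should be verified against your IBM Tivoli Enterprise Console documentation:

$ wrb -imprbclass maestro.baroc TWSRB       # import the TWS event classes (file name is an assumption)
$ wrb -imprbrules maestro_plus.rls TWSRB    # import the rule file for a UNIX/Linux Master
$ wrb -comprules TWSRB                      # compile the rule base
$ wrb -loadrb TWSRB                         # make it the active rule base
$ wstopesvr; wstartesvr                     # restart the event server to pick up the change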

7.3.4 Implementation considerations

An important TWS/TEC configuration decision that needs to be considered when monitoring the IBM Tivoli Workload Scheduler environment is whether to monitor only the Master Domain Manager (and possibly the Backup Master) or all IBM Tivoli Workload Scheduler agents. The IBM Tivoli Workload Scheduler Master Domain Manager logs the entire IBM Tivoli Workload Scheduler network information if the Full Status option has been switched on (which is the case in most environments). This means that it is only necessary to monitor the Master Domain Manager for IBM Tivoli Workload Scheduler events, as they will be reported from the entire IBM Tivoli Workload Scheduler environment. However, since the IBM Tivoli Enterprise Console adapter is running on this machine, it will report to IBM Tivoli Enterprise Console the host name of the Master for all IBM Tivoli Workload Scheduler events, regardless of which IBM Tivoli Workload Scheduler agent they come from. Workstation name, job name and schedule name are still reported in IBM Tivoli Enterprise Console, as part of the message as well as IBM Tivoli Enterprise Console slots.

Configuring IBM Tivoli Workload Scheduler to report events from every IBM Tivoli Workload Scheduler agent would result in duplication of events, coming from the Master DM as well as from each agent. This can be solved by creating new IBM Tivoli Enterprise Console rules which will "detect" this duplication on the basis of slots such as job_name, job_cpu, schedule_name and schedule_cpu.

Another important consideration when implementing IBM Tivoli Workload Scheduler and IBM Tivoli Enterprise Console is the need for monitoring the Backup Master DM for events. Just like the Master DM, the Backup Master would have the Full Status option enabled and therefore log events from the entire IBM Tivoli Workload Scheduler environment. However, the Backup Master would only need to be monitored in case of failover, when it becomes Master. There are two options here: either switching the IBM Tivoli Enterprise Console adapter to forward IBM Tivoli Enterprise Console events in the failover scenario, or modifying the IBM Tivoli Enterprise Console rules to drop duplicate events and report only one of them. When using IBM Tivoli Enterprise Console adapters for general monitoring, we recommend the second option.

7.4 Our environment – scenarios

There are some new IBM Tivoli Workload Scheduler events produced in IBM Tivoli Workload Scheduler Version 8.2 that did not exist before. This is to reflect some new features of IBM Tivoli Workload Scheduler only available in Version 8.2. Some of these events are:

� Job is late
� Job until time expired (with cancel or continue option)
� Schedule is late
� Schedule until time expired (with cancel or continue option)

The full listing of all available IBM Tivoli Workload Scheduler events and their corresponding event numbers (old and new) is given in 7.5.1, "Full IBM Tivoli Workload Scheduler Event Configuration listing" on page 240.

We have created a scenario that demonstrates how the integration would work in a real-world environment and the actions that would be taken in case of a critical job failure. The scenario, which consists of three jobs, DBSELOAD, DATAREPT and DBSELREC, is illustrated in Figure 7-4. Note that both DATAREPT and DBSELREC are dependent upon the successful completion of DBSELOAD.

Figure 7-4 Our scenario


Our scenario reflects the following (see Figure 7-4 on page 232):

� If a DBSELOAD job abends, the recovery job, DBSELREC, runs automatically and checks for return codes.

� If the return code of the DBSELOAD job is 1 or greater than 10, an event with a fatal severity is sent to IBM Tivoli Enterprise Console, which causes IBM Tivoli Enterprise Console to take action to create an AlarmPoint critical call.

� AlarmPoint finds the appropriate Technical Support person who is responsible for this job, calls him or her and delivers the message. It also gives the person an option to acknowledge or close the IBM Tivoli Enterprise Console event by pressing relevant keypads on the phone.

� Because the DBSELOAD job is critical and there could be a possible service impact affecting end users, AlarmPoint also informs the Product Support group in charge, according to the escalation procedure.

� If the DATAREPT job is late (if not started by 03:00 in the afternoon), a severity “warning” event is sent to IBM Tivoli Enterprise Console – Late message.

� IBM Tivoli Enterprise Console passes relevant information to AlarmPoint, which notifies the Technical Support group via pager or SMS or e-mail.

� If the DATAREPT job abends, an IBM Tivoli Enterprise Console critical event is created and AlarmPoint calls Technical Support with an option to re-run the job.

If Technical Support does not fix the job within 30 minutes, AlarmPoint informs management of the situation. The AlarmPoint XML-based Java client is installed and running on the IBM Tivoli Enterprise Console server. The client is responsible for all the communications between the event server and the AlarmPoint server.

When, in our case, the DBSELOAD fatal event arrives to IBM Tivoli Enterprise Console, IBM Tivoli Enterprise Console has a predefined rule that executes a shell script. This script (send_ap_action.sh) passes parameters to the AlarmPoint Java Client. The client translates this into a special XML message that is sent to AlarmPoint. AlarmPoint then finds the relevant people that need to be notified for this situation (Technical Support and Product Control groups). It also finds all devices these people need to be notified on (for example, mobile phones, pagers, e-mails, etc.) and notifies them. If any of those people want to respond to acknowledge or close the IBM Tivoli Enterprise Console event, the XML message is passed back to IBM Tivoli Enterprise Console from AlarmPoint. IBM Tivoli Enterprise Console acknowledges or closes this event accordingly.

If the DATAREPT job abends, the Technical Support person gets an option on the telephone keypad to re-run the job. The AlarmPoint Java Client passes this message to Tivoli Workload Scheduler, which submits the re-run request. If this job is late, AlarmPoint sends a pager or an SMS message to the Product Support group to notify them. They cannot act on the event in this case.

Figure 7-5 Event Viewer Group TWS

In our environment, we have used the Plus Module IBM Tivoli Enterprise Console rules and added a few rules to include integration with AlarmPoint. The rules reflect our scenario only, but give a good idea of how the integration can be done.

Figure 7-6 shows the AlarmPoint Pager message and Figure 7-7 on page 235 shows an AlarmPoint two-way e-mail response that is generated as a result of this event. Figure 7-8 on page 235 shows an AlarmPoint Mobile Phone SMS message sent to a mobile phone, giving an option via the phone to re-run the job.

Figure 7-6 AlarmPoint Pager message


Figure 7-7 AlarmPoint two-way e-mail response

Figure 7-8 AlarmPoint Mobile Phone SMS message

7.4.1 Events and rules

The Configure EventServer task adds all three IBM Tivoli Workload Scheduler rules into the rule base, even though not all of them are necessarily required. The rule files available with the Plus Module are:

� maestro.rls
� maestro_nt.rls
� maestro_dm.rls


The first two are needed if the IBM Tivoli Workload Scheduler monitored machines are on Windows and UNIX. Generally, if monitoring only one of the platforms, you would only need the rule file for that platform. There is no need to use both. The last rule file is used for Distributed Monitoring. If not using DM, this rule file is irrelevant.

We have also added another rule (redbook.rls) to include our scenario cases, together with rules for AlarmPoint integration.

BmEvents.conf file has been configured to send almost all events, but this is only needed for testing. After the testing period is finished, you will need to change this file to include monitoring for only desired IBM Tivoli Workload Scheduler events (see Example 7-1 on page 237).

7.5 ITWS/TEC/AlarmPoint operation

To summarize the integration between IBM Tivoli Workload Scheduler, IBM Tivoli Enterprise Console, and AlarmPoint, we will refer to the case scenario as an example of use in a real-life environment. We will expand on this scenario and cover what you can do with this integration.

After installation and customization of IBM Tivoli Workload Scheduler and IBM Tivoli Enterprise Console, you need to decide what type of events you would like to be reported via Tivoli Enterprise Console. There are two different approaches to achieve this. One is to report all available events and then filter out or stop events that are not needed (for example, harmless events of jobs starting their execution). The other is to refer to the event listing (see Table 7-1 on page 240) and pick only the events you are interested in. We recommend the second approach, since the first may produce far too many events to keep track of.
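Following the second approach, the selection is made on the EVENTS line of BmEvents.conf. As a sketch, keeping the default set of abend, failure, link and prompt events (listed as the default in Example 7-1) and adding the new Version 8.2 late and until-expired events 120, 121 and 122 would look like this:

# BmEvents.conf (excerpt) - report only the events of interest
EVENTS=51 101 102 105 111 120 121 122 151 152 155 201 202 203 204 251 252 301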

Every time a reportable event that is configured in BmEvents.conf file occurs in IBM Tivoli Workload Scheduler environment (for example if a job abends), it is written into the event.log log file. IBM Tivoli Enterprise Console adapter monitors this file and, as soon as there is a new event written into it, the adapter sends a message to IBM Tivoli Enterprise Console. IBM Tivoli Enterprise Console applies a set of rules on this event (maestro_plus.rls) and takes action, if any. If an event is of a critical nature, it can be configured to make a critical notification using AlarmPoint. AlarmPoint can then take an action from the notified person (if presented with options) and acknowledge or close the IBM Tivoli Enterprise Console event or run or re-run an IBM Tivoli Workload Scheduler job, depending on the nature of the event.

This is the basic operation of how the integration works between Tivoli Workload Scheduler, the event server and, if available, AlarmPoint. We have also given the best practices and recommended settings and configuration for this integration. Configuration files and IBM Tivoli Enterprise Console rules are listed in this section.

BmEvents.conf file

This configuration file specifies the type of IBM Tivoli Workload Scheduler events that are sent to the Enterprise Console. The file should be located in the IBM Tivoli Workload Scheduler home directory. By default, the following events are reported:

� An IBM Tivoli Workload Scheduler process is reset, link dropped or link failed.
� Domain Manager switched.
� Job or/and schedule abended, failed or suspended.
� Prompts are issued and waiting to be answered.

It is also possible to switch on the rest of the events that report almost every change in IBM Tivoli Workload Scheduler internal status. We have used some new events only available in IBM Tivoli Workload Scheduler Version 8.2, such as reporting for late jobs and schedules (see Example 7-1).

This file also defines whether the events should be reported only from the Master Domain Manager for all other workstations or from each workstation separately (the OPTIONS settings). The FILE option specifies the name of the log file that the IBM Tivoli Enterprise Console adapter will be reading from. This file should be located in the IBM Tivoli Workload Scheduler home directory (usually called event.log).

The BmEvents.conf file can be configured either manually or using the Plus Module task (see 7.3.1, “Setting up the IBM Tivoli Enterprise Console” on page 227).

Example 7-1 BmEvents.conf file

# @(#) $Header: /usr/local/SRC_CLEAR/maestro/JSS/maestro/Netview/RCS/BmEvents.conf,v 1.6 1996/12/16 18:19:50 ee viola_thunder $# This file contains the configuration information for the BmEvents module.## This module will determine how and what batchman notifies other processes# of events that have occurred.## The lines in this file can contain the following:

OPTIONS=MASTER

# MASTER This tells batchman to act as the master of the network and# information on all cpus are returned by this module.

Chapter 7. IBM Tivoli Enterprise Console integration 237

Page 254: Tivoli Workloud Scheduler Guide

# # OFF This tells batchman to not report any events.## default on the master cpu is to report all job scheduling events # for all cpus on the Maestro network (MASTER); default on other cpus # is to report events for this cpu only.

LOGGING=KEY

# ALL This tells batchman all the valid event numbers are reported.## KEY This tells batchman the key-flag filter is enabled## default is ALL for all the cpus

SYMEVNTS=YES

# YES tells batchman to report a picture of job status events as soon as the new plan is # generated. It is valid only for key-flagged jobs with LOGGING=KEY# # NO does not report these events. It is the default.

# EVENTS = 51 101 102 105 111 151 152 155 201 202 203 204 251 252 301 EVENTS=1 51 52 53 101 102 103 104 105 106 107 110 111 112 113 115 116 120 121 122 151 152 154 155 157 163 164 165 201 202 203 204 251 252 301

# <n> is a valid event number (see Maestro.mib traps for the valid event# numbers and the contents of each event.## default is 51 101 102 105 111 151 152 155 201 202 203 204 251 252 301

# These can be followed with upto 5 different notification paths in the# following format:# PIPE=<filename> This is used for communicating with a Unison Fifo file.# The format of this is a 4 byte message len followed by the message.# FILE=<filename> This is for appending to the end of a regular file.# This file will be truncated on each new processing day.# MSG=<filename%-.msg> This is used for communicating with a Unison Message# file.# The default event strings encoding is UTF-8.# Use the following keywords instead of the previous ones, if you want # events written in local language:# PIPE_NO_UTF8=<filename># FILE_NO_UTF8=<filename># MSG_NO_UTF8=<filename%-.msg>

# To communcate with Unison snmp agent, the following is required:


# PIPE=/usr/lib/maestro/MAGENT.P
#PIPE=/usr/lib/maestro/MAGENT.P

# To communcate with OperationsCenter using the demonstration log file
# encapsulation the following is required:
FILE=/export/home/maestro/event.log

tecad.conf file
This file is used by the IBM Tivoli Enterprise Console adapter and is read at adapter startup. The only part of the adapter that needs to be configured is the LogSources option, which specifies the log file that IBM Tivoli Enterprise Console needs to monitor. In our scenario, this is the event.log file in the IBM Tivoli Workload Scheduler home directory. The tecad.conf file is installed with the IBM Tivoli Enterprise Console adapter and can usually be found under the following directories:

- For TME adapters: $LCFDIR/bin/<platform>/TME/TEC/adapters/etc

- For non-TME adapters: $TECAD/etc (where $TECAD is the adapter installation directory)

We have used two different IBM Tivoli Enterprise Console adapter configuration files: one for the Windows platform (tecad_win.conf) and one for the UNIX platform (tecad_logfile.conf). These files are shown in the following examples.

Example 7-2 tecad_win.conf file

# tecad_win Configuration
#PreFilterMode=OUT
#ServerLocation=yoda
EventMaxSize=4096
BufEvtPath=C:\WINNT\system32\drivers\etc\Tivoli/tec/tecad_win.cache
PollInterval=30
Pre37Server=no
SpaceReplacement=TRUE
LanguageID=ENGLISH
WINEVENTLOGS=Application,System,Security,Directory,DNS,FRS
Filter:Class=NT_Base
Filter:Class=NT_Logon_Successful;
Filter:Class=NT_User_Logoff;
Filter:Class=TEC_Error;
PreFilter:Log=Security
#


Example 7-3 tecad_logfile.conf file

# tecad_logfile Configuration
#ServerLocation=@EventServer
EventMaxSize=4096
BufEvtPath=/etc/Tivoli/tec/tecad_logfile.cache
PollInterval=30
Pre37Server=no

LogSources=,/export/home/maestro/event.log
#
Filter:Class=Logfile_Base
Filter:Class=Logfile_Sendmail
Filter:Class=Amd_Unmounted
Filter:Class=Amd_Mounted
Filter:Class=Su_Success;

TEC rules
The IBM Tivoli Enterprise Console rules used in this integration are the Plus Module rules, which are located in the maestro_plus.rls rule set. The original Plus Module rule executes a hidden IBM Tivoli Workload Scheduler task to e-mail the standard output of abended jobs to system administrators. However, this is only possible on UNIX, since Tivoli uses the sendmail method, which is not supported on Windows. Also, in most cases it is not the UNIX system administrators who are interested in IBM Tivoli Workload Scheduler job listings, but the people who are responsible for those jobs. This is why we used AlarmPoint to make sure that the relevant people are always informed when critical situations arise. The maestro_plus rule set is found in “maestro_plus rule set” on page 372.

7.5.1 Full IBM Tivoli Workload Scheduler Event Configuration listing
Table 7-1 lists all IBM Tivoli Workload Scheduler event numbers and corresponding message types.

Table 7-1 Full ITWS Event Configuration listing

Event Number   Message Type
1              TWS reset
51             Process reset
101            Job abend
102            Job failed
103            Job launched
104            Job done
105            Job suspended until expired
106            Job submitted
107            Job cancelled
108            Job in ready status
109            Job in hold status
110            Job restarted
111            Job failed
112            Job successful pending
113            Job extern
114            Job in intro status
115            Job in stuck status
116            Job in wait status
117            Job in wait deferred status
118            Job in scheduled status
119            Job property modified
120            Job is late
121            Job until time expired with continue option
122            Job until time expired with cancel option
151            Schedule abend
152            Schedule stuck
153            Schedule started
154            Schedule done
155            Schedule suspend, until time expired
156            Schedule submitted
157            Schedule cancelled
158            Schedule ready
159            Schedule in hold status
160            Schedule in extern
161            Schedule in cancel pending status
162            Schedule property modified
163            Schedule is late
164            Schedule until time expired with continue option
165            Schedule until time expired with cancel option
201            Global prompt issued
202            Schedule prompt issued
203            Job prompt issued
204            Job recovery prompt issued
251            Link dropped
252            Link failed
301            Domain Manager switch


Chapter 8. Disaster recovery with IBM Tivoli Workload Scheduler

In this chapter, we discuss the various options that are available when designing a Tivoli Workload Scheduler architecture with regard to disaster recovery, ranging from the provision of a Backup Master Domain Manager, to allow for the failure or maintenance of the Master Domain Manager, to the building of a disaster recovery network that best fits your production environment requirements and policies. We also describe the procedures that you will need to perform in the event of system failure.

The following topics are covered in this chapter:

- “Introduction” on page 244
- “Backup Master configuration” on page 244
- “On-site disaster recovery” on page 245
- “Short-term switch of Master Domain Manager” on page 246
- “Long-term switch of Master Domain Manager” on page 249
- “Off-site disaster recovery” on page 252


8.1 Introduction
Disaster recovery for IBM Tivoli Workload Scheduler can be split into two distinct types, which are dealt with in different ways. The first type relates to a failure on the IBM Tivoli Workload Scheduler Production Master Domain Manager (MDM), known as an on-site disaster recovery, while the second deals with a total loss of all systems in the IBM Tivoli Workload Scheduler network due to fire or other unforeseen external influences. This is known as an off-site or full disaster recovery, which will also include a complete recovery of databases and applications.

8.2 Backup Master configuration
For continual scheduling availability during system outages, whether planned or unplanned, a Backup Master Domain Manager (BMDM) should be installed on a second server, where the Tivoli Framework is also required to allow access using the Java GUI. IBM recommends that in a full Framework environment, where other Framework products will be used, the MDM and BMDM should not be installed on the main TMR server, but on managed nodes within the TME network. Following these guidelines will provide access via the GUI during system outages of the MDM. However, this configuration would result in no GUI access to IBM Tivoli Workload Scheduler during a system failure of the TMR server. Although this will not impact the running of the IBM Tivoli Workload Scheduler network, it will limit the monitoring of the network to the CLI only. Even for the most experienced IBM Tivoli Workload Scheduler CLI experts, this would require constant monitoring for jobs completing with an abnormal status during the outage of a system that may be outside of the IBM Tivoli Workload Scheduler network. The only normal exception to this is where there are no other Framework products being used. In this situation, both the MDM and BMDM are installed on stand-alone TMR servers.

We would recommend that even if there is an existing Framework network, the IBM Tivoli Workload Scheduler implementation should be treated as though no other Framework products are being used, with both the MDM and BMDM installed on stand-alone TMRs, and with Tivoli endpoints also installed and connected to the main system monitoring TMR server. This configuration will result in an increased amount of Framework administration work for the Framework administrators, but this should be small following the initial installation. The high level of availability that the solution provides easily counteracts this overhead. If there is a failure of the main systems management TMR, IBM Tivoli Workload Scheduler monitoring via the GUI can continue uninterrupted, in order to ensure that business-critical batch processes complete successfully without the need to rely on the CLI, while the system monitoring TMR failure is being resolved. This will, therefore, remove the pressure from the operations department for the TMR to be restarted.

8.3 On-site disaster recovery
The on-site disaster recovery can take one of two forms, either a short-term or long-term switch to the Backup Master Domain Manager (BMDM). The determining factors dictating the type of switchover required are the need to use the IBM Tivoli Workload Scheduler database during the switchover and the time of day that the failure of the MDM occurs. Some manual intervention would therefore be required in determining which type of switchover is required as well as performing the necessary actions, since there is no automated process built in to IBM Tivoli Workload Scheduler. However, it is possible to script these actions. Details of the steps required within a switchover script are described later within this chapter.

There are three factors that determine whether a short-term or long-term switch is required:

- Will there be a requirement to submit additional work into IBM Tivoli Workload Scheduler during the time of the switch?

- Will there be a requirement to add new records to the IBM Tivoli Workload Scheduler Databases during the time of the switch?

- Will the Jnextday job be required to run during the switch?

If the answer to any of the above is yes, then a long-term switch is required. Note that the first stage of a long-term switch is the short-term switch procedure.

During the time while the Backup Master is in control, following a short-term switch, it will maintain a log of the updates to the Symphony file, in the same way as when communication between agents is lost due to network outages. The default maximum size for the update file, tomaster.msg, is approximately 9.5 MB. While the size of this file can be increased by using the evtsize command, a long-term switch should be considered if this file starts to get full, with a switch back to the original MDM following the running of Jnextday. This will avoid any possible loss of job state information that may occur if the size of this file affects the switchback process.
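If the tomaster.msg file does need to be enlarged, a minimal sketch follows. It assumes a TWShome of /export/home/maestro and a new limit of roughly 20 MB; both values are illustrative only, not taken from our environment.

   #!/bin/sh
   # Hedged sketch: locate the tomaster.msg update file and raise its maximum size.
   # TWSHOME and the 20 MB target are assumptions for illustration.
   TWSHOME=/export/home/maestro
   cd "$TWSHOME" || exit 1
   F=`find . -name tomaster.msg | head -1`    # locate the update file
   ls -l "$F"                                 # compare its size against the ~9.5 MB default
   ./bin/evtsize "$F" 20000000                # raise the maximum to roughly 20 MB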

In the case of MDM failure, unless the problem can be resolved by a system reboot, we would recommend that a long-term switch is always performed and that the switchover is maintained until at least the start of the next processing day. Following these guidelines will ensure that any possible issues with the switchover procedure are avoided.


8.4 Short-term switch of Master Domain Manager
Each Fault Tolerant Agent within the network receives a copy of the production control database, commonly known as the Symphony file, at the start of each production day, typically 0600. All updates to each of the local Symphony files are also sent to the Master Domain Manager (MDM) so that it has a view of all processing across the network, thus allowing for a central location for monitoring and control. The crossover of control to the Backup Master Domain Manager (BMDM) is known as the short-term switch and is only used when no access to the IBM Tivoli Workload Scheduler database is required during the time of the switch.

The result of the short-term switch is that all of the scheduling updates are redirected from the original MDM to the BMDM, which will act as the MDM, thus allowing for the original MDM to be shut down, or in the case of a failure, fixed and re-started. The switch can be performed by a simple command or GUI option. This is also the first step in the long-term switchover process.

For a short-term switch of Master Domain Managers, a simple switch of the manager of the MASTERDM Domain is required. Following the switch, users may only submit additional ad hoc jobs, since there is no access to the IBM Tivoli Workload Scheduler database, unless the BMDM is pointed to the copy of the database that has been sent from the Master and the globalopts file has been modified. This is described in 8.5, “Long-term switch of Master Domain Manager” on page 249. In order to create new jobs or schedules during this time, the long-term switch procedure should be followed. If new jobs and schedules are created during a long-term switch, then they will either need to be recreated on the Master when it returns, or the databases will need to be copied back to the Master from the backup, which is recommended.

The procedure for the short-term switch to the Backup Master is the same regardless of whether this is due to the Master workstation crashing or if there is a requirement to perform maintenance on the workstation, which requires IBM Tivoli Workload Scheduler to be stopped.

This can either be performed via the JSC or the command line. If the JSC is to be used, the connection will need to be made to the BMDM.

8.4.1 Using the JSC
Use the following procedure for a short-term switch using the JSC:

1. Once connected to the BMDM, select and load the Status of all Domains list in the Default Plan lists (Figure 8-1 on page 247).


Figure 8-1 Status of all Domains list

2. Select the MASTERDM row and right-click to open the Actions menu (Figure 8-2).

Figure 8-2 Actions menu

3. Select the Switch Manager option (Figure 8-3 on page 248).


Figure 8-3 Switch Manager menu

4. Use the Find window to list the agents in the network and select the workstation that is to take over as the Master (Figure 8-4).

Figure 8-4 Find tool

5. Refresh the Status of all Domains view until the switch is confirmed.

6. Log on to all the agents in the network as maestro, enter the conman CLI and check that all agents have recognized the switch using the sd command.


8.4.2 Using the command line
Use the following procedure for a short-term switch using the command line:

1. Log on to any agent in the network as the install user, typically maestro. If a script-based long-term switch is to be performed, then using the command-line procedure may prove easier, since a command-line session will be required in order to run the script.

2. Enter the conman command line interface by typing conman.

3. Be sure that the agent has a status of ‘Batchman LIVES’. Issue a start command if it does not.

4. Enter the following switch command, where YYYYYYY is the workstation name that is to take over as the MDM:

switchmgr masterdm ; YYYYYYY

5. While still in the conman CLI, check that the switch has taken place by using the showdomain command; enter sd for short. The switch will take a few minutes to complete.

6. Log on to all the agents in the network as the install user, enter the conman CLI, and check that all agents have recognized the switch using the sd command.

7. Reverse the procedure to revert to the original MDM. (A combined command sketch of these steps follows this procedure.)
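As a rough illustration, the steps above can be combined into a short script. BACKUP is an assumed workstation name for the Backup Master; substitute your own, and treat the sketch as a starting point rather than a finished procedure.

   #!/bin/sh
   # Minimal sketch of a command-line short-term switch.
   BMDM=BACKUP                         # workstation that is to take over as Master
   conman "start"                      # make sure batchman is running locally
   conman "switchmgr MASTERDM;$BMDM"   # switch the manager of the MASTERDM domain
   sleep 120                           # the switch can take a few minutes
   conman "sd"                         # confirm the new manager of MASTERDM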

8.5 Long-term switch of Master Domain Manager
The key to a successful switch to a backup Master Domain Manager is in the preparation, in particular the transfer of required files to the BMDM on a daily basis and the creation of procedures and switchover scripts.

The long-term switch deals with the location of the active IBM Tivoli Workload Scheduler databases, which is determined by the master option within the globalopts file. This file exists within the mozart directory on all workstations, although only the globalopts file on the Master workstation has any effect on how the environment acts. Whenever access to the database is requested, via the composer command line or JSC, the master option in the local globalopts file is checked to ensure that it matches the workstation name where the request is being made. If it does not match, then an error is produced and no access is gained.

In order to gain access to a database located on the Backup Master, the globalopts file on the Backup Master will need to be amended so that the master option contains its own workstation name. This should not be set prior to the switchover procedure, that is, by having it permanently set, because doing so would allow users to update the backup database with changes that will not be reflected in the live database. This is a protection feature to avoid mistakes.

If the long-term switch is to be carried out by running a script, the best way to make the required change to the globalopts file is to have a copy on the BMDM with the master option set to the name of the BMDM, which can be copied over the original file at the appropriate time. This file could be called globalopts.bkup. A file called globalopts.mstr would also need to be created so that it can be copied back following the switch back to the MDM. The copy of the globalopts file would be the first step in the switchover, following the short-term switch.
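As a small sketch of that first scripted change (the TWShome path is an assumption; the file names follow the globalopts.bkup convention suggested above):

   # run on the Backup Master after the short-term switch has completed
   cd /export/home/maestro/mozart || exit 1
   cp globalopts globalopts.saved     # optional safety copy, our addition
   cp globalopts.bkup globalopts      # the master option now names the BMDM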

As the main aspect of the long-term switch is the access to the active databases, it is important to be sure that the database files on the Backup Master are as up to date as possible. There are two ways that this can be achieved, both of which have their own set of considerations.

The first method is to copy the physical database files, including key files, from the MDM at the start of the day. This option requires files to be copied from multiple directories and might require a more involved fix process if the databases on the Backup Master are corrupted or the copy has failed. Care would need to be taken to ensure that the databases are not being updated during the copy, because this could lead to inconsistency between the MDM and BMDM and in extreme cases might cause corruption.

The second method is to export flat files from the databases and copy them to the BMDM. The advantage of this method is that no updates to the database will be allowed during the export. The import process of the databases on the BMDM should not be performed until a long-term switch is actually initiated. This will require running the composer replace command after you have performed the Domain switch and a copy has been made of the globalopts file. The disadvantage of this method concerns the contents of the users’ database. User passwords are not extracted during the flat file creation, for security reasons, so these need to be added following the load. This can be achieved relatively easily by using non-expiring passwords, ensuring that the password is the same for all Windows batch job users, and either typing in the password via a prompt during the load script or by having a file, owned by and readable only by root, that contains the password and can be read by the load script.
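A hedged sketch of such a daily export on the MDM follows. The staging directory, the object selections, and the use of scp for the transfer are illustrative assumptions; check the composer create syntax against your own environment before relying on it.

   #!/bin/sh
   # Daily flat-file export of the scheduling databases on the MDM (sketch only).
   STAGE=/export/home/maestro/dr_export
   mkdir -p "$STAGE"
   composer "create $STAGE/cpus.txt    from cpu=@"
   composer "create $STAGE/cals.txt    from calendars"
   composer "create $STAGE/parms.txt   from parms"
   composer "create $STAGE/prompts.txt from prompts"
   composer "create $STAGE/jobs.txt    from jobs=@#@"
   composer "create $STAGE/scheds.txt  from sched=@#@"
   composer "create $STAGE/users.txt   from users=@#@"   # passwords are not exported
   # copy the staging area to the Backup Master, for example:
   scp -r "$STAGE" backup-master:/export/home/maestro/dr_import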

Whichever of the above options is chosen, the copy of the physical databases or the export of flat files, the copy should take place on a daily basis at a set time, so as to ensure that all involved understand and appreciate the state that the databases are in following a long-term switch. Due to the importance of the runmsgno file, explained later, it would be advisable for this to take place soon after the running of the Jnextday job. In this way, a switch can be performed at any time after the copy during the current production day, without issue. It should also be noted that, unless further copies are performed, any changes to the database during the production day will not be reflected following a switch and will need to be recreated. During normal day-to-day running, it is probable that the number of changes will be minimal. If many changes are being made to the database, then multiple copies should be scheduled.

Additionally, the parameters database has to be considered, since it is the only database that is not exclusive to the MDM. If the parameters database on the BMDM is used, then it will need to be merged with that from the Master during the switch. Due to the format of the database files, this database, if no other, would need to be exported into a flat file on both the MDM and BMDM, the two respective output files merged, and the result then replaced on to the BMDM. This requirement alone would lead us to recommend that the process of the transfer from the MDM to the BMDM should be by flat file export.

As a further alternative, the databases can be located on a central server that can be accessed by both the MDM and BMDM, even when one or the other is not active. This, of course, brings its own issues, in that such a configuration is reliant on network availability and the amendment of the localopts file in the install user’s home directory to indicate the location of the databases. The main advantage of this solution is that it avoids the requirement to copy the databases between MDM and BMDM.

As well as the copy of the databases, there is also a requirement to copy the runmsgno file from the mozart directory on the MDM to the BMDM. This file contains the current sequence number of the Symphony file and will be checked during the running of Jnextday.

Once a switch and a load of the databases have taken place, the FINAL job stream and Jnextday job will need to be run on the BMDM, when it is acting as the Master. As further preparation, a Jnextday job and FINAL job stream should be created for the BMDM, with the FINAL job stream specified to run “on request” to ensure that it is not scheduled on a daily basis. This will need to be submitted during the switchover and amended to run everyday, while the original FINAL job stream should be amended to run on request so that the original MDM will be online for a number of days prior to switch back. If the Jnextday script has been amended, then these amendments should be reflected on the BMDM. In order to change the run cycle of the FINAL job streams on the MDM and BMDM while using a switchover script, an exported file containing the two FINAL job streams with amended run cycles can be stored on the BMDM and loaded.
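The “on request” FINAL job stream for the Backup Master could look like the following composer sketch; BACKUP is an assumed workstation name, and a matching Jnextday job definition must also exist for that workstation.

   schedule BACKUP#FINAL
    on request
    carryforward
    :
    BACKUP#JNEXTDAY
   end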

The script found at “Script for performing long-term switch” on page 382 is an example of one that could be used to perform the long-term switch, as far as the preparation work regarding the globalopts and runmsgno files is concerned. It uses database files that have been exported using the composer create command and copied to the BMDM prior to the switch. For simplicity, the parameters database is not used for jobs on the BMDM under normal conditions, so there is no need to merge these files.

This shell script will run on Windows with some minimal modification, utilizing bash, which is available on the MDM and BMDM as part of the Framework TMR installation.
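The following outline is not that listing, but a hedged sketch of the main scripted steps, reusing the file names from the export sketch earlier in this section; workstation and path names are assumptions.

   #!/bin/sh
   # Outline of a long-term switch, run on the Backup Master (sketch only).
   TWSHOME=/export/home/maestro
   IMPORT=$TWSHOME/dr_import              # flat files copied here daily after Jnextday
   conman "switchmgr MASTERDM;BACKUP"     # 1. short-term switch (BACKUP is assumed)
   cd $TWSHOME/mozart || exit 1
   cp globalopts.bkup globalopts          # 2. point the master option at the BMDM
   cp $IMPORT/runmsgno .                  # 3. carry over the Symphony sequence number
   composer "replace $IMPORT/cpus.txt"    # 4. load the databases, workstations first
   composer "replace $IMPORT/cals.txt"
   composer "replace $IMPORT/jobs.txt"
   composer "replace $IMPORT/scheds.txt"
   conman "sbs BACKUP#FINAL"              # 5. submit the on-request FINAL job stream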

8.6 Off-site disaster recovery
There are several methods that can be employed when implementing an off-site disaster recovery strategy. The choice of method will be driven primarily by the business impact that a total loss of systems within the production environment will cause. Additionally, but not always, politics and budget restrictions may play a major part in the decision-making process. The importance of an effective off-site disaster recovery method for a production environment should not be underestimated and should be considered at the design stage of any new implementation, rather than as an afterthought, since it can influence the architecture implemented.

Although each implementation of IBM Tivoli Workload Scheduler has its own unique qualities, which may require an alternative method of off-site disaster recovery, generally there are three options that should be considered.

8.6.1 Cold start
While a cold start off-site disaster recovery strategy would be the least expensive of the three options described here and would require minimal maintenance, this option requires the greatest amount of work to activate and takes the longest time to recover back to the state when the system loss occurred. In the event that an off-site disaster recovery needs to be implemented, processing would normally need to return to at least the start of the daily batch processing and in some cases to the start of the IBM Tivoli Workload Scheduler processing day, depending on the regularity of the application backups and the time of day that the failure occurs.

Implementing this solution requires systems that match those that exist in the production IBM Tivoli Workload Scheduler network. To ensure the cleanest and quickest switchover, these systems should be set up with the same system names and IP addresses as those in production. Because these systems will normally be isolated from the live systems, this should not be an issue. If they are in a network configuration that would cause contention if both sets of systems were to be active at the same time, then some manipulation of files will be required during the switch process.


If the off-site facilities are provided in-house, then much of the preparation work needed for this solution can be in place prior to any failure, although this will not necessarily improve the time taken for the batch process to be restarted. In this case, it would be worth considering either the warm or even hot start option. Normally this solution is used where a third-party supplier will provide the systems, under agreement, in the event of failure.

On a daily basis, or more regularly if possible, flat file copies of the IBM Tivoli Workload Scheduler databases should be made (using the composer create command), along with backups of the application databases, and transported to a secure off-site location (or to the off-site facility itself, if it is in-house). Putting them in a safe, even a fireproof one, in the same building as the live systems is not good enough.

If a failure occurs, the systems will need to be built and configured to match those that they are to replace and the application data restored. If the application backup is made midway through a batch run, then it is imperative that the staff performing the rebuild are aware of this fact, because it will determine the point where the production work is restarted.

Once all the systems have been rebuilt, the IBM Tivoli Workload Scheduler agents will need to be built on each of the systems. If the same system names and IP addresses as the live systems are used, then this can be performed as part of the application data recovery. The IBM Tivoli Workload Scheduler object databases can then be loaded using the composer replace command. The workstations should be loaded first and the job streams last to avoid errors occurring during the load. The IBM Tivoli Workload Scheduler network can be started by running the Jnextday command. All agents will have their limit set to 0, which will prevent any work starting before final checks have been completed and the restart point determined if the application backups were taken midway through a batch run.
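A hedged sketch of that load sequence is shown below; the restore directory and file names match the export sketch earlier in this chapter and are assumptions only.

   #!/bin/sh
   # Rebuild the scheduling databases on the recovered Master (sketch only).
   RESTORE=/export/home/maestro/dr_import
   composer "replace $RESTORE/cpus.txt"     # workstations first ...
   composer "replace $RESTORE/cals.txt"
   composer "replace $RESTORE/parms.txt"
   composer "replace $RESTORE/prompts.txt"
   composer "replace $RESTORE/jobs.txt"
   composer "replace $RESTORE/scheds.txt"   # ... job streams last
   ./Jnextday                               # build and distribute the new Symphony
   # agents start with their limit at 0; raise it only after the restart point
   # has been confirmed, for example: conman "lc @!@;10;noask"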

8.6.2 Warm start
As the name suggests, this method of off-site disaster recovery is a combination of both the cold and hot start methods and can be implemented in one of two ways.

A warm start enables production processing to continue from a point midway through a batch run, following a time-stamped application database update. For this solution to work effectively, it is important to create scripts and commands that are rerunnable, even if successful, as the database snapshot may well occur in the middle of a job and that job would need to be rerun, as a starting point, on the off-site systems, following a loss of the live systems. This can be achieved by limiting database commits to one per script or job, or by ensuring that if duplicate data is applied to a database, it overwrites the existing data rather than appending to the tables.

Regardless of which of the two options is used, this strategy is built around the off-site systems being managed in-house and permanent network connectivity being established between the live systems and the disaster recovery site up to the point of total loss of the live systems.

8.6.3 Independent solution
This solution requires a duplicate IBM Tivoli Workload Scheduler network to be created on the off-site systems that matches the live systems. It is not required, as it is in the cold start solution, that the systems have the same names and IP addresses as the live systems. Since these systems, including IBM Tivoli Workload Scheduler, will be up and running on a constant basis, the workstation definitions will only be added once, commonly from a flat file exported from the live systems, so the system names can be amended during the install phase and not under the pressure of a switchover, as would be done with a cold start. To have identical IP addresses and host names to those on the physical network would require a network separate from the live systems and would impact the ability to transfer data, so it is not recommended. However, it is essential that the IBM Tivoli Workload Scheduler workstation names match those in production. Since this is an independent network, there is no risk of contention in having the IBM Tivoli Workload Scheduler workstation names the same as the live systems.

On a daily basis, flat file exports of the IBM Tivoli Workload Scheduler databases (composer create), excluding workstations, would be sent to the off-site MDM and loaded immediately (composer replace). Jnextday would then run to ensure that the day’s batch work is built into the Symphony file and ready to go. The limits would be set to 0 on all systems to ensure that no jobs actually run and there would need to be a process of cancelling all carry forward job streams prior to the Jnextday job running, to ensure that nothing is carried forward. This process, of course, would need to be removed in the event of a failover.
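A minimal sketch of that daily refresh on the stand-by MDM might look like the following; directory and file names are assumptions, and the carry forward cleanup is site specific, so it is only indicated by a comment.

   #!/bin/sh
   # Daily refresh of the independent disaster recovery network (sketch only).
   IMPORT=/export/home/maestro/dr_import   # flat files received from the live MDM
   composer "replace $IMPORT/cals.txt"     # workstation definitions are deliberately excluded
   composer "replace $IMPORT/jobs.txt"
   composer "replace $IMPORT/scheds.txt"
   # cancel any carried-forward job streams here (site specific), then build the plan:
   ./Jnextday                              # limits stay at 0, so nothing actually runs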

One system at the off-site location would need to be installed as an FTA in the MASTERDM domain in the live IBM Tivoli Workload Scheduler network, with options set to Full Status. Following a switch to the disaster recovery systems, this agent’s Symphony file will act as a reference to the state that the live systems were in at the point of failure.

During the day, time-stamped database update files should be sent from the live systems and stored in a reception area on the disaster recovery systems. These files should be applied at the end of the batch cycle, assuming that a switchover has not occurred, to ensure that the disaster recovery systems are up to date to at least the end of the previous working day, and also to conserve space and reduce the update time during a switchover.

If a failure occurs during a batch cycle, all the update files for the current day will then need to be applied. Once these have been applied, the batch processing can then be restarted from the point of the jobs that were running at the time of the database snapshot. The Agent that is part of the live system and the time-stamped database files can be cross-referenced to provide this information. It is important that the batch work be re-started from the point of the last database snapshot and not the point of failure, as several hours may have passed and the databases will not be in a state to run this work.

The application of the update files could take several hours, hence the term warm start.

8.6.4 Integrated solution
This solution is identical to the hot start solution except that the application databases on the off-site systems are updated by time-stamped snapshot update files in the same way as the independent solution.

Once again, this option is only a warm start, since there would be a time delay between initial failure and the restart of the processing, which would effectively be from a point prior to that of the failure.

Details of the architecture are covered in 8.6.5, “Hot start” on page 255, while the recovery and restart procedures are the same as for the independent solution.

8.6.5 Hot start
This solution has the least impact on batch processing, with a short lag time, and requires the minimum amount of work to perform the switch at the point of failure of the production systems. As a consequence, it will require a slight increase in the workload of the schedulers and a high-speed link between the live and off-site systems.

The entire off-site disaster recovery network would be included within the live production architecture, with the agents that will act as the MDM and BMDM during the switchover placed in the MASTERDM Domain and configured as additional BMDMs to the live MDM. If there is a limit on the number of systems available, these systems could easily be those that would be running the batch work during the time the disaster recovery site is in operation. While this is not the ideal solution, it would be sufficient to allow processing to continue until a return to the live systems or replacement systems can be implemented. The IBM Tivoli Workload Scheduler databases would need to be copied to these systems in the same way as they are to the live BMDM.

To aid the schedulers, these agents should have IBM Tivoli Workload Scheduler workstation names that closely resemble those of the live systems. We would recommend that the live, or production, agent names start with a “P”, while the agents that make up the disaster recovery element of the network start with an “R”, with the remainder of the workstation name being the same. Note that “D” should not be used because it could be confused with a Development network. This naming will also allow the operations teams to create lists that let them monitor either side of the network independently.

The application databases on the off-site systems will need to be constantly replicated with those on the live systems via a high-speed link on a real-time basis. In the event of a switchover being performed, the production work can therefore be restarted on the disaster recovery systems by starting the jobs that were running when the live systems failed. To ensure that this is possible, it is essential that the scripts and commands are built to be completely rerunnable, as with the warm start option.

The preparation required for this solution is the same as the switchover to the on-site BMDM, with IBM Tivoli Workload Scheduler database files being copied to the disaster recovery MDM and BMDM on a daily basis and the creation of files and scripts to populate the databases and the update of the globalopts file and FINAL job stream.

Additionally, all objects created for the live systems will need to be duplicated for the disaster recovery systems. Following our recommendations regarding the naming of the agents will assist in this process.

The disaster recovery agents would need to have their LIMIT set to 0 and FENCE set to 100 to ensure that no work would actually run. Jobs would need to be created to run on these systems and scheduled with a priority of 101 to run prior to Jnextday, to cancel any carry forward job streams.
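Assuming the conman limit cpu (lc) and fence syntax, and the “R” naming convention recommended above, the idle state of the disaster recovery agents could be enforced with something like the following; treat it as a sketch, not a tested procedure.

   conman "lc @!R@;0;noask"       # limit 0: no jobs are launched on the DR agents
   conman "fence @!R@;100;noask"  # fence 100: only priority 101 (GO) jobs can start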

Since the preparation for this solution is identical to that of the switch to the live BMDM, the procedure to perform the switch will be the same, with the added step of amending the LIMIT and FENCE levels of the agents. Once the switch has been performed, the processing can be restarted from the point of failure, which can be identified by examining the plan.


Chapter 9. Best practices

In this chapter, we cover best practices when implementing IBM Tivoli Workload Scheduler Version 8.2, and include many useful tips and techniques, based on actual field experience.

This chapter has the following sections:

- “Planning” on page 258
- “Deployment” on page 276
- “Tuning localopts” on page 293
- “Scheduling best practices” on page 300
- “Optimizing Job Scheduling Console performance” on page 307
- “IBM Tivoli Workload Scheduler internals” on page 315
- “Regular maintenance” on page 326
- “Basic fault finding and troubleshooting” on page 339
- “Finding answers” on page 348


9.1 Planning
In the following sections you will find some best practices for planning an IBM Tivoli Workload Scheduler implementation.

9.1.1 Choosing platforms
UNIX/Linux have advantages over Windows in the areas of scalability and reliability. Properly managed environments with AIX, Solaris, Linux and/or HP-UX offer extremely high availability. Also, some performance optimization parameters (such as sync level, see 9.3, “Tuning localopts” on page 293), which are especially important for busy Masters and Domain Managers, are only available on UNIX/Linux platforms. For the roles of Master Domain Managers (MDM) and Domain Managers (DM), UNIX/Linux is probably a far better choice.

For example, AIX 5L running on Enterprise Server class IBM pSeries systems configured for high availability hosts IBM Tivoli Workload Scheduler Master Domain Managers and Domain Managers very effectively.

9.1.2 Hardware considerations
A good general rule is to use a powerful machine as the Master Domain Manager, ideally a dedicated machine or, at the least, a dedicated disk for your IBM Tivoli Workload Scheduler installation. IBM Tivoli Workload Scheduler itself is not a cpu-intensive application, but it is a disk-intensive application. To allow for the archived Symphony files and the database, we recommend a minimum of 2 GB of disk space, not including the requirement for Tivoli Framework, for the machine that will host the Master Domain Manager. The hardware used to run the Master for a large IBM Tivoli Workload Scheduler environment should usually be a midrange UNIX/Linux server.

Some examples in this range are:

- IBM eServer pSeries™ and Regatta servers running AIX 5L
- IBM eServer xSeries® running Red Hat or SuSE Linux
- SunBlade systems running Solaris 2.8 or 2.9
- HP 9000 K and N classes running HP-UX 11 or 11i

It is again important to remember that the IBM Tivoli Workload Scheduler engine is not a cpu-intensive application except when Jnextday is running. In most configurations, disk I/O is the limiting factor in its performance. Such measures as putting IBM Tivoli Workload Scheduler on its own physical disks, on a separate disk adapter, or on a RAID array (especially RAID-0), can boost performance in a large high-workload IBM Tivoli Workload Scheduler environment. Ultra2 or Ultra160 (IBM Ultrastar) SCSI storage components can also relieve I/O bottlenecks. If a server is more than two years old, it may have storage interfaces and components with performance that falls well below current standards and may not perform well in particularly demanding environments.

9.1.3 Processes
A Master Domain Manager or Domain Manager will have one writer process for each agent hosted and one mailman server process for each unique server ID allocated, plus the master mailman process. At least three processes will exist for each job running locally on the workstation. This can add up to considerably more processes than most servers are initially configured to handle from a single user. Open files on Master Domain Managers and Domain Managers can also add significant load on UNIX systems. It is advisable to work with your system administrator to tune all given options in advance of a Tivoli Workload Scheduler deployment. This will ensure that the system has been configured to host the number of processes that IBM Tivoli Workload Scheduler can generate. Make sure to check both the system-wide and per-user process limits. If the operating system hits one of these limits while IBM Tivoli Workload Scheduler is running, IBM Tivoli Workload Scheduler will eventually stop completely on the node. If this happens on a Master Domain Manager or Domain Manager, scheduling on multiple nodes may be affected.
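As a starting point for that conversation with the system administrator, the following hedged examples show where such limits are commonly checked on Linux and AIX; the exact parameters and their values are site and platform specific.

   # Per-user limits for the TWS user (run as that user)
   ulimit -a                        # max user processes, open files, and so on
   ulimit -u                        # maximum user processes
   ulimit -n                        # maximum open file descriptors

   # AIX: system-wide maximum number of processes per user
   lsattr -El sys0 -a maxuproc

   # Linux: per-user limits are usually defined in /etc/security/limits.conf
   grep -v '^#' /etc/security/limits.conf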

Notes on RAID technologies:

- RAID-0: RAID-0 means stripe set without parity. In RAID-0, data is divided into blocks and spread across the disks in an array. Spreading operations across multiple disks improves read/write performance, because operations can be performed simultaneously. Although RAID-0 provides the highest performance, it does not provide any fault tolerance. Failure of a drive in a RAID-0 array causes all of the data within the stripe to become inaccessible.

- RAID-1: RAID-1 means disk mirroring. RAID-1 provides an identical copy of a disk in the array. Data written to the primary disk is written also to a mirror disk. It provides fault tolerance and improves read performance, but it can also degrade write performance because of dual-write operations.

- RAID-5: RAID-5 means stripe set with parity. RAID-5 provides redundancy of all data on the array, and allows a replacement of a single disk without system downtime. Although it offers lower performance than RAID-0 or RAID-1, it provides higher reliability and faster recovery.


9.1.4 Disk space
IBM Tivoli Workload Scheduler can consume large amounts of disk space on networks that run large numbers of jobs. On production systems, disk space availability should be monitored using an automated tool, such as IBM Tivoli Monitoring. IBM Tivoli Workload Scheduler should reside on its own file system to minimize the effect of other applications on IBM Tivoli Workload Scheduler operation. In medium and large implementations, IBM Tivoli Workload Scheduler should reside on its own physical disk. This configuration will enhance performance and reliability. The amount of disk space that IBM Tivoli Workload Scheduler uses is greatly dependent on the workload. Peaks in disk usage can be minimized by regularly removing files from the TWShome/schedlog, TWShome/stdlist, TWShome/audit and TWShome/tmp directories. The built-in command, rmstdlist, is provided to remove aged job output and IBM Tivoli Workload Scheduler logs. Disk space utilized by the IBM Tivoli Workload Scheduler scheduling object or mozart database can be minimized by regularly running the composer build commands on the Master Domain Manager. This will also correct some database errors that could cause composer or Jnextday to fail. More information about rmstdlist, composer build, Jnextday, and the TWShome/schedlog and TWShome/stdlist directories can be found in the Tivoli Workload Scheduler Version 8.2, Reference Guide, SC32-1274.

Refer to 9.7.1, “Cleaning up IBM Tivoli Workload Scheduler directories” on page 326 for automated ways to clean up IBM Tivoli Workload Scheduler directories.
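A hedged housekeeping sketch follows; the retention periods are illustrative only and should be agreed with the application owners before anything is deleted.

   #!/bin/sh
   # Regular cleanup of IBM Tivoli Workload Scheduler directories (sketch only).
   TWSHOME=/export/home/maestro
   cd "$TWSHOME" || exit 1
   ./bin/rmstdlist -p 10                                 # list stdlist entries older than 10 days
   ./bin/rmstdlist 10                                    # then remove them
   find schedlog -type f -mtime +30 -exec rm -f {} \;    # archived Symphony files
   find audit    -type f -mtime +30 -exec rm -f {} \;    # audit logs, if auditing is enabled
   find tmp      -type f -mtime +7  -exec rm -f {} \;    # temporary files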

9.1.5 Inodes
IBM Tivoli Workload Scheduler can consume large numbers of inodes when storing large amounts of job output on UNIX systems in TWShome/stdlist. However, inodes are not an issue on Microsoft® Windows operating systems. On an FTA that runs large numbers of jobs, inode consumption can grow quickly. Although most new UNIX boxes should not present a problem, it is worth consideration.

Tip: Another recommendation for UNIX systems that might improve performance is to use a separate file system to host the stdlist directory.

9.1.6 Mailman server processes or Domain Managers
One of the most important aspects of deploying a successful IBM Tivoli Workload Scheduler network is its architecture. When designing the network, the goal should be to achieve the maximum degree of balance and concurrency. A balanced IBM Tivoli Workload Scheduler network is one in which mailman server processes and Domain Managers all have roughly similar workloads to process throughout the day. Concurrency is achieved in larger enterprises by having multiple mailman server processes and/or Domain Managers simultaneously distributing and processing data from the Master Domain Manager.

Another critical architectural goal should be to insulate the master mailman process from being connected directly to the agents. The master or default mailman process (also known as the blank, or " ", Server ID) is critical to IBM Tivoli Workload Scheduler processing on the Master Domain Manager. If it is also serving Fault Tolerant Agents or Standard Agents, the Master Domain Manager is more vulnerable to outages due to network or machine hangs on the IBM Tivoli Workload Scheduler network.

To enable a server process, allocate a character, such as A, in the server attribute of the workstation definition for each agent (Fault Tolerant or Standard Agent) when using composer, as shown in Example 9-1, or in the Server field of the workstation definition if using the Job Scheduling Console, as shown in Figure 9-1 on page 262. In total, 36 server processes can be allocated, if all possible Server IDs are used (A..Z and 0..9).

Example 9-1 Server process allocated using composer

cpuname BACKUP
  description "backup master"
  os UNIX
  node yarmouth.itsc.austin.ibm.com
  tcpaddr 31182
  secureaddr 31113
  domain MASTERDM
  TIMEZONE CST
  for maestro
    type FTA
    autolink on
    fullstatus on
    resolvedep on
    securitylevel enabled
    server A
    behindfirewall off
end


Figure 9-1 Adding Server IDs

In order to better understand the meaning of Server IDs, let us consider the following examples.

Figure 9-2 on page 263 shows an example IBM Tivoli Workload Scheduler domain with no Server ID defined. In this case, the main mailman process on the Domain Manager handles all outbound communications with the FTAs in the domain.


Figure 9-2 A ITWS configuration with no Server IDs

Figure 9-3 on page 264 shows the same domain with three Server IDs defined. As seen from the diagram, one extra mailman process is spawned for each Server ID in the domain.


Figure 9-3 The same configuration with three different server IDs

Figure 9-4 on page 265 shows the usage of extra mailman processes in a multidomain environment.


Figure 9-4 Multidomain environment

By allocating separate processes dedicated to the communication with other workstations, mailman servers leave the main mailman dedicated to the transfer and network hub activity. The use of mailman server processes on the Domain Manager and Master Domain Manager must be carefully planned. The main consideration is the number of mailman servers a main mailman is connecting to, or the number of agents a mailman server is connected to. The maximum number is about 20 for Solaris, 50 for Windows and about 100 for other UNIX boxes, depending on their power. Although this varies with hardware size, network performance, design, and workload, typical numbers should be about 10 for Solaris, about 15 for Windows and about 20 for other UNIX boxes.

Important: It is important to note that the server processes need only be assigned to Fault Tolerant Agents or Standard Agents. Communication with Extended Agents is handled via the host FTA and hence the server processes allocated to that FTA, while Domain Managers always communicate with the master mailman process. If server processes are allocated within the workstation definitions of Extended Agents or Domain Managers, they will be ignored.

As a Tivoli Workload Scheduler network grows, this ratio should be tuned to minimize the initialization time of the network. Initialization time is defined as the time from the beginning of the Jnextday job until all functional agents are fully linked.

As a general rule, Domain Managers are better suited to environments where clusters of agents are located at sites remote from the Master Domain Manager across relatively slow network links, whereas Server IDs are better when the agents and the Master Domain Manager are connected by a fast local network.

Important considerations when configuring mailman server processes:

- When configuring extra mailman servers, do not forget that each mailman server process uses extra CPU resources on the workstation that it is created on (Domain Manager or Master Domain Manager), so be careful not to create an excessive number of mailman servers. Configuring extra mailman servers was much more important in the single domain architecture (pre-IBM Tivoli Workload Scheduler 6.1 implementations). Multiple Domain implementations reduced the requirement for multiple mailman server processes.

- Some cases where usage of extra mailman servers might be beneficial are:

  – For the Backup Master.

  – For FTAs connected directly to the Master (for example, dedicating one mailman server process for each set of 10 FTAs, depending on the workload and configuration).

  – For slow-initializing FTAs that are at the other end of a slow link. (If you have more than a couple of workstations over a slow link connection to the Master Domain Manager, a better idea is to place a remote Domain Manager to serve these workstations. See 9.1.7, “Considerations when designing an IBM Tivoli Workload Scheduler network” on page 267.)

- If you have unstable workstations in the network, do not put them under the same mailman server as your critical servers.


9.1.7 Considerations when designing an IBM Tivoli Workload Scheduler network

When designing an IBM Tivoli Workload Scheduler network, there are several things that need to be considered:

- What are your job scheduling requirements?

- How critical are your jobs to the business? What is the effect on your business if Domain Manager A (DMA) goes down? Does DMA need a Backup Domain Manager at the ready?

- How large is your IBM Tivoli Workload Scheduler network? How many computers does it hold? How many applications and jobs does it run?

The size of your network will help you decide whether to use a single domain or the multiple domain architecture. If you have a small number of computers, or a small number of applications to control with IBM Tivoli Workload Scheduler, there may not be a need for multiple domains.

- How many geographic locations will be covered in your IBM Tivoli Workload Scheduler network? How reliable and efficient is the communication between locations?

This is one of the primary reasons for choosing a multiple domain architecture. One domain for each geographical location is a common configuration. If you choose single domain architecture, you will be more reliant on the network to maintain continuous processing.

- Do you need centralized or decentralized management of IBM Tivoli Workload Scheduler?

An IBM Tivoli Workload Scheduler network, with either a single domain or multiple domains, gives you the ability to manage IBM Tivoli Workload Scheduler from a single node, the Master Domain Manager. If you want to manage multiple locations separately, you can consider the installation of a separate IBM Tivoli Workload Scheduler network at each location. Note that some degree of decentralized management is possible in a stand-alone IBM Tivoli Workload Scheduler network by mounting or sharing file systems.

- Do you have multiple physical or logical entities at a single site? Are there different buildings, and several floors in each building? Are there different departments or business functions? Are there different applications?

These may be reasons for choosing a multi-domain configuration. For example, a domain for each building, department, business function, or each application (manufacturing, financial, engineering, etc.).

- Do you have reliable and efficient network connections?


- Do you run applications, such as SAP R/3, PeopleSoft or Oracle, that will operate with IBM Tivoli Workload Scheduler?

If they are discrete and separate from other applications, you may choose to put them in a separate IBM Tivoli Workload Scheduler domain.

- Would you like your IBM Tivoli Workload Scheduler domains to match your Windows domains?

This is not required, but may be useful.

- Do you want to isolate or differentiate a set of systems based on performance or other criteria?

This may provide another reason to define multiple IBM Tivoli Workload Scheduler domains to localize systems based on performance or platform type.

- How much network traffic do you have now?

If your network traffic is manageable, the need for multiple domains is less important.

- Do your job dependencies cross system boundaries, geographical boundaries, or application boundaries? For example, does the start of Job1 on CPU3 depend on the completion of Job2 running on CPU4?

The degree of interdependence between jobs is an important consideration when laying out your IBM Tivoli Workload Scheduler network. If you use multiple domains, you should try to keep interdependent objects in the same domain. This will decrease network traffic and take better advantage of the domain architecture.

- What level of fault tolerance do you require?

An obvious disadvantage of the single domain configuration is the reliance on a single Domain Manager. In a multi-domain network, the loss of a single Domain Manager affects only the agents in its domain.

- Will your environment contain firewalls?

In the design phase of a Tivoli Workload Scheduler network, the administrator must know where the firewalls are positioned in the network, which Fault Tolerant Agents and which Domain Managers belong to a particular firewall, and which are the entry points into the firewall.

Among these considerations, the layout of the physical network is one of the most important. The structure of the domains must reflect the topology of the network in order to best use the communication channels. The following example explains this.


Let us suppose we have the configuration illustrated in Figure 9-5:

- One Master Domain Manager is in New York.
- 30 FTAs are in New York, and 60 are in New Jersey.
- 40 FTAs are in Chicago.

5000 jobs run each day, balanced between the FTAs. The Symphony file is 5 MB in size when Jnextday is run. New Jersey and Chicago are accessed through a WAN link.

Figure 9-5 Sinfonia distribution in the single domain environment

The FTAs, upon re-linking to the Master Domain Manager after Jnextday, request a copy of the Sinfonia file. The MDM has to send a copy to each FTA. The total MB transferred over the WAN is 500 MB.

Now, look at the second topology shown in Figure 9-6 on page 270:

- One Master Domain Manager is in New York.
- 30 FTAs are in New York.
- One Domain Manager with 60 FTAs is in New Jersey.
- One Domain Manager with 40 FTAs is in Chicago.

In this topology, each FTA reports to its respective DM, which reports to the MDM. The DM is also responsible for keeping contact with its FTAs and pushing down a Sinfonia file to each one after Jnextday.

Even though New Jersey and Chicago are still accessed through the WAN, the data pushed to each city after Jnextday is reduced to a single 5 MB Sinfonia file, or 10 MB in total over the WAN. This reduces the WAN traffic considerably.


Additionally, because the DMs are responsible for initializing their own FTAs, it shortens the length of time from start of Jnextday to start of production across the network by initializing in parallel.

Figure 9-6 Sinfonia distribution in the multi domain environment

Therefore, Domain Managers across wide area networks are definitely a good idea. You need to plan to implement them accordingly.

The number of FTAs in your network topology dictates the number of Domain Managers you must implement. If you have 200 FTAs with one Domain Manager, you have not balanced out the message processing because all your FTAs report to one Domain Manager, which in turn, reports everything to the MDM. Therefore, you have created a situation where two boxes are hit hard with incoming messages.

Each FTA generates a writer process on its DM. With UNIX and Linux, you can configure the number of processes a user can have, but on Microsoft Windows there is a limit to the number of named pipes (about 100), and each writer on Windows uses a named pipe. Logically, fewer FTAs under each Domain Manager allows for faster dependency resolution within that Domain Manager structure. Each DM processes fewer messages than the single DM in the 200-FTA situation described above, and each reports all of its messages back to the MDM, which leaves the MDM processing the full message load.

Note: In Figure 9-6, Sinfonia distribution through the Domain Managers is shown with arrows. Note that distribution to local FTAs is not shown, since it does not change with the multi domain environment.


A suggested limit of FTAs for each DM is about 50 to 60.

Important: This is an average number and depends on various factors such as workload, machine configuration, network, etc. Also, this limit should not be confused with the number of FTAs that one mailman server can handle.

In the previous situation (scenario with 200 FTAs), you can implement four DMs. Try to put your FTAs that depend on each other for dependency resolution under the same DM. If you have inter-Domain Manager dependency resolution, the messages must still go through the MDM, which has to process all messages from all four DMs.

Also, in the previous situation, if the MDM has a problem and cannot communicate with one of the DMs, all dependency resolution required from the network of that DM does not occur. Therefore, it is a better practice to put all machines that depend on each other within the same DM network, with no more than 60 or so FTAs, for faster message processing within the DM network.

Tip: When configuring your Domain Manager infrastructure, try to put your critical servers close to the Master in the Domain Manager hierarchy.

Wrapping up

Here is a summary that compares single and multiple domain networks.

Advantages of single domain networks are:

- Simpler architecture.
- Centralized control and management.

Disadvantages of single domain networks are:

- No sharing of information or communication between the single domain networks.

- The Master Domain Manager maintains communications with all of the workstations in the IBM Tivoli Workload Scheduler network. Failure of the Master Domain Manager means no communication in the domain network.

Advantages of multiple domain networks:

- Ad hoc schedules and jobs can be submitted from the Master Domain Manager, or from any other Domain Manager or agent that has access to the database files on the Master Domain Manager.

- Reduced network traffic due to localized processing.

- Local resolution of inter-agent dependencies.

- Localized security and administration.

- Flexible topology that maps to your business model.

- On-the-fly switchover to the Backup Domain Manager.

9.1.8 Standard Agents

Standard Agents function much like FTAs, yet lack both local fault tolerance and local job stream and job launch capability. Standard Agents rely on their host Fault Tolerant Agents to resolve dependencies and to order local job stream and job launches.

The lack of fault tolerance has caused many organizations to select FTAs over Standard Agents. But Standard Agents do have some merits in certain cases and the following summarizes some of the situations where Standard Agents might be preferred over FTAs:

- To facilitate global resources: Global resources do not currently exist for IBM Tivoli Workload Scheduler. Therefore, the only way to share resources is through a common manager. Using Standard Agents can help in this situation.

- Low-end machines: If you need to install the IBM Tivoli Workload Scheduler agent on low-end machines with little CPU and memory power, Standard Agents might be the preferred choice, since they require fewer machine resources.

- Cluster environments: For cluster environments, Standard Agents might help because they require a simpler configuration for fallback situations than FTAs.

However, if you do not have these requirements, you should prefer FTAs over Standard Agents because of their greater functionality and inherent network fault tolerance.

9.1.9 High availability

A Backup Master is a critical part of a highly available IBM Tivoli Workload Scheduler environment. If the production Master or a critical Domain Manager fails and cannot be immediately recovered, a Backup Master allows production to continue so that there are no interruptions or delays to the day's processing. See the section "Switching to a Backup Domain Manager" in IBM Tivoli Workload Scheduler Version 8.2, Planning and Installation, SC32-1273.

For outages that do not cross Jnextday, access to the database will be required for ad hoc job submission. If an outage occurs during the Jnextday process, access to the database will be needed for the Jnextday process to complete successfully. There are a number of possible approaches for continuous processing of the workday:

- Copy the IBM Tivoli Workload Scheduler databases to the Backup Domain Manager on a regular basis. The contents of the TWShome/mozart directory should be copied following Jnextday, so that the Backup Master Domain Manager is aware of the current Plan run number, along with the database files contained within the TWShome/network directory (a sketch follows this list).

- Store the IBM Tivoli Workload Scheduler database files on a shared disk as typically found in a clustered environment.

- Remotely mount the IBM Tivoli Workload Scheduler databases from another machine. This is generally not a good practice: if the server physically holding the files fails, access to the files is lost at the very least, and IBM Tivoli Workload Scheduler may hang on the servers with the remote mounts until the failed server is restored.
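A minimal sketch of the first approach is shown below. The host name, user, and paths are illustrative, and rsync is just one possible copy mechanism; it is not part of the product.

#!/bin/sh
# Hypothetical post-Jnextday copy of the scheduling databases to the
# Backup Master Domain Manager. Adjust TWSHOME and BACKUP for your site.
TWSHOME=/usr2/tws82
BACKUP=backup-mdm.example.com

# mozart directory: database files plus the current Plan run number
rsync -a "${TWSHOME}/mozart/" "twsuser@${BACKUP}:${TWSHOME}/mozart/"

# network directory: database files, excluding the NetReq.msg message file
# (per the note later in this section, NetReq.msg must not be copied)
rsync -a --exclude 'NetReq.msg' "${TWSHOME}/network/" "twsuser@${BACKUP}:${TWSHOME}/network/"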

Server hardware manufacturers and third parties offer a variety of high availability features and solutions that can be used to greatly increase the uptime of a critical IBM Tivoli Workload Scheduler node. Some examples are:

- IBM HACMP
- Linux xSeries cluster
- HP ServiceGuard
- Sun Cluster Software
- Microsoft Cluster Services

All of these will work with IBM Tivoli Workload Scheduler, but the installation and configuration can be complex. Consider Tivoli Services for large implementations if expertise is not available in-house, or contact your hardware or software vendor for information about high availability server configurations.

9.1.10 Central repositories for important files

IBM Tivoli Workload Scheduler has several files that are important for usage of IBM Tivoli Workload Scheduler and for the daily production workload. Managing these files across several IBM Tivoli Workload Scheduler workstations can be a cumbersome and very time-consuming task. Using central repositories for these files can save time and make your management more effective.

Note: Do not copy the TWShome/network/NetReq.msg file to the Backup Domain Manager when copying database files from the network directory.


The scripts files

The scripts are very important objects when doing job scheduling on the IBM Tivoli Workload Scheduler Fault Tolerant Agents. It is the scripts that actually perform the work or the job on the agent system, for example, update the payroll database or the customer inventory database.

The job definition for distributed jobs in IBM Tivoli Workload Scheduler contains a pointer (the path or directory) to the script. The script by itself is placed locally on the Fault Tolerant Agent. Since the Fault Tolerant Agents have a local copy of the Plan (Symphony) and the script to run, they can continue running jobs on the system even if the connection to the IBM Tivoli Workload Scheduler Master is broken. This way we have the fault tolerance on the workstations.

Managing scripts on several IBM Tivoli Workload Scheduler Fault Tolerant Agents and making sure that you always have the correct versions on every Fault Tolerant Agent can be a time-consuming task. Furthermore, you need to make sure that the scripts are protected so that they are not updated by the wrong person. Unprotected scripts can cause problems in your production environment if someone has changed something without notifying the responsible planner or change manager.

We suggest placing all scripts used for production workload in one common script repository. The repository can be designed in different ways. One way could be to have a subdirectory for each fault-tolerant workstation (with the same name as the name on the IBM Tivoli Workload Scheduler workstation).

All changes to scripts are done in this production repository. On a daily basis, for example, just before the Jnextday process, the master scripts in the central repository are distributed to the Fault Tolerant Agents. The daily distribution can be handled by a Tivoli Workload Scheduler scheduled job.
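As an illustration only, such a distribution job could be as simple as the following sketch. The repository path, target path, workstation names, and the use of rsync over ssh are assumptions for the example, not part of the product.

#!/bin/sh
# Hypothetical daily distribution of production scripts from a central
# repository to the Fault Tolerant Agents, scheduled just before Jnextday.
REPO=/prod/tws/scripts          # central repository, one subdirectory per workstation
TARGET=/opt/tws/scripts         # directory referenced by the job definitions on each FTA

for cpu in FTA001 FTA002 FTA003
do
  # The subdirectory name matches the IBM Tivoli Workload Scheduler workstation
  # name; here the workstation name is also assumed to resolve as a host name.
  rsync -a --delete "${REPO}/${cpu}/" "twsuser@${cpu}:${TARGET}/"
done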

This approach can be made even more advanced, for example, by using a software distribution application to handle the distribution of the scripts. This way, the software distribution application can help keep track of different versions of the same script. If you encounter a problem with a changed script in a production shift, you can simply ask the software distribution application to redistribute a previous version of the same script and then rerun the job.

Tip: Regardless of whether a central repository with distribution is used or not, it is always a good idea to keep a copy of the scripts on the Master Domain Manager, for use as both reference and backup.


Security files

The IBM Tivoli Workload Scheduler Security file is used to protect access to database and Plan objects. On every IBM Tivoli Workload Scheduler engine (Domain Manager, Fault Tolerant Agent, etc.) you can issue conman commands for the Plan and composer commands for the database. IBM Tivoli Workload Scheduler Security files are used to ensure that the right people have the right access to objects in IBM Tivoli Workload Scheduler.

Security files can be created or modified on every local IBM Tivoli Workload Scheduler workstation and they can be different from ITWS workstation to ITWS workstation.

Unless you have a firm requirement for different Security files (due to company policy, etc.), we suggest that you use one of the following two approaches, neither of which requires maintaining different Security files on the workstations:

- Use the centralized security function

IBM Tivoli Workload Scheduler Version 8.2 introduced a function called centralized security. If used, this function allows the Security files of all the workstations of the IBM Tivoli Workload Scheduler network to be created and modified only on the Master. The IBM Tivoli Workload Scheduler administrator is responsible for the production, maintenance, and distribution of the Security file.

This has the benefit of reducing the risk of someone tampering with the Security file on a workstation, but the administrator still has the job of distributing the Security file to the workstations.

Important: In the current implementation of centralized security, if an administrator distributes a new Security file onto an FTA, this should be followed by a re-link of the FTA (to re-send the Symphony file). Otherwise this FTA will not be able to participate in the IBM Tivoli Workload Scheduler network until the next start of day, since its Symphony file would point to the old Security file.

- Leave the default Security file in place on all the FTAs

This will only allow access to the TWSuser (by default tws) and root. This way there is no chance that any other user can interfere with the scheduling on an agent by logging into it directly. Access for the operators and schedulers will only be through the Master Domain Manager or Backup Master Domain Manager and via the JSC (and the CLI for the schedulers, where required). The Security files on these systems are the key to controlling who can do what. If there is a requirement for operators to connect to a DM or FTA directly, then the Security file can be copied to those machines, but this should be on an exceptional basis.

With this approach, there is no need to distribute the Security file to the workstations (in most cases). The only risk in this approach is a hacker logging on to the workstation with a root (or Administrator) user ID and deleting and recreating the Security file on that workstation.

Parameters file (database)

Another important file is the IBM Tivoli Workload Scheduler parameters file or database. Although it is possible to have different parameter databases on different IBM Tivoli Workload Scheduler workstations, we suggest having one common parameter database.

The parameter database can then be managed and updated centrally. On a daily basis the updated parameter database can be distributed to all the IBM Tivoli Workload Scheduler workstations. The process can be as follows:

1. Update the parameter database daily.

This can be done by a daily job that uses the IBM Tivoli Workload Scheduler parms command to add or update parameters in the parameter database.

2. Create a text copy of the updated parameter database using the IBM Tivoli Workload Scheduler composer create command:

composer create parm.txt from parm

3. Distribute the text copy of the parameter database to all your IBM Tivoli Workload Scheduler workstations.

4. Restore the received text copy of the parameter database on the local IBM Tivoli Workload Scheduler workstation using the IBM Tivoli Workload Scheduler composer replace command:

composer replace parm.txt

These steps can be handled by one job in IBM Tivoli Workload Scheduler. This job could, for example, be scheduled just before or after Jnextday runs.
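As an illustration, the whole cycle could be wrapped in one script run by such a job. The host names, paths, and the use of scp/ssh below are assumptions for the sketch, not part of the product.

#!/bin/sh
# Hypothetical daily parameter-distribution job, run just before or after Jnextday.
TWSHOME=/usr2/tws82
cd "${TWSHOME}" || exit 1

# 1. (The parameters are assumed to have been updated earlier in the day
#    with the parms command, as described in step 1 above.)

# 2. Create a text copy of the parameter database
composer create parm.txt from parm

# 3./4. Distribute the text copy and reload it on each workstation
for host in fta1.example.com fta2.example.com
do
  scp parm.txt "twsuser@${host}:${TWSHOME}/parm.txt"
  ssh "twsuser@${host}" "cd ${TWSHOME} && composer replace parm.txt"
done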

9.2 Deployment

In the following sections you will find some best practices for deploying an IBM Tivoli Workload Scheduler implementation.


9.2.1 Installing a large IBM Tivoli Workload Scheduler environment

Tivoli products, such as the Tivoli Management Agent and Tivoli Software Distribution, can assist in the deployment of large numbers of IBM Tivoli Workload Scheduler agents. In this release, IBM Tivoli Workload Scheduler components can be installed by distributing a software package block (SPB), using either Tivoli Software Distribution Version 4.1 or the Software Distribution component of IBM Tivoli Configuration Manager Version 4.2. An SPB exists for each supported Tier 1 platform. See "Installing the Product Using Software Distribution" in IBM Tivoli Workload Scheduler Version 8.2, Planning and Installation, SC32-1273.

9.2.2 Change control

Good change control is an essential requirement for any IBM Tivoli Workload Scheduler environment. It is advisable to test new workload and Tivoli software updates before deploying into production. Some attempt should be made to construct a dedicated test environment that, to some degree, mirrors the production IBM Tivoli Workload Scheduler environment. Operating system and/or IBM Tivoli Workload Scheduler security should be used to tightly control user access to the production IBM Tivoli Workload Scheduler Master Domain Manager.

9.2.3 Patching on a regular basis

Some organizations require explicit permission and extensive testing prior to implementation. However, not patching your system with the latest patches can result in problematic system behavior, which can require immediate patching, putting you in a panicked and uncomfortable situation.

Patching is one of the most important things you can do for your systems. If you are a part of an organization that requires several authorizations to apply a patch, then consider patching on a quarterly basis.

Download all newly released patches on a quarterly basis, and have a schedule set up in advance that you implement quarterly for testing and paper processing. This way, you can implement three months of patches four times a year instead of out-of-process patching when problems arise.

This can alleviate tensions between parties in the organization who consistently receive emergency requests to patch when proper procedures were not followed.

If you require no testing or paper processing to patch your IBM Tivoli Workload Scheduler systems, you can consider patching your systems as soon as a patch is available. If it is an engine patch, you can automate the task by remotely mounting the install_patch and related binaries and running the command on each IBM Tivoli Workload Scheduler installation.

Furthermore, if a patch has an update for the IBM Tivoli Workload Scheduler Connector or the Job Scheduling Services, you can implement these through the command line, and you can automate this process as well.

The patches for IBM Tivoli Workload Scheduler can be downloaded via anonymous FTP from:

ftp://ftp.software.ibm.com/software/tivoli_support/patches/

or HTTP from:

http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches/

9.2.4 IP addresses and name resolution

When defining a workstation under the node field, you can enter either an IP address or a host name. Always use host names where possible and always be sure that all machines in a Tivoli Workload Scheduler network can correctly resolve themselves plus any other workstation they may have cause to link to. For example, a leaf node Fault Tolerant Agent should be able to resolve itself, the Master and Backup Domain Manager, plus any intermediate Domain Manager or Backup Domain Manager. Also, the Master and Backup Domain Manager must be able to make a direct IP connection to each of the FTAs in the entire network. This is so that standard list files can be retrieved and switch manager commands can be issued and the information distributed. Unless the Behind Firewall function is employed, both of these functions use a direct connection.

The use of DHCP allocated IP addresses is not recommended, but where necessary use host name aliases that can be redirected to the currently allocated IP address, since changes to the node name field only take effect following the next occurrence of Jnextday.

Important: You need to consider applying at least available IBM Tivoli Workload Scheduler fix packs in your environment. Fix packs are cumulative fixes that go through extensive testing by Tivoli.


9.2.5 Message file sizes

IBM Tivoli Workload Scheduler uses files with the .msg extension as part of a mechanism for local and remote interprocess communication. These files are located in the TWShome directory (the TWSuser's home directory) and in the TWShome/pobox directory. The various IBM Tivoli Workload Scheduler processes communicate vital production data with each other via this mechanism. Because this information is vital, it is critical that these files are not corrupted or deleted unless absolutely necessary. If in doubt, contact Tivoli Customer Support before deleting. These files have a default maximum size of 10,000,000 bytes (approximately 9.5 MB).

When this limit is exceeded, processing might slow down or IBM Tivoli Workload Scheduler might shut itself down on the local node.

The size of these files will grow if any of the following conditions exist:

- An agent is down and no operator intervention is performed.

- Network connectivity has been lost without operator intervention.

- Performance problems on the Master cause it to be unable to keep up with the pace of messages from the agents.

In some situations, it is necessary to increase the size of these files.

In most cases, however, the IBM Tivoli Workload Scheduler administrator should work to eliminate the cause of the message backlog rather than implementing the workaround of increasing the message file size.

Tip: The use of host names in node name fields relies on the presence and maintenance of DNS or fully populated and maintained hosts files. This should be seen as an advantage: when using host names instead of IP addresses, IBM Tivoli Workload Scheduler will attempt to resolve the IP addresses, and if these are not correctly configured within the DNS or hosts file, linking processes will fail or slow down significantly. It is better to have the linking process fail and get the problem identified and fixed than to have it hidden.

Important: If a message file limit is reached, IBM Tivoli Workload Scheduler will shut down if there is no reader process, for example when a pobox/agent.msg file limit is reached on an unlinked agent. However, if one of the message files in the TWShome directory reaches its limit and the reading process for that message file is still running, IBM Tivoli Workload Scheduler will continue to run, but will run very slowly.

The size of the .msg files can be increased using the following command:

evtsize <filename.msg> nnnnnnn

Where nnnnnnn is the new size of the message file.

For example, to increase the size of the mailman message file to 20 MB, use the following command:

$ evtsize Mailbox.msg 20000000

This change will remain until the file is deleted and re-created.

You can use the following command to query the size of a message file:

evtsize -show <filename.msg>
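For example, a quick way to review all of the local message files at once is a small loop such as this sketch (the TWShome path is illustrative and should be adjusted for your installation):

cd /usr2/tws82
for f in *.msg pobox/*.msg
do
  evtsize -show "$f"
done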

To change the default creation size for all .msg files to 15 MB, add the following lines to TWShome/StartUp and TWShome/.profile:

EVSIZE=15000000
export EVSIZE

Before running any of these commands, make sure that you have a verified and available backup of at least the IBM Tivoli Workload Scheduler file system(s).

Tip on reducing the size of log files

IBM Tivoli Workload Scheduler 8.2 uses a common toolkit called CCLOG to log messages. This toolkit is designed to be used with all Tivoli programs, not only IBM Tivoli Workload Scheduler. Applications insert in their code the needed CCLOG statements to use the CCLOG facilities. The output format of the logs can be controlled using an external configuration file. Also, the logging level can be set using the configuration file. This file is called TWSCCLog.properties and it is located in the TWShome directory.

You can change the attributes of the TWSCCLog.properties file to shrink the size of the log files. Example 9-2 on page 281 shows the TWSCCLog.properties file. For instance, by commenting out the tws.loggers.organization, tws.loggers.product, and tws.loggers.component lines, as shown in Example 9-2, you can suppress most of the extra information in the headers.

Tip: If you anticipate that a workstation will be down for some time due to a network problem or maintenance, check the Ignore box in the workstation definition of this workstation. This will prevent the workstation from being included in the next production cycle, thereby preventing the increase of message file sizes due to this inactive workstation. When the problem is resolved, you can uncheck the Ignore box in the workstation definition to include the workstation in the production cycle again.


Example 9-2 TWSCCLog.properties

tws.loggers.level=INFO

twsHnd.logFile.className=ccg_multiproc_filehandler
#twsHnd.logFile.MPFileSemKey=31111
twsHnd.logFile.formatterName=formatters.basicFmt

#----------------------------------------------
# basic format properties
#----------------------------------------------
formatters.basicFmt.className=ccg_basicformatter
formatters.basicFmt.dateTimeFormat=%H:%M:%S %d.%m.%Y
formatters.basicFmt.separator=|
#tws.loggers.organization=
#tws.loggers.product=
#tws.loggers.component=

Example 9-3 shows the TWSMERGE log before commenting out those lines (tws.loggers.organization=, tws.loggers.product=, and tws.loggers.component=) and Example 9-4 shows the same log file with those lines commented out.

Example 9-3 TWSMERGE file before the changes

05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:BATCHMAN Startup Stats:
05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:BATCHMAN : LIVES
05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:SCHEDULE DATE : 06/06/03
05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:START DATE :
05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:START TIME : 0
05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:RUN NUMBER : 5
05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:THIS CPU : DEV821MST
05:59:37 06.06.2003|IBM|TWS|8.2|twsl302|BATCHMAN:MASTER CPU : DEV821MST

Example 9-4 TWSMERGE file after the changes

05:59:37 06.06.2003|twsl302|BATCHMAN:BATCHMAN Startup Stats:
05:59:37 06.06.2003|twsl302|BATCHMAN:BATCHMAN : LIVES
05:59:37 06.06.2003|twsl302|BATCHMAN:SCHEDULE DATE : 06/06/03

Tip: Output in IBM Tivoli Workload Scheduler message logs can be formatted in XML format by changing the CCLOG attributes, but it is not recommended for production use, because it would cause an increase in the size of log files.


9.2.6 Implementing the Jnextday process

Jnextday is a script whose primary purpose is to create a new Symphony file. In this section we cover the details of Jnextday processing. This information is important if you need to customize the Jnextday script or do advanced troubleshooting when there are problems. Note that details of all the commands used in the Jnextday script can be found in IBM Tivoli Workload Scheduler Version 8.2 Reference Guide, SC32-1274.

Example 9-5 shows the Jnextday script.

Example 9-5 Jnextday script

conman link @!@
schedulr -autodate
compiler
reptr -pre Symnew
conman stop @!@; wait; noask
stageman -log M$DATE
wmaeutil ALL -stop
conman start
reptr -post $HOME/schedlog/M$DATE
rep8 -F $FDATE -T $TDATE -B $FTIME -E $TTIME
logman $HOME/schedlog/M$DATE

The explanations of the commands are as follows:

- conman link @!@: Links all workstations so that the Symphony file on the Master is updated with the status of all jobs and job streams. This is done to receive as much information as possible from the agents before the new plan is built.

- schedulr -autodate: Selects job streams for the current day.

- compiler: Takes the prodsked file, fully expands all database objects, and creates a file called Symnew.

- reptr -pre Symnew: Generates the preproduction report.

- conman stop @!@;wait;noask: All IBM Tivoli Workload Scheduler processes must be stopped before running stageman.

- stageman -log M$DATE: Takes job streams from the Symnew file and adds carry forward job streams to create today's Symphony file.

- wmaeutil ALL -stop: Stops the Connector process.

- conman start: Starts all workstations in all domains.

- reptr -post $HOME/schedlog/M$DATE: Creates the report for yesterday's Symphony file.

- rep8 -F $FDATE -T $TDATE -B $FTIME -E $TTIME: Creates the histogram for yesterday's production day.

- logman $HOME/schedlog/M$DATE: Logs the statistics of all jobs run in the last production day. All statistics are written to the Jobs database.

Figure 9-7 on page 283 shows the sequence of operations during the Jnextday process.

Figure 9-7 Jnextday process in detail

[Figure 9-7 shows the inputs to the Jnextday programs (the mastsked job streams, calendars, the cpudata file with workstations, domains, and workstation classes, the jobs, prompts, and resources databases, the userdata file with NT users, and the system date) feeding three steps: (1) schedulr produces the prodsked production schedule file, (2) compiler expands it into the Symnew interim plan file, and (3) stageman merges the incomplete carry-forward job streams from the old Symphony plan file into the new Symphony plan file and creates Sinfonia, the copy of the plan for the agents.]

Three programs (Schedulr, Compiler and Stageman) that are essential in Jnextday processing are further explained below:

- Schedulr selects schedules for a specific date from the Master Schedule file (mastsked), and copies them to a new Production Schedule file (prodsked). The schedulr program by default prompts for a date and a list of schedules to add, and places the resulting schedules in a file named prodsked in the current working directory. The schedulr program can be used to select schedules for today's date or a specific date, and can create a file other than prodsked.

- Compiler converts the Production Schedule file (prodsked) into an interim Production Control file (usually named Symnew). The compiler program by default uses a file named prodsked in the current working directory, and creates a file named Symnew, also in the current working directory, using the date specified in the prodsked file.

- Stageman carries forward incomplete schedules, logs the old Production Control file, and installs the new Production Control file. In a network, a copy of the Production Control file, called Sinfonia, is also created for FTAs.

Choosing the start time of the production day

IBM Tivoli Workload Scheduler's processing day begins at the time defined by the Global option start time, which is set by default to 6:00 AM. To automate the daily turnover, a schedule named FINAL is supplied by IBM. This job stream is selected for execution every day. It runs a job named Jnextday that performs preproduction tasks for the upcoming day, and post-production tasks for the day just ended.

Jnextday should run at a time of day when there is the least scheduling activity, such as early in the morning (like the default start time of 6:00 AM) or late in the afternoon.

If you must run Jnextday at midnight, you must set the final schedule to run a few minutes past midnight. Remember, when changing your final schedule run time, you must change your start of day (start) in your TWShome/mozart/globalopts file to begin one minute later than the Jnextday run time.

Customizing the Jnextday script

You can modify the Jnextday script to meet your needs. When creating your own job stream, model it after the one supplied by IBM. Consider the following:

- If you choose to change the way stageman generates log file names, remember that reptr and logman must use the same names.

- If you would like to print the preproduction reports in advance of a new day, you can split Jnextday into two jobs, as shown in the sketch after this list. The first job will execute schedulr, compiler, and reptr. The second job will stop IBM Tivoli Workload Scheduler, execute stageman, start IBM Tivoli Workload Scheduler, and execute reptr and logman. The first job can then be scheduled to run at any time prior to the end of day, while the second job is scheduled to run just prior to the end of day.
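A rough sketch of such a split, based on the command sequence in Example 9-5, is shown below. Treat it as an illustration only (see also the Important note that follows about customizing Jnextday):

# Job 1: preproduction, can run at any time before the end of day
schedulr -autodate
compiler
reptr -pre Symnew

# Job 2: turnover, scheduled just prior to the end of day
# (the conman link from the original script is kept here so that the latest
#  agent status reaches the Master before stageman runs)
conman link @!@
conman stop @!@; wait; noask
stageman -log M$DATE
wmaeutil ALL -stop
conman start
reptr -post $HOME/schedlog/M$DATE
rep8 -F $FDATE -T $TDATE -B $FTIME -E $TTIME
logman $HOME/schedlog/M$DATE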

Refer to IBM Tivoli Workload Scheduler Version 8.2 Reference Guide, SC32-1274 for more details on the available arguments for the commands used in Jnextday.

Important: If you want to customize the Jnextday script, be aware that customization of the Jnextday script is not officially supported. As a best practice, if you want to use a user-defined job instead, leave Jnextday alone, remove the FINAL schedule, and then build and schedule the user-defined replacement job with a different name.

Running Jnextday in the middle of a production day

If you run Jnextday in the middle of a production day, all the jobs that have already executed will be executed again. This can cause serious problems. Assuming your Symphony file is not corrupted, any schedules that have already completed will not be carried forward into the new Symphony file. You will have two Symphony files with the same date, so you will need to look at the first Symphony file to see the jobs that were not carried forward. This first Symphony file will be located in the TWShome/schedlog directory, and you can view it using listsym, setsym, and the conman show commands.

Important: If you need to run Jnextday in the middle of a production day, be sure to contact IBM Tivoli Workload Scheduler support prior to performing this action.

9.2.7 Ad hoc job/schedule submissions

The amount of space pre-allocated for ad hoc jobs and schedules is tunable. You might get an error message such as "Too many jobs are scheduled for BATCHMAN to handle" when too many jobs are submitted ad hoc, for example with the conman submit commands.

The space allocated is controlled in the TWShome/network/NetConf file in the line:

2002 son bin/mailman -parm value

By placing a -1 in the field marked value, IBM Tivoli Workload Scheduler is instructed to allocate the maximum available space for records (not just job records), which is 65,535 records. Smaller allocations can be used, but be careful to leave IBM Tivoli Workload Scheduler a wide margin for safety. The number of records used by each job varies based on the length of the job definition, so it can be difficult to predict the size of this requirement. If this failure occurs, scheduling will be halted on the local node.
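For example, the modified line in the NetConf file would read as follows (the rest of the file is unchanged):

2002 son bin/mailman -parm -1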

See 9.6.6, “Netman services and their functions” on page 325 for more information on options in the NetConf file.

9.2.8 Mailman and writer

IBM Tivoli Workload Scheduler can experience performance problems when too many writer processes are running concurrently on the Master Domain Manager. These writer processes compete against the master mailman process for file locks on the Mailbox.msg file. This reduces the mailman process' ability to read records from the file and thereby limits the number of incoming records that the Master Domain Manager can process.

9.2.9 Monitoring

Automated monitoring is an essential part of a successful IBM Tivoli Workload Scheduler implementation. IBM Tivoli Monitoring, IBM Tivoli Enterprise Console, and IBM Tivoli Business Systems Manager are good examples of products that can be linked to IBM Tivoli Workload Scheduler to monitor status and events from IBM Tivoli Workload Scheduler. IBM Tivoli Monitoring can be used to monitor production system usage. Tivoli Enterprise Console's logfile adapters can be used to transmit selected IBM Tivoli Workload Scheduler events to a centralized IT console. IBM Tivoli Data Warehouse can also be used in conjunction with IBM Tivoli Workload Scheduler to provide an enterprise-wide central reporting repository for the whole IT infrastructure, with a Web-based infrastructure for historical reports.

Some resources that are candidates for monitoring are:

- IBM Tivoli Workload Scheduler message file sizes
- Swap space
- Disk space (or file system space for UNIX machines)
- IBM Tivoli Workload Scheduler process status
- IBM Tivoli Workload Scheduler events
- Network status
- System load average
- Free inodes


File system size monitoring example

It is easier to deal with file system problems before they happen. If your file system fills up, IBM Tivoli Workload Scheduler will no longer function and your job processing will stop. To avoid problems, monitor the file systems containing your TWShome directory and /tmp. We cannot give you an exact percentage at which to be warned. This depends on many variables that change from installation to installation (or company to company).

Monitoring or testing for the percentage of the file system can be done by, for example, IBM Tivoli Monitoring and IBM Tivoli Enterprise Console.

Example 9-6 is an example of a shell script that will test for the percentage of the IBM Tivoli Workload Scheduler file system filled, and will report back if it is over 80 percent.

Example 9-6 Shell script that monitors space in the /dev/lv01 ITWS file system

#!/bin/sh
# .1 2003/09/03
#
# monitor free space in the TWS file system

# Warn if used space exceeds this percentage
percent_warn="80"

# TWS home directory
TWShome="/usr2/tws82"

# Use POSIX compliant df
case `uname -s` in
 AIX)
   df="df"
   ;;
 HP-UX)
   df="df"
   ;;
 Linux)
   df="df"
   ;;
 SunOS)
   df="/usr/xpg4/bin/df"
   ;;
 *)
   df="df"
   ;;
esac

# Get percentage of space used on the file system containing TWShome
percent_used=`${df} -P ${TWShome} |\
  grep -v Filesystem |\
  awk '{print $5}' |\
  sed 's/%$//g'`

# Check percentage used
space_ok=`expr ${percent_used} \> ${percent_warn}`

# If result of previous calculation is equal to 1, space used exceeded
# specified percentage, else okay.
if [ "${space_ok}" -eq 1 ]
then
  cat <<EOF
This file system is over ${percent_warn}% full. You need to remove
schedule logs and audit logs from the sub directories
in the file system, or extend the file system.
EOF
else
  echo "This file system is less than ${percent_warn}% full."
fi

# All done
exit 0

Note: Using the solutions described in the redbook Integrating IBM Tivoli Workload Scheduler and Content Manager OnDemand to Provide Centralized Job Log Processing, SG24-6629, you can archive your stdlist files in a central location. This allows you to delete the original stdlist files on the local workstations, reducing the disk space required by IBM Tivoli Workload Scheduler.

Tip: If you have, for example, a 2 GB file system, you might want a warning at 80 percent, but if you have a smaller file system you will need a warning when a lower percentage fills up.

9.2.10 Security

A good understanding of IBM Tivoli Workload Scheduler security implementation is important when you are deploying the product. This is an area that sometimes creates confusion, since both Tivoli Framework security and IBM Tivoli Workload Scheduler native security (the Security file) need to be customized for a user to manage IBM Tivoli Workload Scheduler objects through JSC. The following explains the customization steps in detail, but keep in mind that these are for JSC users:

- Step 1:

The user ID that is entered in the JSC User Name field must match one of the Current Login Names of a Tivoli Administrator (Figure 9-8 on page 289). Also the user ID must be defined on the Tivoli Framework machine (usually TMR Server, but can also be a Managed Node) and the password must be valid.


Figure 9-8 Login name match

Tips:

- The easiest way to understand whether the user ID/password pair is correct is to do a telnet with the user name and password to the Tivoli Framework machine. If the telnet is successful, that means that the user ID/password pair is correct.

- If the Connector instance was created on a TMR Server (which is usually the case), you need to enter the TMR Server host name (or IP address) in the Server Name field. If the Connector instance was created on a Managed Node, you can either enter the Managed Node's host name or the host name of the TMR Server that this Managed Node reports to. The only thing you need to assure is that the user name that you enter in the User Name field must be a valid user on the machine that you enter in the Server Name field.

[Figure 9-8 callouts: The JSC login name (tws in this example) must match a valid Login Name of one of the Tivoli Administrators, and the user ID and corresponding password must be valid for the node named in the Server Name field. Server Name is the host name or IP address of the Tivoli Framework machine (usually a TMR Server, but it can also be a Managed Node) that the user is logging in to; the user specified in the User Name field must be defined on this machine.]


- Step 2:

The Tivoli Administrator name (not the Login Name) must match the corresponding LOGON entry in the IBM Tivoli Workload Scheduler Security file. Note that for a JSC user, the CPU entry in the IBM Tivoli Workload Scheduler Security file must be either @ or $framework. This is shown in Figure 9-9.

Figure 9-9 Administrator name match

- Step 3:

Finally, the Tivoli Administrator must have at least the user TMR role in order to view and modify the Plan and database objects. If this user also needs to create a new Connector instance or change an existing one, you need to give him or her the admin, senior, or super TMR roles as well. This is shown in Figure 9-10 on page 291.

[Figure 9-9 callouts: The Tivoli Administrator name (TWS_tws), not to be confused with the Login Name, must match the LOGON entry in the ITWS Security file, for example:

USER MAESTRO
 CPU=@ + LOGON=TWS_tws
BEGIN
 ...
END

The CPU entry must be either @ or $framework for the JSC user.]


Figure 9-10 TMR roles

9.2.11 Using the LIST authority

Up until IBM Tivoli Workload Scheduler 8.2, you were able to prevent a user from managing a Plan object, but not able to prevent him or her from listing the objects (in other words, from knowing that the objects exist). With IBM Tivoli Workload Scheduler 8.2, you can now prevent a user from listing the objects that he or she is not authorized to manage. This works for the following actions or commands:

- JSC Plan queries for jobs, job streams, resources, workstations, and prompts
- The conman sj, ss, sc, sr, and sp commands

A JSC or conman user that runs any of the above commands must have the LIST authority for an object in the Security file to see it in any objects lists resulting from the above commands. If a user does not have LIST access to an object in the Security file, this object will not be shown in any resulting list from the above commands.


Important: It is important to understand that IBM Tivoli Workload Scheduler uses the Tivoli Framework for user authentication only when a request is made through the JSC, not through the command line. If a request is received through the command line, the user is authenticated from the IBM Tivoli Workload Scheduler Security file. So the above discussion is only true for JSC users.


In IBM Tivoli Workload Scheduler 8.2, the default behavior is not to control LIST access authority for JSC Plan queries and conman show commands. This is done to provide easier migration from earlier versions. The IBM Tivoli Workload Scheduler administrator that wants this security restriction can enable this check by setting the entry enable list security check= yes in the globalopts file.

In IBM Tivoli Workload Scheduler 8.2, the LIST authority is automatically granted in the base Security file that is installed with the product. When you migrate from an earlier version of IBM Tivoli Workload Scheduler (where the LIST authority was not supported in the Security file) and you want to use this feature by setting enable list security check= yes in the globalopts file, you have to manually add LIST access to each Security file for all the users who should have it.

Recommendation: We recommend that you use this feature by setting enable list security check= yes in the globalopts file and giving appropriate LIST authority to your users on an as-needed basis.
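For reference, turning the check on is a single entry in the globalopts file on the Master:

enable list security check = yes

As an illustrative sketch only, a Security file stanza granting display and LIST access to a hypothetical OPERATORS user definition might look like the following; the object types, access keywords, and logon name are examples and depend on your own Security file:

USER OPERATORS
 CPU=@+LOGON=operator
BEGIN
 JOB       CPU=@   ACCESS=DISPLAY,LIST
 SCHEDULE  CPU=@   ACCESS=DISPLAY,LIST
END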

9.2.12 Interconnected TMRs and IBM Tivoli Workload Scheduler

A good way to remotely manage an IBM Tivoli Workload Scheduler environment is to interconnect Tivoli Management Regions (TMRs). A two-way communication link between two TMRs makes it easier for a user to manage the IBM Tivoli Workload Scheduler environment and allows the user to do the following:

- View remote Connector instances
- Manage remote Masters

If you cannot see all instances in the JSC, select Connect from the Tivoli desktop to exchange the resources between Tivoli Management Regions.

The resources that need to be exchanged between TMRs are:

- MaestroEngine
- MaestroPlan
- MaestroDatabase
- SchedulerEngine
- SchedulerDatabase
- SchedulerPlan

Note: IBM Tivoli Workload Scheduler will not work properly with one-way TMR connections.

Tip: You can also manually update the resources from the command line using the wupdate -f -r all Tivoli command.

To verify the exchange of resources, issue the following command on one of the TMRs:

wlookup -ar MaestroEngine

The output of this command should show the resources of both TMRs for a successful interconnection.

See the Tivoli Management Framework Reference Manual Version 4.1, SC32-0806 for more details on TMR interconnections.

9.3 Tuning localopts

The local options file, localopts, defines configuration options unique to each IBM Tivoli Workload Scheduler workstation. The following section describes some of the localopts options that can be customized to optimize performance. Example 9-7 shows a typical localopts file. Please refer to this file when going through the various configuration options that we describe in this section.

Note: The SchedulerEngine Framework resource enables the interconnected scheduling engines to present themselves in the Job Scheduling Console. The MaestroEngine Framework resource enables the wmaeutil command to manage running instances of Connectors.

Tip: Another case to be considered for an interconnected TMR environment is related to security. In an interconnected TMR environment with two or more different IBM Tivoli Workload Scheduler networks, you might want to give authorization to an administrator to manage the scheduling objects related to the local Connector instances in one TMR, but deny access to the scheduling objects in the other TMR. To do this, configure the IBM Tivoli Workload Scheduler Security file so that it does not give any permissions to the administrator for the scheduling objects related to the remote Connector instances in the other TMR. In this way, this administrator will be able to see the remote Connector instance name in the Engine View pane of the JSC (there is no way to prevent this), but he or she will not be able to see the scheduling or Plan objects that belong to this Connector.

Note: Tuning localopts is a specialized job, so if you are not very familiar with what you are doing, it might be best to get help from Tivoli Services.


Example 9-7 Typical localopts file

#
# TWS localopts file defines attributes of this Workstation.
#
#----------------------------------------------------------------------------
# Attributes of this Workstation:
#
thiscpu =MASTER
merge stdlists =yes
stdlist width =80
syslog local =-1
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for TWS batchman process:
#
bm check file =120
bm check status =300
bm look =15
bm read =10
bm stats =off
bm verbose =off
bm check until =300
bm check deadline=300
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for TWS jobman process:
#
jm job table size =1024
jm look =300
jm nice =0
jm no root =no
jm read =10
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for TWS mailman process:
#
mm response =600
mm retrylink =600
mm sound off =no
mm unlink =960
mm cache mailbox =no
mm cache size =32
mm resolve master =yes
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for TWS netman process:
#
nm mortal =no
nm port =31111
nm read =10
nm retry =800
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for TWS writer process:
#
wr read =600
wr unlink =120
wr enable compression=no
#
#----------------------------------------------------------------------------
# Optional attributes of this Workstation for remote database files
#
# mozart directory = /usr/local/tws/mozart
# parameters directory = /usr/local/tws
# unison network directory = /usr/local/tws/network
#
#----------------------------------------------------------------------------
# Custom format attributes
#
date format =1
# The possible values are 0-ymd, 1-mdy, 2-dmy, 3-NLS.
composer prompt =-
conman prompt =%
switch sym prompt=<n>%
#
#----------------------------------------------------------------------------
# Attributes for customization of I/O on mailbox files
#
sync level =high
#
#----------------------------------------------------------------------------
# Network Attributes
#
tcp timeout =600
#
#----------------------------------------------------------------------------
# SSL Attributes
#
nm SSL port =0
SSL key =/usr/local/tws/ssl/TWS.key
SSL certificate =/usr/local/tws/ssl/TWS.crt
SSL key pwd =/usr/local/tws/ssl/TWS.sth
SSL CA certificate =/usr/local/tws/ssl/TWSTrustedCA.crt
SSL certificate chain =/usr/local/tws/ssl/TWSCertificateChain.crt
SSL random seed =/usr/local/tws/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =caonly


SSL auth string =tws

9.3.1 File system synchronization level

The sync level attribute specifies the frequency at which IBM Tivoli Workload Scheduler synchronizes messages held on disk with those in memory. There are three possible settings:

- Low: Lets the operating system handle the speed of write access. This option speeds up all processes that use mailbox files. Disk usage is notably reduced. If the file system is reliable, data integrity should be assured anyway.

- Medium: Makes an update to the disk after a transaction has completed. This setting can be a good trade-off between acceptable performance and high security against loss of data. Writes are transaction-based, and data written is always consistent.

- High: Makes an update every time data is entered. This is the default setting.

Important considerations for the sync level usage:

- For most UNIX systems (especially new UNIX systems with reliable disk subsystems), a setting of low or medium is recommended.

- We also recommend that you set this to low for end-to-end scheduling, since host disk subsystems are considered to be highly reliable systems.

- This option is not applicable on Windows systems.

- Regardless of the sync level value that you set in the localopts file, IBM Tivoli Workload Scheduler makes an update every time data is entered for messages that are considered essential; in other words, it uses sync level=high for the essential messages. Essential messages are messages that are considered of utmost importance by IBM Tivoli Workload Scheduler.
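For example, on a Master running on a reliable UNIX disk subsystem, the corresponding localopts entry would simply be changed as follows (illustrative fragment):

sync level =low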

9.3.2 Mailman cache

IBM Tivoli Workload Scheduler is able to read groups of messages from the mailbox and put them into a memory cache. Access to disk through the cache is much faster than accessing the disk directly. The advantage is even more relevant when you consider that the traditional mailman needs at least two disk accesses for every mailbox message.

A special mechanism ensures that messages considered essential are not put into the cache but are handled immediately. This avoids loss of vital information in case of a mailman failure. The following settings in the localopts file regulate the behavior of the mailman cache:

- mm cache mailbox: The default is no. Specify yes to enable mailman to use a reading cache for incoming messages.

- mm cache size: Specify this option only if you use the mm cache mailbox option. The default is 32 bytes, which should be a reasonable value for most small and medium-sized IBM Tivoli Workload Scheduler installations. The maximum value is 512; higher values are ignored.

Tip: If necessary, you can experiment and increase this setting gradually to gain performance. You can use larger values than 32 bytes for large networks. But in small networks, be careful not to set this value unnecessarily large, since this would reduce the available memory that could be allocated to other applications or other IBM Tivoli Workload Scheduler processes.
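An illustrative localopts fragment for a large network could look like the following; the values are examples only and should be tuned gradually as described in the tip above:

mm cache mailbox =yes
mm cache size =512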

9.3.3 Sinfonia file compression

Starting with IBM Tivoli Workload Scheduler 8.1, Domain Managers may distribute Sinfonia files to their FTAs in compressed form. Each Sinfonia record is compressed by the Domain Manager's mailman, sent, and then decompressed by the FTA's writer. The compressed Sinfonia record is about seven times smaller. This can be particularly useful when the Symphony file is huge and the network connection between two nodes is slow or not reliable (WAN). If there are FTAs in the network that have pre-8.1 versions of IBM Tivoli Workload Scheduler, IBM Tivoli Workload Scheduler Domain Managers can send Sinfonia files to these workstations in uncompressed form.

The following setting in localopts is used to set the compression in IBM Tivoli Workload Scheduler:

- wr enable compression=yes: This means that Sinfonia will be compressed. The default is no.

Tip: Due to the overhead of compression and decompression, we recommend that you use compression if a Sinfonia file is 4 MB or larger.

9.3.4 Customizing timeouts

False timeouts can occur in large IBM Tivoli Workload Scheduler networks when mailman disconnects from a remote writer process because no response has been received from the remote node over a time interval. The agent may have responded, but the message has become caught in the traffic in the .msg file and does not arrive before the timeout expires. In this case, the TWSMERGE log would contain messages such as the one in Example 9-8.

Example 9-8 Timeout message

No incoming from <cpu> - disconnecting.

When mailman wants to link with another workstation it connects to netman and requests that writer be started. Netman starts the writer process and hands the socket connection from mailman to it. Mailman and writer do some handshaking and then writer waits for commands from mailman. It does this by issuing a read on the socket.

The read will wait until either mailman sends some data on the socket or a timeout happens. If writer has not heard from mailman in two minutes, it unlinks the connection. Mailman polls writer every minute to see if the connection is up and to let writer know that everything is fine. If the send aborts (due to a network problem or writer is down) mailman unlinks from the workstation.

There are some entries in localopts that can be used to tune the above algorithm. The configurable timeout parameters related with the writer are:

� wr unlink: Controls how long writer waits to hear from mailman before unlinking.

� wr read: Controls the timeout for the writer process.

Writer issues a read and times out in wr read seconds. It then checks if at least wr unlink seconds have passed since it has heard from mailman.

If not, it goes back to the read on the socket else it unlinks. For wr read to be useful in the above algorithm, it needs to be less than wr unlink.

Also, the following are the configurable timeout parameters related to mailman:

� mm retrylink: Controls how often mailman tries to relink to a workstation.

� mm unlink: Controls the maximum number of seconds mailman will wait before unlinking from a workstation that is not responding. The default is 960 seconds.

� mm response: Controls the maximum number of seconds mailman will wait for a response. The response time should not be less than 90 seconds.

Note: We recommend that you set the wr read to be at most half of wr unlink. This gives writer the chance to read from the socket twice before unlinking.


The wait time (mm unlink) should not be less than the response time specified for the nm response. If you receive a lot of timeouts in your network, increase it in small increments until the timeouts cease. This could also be a sign of performance problems on the Domain Manager or Master Domain Manager. If the Domain Manager or Master Domain Manager is unable to handle the pace of incoming records from the agents, timeouts can occur on Fault Tolerant Agent to Domain Manager or Master Domain Manager communications. A hardware upgrade or system reconfiguration might be considered.
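As an illustration only, these timeout parameters are set in the localopts file as shown below. The numbers are examples that respect the relationships described above (wr read at most half of wr unlink, mm response at least 90 seconds); they must be tuned for each network:

wr read = 600
wr unlink = 1200
mm response = 600
mm retrylink = 600
mm unlink = 960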

9.3.5 Parameters that affect the file dependency resolution

There are two important parameters in localopts that affect the file dependency resolution:

� bm check file: Controls the minimum number of seconds batchman will wait before re-checking for the existence of a file that is used as a dependency. The default is two minutes.

� bm check status: Controls the number of seconds batchman will wait between checking the status of an internetwork dependency (including the file dependency). The default is five minutes.

If you use a lot of file dependencies in your job streams, decreasing the values of these parameters might help speed up file dependency resolution. But you should be careful, because it is a trade-off between speed and CPU utilization. Unlike the other dependency checks, file dependency checking is an operation external to IBM Tivoli Workload Scheduler: IBM Tivoli Workload Scheduler relies on the operating system to do the checking. For this reason, it is the most expensive operation in terms of resource usage. This is also evident from the fact that IBM Tivoli Workload Scheduler attempts to check the existence of a file only after all other dependencies (for example, predecessor, time, or prompt dependencies) have been satisfied.

Finally to a lesser extent, the following parameters also affect the speed of file dependency resolution:

� bm look: Controls the minimum number of seconds batchman will wait before scanning and updating its production control file. The default is 30 seconds.

� jm look: Controls the minimum number of seconds jobman will wait before looking for completed jobs and performing general job management tasks. The default is five minutes.


� mm read: Controls the rate, in seconds, at which mailman checks its mailbox for messages. The default is 15 seconds. Specifying a lower value will cause IBM Tivoli Workload Scheduler to run faster but use more processor time.

If you have the available CPU cycles, you may want to compromise by lowering these values.
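Expressed as localopts entries, the defaults described in this section are equivalent to the following; lowering any of these values trades extra CPU time for faster file dependency resolution:

bm check file = 120
bm check status = 300
bm look = 30
jm look = 300
mm read = 15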

9.3.6 Parameter that affects the termination deadline

With IBM Tivoli Workload Scheduler, the user can define a deadline time for a job (or a job stream), that is, the time by which the job is expected to be completed. If the job has not terminated when the deadline time expires, IBM Tivoli Workload Scheduler notifies the user about the missed deadline. A new local option, bm check deadline, has been introduced to allow the user to specify how often the check of the deadline time should be performed.

The bm check deadline interval has a default value of 0. A user who wants IBM Tivoli Workload Scheduler to check the deadline times that have been defined must add the bm check deadline option to the localopts file with a value greater than 0. If bm check deadline is 0, no check on deadline times is performed, even if deadline times have been defined in the database. The reason for this default value (0) is to avoid any performance impact for customers who are not interested in the new function.
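For example, to have batchman check the defined deadline times every five minutes, you could add the following line to the localopts file (the 300-second interval is purely illustrative):

bm check deadline = 300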

As a best practice, we recommend that you enable the check of the deadline times only on Domain Managers rather than on each FTA of the topology. Since Domain Managers have the status of all jobs running on their subordinate FTAs, for performance reasons it is better to track this at the domain level.

Note: Jobs defined on IBM Tivoli Workload Scheduler for z/OS in an end-to-end environment have the deadline time specified by default, unlike the distributed environment, where it is optional.

9.4 Scheduling best practices

The following are some scheduling best practices:

� Job streams should only contain jobs that are related by predecessor dependencies. This will help if a job stream has to be re-run.

� Where dependencies between job streams exist, always place the dependency from the last job in the first job stream to the second job stream. Do not place the dependencies on the first job within the job stream. This will avoid unnecessary checks by the batchman process. This may require the introduction of a dummy job into a job stream if the dependency is on a number of jobs in the first job stream (a sketch of this approach appears after this list). As a general rule, with the exception of resources (see 9.4.2, “Resources” on page 302), if a dependency relates to the first job in a job stream, then the dependency should be placed on the job stream.

� Start time and deadline times should be kept to a minimum. If possible, use predecessor dependencies instead.

� Where start times are used to control the start of a batch run in conjunction with simultaneously starting job streams, prompts should be considered to assist control.

� In most of the cases, use resources at the job level as opposed to the jobstream (or schedule) level. See 9.4.2, “Resources” on page 302 for more details on this.

� Avoid using cross-workstation resources.

� When using file dependencies, a start time should be used to ensure that file checking does not start before the earliest arrival time of the file, or a predecessor dependency if the file is created by a prior job.

� The number of calendars should be kept to a minimum, to ensure rapid progress during the Jnextday job. Offsets to existing calendars should be considered when determining a run cycle.

� Make sure calendars are kept up to date with a last-included date specified in the description.

� If possible, automate recovery procedures.

� Always use parameters in jobs for user login and directory structures to aid migration between IBM Tivoli Workload Scheduler environments.

� It is a good practice to use unique object names of no more than seven characters, which are easier to manage since JSC columns do not need to be constantly resized. It is also a good practice not to include information in object names that will be duplicated by conman/JSC, such as the CPU and JOB STREAM names in JOB names.

� Use a planned naming convention for all scheduling objects. This will be explained in more detail in the next section.
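The second bullet in the list above can be illustrated with a small sketch in the composer scheduling language. The workstation, job stream, and job names (MASTER, ACC_AR_WRK01, ACC_AR_WRK02, ARLAST, ARLOAD) are invented for this example; the point is that the successor job stream follows the last job of its predecessor instead of carrying the dependency on its own first job:

SCHEDULE MASTER#ACC_AR_WRK02
ON WORKDAYS
FOLLOWS MASTER#ACC_AR_WRK01.ARLAST
:
MASTER#ARLOAD
END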

9.4.1 Benefits of using a naming convention

Using sensible naming conventions while designing and adding to a Tivoli Workload Scheduler network can have the following benefits:

� Ownership of job streams and jobs can be quickly identified.


� Run cycles and functions can be organized in an efficient and logical running order.

� Queries and query lists can be run more efficiently.

� The IBM Tivoli Workload Scheduler Security file can be used to maximum benefit by using good naming conventions. Properly named objects and instances can be incorporated into a meaningful and secure hierarchy.

Again, using sensible and informative naming conventions can speed up the IBM Tivoli Workload Scheduler network in terms of:

� Rapid problem resolution

� Fewer queries generated, leading to better IBM Tivoli Workload Scheduler network performance

The following are some commonly used naming conventions:

� Select object names hierarchically

Accounting=ACC

Human Resources=HRS

� Select names by function

Accounts receivables=AR

Accounts payable=AP

� Select by run cycle

DAILY =DLY

WORKDAY=WRK

MONTHLY=MTH

“ON REQUEST”=ORQ

For example ACC_AR_WRK01.

Tip: In the naming convention, you might also include the name of the operating system that the job or job stream is intended for.

9.4.2 Resources

Resources can be either physical or logical resources on your system. When defined in the IBM Tivoli Workload Scheduler database, they can be used as dependencies for jobs and job streams. Resources at the job stream level are allocated at the beginning of the job stream launch. Even though a job stream may take several hours to complete, the resources may only be needed for a short time. For this reason, in most cases allocating a resource at the job level is the better choice.
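As a hedged sketch of this recommendation (the workstation, job stream, job, and resource names are invented, and the MASTER#TAPES resource is assumed to be already defined in the database), a job-level resource request in the composer scheduling language looks like this:

SCHEDULE MASTER#BACKUP_DLY
ON WORKDAYS
:
MASTER#DBBACKUP
NEEDS 1 MASTER#TAPES
MASTER#CLEANUP
FOLLOWS MASTER#DBBACKUP
END

Here only the DBBACKUP job holds a unit of the TAPES resource while it runs; placing the NEEDS statement before the ":" line would instead hold the resource at the job stream level.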

Tips on resource usage:

� Allocating a resource on the job stream (or schedule) level might be useful when you want to hold a resource in use while a job has abended, because the job stream holds the resource in open status while the status of the job stream is STUCK. If you allocated the resource on the job level, the resource would be freed when the job abends.

� You cannot use resources at both the schedule and job levels.

� If you might need to submit jobs requiring a resource ad hoc, in some cases you should schedule a dummy job to insert resources into the production day.

� It is a best practice to set the number of IBM Tivoli Workload Scheduler resources to less than the actual number of physical resources, so that some physical resources are held back from IBM Tivoli Workload Scheduler. This allows for hardware failure. An example of this would be a failing tape drive: if a defined tape drive fails, an undefined tape drive can be added, which allows processing to continue.

9.4.3 Ad hoc submissions

Submitting ad hoc jobs and job streams into the Plan file, rather than scheduling the necessary jobs and schedules, creates a significant increase in workload across the IBM Tivoli Workload Scheduler network, in particular on the Master Domain Manager, and in more extreme cases, or where large schedules are submitted in parallel, can bring production to a standstill while new records are being inserted.

If you find that you are submitting large quantities of jobs for purposes other than testing new jobs and/or job streams, either batch scheduling software is not appropriate for the purpose at hand or your workload needs some restructuring to enable it to be scheduled in advance.

Note: Ad hoc jobs take up more space in the IBM Tivoli Workload Scheduler database than batch jobs.


9.4.4 File status testing

Checking for a file dependency is one of the more CPU-intensive and disk-intensive tasks performed by IBM Tivoli Workload Scheduler. In order to keep the workload to a minimum:

� Consider using an anchor job, one job testing for a file followed by the jobs or job streams needing to confirm the same file’s existence.

� Use a start time or other dependency to prevent IBM Tivoli Workload Scheduler from checking unnecessarily for the presence of a file, before its earliest arrival time.

� Use a deadline time to stop IBM Tivoli Workload Scheduler from checking for a file if there is a time after which it will not arrive.

� Adjust the TWShome/localopts attribute bm check file value to alter the interval at which files are checked for (see 9.3.5, “Parameters that affect the file dependency resolution” on page 299). The default value is to check every two minutes. Reducing this interval will enable IBM Tivoli Workload Scheduler to detect the file quicker after the file arrives, but is generally not recommended due to the increased overheads. Increasing this value will decrease the overheads significantly on systems actively checking for a large number of file dependencies simultaneously.

Resolving multiple file dependencies

Multiple file dependencies are checked in the order in which they are specified at either the job stream (schedule) or job level of a job stream definition. If a definition contains eight file dependencies, each one separated by a "," (comma), IBM Tivoli Workload Scheduler will check for the first one.

� If the file exists, it will then look for the second one.

� If the first file does not exist, it will keep checking for the first file at the interval specified in bm check file.

IBM Tivoli Workload Scheduler will not look for any of the remaining seven files until the first file exists. This means that it is possible for files 2 through 8 to exist, but since file 1 does not exist, it will not check for the other files until file 1 is found. Once file 1 exists, it will then check for each of the remaining files. This can take several minutes before it gets to the last file.

It is important to note that once a file exists, IBM Tivoli Workload Scheduler will no longer check for that dependency for that job/job stream if commas are used to separate the list of files. This means that if files 1 through 7 existed and were later removed before file 8 existed, as soon as file 8 exists the job/job stream will launch, even if the other files no longer exist. This is important if your job/job stream needs those files.

Notes:

� File dependency was called OPENS dependency in the legacy Maestro™ terminology. If you use the command line, you will still see OPENS for file dependency checking, since the IBM Tivoli Workload Scheduler command line still uses legacy Maestro terminology.

� The Not Checked status in the File Status list in JSC indicates that IBM Tivoli Workload Scheduler has not yet got to the point of finding the file.

Tip: One way to get around this file dependency checking problem is to use a qualifier with the “AND” modifier (a parameter when specifying the file dependency).

For example, suppose we have a job waiting for six files:

� /var/tmp/incoming/file1
� /var/tmp/incoming/file2
� /var/tmp/incoming/file3
� /var/tmp/incoming/file4
� /var/tmp/incoming/file5
� /var/tmp/incoming/file6

With a normal file dependency, the files are checked one at a time in the order specified, as described in the previous paragraph. But you can specify a qualified file dependency as follows (this shows the command line form, but you can also do it from the JSC):

OPENS "/var/tmp/incoming" (-f %p/file1 -a -f %p/file2 -a -f %p/file3 -a -f %p/file4 -a -f %p/file5 -a -f %p/file6)

This has two significant advantages:

� The dependency is not satisfied until all six files exist at the same time.

� The checking is a lot more efficient, since a single invocation of the test command (used internally by IBM Tivoli Workload Scheduler to check for a file) covers all six files, instead of six separate checks.

9.4.5 Time zones

If you set the time zone of a workstation to the same time zone as the Master Domain Manager, you eliminate the problems with a dead zone.


We recommend that you enable time zones in the TWShome/mozart/globalopts file on the Master Domain Manager only after all workstations have a time zone defined within their workstation definition, as shown in Example 9-9. This is especially true for workstations that are not in the same time zone as the Master Domain Manager.

Example 9-9 Enabling time zone support in globalopts

# TWS globalopts file on the TWS Master defines attributes
# of the TWS network.
#
#----------------------------------------------------------------------------
company =ITSO/Austin
master =MASTER
start =0600
history =10
carryforward =yes
ignore calendars =no
batchman schedule =no
retain rerun job name =no
#
# Set automatically grant logon as batch=yes on the master Workstation to enable this
# user right on NT machines for a job’s streamlogon user.
#
automatically grant logon as batch =no
#
# For open networking systems bmmsgbase and bmmsgdelta must equal 1000.
#
bmmsgbase =1000
bmmsgdelta =1000
#
#--------------------------------------------------------------------------
# Entries introduced in TWS-7.0
Timezone enable =yes
Database audit level =0
Plan audit level =0
#----------------------------------------------------------------------------
# Entries introduced in TWS-8.2
centralized security =yes
enable list security check =no

Note: The dead zone is the gap between the IBM Tivoli Workload Scheduler start-of-day time on the Master Domain Manager and the time on the FTA in another time zone. For example, if an eastern Master Domain Manager with an IBM Tivoli Workload Scheduler start of day of 6:00 AM initializes a western agent with a 3-hour time-zone difference, the dead zone for this FTA is between the hours of 3:00 AM–6:00 AM. Before the availability of IBM Tivoli Workload Scheduler 7.0, special handling was required to run jobs in this time period. Now when specifying a time zone with the start time on a job or job stream, IBM Tivoli Workload Scheduler runs them as expected.

9.5 Optimizing Job Scheduling Console performance

Job Scheduling Console performance depends on various factors. The following are some tips for optimizing Job Scheduling Console performance.

9.5.1 Remote terminal sessions and JSC

Large IBM Tivoli Workload Scheduler environments tend to have many operators, schedulers, and administrators who require access to the Master. Using remote terminal sessions to log onto a JSC instance on the Master is not recommended. This can cause multiple memory problems and needlessly create resource control issues on the Master. The best practice is to install the Job Scheduling Console locally on every Job Scheduling Console user’s desktop.

9.5.2 Applying the latest fixes

Always apply the latest Job Scheduling Console fix packs and corresponding Connector patches. Tivoli frequently introduces performance improvements in these patches, and using the latest versions ensures that you are taking advantage of these improvements.

9.5.3 Resource requirements

While a minimum of 128 MB of RAM is required for the base installation, this is generally insufficient for most IBM Tivoli Workload Scheduler environments. Consider at least 512 MB, or for larger environments 1 GB, of memory for the machine that hosts the JSC. Also, if you have a busy TMR server (with other Tivoli applications running on it) hosting the Connector, consider installing a separate TMR server to be used exclusively for IBM Tivoli Workload Scheduler.

9.5.4 Setting the refresh rate

The refresh rate in JSC determines the number of seconds after which a list display will periodically refresh. Do not forget that, for performance reasons, JSC is not designed to be a real-time user interface; that is, Plan updates appear in JSC either upon an on-demand request for a snapshot of the information or an auto-timed request for the same.


By default, periodic refresh is disabled. You can set the scheduling controller periodic refresh rate property from a minimum of 30 seconds to a maximum of 7200 seconds (two hours). Note that in addition to this default value, you can use a separate setting for each filter window.

To adjust the refresh rate:

1. In the Job Scheduling view, select a scheduler engine and click the Properties button in the toolbar. The Properties - Scheduler window is displayed (Figure 9-11).

2. Open the Settings page and alter the Periodic Refresh box by entering the number of seconds after which a list display will periodically refresh.

Figure 9-11 Setting the refresh rate and buffer size

Tip: Do not set this property value too low. Otherwise, should the displayed list be very large, the interval between auto-refreshes might be less than the time it takes to actually refresh, and Job Scheduling Console will appear to lock up on you. Also if you are using several detached windows (you can detach up to seven windows) setting the refresh rate properly becomes even more important.

Important: Increasing the frequency at which a list is refreshed decreases performance. Unless separate settings are set for each filter window, the default refresh rate will be used.


9.5.5 Setting the buffer size

The buffer size property determines how many list items an ongoing search loads for you to view. For example, if you select 100, the results of a list are sent in blocks of 100 lines. The default is 50. Buffering more items quickens browsing of those loaded in memory but increases the load time each time Job Scheduling Console searches for more items from the Plan.

Conversely, a smaller buffer size slows item display somewhat, but the refresh from the Plan goes faster. You need to experiment with buffer size to determine what works best for your Job Scheduling Console instance.

To change buffer size for lists:

1. In the Job Scheduling view, select a scheduler engine and click the Properties button in the toolbar (Figure 9-11 on page 308). The Properties - Scheduler window is displayed.

2. Open the Settings page and select a value from the Buffer Size pull-down menu.

9.5.6 Iconize the JSC windows to force the garbage collector to work

Whenever you need to decrease the amount of memory the JSC is using, you can minimize all the JSC windows. In this way, the Java garbage collector starts its work and releases unnecessarily allocated memory, which decreases the memory used by the JSC.

9.5.7 Number of open editors

We recommend that you work with no more than three editors (for example, multiple instances of the Job Stream Editor) at a time in the JSC.

9.5.8 Number of open windows

Each JSC window consumes 60 to 100 MB of memory, depending on various factors such as the number of objects in a list. It is therefore very important to close previously opened and unused JSC windows to prevent excessive swapping and degradation of the system.

9.5.9 Applying filters and propagating to JSC users

A good practice is to use filters whenever possible in order to reduce the amount of information retrieved. Using the default JSC lists (or views) for Database objects and Plan instances could cause long response times in the JSC. The default JSC list simply gives you a list with all Database objects in the database or Plan instances in the Plan. If you are using a default database list for job streams and the job stream database contains, for example, 5,000 job streams, loading the data in JSC and preparing this data to be shown in the JSC will take a long time. Using dedicated lists, created with appropriate filters, for example, only to show job streams starting with PROD, will optimize the JSC performance considerably.


Tips on creating query lists:

� For performance reasons, try to create query lists that do not return too many jobs or job streams in the same view. From a window management point of view, you should aim at little more than a screenful of data per query.

� A good naming convention is the key factor for creating efficient query lists.

� The Common Plan lists will connect to all available Connectors and produce the necessary output. While this may be helpful for some operations with multiple IBM Tivoli Workload Scheduler instances, you have to keep in mind that:

– Connecting to all the Connectors and retrieving the information generates unnecessary load on the system, if you have some Connectors that you do not want to use. (To overcome this problem, you can modify the properties of the Common Plan list to exclude the Connectors that you do not want to include in the list.)

– It might return duplicate information if you have Connectors for different parts of the same IBM Tivoli Workload Scheduler network, such as the IBM Tivoli Workload Scheduler Master z/OS and the Primary Domain Manager.

� Especially if you have a large number of scheduling objects, we recommend that you delete all of the “All XXX object” queries and build your own from scratch. A good idea is to organize your query lists by workstations, or groups of workstations, or by job states. Figure 9-12 on page 312 shows a sample JSC configuration for an operator responsible for accounting and manufacturing jobs. This configuration is based on workstations and job states. If for example accounting jobs are more critical than the manufacturing jobs, you can set the refresh rate low for the accounting lists and high for the manufacturing lists.

� An alternative for creating query lists is to leave the default JSC lists (“All XXX object” lists), but to change the filter conditions of these lists each time you do a query. This on-the-fly query model is a good idea if you have a well-defined naming convention that allows you to easily identify objects from their names. In this case, changing the filter criteria each time you do a query might be more practical than creating separate query lists, since there might be too many filtering possibilities to create a query list in advance.


Figure 9-12 Sample JSC configuration

Remember that all these lists can be made available to all JSC users, simply by saving the preferences.xml file and propagating it to your JSC users. User preferences are stored in a file named preferences.xml. The file contains the names and the details, including filters, of all the queries (or lists) that were saved during a session. Every time you close the Job Scheduling Console, the preferences.xml file is updated with any queries you saved in, or deleted from, the Job Scheduling Tree.

The preferences.xml file is saved locally in a user directory.

� On UNIX:

~/.tmeconsole/login-user@hostname_local

where ~ is the user’s HOME directory

Note: Note that in the previous version of JSC (JSC 1.2), the globalpreferences.ser file was used for propagating user settings.


� On Windows:

C:\Documents and Settings\Administrator\.tmeconsole\user@hostname_locale

Where:

– user is the name of the operating system user that you enter in the User Name fields in the JSC logon window. It is followed by the at (@) sign.

– hostname is the name of the system running the Connector followed by the underscore (_) sign.

– locale is the regional settings of the operating system where the Connector is installed.

For example, suppose that, to start the Job Scheduling Console, you log on for the first time to machine fta12, where the Connector was installed by user ITWS12. A user directory named ITWS12@fta12_en_US (where en_US stands for English regional settings) is created under the path described above in your workstation.

Every time you log onto a different Connector, a new user directory is added, and every time you close the Job Scheduling Console, a preferences.xml is created or updated in the user directory that matches your connection.

If you want to propagate a specific set of queries to new users, copy the relevant preferences.xml file in the path described above in the users’ workstations. If you want to propagate a preference file to existing users, you have them replace their own preferences.xml with the one you have prepared.
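As a simple sketch, assuming a UNIX desktop and the example user and Connector mentioned above (user ITWS12, Connector host fta12, English regional settings), propagating a prepared file could be as simple as:

cp preferences.xml ~/.tmeconsole/ITWS12@fta12_en_US/preferences.xml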

Note: The preferences.xml file changes dynamically if the user logs onto the same Connector and finds that the regional settings have changed.

Note: The preferences.xml file can also be modified with a simple text editor (for example, to create multiple lists with similar characteristics), but unless you are very familiar with the file structure, we do not recommend that you manipulate the file directly; use the JSC instead.

9.5.10 Java tuning

IBM Tivoli Workload Scheduler provides predefined options that support average systems. For available options and hints and tips about how to improve JVM performance, you can check Sun's Web site at:

http://java.sun.com/j2se/1.3/docs/guide/performance/


9.5.11 Startup script

Another place to look for a possible improvement in JSC performance is the Tivoli Job Scheduling Console startup script, in other words the JSC launcher (depending on the platform: NTConsole.bat, AIXconsole.sh, Hpconsole.sh, LINUXconsole.sh, or SUNconsole.sh). The startup script can have two flags (-Xms and -Xmx) that indicate the minimum and maximum number of required bytes for the Java process:

� The Xms parameter sets the startup size of the memory allocation pool (the garbage collected heap). The default is 64 MB of memory. The value must be greater than 1000 bytes.

� The Xmx parameter sets the maximum size of the memory allocation pool (the garbage collected heap). The default is 256 MB of memory. The value must be greater than or equal to 1000 bytes.

Example 9-10 shows a JSC startup script for the Windows 2000 platform. Note that unlike the JSC 1.2 startup script, in JSC 1.3 these parameters were removed from the script: when you edit the script with an editor, you will not see the -Xms and -Xmx parameters, and the default values of these parameters are used. If you want to change the default values, you need to explicitly add these parameters to the command that starts the JSC, as shown (in bold) in Example 9-10. The command is a single line in the script; it is wrapped here for readability.

Example 9-10 JSC startup script

start "JSC" "%JAVAPATH%\bin\javaw" -Xms128m -Xmx256m -Dice.pilots.html4.defaultEncoding=UTF8-Djava.security.auth.login.config="%CONFIGDIR%"/jaas.config-Djava.security.policy="%CONFIGDIR%"/java2.policy-Djavax.net.ssl.trustStore="%CONFIGDIR%"/jcf.jks -Dinstall.root="%TMEJLIB%"-Dinstall.dir="%TMEJLIB%" -Dconfig.dir="%CONFIGDIR%" -classpath"%CLASSPATH%" com.tivoli.jsc.views.JSGUILauncher


Tips:

� The default values (64 for Xms and 256 for Xmx) are average values for all platforms and for average machine configurations. To get better performance, you can change these values based on your machine environment. If, for example, you have a machine with 512 MB of RAM, a good choice is the set of values given in Example 9-10, but if you have a machine with 256 MB of RAM, it is better to use -Xms64m -Xmx128m.

� Messages such as “java.lang.OutOfMemoryError” in the JSC error.log file indicate that these options (particularly -Xmx) should definitely be increased.

� If you need more details on these settings, you can refer to:

http://java.sun.com/j2se/1.3/docs/tooldocs/win32/java.html

Note: Do not forget to make a backup of the JSC startup script before you test the settings.

9.6 IBM Tivoli Workload Scheduler internals

It is very important to understand the process flow of various IBM Tivoli Workload Scheduler operations in order to do advanced customization and troubleshooting. This section first gives the directory structure of IBM Tivoli Workload Scheduler and then covers the interaction between product components for major IBM Tivoli Workload Scheduler operations.

9.6.1 IBM Tivoli Workload Scheduler directory structure

Figure 9-13 on page 316 shows a list of the most important files in the IBM Tivoli Workload Scheduler home directory on UNIX, Windows, and OS/400®.


Figure 9-13 IBM Tivoli Workload Scheduler directory structure

Here is the definition of some of the important files:

� prodsked: Production schedule file. This file is created during Jnextday by the schedulr program. It contains the job streams for a single IBM Tivoli Workload Scheduler production day.

� Symnew: New Symphony file. This file is created during Jnextday by the compiler program. For each job stream in prodsked, compiler adds the jobs and related information, and compiles it all into the Symnew file.

� Sinfonia: Initial Symphony file. This file is a workstation-independent version of the Symphony file made at the beginning of the production day. This file is sent to subordinate Domain Managers and FTAs. The file does not change during the course of the production day.

� Symphony: The active Plan file. This file is created during Jnextday by the stageman program. To build Symphony, stageman starts with Symnew, but adds to it any job streams from the previous Symphony file that need to be carried forward – usually incomplete job streams that have the CARRYFORWARD option enabled.

9.6.2 IBM Tivoli Workload Scheduler process tree

Understanding the IBM Tivoli Workload Scheduler process tree is very important, especially for troubleshooting purposes. It is particularly important to understand which process starts what. For example, if you see that a process has not started, the first place to check is whether its parent process is running.

Figure 9-14 and Figure 9-15 on page 318 show the IBM Tivoli Workload Scheduler process tree on Windows and UNIX respectively.

Figure 9-14 Process tree on Windows

The annotations in the figure describe each process in the tree:

� The NETMAN.EXE program is started by the StartUp.cmd script. It is the TWS network listener program.

� One WRITER.EXE is spawned by netman for each connected TWS agent.

� The MAILMAN.EXE program is the master message handling program.

� A server ID MAILMAN.EXE process (SERVERA in this example) is spawned on a Domain Manager for each server ID defined in a workstation definition of a workstation in that domain. This new mailman process connects to all the agents in that domain that are defined with that particular server ID (in this example, the server ID “A”).

� The BATCHMAN.EXE program handles all changes to the Symphony file.

� The JOBMAN.EXE program keeps track of job states.

� The JOBMON.EXE program monitors running jobs.

� A separate instance of JOBLNCH.EXE is spawned for each job. The maximum number of jobs that can run simultaneously is the limit set for the workstation.

� The jobmanrc.cmd script resides in the ITWS home directory. It is also called the global configuration script, because changes to this script affect all ITWS jobs on an agent.

� The job file executed is the script or command in the Task field of the job definition.


Figure 9-15 Process tree on UNIX

9.6.3 Interprocess communication and link initialization

Link initialization is an important step in initiating communication between IBM Tivoli Workload Scheduler workstations. Figure 9-16 on page 319 explains this step in detail.

The annotations in the figure describe the corresponding UNIX processes:

� The netman program is started by the StartUp script. It is the TWS network listener program.

� One writer is spawned by netman for each connected TWS agent.

� The mailman program is the master message handling program.

� A new server ID mailman process (serverA in this example) is spawned on a domain manager for each server ID defined in a workstation definition of a workstation in that domain. This new mailman process connects to all the agents in that domain that are defined with that particular server ID (in this example, the server ID “A”).

� The batchman program handles all changes to the Symphony file.

� The jobman program keeps track of job states.

� A separate job monitor process (a special jobman process) is spawned for each job that is run. The maximum number of jobs that can run simultaneously is the limit set for the workstation.

� The jobmanrc script resides in the TWS home directory. It is also called the global configuration script, because changes to this script affect all TWS jobs on an agent.

� If the file .jobmanrc exists in the home directory of the user executing the job and is executable, it is run. This script is also called the local configuration script, because changes to this script affect only a specific user’s jobs.

� The job file executed is the script or command in the Task field of the job definition.


Figure 9-16 Inter-process communication and link initialization

Steps involved in the establishment of a two-way IBM Tivoli Workload Scheduler link between a Domain Manager and a remote Fault Tolerant Agent are as follows:

1. Mailman reads the FTA’s CPU info record (CI) from the Symphony file; this CI record contains the host name/IP address and port number of the FTA.

2. Mailman on the DM establishes a TCP connection to netman on the FTA using details obtained from the Symphony file.

3. Netman determines that the request is coming from mailman, so netman spawns a new writer process and hands the connection from mailman over to it; netman also spawns mailman if it is not already running.

4. At this point, mailman on the DM is connected to writer on the FTA. Mailman on the DM tells writer on the FTA the run number of the Symphony file on the DM. Writer compares this run number with the run number of the Symphony file on the FTA.

5. If the run numbers are different, writer requests the Sinfonia file from mailman and downloads it to the FTA; writer copies the Sinfonia file to Symphony.

6. Once a current Symphony file is in place on the FTA, mailman on the DM sends a start command to the FTA.

7. Netman on the FTA starts the mailman process on the FTA.


8. Mailman on the FTA reads the CPU info record (CI) of the FTA’s parent Domain Manager from the Symphony file; this CI record contains the host name/IP address and port number of the DM.

9. Mailman uses the details read from the Symphony file to establish the uplink back to netman on the DM.

10.Netman determines that the request is coming from mailman, so netman spawns a new writer process to handle the connection.

11.At this point, mailman on the FTA is connected to writer on the DM and the full two-way TCP link has been established.

12.Writer on the DM writes messages received from the FTA to the Mailbox.msg file on the DM.

13.Likewise writer on the FTA writes messages from the DM to the Mailbox.msg file on the FTA.

9.6.4 IBM Tivoli Workload Scheduler Connector

Figure 9-17 on page 321 shows the interactions of the various components of the IBM Tivoli Workload Scheduler Connector program (including SAP Extended Agent and Connector interaction).

Note: Batchman, jobman, and their related message files are also present on the FTA.


Figure 9-17 A closer look at the IBM Tivoli Workload Scheduler Connector

The IBM Tivoli Workload Scheduler Connector consists of five different programs as described below. Each program has a specific function. The oserv program will call the connector program appropriate for the action being performed in JSC.

� maestro_database: Performs direct reads and writes of the IBM Tivoli Workload Scheduler database files in TWShome/mozart (just like composer).

� maestro_plan: Reads the Symphony file directly but it submits changes to the Symphony file by queuing events to the Mailbox.msg file (just as conman does).

� maestro_engine: Submits start and stop events to netman via the NetReq.msg file (just as conman does).


� maestro_x_server: Provides extra SAP R/3-specific functions to JSC; it calls r3batch to retrieve information from, and make changes to, a remote SAP system.

� job_instance_output: Retrieves the requested job log from scribner on the FTA where the job ran by contacting scribner on the remote host (just as conman does).

9.6.5 Retrieval of FTA joblog

Retrieving the FTA joblog can be done in two different ways: through the conman command line or through the JSC. We explain both operations in the following sections.

Using conman on the Master Domain Manager

Figure 9-18 on page 323 explains the process flow during the retrieval of the FTA joblog via conman on the MDM.

Note: The IBM Tivoli Workload Scheduler Connector programs reside in $BINDIR/Maestro/bin directory.


Figure 9-18 Retrieval of FTA joblog – conman on MDM

The following steps are involved in this operation:

1. Operator requests joblog in conman.

2. Conman opens a TCP connection directly to the workstation where the joblog exists, bypassing the Domain Manager.

3. Netman on that workstation spawns scribner and hands over the TCP connection with conman to the new scribner process.

4. Scribner retrieves the joblog and sends the joblog to conman on the Master.

Note: If the behindfirewall attribute is set in the FTA’s workstation definition, all communication between the Master and FTA goes through the FTA’s Domain Manager. So step 2 is valid if the FTA is not configured with the Behind Firewall option. The Behind Firewall option is available with IBM Tivoli Workload Scheduler Version 8.2.
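For instance, the operator request in step 1 might be a conman showjobs command with the stdlist option, which retrieves the job log of a specific job. The workstation, job stream, and job names below are purely illustrative:

conman "sj FTA1#ACC_AR_WRK01.ARLOAD;stdlist"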


Retrieval of FTA joblog – JSC via MDM

Figure 9-19 explains the process flow during the retrieval of the FTA joblog via the Job Scheduling Console.

Figure 9-19 Retrieval of FTA joblog – JSC via MDM

The following steps are involved in this operation:

1. Operator requests joblog in Job Scheduling Console

2. JSC connects to oserv running on the Master Domain Manager

3. Oserv spawns job_instance_output to fetch the joblog

4. Job_instance_output communicates over TCP directly with the workstation where the joblog exists, bypassing the Domain Manager

5. Netman on that workstation spawns scribner and hands over the TCP connection with job_instance_output to the new scribner process

6. Scribner retrieves the joblog

7. Scribner sends the joblog to job_instance_output on the Master

8. Job_instance_ouput relays the joblog to oserv

9. Oserv sends the joblog to JSC


9.6.6 Netman services and their functions

The netman configuration file exists on all IBM Tivoli Workload Scheduler workstations. The name of this file is TWShome/network/NetConf and it defines the services provided by netman. The NetConf file supplied by Tivoli includes comments describing each service.

Netman services and their functions are explained in Figure 9-20.

Figure 9-20 Netman services and their functions

Note: In previous versions of IBM Tivoli Workload Scheduler, you had the option of installing netman in a different directory. IBM Tivoli Workload Scheduler Version 8.2 installation program always installs it in the TWShome/network directory.

Each entry in the NetConf file specifies a service number, a type (client, son, or command), and the program that netman runs for that service:

� writer (2001 client bin/writer): netman spawns one writer for each connected TWS agent; the newly spawned writer handles all the messages received from mailman on the remote agent.

� mailman (2002 son bin/mailman -parm 32000): netman spawns a single mailman; mailman then starts the rest of the TWS process tree (batchman, jobman).

� stop (2003 command bin/conman stop): netman issues a conman stop command to stop all TWS processes on the workstation except netman itself.

� scribner (2004 client bin/scribner): netman spawns one scribner for each incoming request (from conman, job_instance_output, or a job log retriever thread) to retrieve a job log (job standard list).

� switchmgr (2005 command bin/switchmgr): netman issues a switchmgr command (at the request of conman or maestro_engine) to instruct the workstation to become the Domain Manager of its domain.

� dwnldr (2006 client bin/dwnldr): netman spawns one downloader for each run of a centrally managed job in an end-to-end environment.

� router (2007 client bin/router): netman spawns one router to route a start, stop, or retrieve-joblog command to an FTA that has the behind firewall option enabled.

� stop;progressive (2008 command bin/conman stop;progressive;wait): netman issues a conman stop;progressive;wait command to stop all TWS processes (except netman) recursively on all subordinate workstations.

� chkstat (2501 client bin/chkstat): netman spawns one chkstat for each incoming request (from netmth) to resolve an internetwork dependency.

� clagent (2600 son bin/clagent): netman spawns a single clagent to start the common listener agent (for TBSM integration).

Obsolete services:

2502 client bin/starter conman -gui
2503 client bin/starter symread


9.7 Regular maintenance

In the following sections you will find practical tips for the regular maintenance of an IBM Tivoli Workload Scheduler environment.

9.7.1 Cleaning up IBM Tivoli Workload Scheduler directories

The following IBM Tivoli Workload Scheduler directories will continue to grow in size unless periodically cleaned out. Regular, accurate backups should be obtained of all the IBM Tivoli Workload Scheduler directories. Always back up before removing any information from the IBM Tivoli Workload Scheduler environment.

� TWShome/stdlist
� TWShome/schedlog
� TWShome/audit
� TWShome/atjobs
� TWShome/tmp

The first three of the above directories are the most important to maintain. We will go into details of these and show you how to automate the cleanup process of these directories.

Files in the stdlist directory

IBM Tivoli Workload Scheduler Fault Tolerant Agents save message files and job log output on the system where they run. Message files and job log output are saved in a directory named TWShome/stdlist. In the stdlist (standard list) directory, there will be subdirectories with the name ccyy.mm.dd (where cc is century, yy is year, mm is month, and dd is date).

This subdirectory is created on a daily basis by the IBM Tivoli Workload Scheduler netman process at or shortly after midnight when netman does a log switch.

Example 9-11 shows the contents of the stdlist directory.

Example 9-11 TWShome/stdlists directory

2003.08.22  2003.08.25  2003.08.28  2003.08.31  2003.09.03  logs
2003.08.23  2003.08.26  2003.08.29  2003.09.01  2003.09.04  traces
2003.08.24  2003.08.27  2003.08.30  2003.09.02  2003.09.05

The ccyy.mm.dd subdirectory contains a message file from netman, a message file from the other IBM Tivoli Workload Scheduler processes (mailman, batchman, jobman, and writer), and a file per job with the job log for jobs executed on this particular date as seen in Example 9-12 on page 327.


Example 9-12 Files in a stdlist, ccyy.mm.dd directory

NETMAN       Message file with messages from NETMAN process
O19502.0908  File with job log for job with process no. 19502 run at 09.08
O19538.1052  File with job log for job with process no. 19538 run at 10.52
O38380.1201  File with job log for job with process no. 38380 run at 12.01
TWS          File with messages from MAILMAN, BATCHMAN, JOBMAN and WRITER

Example 9-13 shows the contents of the logs directory. There is one <yyyymmdd>_NETMAN.log (for netman messages) and one <yyyymmdd>_TWSMERGE.log file (for mailman, batchman, jobman, and writer messages) for each day in this directory.

Example 9-13 Files in the TWShome/stdlist/logs directory

20030725_NETMAN.log    20030814_TWSMERGE.log  20030904_NETMAN.log
20030725_TWSMERGE.log  20030815_NETMAN.log    20030904_TWSMERGE.log
20030726_NETMAN.log    20030815_TWSMERGE.log  20030905_NETMAN.log
20030726_TWSMERGE.log  20030816_NETMAN.log    20030905_TWSMERGE.log
20030727_NETMAN.log    20030816_TWSMERGE.log

Each job that is run under IBM Tivoli Workload Scheduler’s control creates a log file in the IBM Tivoli Workload Scheduler’s stdlist directory. These log files are created by the IBM Tivoli Workload Scheduler job manager process (jobman) and will remain there until deleted by the system administrator.

The easiest way to maintain the growth of these directories is to decide how long the log files are needed and schedule a job under IBM Tivoli Workload Scheduler’s control, which removes any file older than the given number of days. The rmstdlist command can perform the deletion of stdlist files. The rmstdlist command removes or displays files in the stdlist directory based on the age of the files.

Important: <yyyymmdd>_NETMAN.log and <yyyymmdd>_TWSMERGE.log files were introduced by IBM Tivoli Workload Scheduler Version 8.2. In previous versions of the product, NETMAN and TWS (this is not a fixed name, but corresponds to the name of the IBM Tivoli Workload Scheduler user, in this scenario, TWS) files were used instead of these files (see Example 9-12 on page 327). In IBM Tivoli Workload Scheduler Version 8.2, you still can see NETMAN and TWS files in the stdlist, ccyy.mm.dd directory as seen in Example 9-12, but this is because not all messages have been migrated to the new logfiles under TWShome/stdlist/logs. In the subsequent versions or fix packs of the product, these files will likely be removed from the stdlist, ccyy.mm.dd directory and only <yyyymmdd>_NETMAN.log and <yyyymmdd>_TWSMERGE.log files will be used.


The rmstdlist command:

rmstdlist [-v | -u]
rmstdlist [-p] [age]

Where:

-v Displays the command version and exits.

-u Displays the command usage information and exits.

-p Displays the names of qualifying standard list file directories. No directories or files are removed. If you do not specify -p, the qualifying standard list files are removed.

age The minimum age, in days, for standard list file directories to be displayed or removed. The default is 10 days.

The following example displays the names of standard list file directories that are more than 14 days old:

rmstdlist -p 14

The following example removes all standard list files (and their directories) that are more than 7 days old:

rmstdlist 7

We suggest that you run the rmstdlist command on a daily basis on all your Fault Tolerant Agents. The rmstdlist command can be defined in a job in a job stream and scheduled by IBM Tivoli Workload Scheduler. You may need to save a backup copy of the stdlist files, for example, for internal revision or due to company policies. If this is the case, a backup job can be scheduled to run just before the rmstdlist job.
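For example, a job definition for such a cleanup job might look like the following composer sketch; the workstation name, script path, and logon user are invented for this example:

MASTER#RMSTDL
 SCRIPTNAME "/opt/tws/scripts/clean_stdlist.sh"
 STREAMLOGON tws
 DESCRIPTION "Remove stdlist files older than KEEPSTDL days"
 RECOVERY STOP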

The job (or more precisely the script) with the rmstdlist command can be coded in different ways. If you are using IBM Tivoli Workload Scheduler parameters to specify the age of your rmstdlist files, it will be easy to change this age later if required.

Notes:

� At the time of writing this book there was a defect in rmstdlist that prevents removing TWSMERGE and NETMAN logs when the command is issued. This is expected to be resolved in Fix Pack 01.

� As a best practice, do not keep more than 100 log files in the stdlist directory.


Example 9-14 shows an example of a shell script where we use the rmstdlist command in combination with IBM Tivoli Workload Scheduler parameters.

Example 9-14 Shell script using the rmstdlist command in combination with ITWS parameters

:
# @(#)clean_stdlist.sh 1.1 2003/09/09
#
# IBM Redbook TWS 8.2 New Features - sample clean up old stdlist files job

# specify TWS parameter to use here
PARM="KEEPSTDL"

# return codes
: ${OK=0} ${FAIL=1}

# start here
echo "clean_stdlist.sh 1.1 2003/09/09"

# set UNISON_DIR to $HOME if unset, this enables the script to be tested
# from the command line by the TWSuser
if [ ! "${UNISON_DIR}" ]
then
  UNISON_DIR="${HOME}"
fi

# add TWShome/bin to default PATH
PATH=${PATH}:${UNISON_DIR}/bin

case "${#}" in
  0) # TWS parameter specifies how long to keep stdlist files, retrieve at
     # run time from parameter database on the local workstation
     KEEPSTDL=`parms ${PARM}`

     if [ ! "${KEEPSTDL}" ]
     then
       echo "ERROR: TWS parameter ${PARM} not defined"
       exit ${FAIL}
     fi
     ;;
  1) # number of days to keep stdlist files is passed in as a command line
     # argument.
     KEEPSTDL="${1}"
     ;;
  *) # usage error
     echo "ERROR: usage: ${0} <days>"
     exit ${FAIL}
     ;;
esac

echo "Removing stdlist files more than ${KEEPSTDL} days old"

# list files to be removed
rmstdlist -p ${KEEPSTDL}

# remove old stdlist files
rmstdlist ${KEEPSTDL}

# all done
exit ${OK}

The age of the stdlist directories is specified using the variable KEEPSTDL. This parameter can be created on the Fault Tolerant Agent using the parms command or using the JSC. When you run the parms command with the name of the variable (such as parms KEEPSTDL), the command returns the current value of the variable.
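For instance, creating and checking the parameter from the command line of the Fault Tolerant Agent might look like the following sketch; the -c flag and the value of 10 days are assumptions, so check parms -u for the exact usage on your version:

parms -c KEEPSTDL 10     (create or update the local KEEPSTDL parameter)
parms KEEPSTDL           (returns the current value, 10 in this sketch)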

Files in the audit directory

The auditing log files can be used to track changes to the IBM Tivoli Workload Scheduler database and Plan (the Symphony file).

The auditing options are enabled by two entries in the globalopts file in the IBM Tivoli Workload Scheduler server:

plan audit level = 0|1
database audit level = 0|1

If either of these options is set to the value of 1, the auditing is enabled on the Fault Tolerant Agent. The auditing logs are created in the following directories:

TWShome/audit/plan

TWShome/audit/database
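As a sketch, a globalopts excerpt with both auditing options turned on might look like the following; the TWShome/mozart location of globalopts and the omission of the other entries are assumptions here:

# TWShome/mozart/globalopts (other entries omitted)
plan audit level = 1
database audit level = 1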

If the auditing function is enabled in IBM Tivoli Workload Scheduler, files will be added to the audit directories every day. Modifications to the IBM Tivoli Workload Scheduler database will be added to the database directory:

TWShome/audit/database/date (where date is in ccyymmdd format)

Modification to the IBM Tivoli Workload Scheduler Plan (the Symphony) will be added to the Plan directory:

TWShome/audit/plan/date (where date is in ccyymmdd format)


We suggest that you regularly clean out the audit database and Plan directories, for example, on a daily basis. The clean out in the directories can be defined in a job in a job stream and scheduled by IBM Tivoli Workload Scheduler. You may need to save a backup copy of the audit files, for example, for internal revision or due to company policies. If this is the case, a backup job can be scheduled to run just before the cleanup job.

The job (or more precisely the script) doing the clean up can be coded in different ways. If you are using IBM Tivoli Workload Scheduler parameters to specify the age of your audit files, it will be easy to change this age later if required.

Example 9-15 shows an example of a shell script where we use the UNIX find command in combination with IBM Tivoli Workload Scheduler parameters.

Example 9-15 Shell script to clean up files in the audit directory based on age

# @(#)clean_auditlogs.sh 1.1 2003/09/12
#
# IBM Redbook TWS 8.2 New Features - sample clean up old audit log files job

# specify TWS parameter to use here
PARM="KEEPADTL"

# return codes
: ${OK=0} ${FAIL=1}

# start here
echo "clean_auditlogs.sh 1.1 2003/09/12"

# set UNISON_DIR to $HOME if unset, this enables the script to be tested
# from the command line by the TWSuser
if [ ! "${UNISON_DIR}" ]
then
  UNISON_DIR="${HOME}"
fi

# add TWShome/bin to default PATH
PATH=${PATH}:${UNISON_DIR}/bin

case "${#}" in
  0) # TWS parameter specifies how long to keep audit log files, retrieve at
     # run time from parameter database on the local workstation
     KEEPADTL=`parms ${PARM}`
     if [ ! "${KEEPADTL}" ]
     then
       echo "ERROR: TWS parameter ${PARM} not defined"
       exit ${FAIL}
     fi
     ;;
  1) # number of days to keep audit log files is passed in as a command line
     # argument.
     KEEPADTL="${1}"
     ;;
  *) # usage error
     echo "ERROR: usage: ${0} <days>"
     exit ${FAIL}
     ;;
esac

# remove old audit logs
echo "Removing audit logs more than ${KEEPADTL} days old"
find `maestro`/audit -type f -mtime +${KEEPADTL} -exec rm -f {} \; -print
rc=${?}
if [ "${rc}" -ne 0 ]
then
  echo "ERROR: a problem was encountered while attempting to remove files"
  echo "       exit code returned by find command was ${rc}"
  exit ${FAIL}
fi

# all done
exit ${OK}

The age of the audit files is specified using the KEEPADTL variable, which is assigned the value of the KEEPADTL parameter. The KEEPADTL parameter can be created on the Fault Tolerant Agent using the parms command or using the JSC.

Files in the schedlog directory

Every day, Jnextday executes the stageman command on the IBM Tivoli Workload Scheduler Master Domain Manager. Stageman archives the old Symphony file into the schedlog directory. We suggest that you create a daily or weekly job that removes old Symphony files from the schedlog directory.

Note: This applies only to Tivoli Workload Scheduler Master Domain Manager and to Backup Master Domain Manager.


The job (or more precisely the script) doing the cleanup can be coded in different ways. If you are using IBM Tivoli Workload Scheduler parameters to specify the age of your schedlog files, it will be easy to change this age later if required.

Example 9-16 shows a sample of a shell script where we use the UNIX find command in combination with IBM Tivoli Workload Scheduler parameters.

Example 9-16 Shell script to clean out files in the schedlog directory

:
# @(#)clean_schedlogs.sh 1.1 2003/09/12
#
# IBM Redbook TWS 8.2 New Features - sample clean up old schedlog files

# specify TWS parameter to use here
PARM="KEEPSCHL"

# return codes
: ${OK=0} ${FAIL=1}

# start here
echo "clean_schedlogs.sh 1.1 2003/09/12"

# set UNISON_DIR to $HOME if unset, this enables the script to be tested
# from the command line by the TWSuser
if [ ! "${UNISON_DIR}" ]
then
  UNISON_DIR="${HOME}"
fi

# add TWShome/bin to default PATH
PATH=${PATH}:${UNISON_DIR}/bin

case "${#}" in
  0) # TWS parameter specifies how long to keep schedlog files, retrieve at
     # run time from parameter database on the local workstation
     KEEPSCHL=`parms ${PARM}`
     if [ ! "${KEEPSCHL}" ]
     then
       echo "ERROR: TWS parameter ${PARM} not defined"
       exit ${FAIL}
     fi
     ;;
  1) # number of days to keep schedlog files is passed in as a command line
     # argument.
     KEEPSCHL="${1}"
     ;;
  *) # usage error
     echo "ERROR: usage: ${0} <days>"
     exit ${FAIL}
     ;;
esac

# remove old archived Symphony files
echo "Removing archived Symphony files more than ${KEEPSCHL} days old"
find `maestro`/schedlog -type f -mtime +${KEEPSCHL} -exec rm -f {} \; -print
rc=${?}
if [ "${rc}" -ne 0 ]
then
  echo "ERROR: a problem was encountered while attempting to remove files"
  echo "       exit code returned by find command was ${rc}"
  exit ${FAIL}
fi

# all done
exit ${OK}

Notice from the script that the age of the schedlog files is specified using the variable KEEPSCHL. Archived Symphony files older than the value of the KEEPSCHL parameter are removed.

Note: If there are more than 150 files in schedlog, this might cause conman listsym and the JSC Alternate Plan option to hang.

Files in the TWShome/tmp directory

Example 9-17 shows a sample shell script that uses the UNIX find command in combination with IBM Tivoli Workload Scheduler parameters to delete files in the TWShome/tmp directory that are older than a given number of days.

Example 9-17 Shell script to clean up files in the tmp directory based on age

:
# @(#)clean_tmpfiles.sh 1.1 2003/09/12
#
# IBM Redbook TWS 8.2 New Features - sample clean up old tmp files

# specify TWS parameter to use here
PARM="KEEPTMPF"

# return codes
: ${OK=0} ${FAIL=1}

# start here
echo "clean_tmpfiles.sh 1.1 2003/09/12"

# set UNISON_DIR to $HOME if unset, this enables the script to be tested
# from the command line by the TWSuser
if [ ! "${UNISON_DIR}" ]
then
  UNISON_DIR="${HOME}"
fi

# add TWShome/bin to default PATH
PATH=${PATH}:${UNISON_DIR}/bin

# GNU find (Linux) needs -maxdepth to limit the search to the directory itself;
# uname -s returns the operating system name (for example, Linux)
case `uname -s` in
  Linux) MaxDepth="-maxdepth 1" ;;
  *)     MaxDepth="" ;;
esac

case "${#}" in
  0) # TWS parameter specifies how long to keep files in the tmp directory,
     # retrieve at run time from parameter database on the local workstation
     KEEPTMPF=`parms ${PARM}`
     if [ ! "${KEEPTMPF}" ]
     then
       echo "ERROR: TWS parameter ${PARM} not defined"
       exit ${FAIL}
     fi
     ;;
  1) # number of days to keep files in the tmp directory is passed in as a
     # command line argument.
     KEEPTMPF="${1}"
     ;;
  *) # usage error
     echo "ERROR: usage: ${0} <days>"
     exit ${FAIL}
     ;;
esac

# remove old temporary files
echo "Removing TWS temporary files more than ${KEEPTMPF} days old"

find /tmp ${MaxDepth} -type f -name 'file*' -mtime +${KEEPTMPF} -exec rm -f {} \; -print
rc="${?}"
if [ "${rc}" -ne 0 ] && [ "${rc}" -ne 1 ]
then
  echo "ERROR: a problem was encountered while attempting to remove files"
  echo "       exit code returned by find command was ${rc}"
  exit ${FAIL}
fi

find `maestro`/tmp ${MaxDepth} -type f -name 'TWS*' -mtime +${KEEPTMPF} -exec rm -f {} \; -print
rc=${?}
if [ "${rc}" -ne 0 ]
then
  echo "ERROR: a problem was encountered while attempting to remove files"
  echo "       exit code returned by find command was ${rc}"
  exit ${FAIL}
fi

# all done
exit ${OK}

Notice from the script that the age of the temporary files is specified using the variable KEEPTMPF. Temporary files older than the value of the KEEPTMPF parameter are removed.

9.7.2 Backup considerations

In this section we cover best practices for backing up your IBM Tivoli Workload Scheduler and Tivoli Framework environment.

IBM Tivoli Workload Scheduler

To be sure that you can recover from disk or system failures on the system where the IBM Tivoli Workload Scheduler engine is installed, you should make a backup of the installed engine on a daily or weekly basis.

The backup can be done in several ways. You probably already have some backup policies and routines implemented for the system where the IBM Tivoli Workload Scheduler engine is installed. These backups should be extended to make a backup of files in the TWShome directory.

We suggest that you have a backup of all the IBM Tivoli Workload Scheduler files in the TWShome directory and the directory where the autotrace library is installed.
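As a minimal sketch, an archive of the agent files could be created with the following commands, run from TWShome while IBM Tivoli Workload Scheduler is stopped; the /opt/tws home directory and the /backup destination are assumptions:

conman "shut;wait"
tar -cvf /backup/tws_home_`date +%Y%m%d`.tar /opt/tws
./StartUp
conman start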


When deciding how often a backup should be generated, consider the following:

• Are you using parameters on the IBM Tivoli Workload Scheduler agent?

If you are using parameters locally on the IBM Tivoli Workload Scheduler agent and do not have a central repository for the parameters, you should consider making daily backups.

• Are you using specific security definitions on the IBM Tivoli Workload Scheduler agent?

If you are using specific Security file definitions locally on the IBM Tivoli Workload Scheduler agent and do not use the centralized security option that is available with IBM Tivoli Workload Scheduler Version 8.2, you should consider making daily backups.

Another approach is to make a backup of the IBM Tivoli Workload Scheduler agent files, at least before doing any changes to the files. The changes can, for example, be updates to configuration parameters or a patch update of the IBM Tivoli Workload Scheduler agent.

If the IBM Tivoli Workload Scheduler engine is running as a Tivoli Workload Scheduler Master Domain Manager, make at least daily backups. For a Master Domain Manager, it is also good practice to use the composer create command to create text copies of all the database files. The database files can then be recreated from the text files using the composer add and composer replace commands.
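As a brief sketch of that approach for a single database (the jobs database here; the file name is arbitrary, and Example 9-18 later in this chapter lists the full set of create commands):

composer create jobdef.txt from JOBS=@#@
composer replace jobdef.txt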

Tivoli Management Framework environment

Since IBM Tivoli Workload Scheduler uses Tivoli Management Framework (or Tivoli Framework) services, backing up your Tivoli environment is important. If you have other Tivoli applications in your environment, and IBM Tivoli Workload Scheduler is using the same Tivoli Management Region (TMR) server that is used by other Tivoli applications, then your TMR server is probably being backed up regularly by the Tivoli administrators as part of the regular Tivoli maintenance. But if you are using a stand-alone TMR server just for IBM Tivoli Workload Scheduler use, the recommendations in this section might be helpful to you.

Using the following, you can avoid reinstalling your Tivoli Framework environment in case of major problems:

• Back up your pristine environment.
• Back up your environment after making changes.

The standard practice is to use the Tivoli Framework object database backup process. However, alternatives exist.


After you have installed your Tivoli Framework environment, Job Scheduling Service, IBM Tivoli Workload Scheduler Connector, Connector instances, and interconnected your Tivoli Management Regions (TMRs) if necessary, tar or zip up the entire Tivoli Framework directory. This directory has all of the Tivoli Framework binaries and holds the Tivoli Framework database with the IBM Tivoli Workload Scheduler Connector and Job Scheduling objects you have installed.

Put the tar or zip file on a machine that is backed up regularly. Repeat this for each machine in your IBM Tivoli Workload Scheduler network that requires the installation of Tivoli Framework.
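A minimal sketch of creating such an archive on UNIX follows; the /usr/local/Tivoli installation directory, the server1 host name, and the /backup destination are assumptions:

cd /usr/local
tar -cvf /backup/server1_framework_pristine_`date +%Y%m%d`.tar Tivoli

The resulting tar file can then be copied, for example with ftp or scp, to the machine that is backed up regularly.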

If you make any changes to your environment (for example, install new applications into the Tivoli Framework, disconnect TMRs, or remove Connector instances) you must tar or zip up the directory again. Date it and put it on the same system as your original pristine backup.

When you encounter major problems in the Tivoli Framework that require a full reinstall of your original environment or last change, all you have to do is get your tar or zip file and untar or unzip it back into place. By doing so, you have your pristine Tivoli Framework environment or last changed environment back in place.

This can become a bit more complicated when managed nodes enter the picture. You need to back up your managed nodes the same way you back up your TMR as listed previously. However, if you must restore your entire Tivoli Framework environment to the pristine or changed backup, then you must untar or unzip for all managed nodes in the region as well.

This process is not a standard Tivoli Framework backup procedure, but it can be a practical way to preserve your environment and save time.

9.7.3 Rebuilding IBM Tivoli Workload Scheduler databases

When users change or delete objects in the databases, deleted objects are only flagged as deleted, and the original versions of changed objects are not removed. Therefore, over time, your databases can become quite large if your organization frequently changes and deletes objects. Rebuilding your databases regularly helps keep their size to a minimum and helps prevent corruption.

Note: For Windows managed nodes or TMR servers, you also need to update the registry manually after restoring your backup. Refer to Tivoli Management Framework Maintenance and Troubleshooting Guide, GC32-0807 for more information on required registry updates for Tivoli.


Before rebuilding any database, you must make a backup of the database and optionally its associated key file. The build command makes a copy of the database, but it does not make a copy of the key file.

You must stop all instances of conman and composer, and stop the Connectors, before running a composer build.

See the Tivoli Workload Scheduler Version 8.2 Reference Guide, SC32-1274, for the build command options.

9.7.4 Creating IBM Tivoli Workload Scheduler Database objects

A good backup strategy is to use the composer command line to export all the IBM Tivoli Workload Scheduler Database objects as text files. This eases migration between systems and can act as part of your overall backup strategy. Creating the text files is very simple from the command line (see Example 9-18).

Example 9-18 IBM Tivoli Workload Scheduler objects creation

composer create calendars.txt from CALENDARS
composer create workstations.txt from CPU=@
composer create jobdef.txt from JOBS=@#@
composer create jobstream.txt from SCHED=@#@
composer create parameter.txt from PARMS
composer create resources.txt from RESOURCES
composer create prompts.txt from PROMPTS
composer create users.txt from USERS=@#@

9.8 Basic fault finding and troubleshooting

If your IBM Tivoli Workload Scheduler network develops a problem, there are some quick checks that can be made to establish the nature of the fault. Details of some common IBM Tivoli Workload Scheduler problems and how to solve them are in the following sections.

Tip: Key files can be recreated from the data files, which is why they are not backed up by default, but saving the key files might save you some time should you need to restore the database.


9.8.1 FTAs not linking to the Master Domain Manager

• If netman is not running on the FTA:

– If netman has not been started, start it from the command line with the StartUp command. Note that this will start only netman, not any other IBM Tivoli Workload Scheduler processes.

– If netman was started as root and not as the TWSuser, bring IBM Tivoli Workload Scheduler down normally, and then start it up as the Tivoli Workload Scheduler user through the conman command line on the Master or FTA:

• On UNIX:

$ conman unlink
$ conman "shut;wait"
$ ./StartUp

• On Windows:

C:\win32app\maestro> conman stop;wait
C:\win32app\maestro> Shutdown.cmd
C:\win32app\maestro> StartUp.cmd

– If netman could not create a standard list directory:

• If the file system is full, open some space in the file system.

• If a file with the same name as the directory already exists, delete the file with the same name as the directory. The directory name would be in a yyyy.mm.dd format.

• If the directory or the netman standard list file is owned by root and not by the IBM Tivoli Workload Scheduler user, change the ownership of the standard list directory from the command line in UNIX with the chown TWSuser yyyy.mm.dd command. Note that this must be done as the root user.

• If the host file or DNS has changed, one of the following has occurred:

– The host file on the FTA or Master has been changed.
– The DNS entry for the FTA or Master has been changed.
– The host name on the FTA or Master has been changed.

• If the communication processes are hung or the mailman process is down or hung on the FTA:

• IBM Tivoli Workload Scheduler was not brought down properly.

Try to always bring IBM Tivoli Workload Scheduler down properly via the conman command line on the Master or FTA as follows:

On UNIX:

$ conman unlink
$ conman "shut;wait"


On Windows:

C:\win32app\maestro> conman stop;wait
C:\win32app\maestro> Shutdown.cmd

• If the mailman process has read corrupted data, try to bring IBM Tivoli Workload Scheduler down normally. If this is not successful, kill the mailman process as follows:

On UNIX:

Run ps -u TWSuser to find the process ID.

Run kill <process id> or, if that fails, kill -9 <process id> to kill the mailman process.

On Windows (commands in the TWShome\Unsupported directory):

Run listproc to find the process ID.

Run killproc process id to kill the mailman process.

• If batchman is hung:

Try to bring IBM Tivoli Workload Scheduler down normally. If not successful, kill the mailman process as explained in the previous bullet.

• If the writer process for the FTA is down or hung on the Master, it could be because:

– FTA was not properly unlinked from the Master.
– The writer read corrupted data.

– Multiple writers are running for the same FTA.

Use ps -ef | grep maestro to check that the writer processes are running. If there is more than one process for the same FTA, perform the following steps:

i. Shut down IBM Tivoli Workload Scheduler normally.
ii. Check the processes for multiple writers again.
iii. If there are multiple writers, kill them.

• If the netman process is hung:

– If multiple netman processes are running, try shutting down netman properly first. If this is not successful, kill netman using the following commands:

On UNIX:

Use ps -ef | grep maestro to find the running processes.

Issue kill <process id> or, if that fails, kill -9 <process id> to kill the netman process.

On Windows (commands in the TWShome\Unsupported directory):

Use listproc to find the process ID.


Run killproc <process id> to kill the netman process.

– Hung port/socket; FIN_WAIT2 on netman port.

• Use netstat -a | grep <netman port> on both UNIX and Windows systems to check whether netman is listening (see the sketch after this list).

• Look for FIN_WAIT2 for the IBM Tivoli Workload Scheduler port.

• If FIN_WAIT2 does not time out (approximately 10 minutes), reboot.

• Network problems to look for outside of IBM Tivoli Workload Scheduler include:

– The router is down in a WAN environment.
– The switch or network hub is down on an FTA segment.
– There has been a power outage.
– There are physical defects in the network card/wiring.
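The netstat check mentioned above can be sketched as follows, assuming the default netman port of 31111; a line in LISTEN state means netman is listening, while lingering FIN_WAIT2 lines indicate a hung socket:

netstat -a | grep 31111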

9.8.2 Batchman not up or will not stay up (batchman down)

• If the message file has reached 10,000,000 bytes (approximately 9.5 MB):

Check the size of the message files (files whose names end with .msg) in the IBM Tivoli Workload Scheduler home directory and pobox subdirectory. 48 bytes is the minimum size of these files.

– Use the evtsize command to temporarily expand the message file, and then try to start IBM Tivoli Workload Scheduler:

evtsize <filename> <new size in bytes>

For example:

evtsize Mailbox.msg 20000000

– If necessary, remove the message file (only after the evtsize expansion and restart attempt have failed).

• Jobman not owned by root.

If jobman (in the bin subdirectory of the IBM Tivoli Workload Scheduler home directory) is not owned by root, correct this problem by logging in as root and running the command chown root jobman (a sketch of these checks appears at the end of this section).

Important: Message files contain important messages being sent between IBM Tivoli Workload Scheduler processes and between IBM Tivoli Workload Scheduler agents. Remove a message file only as a last resort; all data in the message file will be lost. Also never remove message files while any IBM Tivoli Workload Scheduler processes are running.


• Bad record read in the Symphony file.

This can happen for the following reasons:

– Initialization process interrupted or failed.
– Cannot create Jobtable.
– Corrupt data in Jobtable.

• Message file corruption.

This can happen for the following reasons:

– Bad data
– File system full
– Power outage
– CPU hardware crash
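The jobman ownership check referred to earlier in this section can be sketched as follows; the use of the maestro command to resolve TWShome follows the scripts earlier in this chapter, and the commands must be run as root:

ls -l `maestro`/bin/jobman        (the owner should be root and the setuid bit should be shown)
chown root `maestro`/bin/jobman   (correct the ownership if necessary)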

9.8.3 Jobs not running

• Jobs not running on Windows

– If the Windows authorizations for the IBM Tivoli Workload Scheduler user are not in place, grant the user the following rights:

• Act as part of the operating system.
• Increase quotas.
• Log on as a batch job.
• Log on as a service.
• Log on locally.
• Replace a process level token.

– Valid Windows or Domain user for FTA not in the IBM Tivoli Workload Scheduler user database

Add the IBM Tivoli Workload Scheduler user for FTA in the IBM Tivoli Workload Scheduler user database.

– Password for Windows user has been changed.

Do one of the following:

• Change the password on Windows to match the one in the IBM Tivoli Workload Scheduler user database.

• Change the password in the IBM Tivoli Workload Scheduler user database to a new password.

Tip: When checking whether root owns the jobman, also check that the setuid bit is present and the file system containing TWShome is not mounted with the nosetuid option.


Note that changes to the IBM Tivoli Workload Scheduler user database will not take effect until the Jnextday process runs.

If the user definition existed previously, you can use the altpass command to change the password for the production day.

• Jobs not running on Windows or UNIX.

– Batchman down.

See “Batchman not up or will not stay up (batchman down)” on page 342.

– Limit set to 0.

To change the limit to 10 via the conman command line:

For a single FTA:

lc <FTA name>;10

For all FTAs:

lc @;10;noask

– FENCE set above the limit.

To change FENCE to 10 via the conman command line:

For all FTAs:

f @;10;noask

– Dependencies not met. This could be for the following reasons:

• Start time not reached yet, or UNTIL time has passed.
• OPENS file not present yet.
• Job FOLLOW not complete.

9.8.4 Jnextday is hung or still in EXEC state

• Stageman cannot get exclusive access to Symphony.

• Batchman and/or mailman was not stopped before running Jnextday from the command line.

• Jnextday not able to stop all FTAs.

– Network segment down and cannot reach all FTAs.
– One or more of the FTAs has crashed.
– Netman not running on all FTAs.

• Jnextday not able to start or initialize all FTAs.

– The Master or FTA was manually started before Jnextday completed stageman.

Reissue a link from the Master to the FTA.


– The Master was not able to start batchman after stageman completed.

See “Batchman not up or will not stay up (batchman down)” on page 342.

– The Master was not able to link to FTA.

See “FTAs not linking to the Master Domain Manager” on page 340.

9.8.5 Jnextday in ABEND state

• Jnextday not completing compiler processes.

This may be due to bad or missing data in the schedule or job. You can perform the following actions:

– Check for missing calendars.
– Check for missing resources.
– Check for missing parameters.

• Jnextday not completing the stageman process.

This may be due to bad or missing data in the CARRYFORWARD schedule. You can perform the following actions:

– Run show jobs or show schedules to find the bad schedule.
– Add missing data and rerun Jnextday.
– Cancel the schedule and rerun Jnextday.

9.8.6 FTA still not linked after Jnextday

• Symphony file corruption

Corruption during transfer of Sinfonia file

– Byte order problem between UNIX and Intel

Apply patches that correct byte order problem. Recent versions of IBM Tivoli Workload Scheduler are unaffected by this problem.

• Symphony file, but no new run number, date, or time stamp

You can perform the following actions:

– Try to link FTA. See 9.8.1, “FTAs not linking to the Master Domain Manager” on page 340.

– Remove Symphony, Sinfonia and message files (on FTA only) and link from the Master again. Before removing the files be sure that all IBM Tivoli Workload Scheduler processes are stopped.


• Run number, Symphony file, but no date or time stamp

You can perform the following actions:

– Try to link FTA. See 9.8.1, “FTAs not linking to the Master Domain Manager” on page 340.

– Remove Symphony and message files (on FTA only) and link from the Master again.
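A sketch of the removal and relink described in the last two bullets follows; it assumes the standard file layout on the FTA, and FTA_NAME is a placeholder for the workstation name:

(on the FTA, with all IBM Tivoli Workload Scheduler processes stopped)
conman "shut;wait"
cd TWShome
rm -f Symphony Sinfonia *.msg pobox/*.msg
./StartUp

(on the Master)
conman "link FTA_NAME"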

9.8.7 Troubleshooting tools

IBM Tivoli Workload Scheduler provides two data-gathering tools that help determine the cause of problems: Autotrace and Metronome.

Autotrace feature

Autotrace is a built-in, flight-recorder-style trace mechanism that logs all activities performed by the IBM Tivoli Workload Scheduler processes. In case of product failure or unexpected behavior, this feature can be extremely effective in finding the cause of the problem and providing a quick solution.

In case of problems, you are asked to create a trace snap file by issuing some simple commands. The trace snap file is then inspected by the Tivoli support team, which uses the logged information as an efficient problem determination base. The Autotrace feature, already available with Version 8.1, has now been extended to run on additional platforms.

This feature is now available with Tivoli Workload Scheduler on the following platforms:

• AIX
• HP-UX
• Solaris Operating Environment
• Microsoft Windows NT and 2000
• Linux

The tracing system is completely transparent and does not have any impact on file system performance, because it is handled fully in memory. It is automatically started by StartUp, so no further action is required.

Autotrace uses a dynamic link library named libatrc. Normally, this library is located in /usr/Tivoli/TWS/TKG/3.1.5/lib on UNIX, not in the installation path as you might expect.


Metronome script

Metronome is a Perl script that takes an instant snapshot of Tivoli Workload Scheduler and generates an HTML report. It is a useful tool for describing a problem to Customer Support. For best results, run the tool as soon as the problem is discovered. Metronome copies all the Tivoli Workload Scheduler configuration files into a directory named:

TWShome/snapshots/snap_date_time.

The expected action flow is the following:

1. The user runs Metronome after discovering the problem.

2. The user opens a problem to Customer Support and includes the HTML report found in TWShome/snapshots/snap_date_time/report.html.

3. Customer Support reads the report and either resolves the problem or asks for the package of files produced by the script.

4. Customer Support receives the package of files and resolves the problem.

The Metronome files are copied in the TWShome/bin directory when Tivoli Workload Scheduler is installed.

Format:

perl path/maestro/bin/Metronome.pl [MAESTROHOME=TWS_dir] [-html] [-pack] [-prall]

Where:

• MAESTROHOME is the directory where the script is located, if it is different from the installation directory of Tivoli Workload Scheduler.

• -html generates an HTML report.

• -pack creates a package of configuration files.

• -prall prints all variables.

Example 9-19 on page 348 shows how to run the command from the Tivoli Workload Scheduler home directory.

Tip: Whenever Autotrace could be an option for problem determination, take snap files as soon as possible. If you need to gather a lot of information quickly, use Metronome.


Example 9-19 Metronome script

$ metronome.pl

Example 9-20 shows how to run the command from a directory that is not the Tivoli Workload Scheduler home directory.

Example 9-20 Metronome script

$ metronome.pl MAESTROHOME=E:\TWS82_driver1\win32app\maestro
MAESTROHOME=E:/TWS82_driver1/win32app/maestro

9.9 Finding answers

The following are some useful online help sites for IBM Tivoli Workload Scheduler:

• Tivoli support

http://www-3.ibm.com/software/sysmgmt/products/support/

• Product documentation

http://publib.boulder.ibm.com/tividd/td/tdmktlist.html

• Tivoli user groups

http://www-3.ibm.com/software/sysmgmt/products/support/Tivoli_User_Groups.html

• IBM Tivoli Workload Scheduler Fix Pack

ftp://ftp.software.ibm.com/software/tivoli_support/patches/

http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches/

• Public IBM Tivoli Workload Scheduler mailing list

http://groups.yahoo.com/group/maestro-l

Note: To run the command successfully, you need to follow the instructions for installing Perl in 3.10, “Installing Perl5 on Windows” on page 133, on both UNIX and Windows.

Note: This public mailing list has many contributors from all over the world and is an excellent forum for new and more experienced Tivoli Workload Scheduler users.


Appendix A. Code samples

This appendix includes code samples referenced in the chapters of this redbook.

These script files are as follows:

• “README file for JSC Fix Pack 01” on page 350
• “Sample freshInstall.txt” on page 359
• “Customized freshInstall.txt” on page 365
• “maestro_plus rule set” on page 372
• “Script for performing long-term switch” on page 382


README file for JSC Fix Pack 01

Date: August, 29, 2003

Patch: 1.3-JSC-FP01

PTF Number: U495374

Components: Tivoli Job Scheduling Console Version 1.3 Tivoli Workload Scheduler 8.2.0 for z/OS Connector

General Description: Tivoli Job Scheduling Console fix pack for Version 1.3 and Tivoli Workload Scheduler for z/OS Connector fix pack for Version 1.3

PROBLEMS FIXED:

New APARs in 1.3-JSC-FP01 from superseded fix pack: APAR IY41985 Symptoms: UNABLE TO DEFINE 40 EXTERNAL DEPENDENCIES VIA JSC

APAR IY44472 Symptoms: JSC 1.2 BROWSE JOB LOG DOES NOT OPEN ON LARGE STDLIST FILES * APAR IY47022 Symptoms: Cannot submit job that is more than 15 chars

APAR IY47230 Symptoms: AD HOC JOB LOGIN FIELD SHOWS RED X WHEN >8 CHARACTERS ARE ENTERED

New fixes in 1.3-JSC-FP01 from superseded fixpack: Internal CMVC Defect 10374 Symptoms: Inconsinstency between the old JSC and new about prop. of js Internal CMVC Defect 10867 Symptoms: Job Instance Filter doesn’t load all internals status.

Internal CMVC Defect 10957 Symptoms: “Properties-Workstatons in Database” in backgorund after error

Internal CMVC Defect 10977 Symptoms: About database lists of jobs

Internal CMVC Defect 10978 Symptoms: Wrong error message if the user is not authorized to modify job


Internal CMVC Defect 10982 Symptoms: “Task” field edition mot allowed Internal CMVC Defect 10983 Symptoms: OPC Connector - Problem with query panels

Internal CMVC Defect 10992 Symptoms: Problem using Time Restriction for All scheduled jobs

Internal CMVC Defect 10994 Symptoms: Recovery Job: Not saved if modified submitting

Internal CMVC Defect 10997 Symptoms: Change password allowed after setting alternate plan Internal CMVC Defect 10998 Symptoms: Wrong message when adding a dep after setting alternatate plan Internal CMVC Defect 10999 Symptoms: Set alternate plan, change job properties: message showed twice

Internal CMVC Defect 11000 Symptoms: “Monitored Job” property for job in plan is lost.

Internal CMVC Defect 11001 Symptoms: Wrong message when adding a dep to a job in an alternate plan

Internal CMVC Defect 11003 Symptoms: “Change Limit” window hangs after setting alternate plan Internal CMVC Defect 11004 Symptoms: Reply to a prompt in an alternate plan: error message missing Internal CMVC Defect 11005 Symptoms: Change a resource unit on alternate plan: wrong error message

Internal CMVC Defect 11006 Symptoms: Wrong error message when creating a res with a wrong cpu name

Internal CMVC Defect 11008 Symptoms: Icon missing in the “Restore plan” message window

Internal CMVC Defect 11009 Symptoms: Wrong cursor shape after changing job stream properties Internal CMVC Defect 11010 Symptoms: Wrong error message when changing time restriction Internal CMVC Defect 11011


Symptoms: No error message deleting the Highest acc. return code

Internal CMVC Defect 11013 Symptoms: Scriptname showed as wrong if it contains two parameters

Internal CMVC Defect 11014 Symptoms: Ad hoc submissioTwo error windows when omitting the task type

Internal CMVC Defect 11017 Symptoms: OPC:Some fields in “Find job instance” window are wrong Internal CMVC Defect 11034 Symptoms: incorrect values reported in JSC enabling time zone Internal CMVC Defect 11044 Symptoms: ESP: The special character “%” is not accepted in any filter.

Internal CMVC Defect 11045 Symptoms: ESP: Special characters are not accepted for SR in the DB.

Internal CMVC Defect 11046 Symptoms: ESP: The “Destination” field is not editabled for WS in the DB.

Internal CMVC Defect 11052 Symptoms: Time restriction randomly modified in Job Stream Instance List Internal CMVC Defect 11065 Symptoms: Wrong hour submitting a job stream Internal CMVC Defect 11066 Symptoms: Problems submitting a job stream from action list

Internal CMVC Defect 11067 Symptoms: ESP:workstation name field is too small in z/OS

Internal CMVC Defect 11069 Symptoms: Preferences not saved at login when the port is specified

Internal CMVC Defect 11071 Symptoms: ESP: The ‘Browse Job Log’ button is wrongly disabled. Internal CMVC Defect 11074 Symptoms: Change status of job via jobstream-selection doesn’t work. Internal CMVC Defect 11076 Symptoms: ESP: Error when Name+Extended is multiple of 20 chars

Internal CMVC Defect 11078 Symptoms: Cannot open the list after closing the Hiperbolic View


Internal CMVC Defect 11081 Symptoms: Properties window of a job are to be modal

Internal CMVC Defect 11082 Symptoms: Menu Resource Dependencies doesn’t work Internal CMVC Defect 11088 Symptoms: Problem copying Run Cycle Type Internal CMVC Defect 11092 Symptoms: Expand All,Collaps All don’t work in Actions List

Internal CMVC Defect 11093 Symptoms: Name and workstation not editable for a job in the job stream

Internal CMVC Defect 11095 Symptoms: Filter by Extended job name doesn’t work.

Internal CMVC Defect 11098 Symptoms: Wrong icons for Job and Job stream in plan Internal CMVC Defect 11099 Symptoms: Bad Time Restriction is saved Internal CMVC Defect 11100 Symptoms: ESP;JS editor opens empty in both DB and Plan

Internal CMVC Defect 11101 Symptoms: No error message if edit a wrong WS name in Submit JS Panel

Internal CMVC Defect 11119 Symptoms: ESP:Delete Dialog of a job in JS change the focus

Internal CMVC Defect 11120 Symptoms: ESP:Filter Criteria lost during the restart of GUI Internal CMVC Defect 11121 Symptoms: ESP:Job Stream locked 15 seconds after Properties update Internal CMVC Defect 11122 Symptoms: ESP:Use the double click to allow a new Job Stream Definition

Internal CMVC Defect 11126 Symptoms: Error message shows information related to a method Java

Internal CMVC Defect 11129 Symptoms: Time Zone Indication for z/OS


Internal CMVC Defect 11153 Symptoms: opening a Job Stream editor use a wrong JS Internal CMVC Defect 11156 Symptoms: JS Editor doesn’t open if the object is locked

Dependencies: JSC 1.3 GA, Tivoli Workload Scheduler 8.2.0 for z/OS Connector GA, ++APAR for apar PQ76778 on TWS for z/OS and apar PQ76409.

Files replaced or added by this Patch: Fixpack Contents on CD:+---README.U495374 (this file)+---1.3-JSC-fp01_VSR.txt+---CONNECT¦ +---U_OPC.image¦ ¦ ¦ PATCHES.LST¦ ¦ ¦ FILE1.PKT¦ ¦ ¦ FILE2.PKT¦ ¦ ¦ FILE3.PKT¦ ¦ ¦ FILE4.PKT¦ ¦ ¦ FILE5.PKT¦ ¦ ¦ FILE6.PKT¦ ¦ ¦ FILE7.PKT¦ ¦ ¦ FILE8.PKT¦ ¦ ¦ FILE9.PKT¦ ¦ ¦ U_OPC.IND¦ ¦ ¦¦ ¦ +---CFG¦ ¦ FILE1.CFG¦ ¦ FILE2.CFG¦ ¦ FILE3.CFG¦ ¦ FILE4.CFG¦ ¦ FILE5.CFG¦ ¦ FILE6.CFG¦ ¦ FILE7.CFG¦ ¦ FILE8.CFG¦ ¦ FILE9.CFG¦ ¦¦ +---U_OPC_L.image¦ ¦ PATCHES.LST¦ ¦ FILE1.PKT¦ ¦ FILE2.PKT¦ ¦ FILE3.PKT¦ ¦ FILE4.PKT¦ ¦ FILE5.PKT


¦ ¦ U_OPC_L.IND¦ ¦¦ +---CFG¦ FILE1.CFG¦ FILE2.CFG¦ FILE3.CFG¦ FILE4.CFG¦ FILE5.CFG¦+---JSC¦ +---Aix¦ ¦ ¦ setup.bin¦ ¦ ¦¦ ¦ +---SPB¦ ¦ TWSConsole_FixPack.spb¦ ¦¦ +---HP¦ ¦ ¦ setup.bin¦ ¦ ¦¦ ¦ +---SPB¦ ¦ TWSConsole_FixPack.spb¦ ¦¦ +---Linux¦ ¦ ¦ setup.bin¦ ¦ ¦¦ ¦ +---SPB¦ ¦ TWSConsole_FixPack.spb¦ ¦¦ +---Solaris¦ ¦ ¦ setup.bin¦ ¦ ¦¦ ¦ +---SPB¦ ¦ TWSConsole_FixPack.spb¦ ¦¦ +---Windows¦ ¦ setup.exe¦ ¦¦ +---SPB¦ TWSConsole_FixPack.spb¦+---DOC

README_PDF

FIX PACK CONTENT ON FTP-SITE+---README.U495374 (this file)¦+---JSC_AIX¦ JSC_Aix.tar


¦+---JSC_HP¦ JSC_HP.tar¦+---JSC_Linux¦ JSC_Lnx.tar¦+---JSC_Solaris¦ JSC_Solaris.tar¦+---JSC_Windows¦ JSC_Win.zip¦+---zOSCON_U_OPC¦ con_u_opc.tar¦+---zOSCON_U_OPC_L con_u_opc_l.tar

Applying the patch:

Job Scheduling Console:

The Job Scheduling Console fix pack installation is based on the ISMP technology. You can install the fix pack only after you have installed the Job Scheduling Console. When you start the fix pack installation a welcome panel is displayed. If you click OK a discovery action is launched and a panel with the JSC instance and the discovered Job Scheduling Console directory is displayed. If no instance is discovered an error message appears. When you click Next, a panel with the following actions appears: Apply, Rollback, and Commit. The first time you install the fix pack you can only select the Apply action. With this action the fix pack is installed in undoable mode and a backup copy of the product is stored on your workstation. If the fix pack is already installed the Rollback and Commit actions are available. If you select Rollback, you remove the fix pack installation and return to the previous installation of the Job Scheduling Console. If you select Commit, the installation backup copy is deleted and the fix pack installation mode changes to commit. Use the commit action after you have tested the fix pack. If the fixpack installation is in commit mode and is corrupt, you can run the setup program and select the Repair action. This action is displayed in the panel in place of the Apply action.


Connector:IMPORTANT: Before installing the fix pack, create a backup copy of the Job Scheduling Console current installation.

1) Open the Tivoli desktop.

2) Choose the menu: Desktop -> Install -> Install Patch...

3) Follow the install instructions stated on the manual

Known defects and limitations (Installation):

1) Do not launch the Job Scheduling Console fresh install after the fix pack installation.

Workaround to add languages to a JSC instance if the fix pack has been applied:

- If the fix pack is in undoable mode, you can perform the following steps: 1)Remove the fix pack with the Rollback action 2)Add languages with the fresh install 3)Apply the fix pack

- If the fix pack is not in undoable mode, you can perform the following steps: 1) Remove the fresh install 2) Run the fresh install to install the languages 3) Apply the fix pack

2) Before installing the fix pack in undoable mode, ensure that you have 35 MB of free space in the root/Administrator home directory for the installation backup.

3) All the panels are identical, after the discovery of the installed instance, regardless of the installation actions you are running.

4) The panel that contains the installation summary displays the wrong disk space size.

5) During the installation a warning message might be displayed for replacing the Java Virtual Machine. You must answer YES.

6) On HP platforms, if you have performed a full or custom installation of the Job Scheduling Console with some languages, you may


not be able to apply the fix pack. The problem occurs only if the installation path is different from the default. As a workaround, install the fixpack on the GA version of the product by using the silent mode with option -P installLocation=<path> where <path> points to the location where the GA version was installed. After you installed the fix pack, use the normal (rather than the silent) installation process to perform the Rollback and Commit actions on the fix pack. Use the setup.bin command to do this. To uninstall the product (GA version plus fix pack): 1. Run uninstall.bin in the uninstaller2 folder. 2. Manually delete all the remaining files.

7) On all platforms, silent install proceeds in the following order regardless of your specifications: 1. Apply 2. Commit

Known defects and limitations: 1) The recovery option of a job is not saved if you open the Properties panel during its submission. Note: defect 10994 fixes only the problem related to the recovery job. 2) If you add a dependency to an existing job stream, when you try to save it, the following error message is displayed: Cannot save the job stream. Reason: <JOB STREAM NAME> already exists.

3) The following limitations are present in the filter of the lists: - In the plan lists of job streams and jobs, the filter on the columns does not work. - In the plan list of job stream, the filter on the Action in the “Time Restrictions” page is not saved. - If you try to filter job stream instances with status = “Error”, the list appears empty.

4) On HP workstations you cannot use the special character “%” in Job Instances filters.

5) You can submit a job into the plan using an alias that starts with a numeric character.

6) If you open a job stream in the plan and try to cancel a job, the job will not be cancelled.


7) In the Job Stream Instance Editor, select a job stream, click on the Properties button. In the Properties panel modify some properties and click “OK”. The Properties window remains in the background, the Job Stream Instance Editor hangs. If you run Sumbit and Edit, the Job Stream Instance panel is opened in the background.

Sample freshInstall.txt

################################################################################## InstallShield Response File for Installing a New Instance## This file contains values that can be used to configure the installation program with the options # specified below when the wizard is run with the "-options" command line option. # Read the instructions contained in each section to customize the options and their respective values.# # A common use of a response file is to run the wizard in silent mode. This enables you to# specify wizard keywords without running the# wizard in graphical mode.## The keywords that can be specified for the wizard are listed below. To install in silent mode,# follow these steps:# # 1. Customize the value of the required keywords (search for the lines starting with -W). # This file contains both UNIX and Windows specific keywords. The required keywords are # customized for a Windows workstation.# # 2. Enable the optional keywords by removing the leading '###' characters from the# line (search for '###' to find the keywords you can set).# # 3. Specify a value for a keyword by replacing the characters within double quotation # marks "<value>".


# # 4. Save the changes to the file.# # 5. Enter the following command:# On UNIX:# SETUP.bin -options freshInstall.txt# On Windows:# SETUP.exe -options freshInstall.txt## ################################################################################## INSTALLATION PATH AND IBM TIVOLI WORKLOAD SCHEDULER USER## [Installation directory]# The fully qualified path to the location of the IBM# Tivoli Workload Scheduler installation. This path# cannot contain blanks. On Windows workstations, the directory# must be located on an NTFS file system.# On UNIX workstations, this path must be the user's home# directory:### -W twsLocationPanel.directory="/opt/TWS/<tws_user>" -W twsLocationPanel.directory="<system_drive>\win32app\TWS\<tws_user>"

# [Windows IBM Tivoli Workload Scheduler user name]# The user name for which IBM Tivoli Workload Scheduler is being# installed. On Windows systems, if this user account does not already exist, it is# automatically created by the installation program.

-W userWinPanel.inputUserName="twsuser"

# [Windows IBM Tivoli Workload Scheduler password name]# The password associated with the <tws_user> user name. This value is# required only on Windows workstations.

-W userWinPanel.password="twsuser"

# [Windows IBM Tivoli Workload Scheduler domain name]# The hostname of the workstation on which you are performing the installation.

-W userWinPanel.twsDomain="cpuName"


# [UNIX IBM Tivoli Workload Scheduler user name]# On UNIX systems, this user account must be created manually before running the# installation. Create a user with a home directory. IBM Tivoli Workload Scheduler # will be installed under the HOME directory of the specified user. ### -W userUNIXPanel.inputUserName="twsuser"

################################################################################## CPU DEFINITION## [Workstation name of the installation]# The name assigned to the workstation where the IBM Tivoli Workload Scheduler installation runs.# This name cannot exceed 16 characters. If you are installing a master domain manager, # the value of this keyword must be identical to the cpubean.masterCPU keyword.

-W cpubean.thisCPU="twsCpu"

# [Master domain manager name]# The name of the master domain manager. This name cannot exceed 16# characters. If you are installing a master domain manager, the value of this keyword # must be identical to the cpubean.thisCPU keyword.

-W cpubean.masterCPU="twsCpu"

# [TCP port]# The TCP port number that will be used by the instance being installed.# It must be an unassigned 16ñbit value in the range 1ñ65535. The default value is 31111.# When installing more than one instance on the same workstation, use# different port numbers for each instance.

-W cpubean.tcpPortNumber="31111"

# [Company name]# The name of the company. This name is registered in the localopts file and appears in# program headers and reports.

-W cpubean.company="IBM"


################################################################################## INSTALLATION TYPE## Do not modify the value of this keyword. It is customized to perform# fresh install.

-W twsDiscoverInstances.typeInstance="NEW"

################################################################################## AGENT TYPE## Do not modify the value of this keyword.

-W setupTypes.typeInstall="custom"

# The type of IBM Tivoli Workload Scheduler agent to install. Enable the keyword for # the type of agent you want to install. This file is customized to install a # Fault Tolerant Agent.# Standard agent:### -W agentType.stAgent="true"# Master domain manager:### -W agentType.mdmAgent="true"# Backup master:### -W agentType.bkmAgent="true"# Fault Tolerant Agent:

-W agentType.ftAgent="true"################################################################################## LANGUAGE PACKS# # [All supported language packs]# The English language pack and the language locale of# the operating system are installed by default. To install all supported language packs# enable the keyword. ### -W languageChoice.catOption="true"

# [Single language pack selection]# The types of language packs you can install. Enable the keyword for the language packs


# you want to install. Language packs that remain commented, default to false.### -W languageChoice.chineseSimplified="true"### -W languageChoice.chineseTraditional="true"### -W languageChoice.german="true"### -W languageChoice.french="true"### -W languageChoice.italian="true"### -W languageChoice.japanese="true"### -W languageChoice.korean="true"### -W languageChoice.portuguese="true"### -W languageChoice.spanish="true"################################################################################## OPTIONAL FEATURES## [Tivoli Plus Module]# Installs the Tivoli Plus Module feature. ### -W featureChoice.pmOption="true"# To create a task which enables you to launch the Job # Scheduling Console from the Tivoli desktop, optionally, specify the location # of the Job Scheduling Console installation directory.### -W twsPlusModulePanel.jscHomeDir=""# The path to the Tivoli Plus Module images and index file. These paths are identical.# To install the Tivoli Plus Module, enable both keywords.### -P checkPlusServerCDAction.imagePath="D:\disk1\TWS_CONN"### -P checkPlusServerCDAction.indPath="D:\disk1\TWS_CONN"### [Connector]# Installs the Connector feature.### -W featureChoice.conOption="true"# The Connector instance name identifies the instance in the Job Scheduling Console. The# name must be unique within the scheduler network.### -W twsConnectorPanel.jscConnName="TMF2conn"# Customize the path to the Job Scheduling Services and Connector images and index files.# These paths are identical. To install the Connector, enable all keywords.### -P checkJssServerCDAction.imagePath="D:\disk1\TWS_CONN"### -P checkJssServerCDAction.indPath="D:\disk1\TWS_CONN"### -P checkConnServerCDAction.imagePath="D:\disk1\TWS_CONN"### -P checkConnServerCDAction.indPath="D:\disk1\TWS_CONN"################################################################################## TIVOLI MANAGEMENT FRAMEWORK# The Connector and Tivoli Plus Module features prerequisite the Tivoli# Management Framework, Version 3.7.1 or 4.1.


## [Tivoli Management Framework installation path]# The directory where you want to install Tivoli Management Framework.### -W twsFrameworkPanel.tmfHomeDir="D:\framework"## [Remote access account name and password]# Optionally, specify a Tivoli remote access account name and password that allow Tivoli# programs to access remote file systems.### -W twsFrameworkPanel.tmfUser="twsuserTMF2"### -W twsFrameworkPanel.tmfPassword="twsuserTMF2"# # [Tivoli Management Framework images location for Windows]# Customize the path to the Tivoli Management Framework images and the # index file for Windows. These paths are identical. To install# Tivoli Management Framework, enable both keywords.### -P checkFrameworkCdAction.imagePath="D:\disk1\fwork\new1"### -P checkFrameworkCdAction.indPath="D:\disk1\fwork\new1"# # [Tivoli Management Framework images location for UNIX]# Customize the path to the Tivoli Management Framework images and the # index file for UNIX. These paths are identical. To install# Tivoli Management Framework, enable both keywords.### -P checkUnixFrameworkCdAction.imagePath="\mnt\cdrom"### -P checkUnixFrameworkCdAction.indPath="\mnt\cdrom"# # [Tivoli Management Framework language packs location]# Customize the path to the images and index files of the Tivoli Management Framework # language packs. These paths are identical. To install a language pack,# enable both keywords for each language you want to install.# German### -P checkTmfGermanCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfGermanCdAction.indPath="D:\disk1\fwork\new1"# Spanish### -P checkTmfSpanishCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfSpanishCdAction.indPath="D:\disk1\fwork\new1"# French### -P checkTmfFrenchCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfFrenchCdAction.indPath="D:\disk1\fwork\new1"# Italian### -P checkTmfItalianCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfItalianCdAction.indPath="D:\disk1\fwork\new1"# Korean### -P checkTmfKoreanCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfKoreanCdAction.indPath="D:\disk1\fwork\new1"# Japanese### -P checkTmfJapaneseCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfJapaneseCdAction.indPath="D:\disk1\fwork\new1"


# Simplified Chinese### -P checkTmfSimplifiedCnCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfSimplifiedCnCdAction.indPath="D:\disk1\fwork\new1"# Traditional Chinese### -P checkTmfTraditionalCnCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfTraditionalCnCdAction.indPath="D:\disk1\fwork\new1"# Portuguese### -P checkTmfPortugueseCdAction.imagePath="D:\disk1\fwork\new1"### -P checkTmfPortugueseCdAction.indPath="D:\disk1\fwork\new1"################################################################################## DO NOT CHANGE THE FOLLOWING OPTIONS OR THE INSTALLATION WILL FAIL# ################################################################################-silent-G replaceNewerResponse="Yes to All"-W featuresWarning.active=False-W winUserNotFound.active=False-W featuresWarning23.active=False-W featuresWarning2.active=False-W featuresWarning.active=False-W featuresWarning222.active=False-W featuresWarning22.active=False

Customized freshInstall.txt

################################################################################
#
# InstallShield Response File for Installing a New Instance
#
# This file contains values that can be used to configure the installation
# program with the options specified below when the wizard is run with the
# "-options" command line option.
# Read the instructions contained in each section to customize the options and
# their respective values.
#
# A common use of a response file is to run the wizard in silent mode. This
# enables you to specify wizard keywords without running the wizard in
# graphical mode.
#
# The keywords that can be specified for the wizard are listed below. To
# install in silent mode, follow these steps:
#
# 1. Customize the value of the required keywords (search for the lines
#    starting with -W). This file contains both UNIX and Windows specific
#    keywords. The required keywords are customized for a Windows workstation.
#
# 2. Enable the optional keywords by removing the leading '###' characters
#    from the line (search for '###' to find the keywords you can set).
#
# 3. Specify a value for a keyword by replacing the characters within double
#    quotation marks "<value>".
#
# 4. Save the changes to the file.
#
# 5. Enter the following command:
#    On UNIX:
#       SETUP.bin -options freshInstall.txt
#    On Windows:
#       SETUP.exe -options freshInstall.txt
#
################################################################################
#
# INSTALLATION PATH AND IBM TIVOLI WORKLOAD SCHEDULER USER
#
# [Installation directory]
# The fully qualified path to the location of the IBM
# Tivoli Workload Scheduler installation. This path
# cannot contain blanks. On Windows workstations, the directory
# must be located on an NTFS file system.
# On UNIX workstations, this path must be the user's home
# directory:
### -W twsLocationPanel.directory="/opt/TWS/<tws_user>"

-W twsLocationPanel.directory="C:\win32app\tws"

# [Windows IBM Tivoli Workload Scheduler user name]
# The user name for which IBM Tivoli Workload Scheduler is being
# installed. On Windows systems, if this user account does not already exist,
# it is automatically created by the installation program.

-W userWinPanel.inputUserName="tws"

# [Windows IBM Tivoli Workload Scheduler password name]
# The password associated with the <tws_user> user name. This value is
# required only on Windows workstations.

-W userWinPanel.password="chuy5"

# [Windows IBM Tivoli Workload Scheduler domain name]
# The hostname of the workstation on which you are performing the installation.

-W userWinPanel.twsDomain="6579IMAGE"

# [UNIX IBM Tivoli Workload Scheduler user name]
# On UNIX systems, this user account must be created manually before running the
# installation. Create a user with a home directory. IBM Tivoli Workload Scheduler
# will be installed under the HOME directory of the specified user.
### -W userUnixPanel.inputUserName="twsuser"

################################################################################
#
# CPU DEFINITION
#
# [Workstation name of the installation]
# The name assigned to the workstation where the IBM Tivoli Workload Scheduler installation runs.
# This name cannot exceed 16 characters. If you are installing a master domain manager,
# the value of this keyword must be identical to the cpubean.masterCPU keyword.

-W cpubean.thisCPU="TWS2"

# [Master domain manager name]
# The name of the master domain manager. This name cannot exceed 16
# characters. If you are installing a master domain manager, the value of this keyword
# must be identical to the cpubean.thisCPU keyword.

-W cpubean.masterCPU="MASTER"

# [TCP port]
# The TCP port number that will be used by the instance being installed.
# It must be an unassigned 16-bit value in the range 1-65535. The default value is 31111.
# When installing more than one instance on the same workstation, use
# different port numbers for each instance.

-W cpubean.tcpPortNumber="31111"

# [Company name]
# The name of the company. This name is registered in the localopts file and appears in
# program headers and reports.

-W cpubean.company="ITSO/Austin"

################################################################################
#
# INSTALLATION TYPE
#
# Do not modify the value of this keyword. It is customized to perform a
# fresh install.

-W twsDiscoverInstances.typeInstance="NEW"

################################################################################
#
# AGENT TYPE
#
# Do not modify the value of this keyword.

-W setupTypes.typeInstall="custom"

# The type of IBM Tivoli Workload Scheduler agent to install. Enable the keyword for
# the type of agent you want to install. This file is customized to install a
# Fault Tolerant Agent.
# Standard agent:
### -W agentType.stAgent="true"
# Master domain manager:
### -W agentType.mdmAgent="true"
# Backup master:
### -W agentType.bkmAgent="true"
# Fault Tolerant Agent:

-W agentType.ftAgent="true"

################################################################################
#
# LANGUAGE PACKS
#
# [All supported language packs]
# The English language pack and the language locale of
# the operating system are installed by default. To install all supported language packs,
# enable the keyword.
### -W languageChoice.catOption="true"

# [Single language pack selection]
# The types of language packs you can install. Enable the keyword for the language packs
# you want to install. Language packs that remain commented default to false.
### -W languageChoice.chineseSimplified="true"
### -W languageChoice.chineseTraditional="true"
### -W languageChoice.german="true"
### -W languageChoice.french="true"
### -W languageChoice.italian="true"
### -W languageChoice.japanese="true"
### -W languageChoice.korean="true"
### -W languageChoice.portuguese="true"
### -W languageChoice.spanish="true"

################################################################################
#
# OPTIONAL FEATURES
#
# [Tivoli Plus Module]
# Installs the Tivoli Plus Module feature.
### -W featureChoice.pmOption="true"
# To create a task which enables you to launch the Job
# Scheduling Console from the Tivoli desktop, optionally, specify the location
# of the Job Scheduling Console installation directory.
### -W twsPlusModulePanel.jscHomeDir=""
# The path to the Tivoli Plus Module images and index file. These paths are identical.
# To install the Tivoli Plus Module, enable both keywords.
### -P checkPlusServerCDAction.imagePath="D:\disk1\TWS_CONN"
### -P checkPlusServerCDAction.indPath="D:\disk1\TWS_CONN"
#
# [Connector]
# Installs the Connector feature.
### -W featureChoice.conOption="true"
# The Connector instance name identifies the instance in the Job Scheduling Console. The
# name must be unique within the scheduler network.
### -W twsConnectorPanel.jscConnName="TMF2conn"
# Customize the path to the Job Scheduling Services and Connector images and index files.
# These paths are identical. To install the Connector, enable all keywords.
### -P checkJssServerCDAction.imagePath="D:\disk1\TWS_CONN"
### -P checkJssServerCDAction.indPath="D:\disk1\TWS_CONN"
### -P checkConnServerCDAction.imagePath="D:\disk1\TWS_CONN"
### -P checkConnServerCDAction.indPath="D:\disk1\TWS_CONN"

################################################################################
#
# TIVOLI MANAGEMENT FRAMEWORK
#
# The Connector and Tivoli Plus Module features require Tivoli
# Management Framework, Version 3.7.1 or 4.1.
#
# [Tivoli Management Framework installation path]
# The directory where you want to install Tivoli Management Framework.
### -W twsFrameworkPanel.tmfHomeDir="D:\framework"
#
# [Remote access account name and password]
# Optionally, specify a Tivoli remote access account name and password that allow Tivoli
# programs to access remote file systems.
### -W twsFrameworkPanel.tmfUser="twsuserTMF2"
### -W twsFrameworkPanel.tmfPassword="twsuserTMF2"
#
# [Tivoli Management Framework images location for Windows]
# Customize the path to the Tivoli Management Framework images and the
# index file for Windows. These paths are identical. To install
# Tivoli Management Framework, enable both keywords.
### -P checkFrameworkCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkFrameworkCdAction.indPath="D:\disk1\fwork\new1"
#
# [Tivoli Management Framework images location for UNIX]
# Customize the path to the Tivoli Management Framework images and the
# index file for UNIX. These paths are identical. To install
# Tivoli Management Framework, enable both keywords.
### -P checkUnixFrameworkCdAction.imagePath="/mnt/cdrom"
### -P checkUnixFrameworkCdAction.indPath="/mnt/cdrom"
#
# [Tivoli Management Framework language packs location]
# Customize the path to the images and index files of the Tivoli Management Framework
# language packs. These paths are identical. To install a language pack,
# enable both keywords for each language you want to install.
# German
### -P checkTmfGermanCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfGermanCdAction.indPath="D:\disk1\fwork\new1"
# Spanish
### -P checkTmfSpanishCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfSpanishCdAction.indPath="D:\disk1\fwork\new1"
# French
### -P checkTmfFrenchCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfFrenchCdAction.indPath="D:\disk1\fwork\new1"
# Italian
### -P checkTmfItalianCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfItalianCdAction.indPath="D:\disk1\fwork\new1"
# Korean
### -P checkTmfKoreanCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfKoreanCdAction.indPath="D:\disk1\fwork\new1"
# Japanese
### -P checkTmfJapaneseCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfJapaneseCdAction.indPath="D:\disk1\fwork\new1"
# Simplified Chinese
### -P checkTmfSimplifiedCnCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfSimplifiedCnCdAction.indPath="D:\disk1\fwork\new1"
# Traditional Chinese
### -P checkTmfTraditionalCnCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfTraditionalCnCdAction.indPath="D:\disk1\fwork\new1"
# Portuguese
### -P checkTmfPortugueseCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfPortugueseCdAction.indPath="D:\disk1\fwork\new1"

################################################################################
#
# DO NOT CHANGE THE FOLLOWING OPTIONS OR THE INSTALLATION WILL FAIL
#
################################################################################
-silent
-G replaceNewerResponse="Yes to All"
-W featuresWarning.active=False
-W winUserNotFound.active=False
-W featuresWarning23.active=False
-W featuresWarning2.active=False
-W featuresWarning.active=False
-W featuresWarning222.active=False
-W featuresWarning22.active=False
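The numbered steps in the response file header can also be scripted. The following sketch is illustrative only; the copy location, the sed expressions, and the values substituted (user name and workstation name) are assumptions, not values required by the installer:

# Work on a private copy of the shipped template (locations are assumptions)
cp freshInstall.txt /tmp/myInstall.txt

# Steps 1-3: customize a required keyword and enable an optional one
sed -e 's|^-W cpubean.thisCPU=.*|-W cpubean.thisCPU="TWS2"|' \
    -e 's|^### -W userUnixPanel.inputUserName=.*|-W userUnixPanel.inputUserName="twsuser"|' \
    /tmp/myInstall.txt > /tmp/myInstall.cust
mv /tmp/myInstall.cust /tmp/myInstall.txt

# Step 5: run the wizard in silent mode against the customized file (UNIX)
./SETUP.bin -options /tmp/myInstall.txt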


maestro_plus rule set

/*
 * Component Name: TivoliPlus for TWS
 *
 * $Source: /usr/local/SRC_CLEAR/favi/SMITH/maestroplus/custom/RCS/maestro_plus.rls,v $
 *
 * $Revision: 1.4 $
 *
 * Description:
 *
 * (C) COPYRIGHT IBM corp. 2002
 * Unpublished Work
 * All Rights Reserved
 * Licensed Material - Property of IBM corp.
 */

/************************************************************************/
/***********              TWS Event Rules                     **********/
/************************************************************************/

rule: job_started: (
  description: 'Clear job prompt events related to this job',
  event: _event of_class 'TWS_Job_Launched'
    where [
      status: outside ['CLOSED'],
      job_name: _job_name,
      schedule_cpu: _schedule_cpu,
      schedule_name: _schedule_name
    ],
  reception_action: close_all_job_prompt_events: (
    all_instances(
      event: _job_prompt_event of_class 'TWS_Job_Prompt'
        where [
          job_name: equals _job_name,
          schedule_cpu: equals _schedule_cpu,
          schedule_name: equals _schedule_name,
          status: outside ['CLOSED']
        ]),
    set_event_status(_job_prompt_event, 'CLOSED')
  )
).

rule: job_done: (
  description: 'Acknowledge that the job is done and close all \
    outstanding job started events',
  event: _event of_class 'TWS_Job_Done'
    where [
      status: outside ['CLOSED'],
      hostname: _hostname,
      job_name: _job_name,
      job_cpu: _job_cpu,
      schedule_name: _schedule_name
    ],
  reception_action: auto_acknowledge_job_done: (
    set_event_status(_event, 'CLOSED')
  ),
  reception_action: close_all_job_started_events: (
    all_instances(
      event: _job_started_event of_class 'TWS_Job_Launched'
        where [
          hostname: equals _hostname,
          job_name: equals _job_name,
          job_cpu: equals _job_cpu,
          schedule_name: equals _schedule_name,
          status: outside ['CLOSED']
        ]),
    set_event_status(_job_started_event, 'CLOSED')
  )
).

rule: job_ready_started: (
  description: 'Acknowledge that the job is launched and close all \
    outstanding job ready events',
  event: _event of_class 'TWS_Job_Launched'
    where [
      status: outside ['CLOSED'],
      hostname: _hostname,
      job_name: _job_name,
      job_cpu: _job_cpu,
      schedule_name: _schedule_name
    ],
  reception_action: close_all_job_ready_events: (
    all_instances(
      event: _job_ready_event of_class 'TWS_Job_Ready'
        where [
          hostname: equals _hostname,
          job_name: equals _job_name,
          job_cpu: equals _job_cpu,
          schedule_name: equals _schedule_name,
          status: outside ['CLOSED']
        ]),
    set_event_status(_job_ready_event, 'CLOSED')
  )
).

rule: schedule_done: (
  description: 'Acknowledge that the schedule is done and close all \
    outstanding schedule started events',
  event: _event of_class 'TWS_Schedule_Done'
    where [
      status: outside ['CLOSED'],
      hostname: _hostname,
      schedule_cpu: _schedule_cpu,
      schedule_name: _schedule_name
    ],
  reception_action: auto_acknowledge_schedule_done: (
    set_event_status(_event, 'CLOSED')
  ),
  reception_action: close_all_schedule_started_events: (
    all_instances(
      event: _schedule_started_event of_class 'TWS_Schedule_Started'
        where [
          hostname: equals _hostname,
          schedule_cpu: equals _schedule_cpu,
          schedule_name: equals _schedule_name,
          status: outside ['CLOSED']
        ]),
    set_event_status(_schedule_started_event, 'CLOSED')
  )
).

rule: schedule_started: (
  description: 'Clear schedule stuck or schedule abend events \
    or schedule prompt events related to this schedule',
  event: _event of_class 'TWS_Schedule_Started'
    where [
      status: outside ['CLOSED'],
      hostname: _hostname,
      schedule_cpu: _schedule_cpu,
      schedule_name: _schedule_name
    ],
  reception_action: close_all_schedule_started_stuck_events: (
    all_instances(
      event: _schedule_stopped_event of_class _Class within
        ['TWS_Schedule_Stuck', 'TWS_Schedule_Abend', 'TWS_Schedule_Prompt']
        where [
          hostname: equals _hostname,
          schedule_cpu: equals _schedule_cpu,
          schedule_name: equals _schedule_name,
          status: outside ['CLOSED']
        ]),
    set_event_status(_schedule_stopped_event, 'CLOSED')
  )
).

rule: schedule_stopped: (
  description: 'Start a timer rule',
  event: _event of_class _Class within ['TWS_Schedule_Stuck', 'TWS_Schedule_Abend']
    where [
      status: outside ['CLOSED'],
      hostname: _hostname,
      schedule_cpu: _schedule_cpu,
      schedule_name: _schedule_name
    ],
  reception_action: (
    set_timer(_event, 900, '')
  )
).

timer_rule: timer_schedule_stopped: (
  description: 'Calls a script that takes further action if schedule_stopped event is still OPEN',
  event: _event of_class _Class within ['TWS_Schedule_Stuck', 'TWS_Schedule_Abend']
    where [
      status: _status outside ['CLOSED'],
      hostname: _hostname,
      schedule_cpu: _schedule_cpu,
      schedule_name: _schedule_name
    ],
  action: (
    /* Phone technical group - can acknowledge TEC events. */
    exec_program(_event,
      '/tivoli/scripts/alarmpoint/send_ap_action.sh', '', [], 'NO'),

    /* Page Management - no interaction with TEC */
    exec_program(_event,
      '/tivoli/scripts/alarmpoint/send_info_down.sh', '', [], 'NO')
  )
).

rule: job_abend: (
  description: 'Check to see if this event has occurred in the last 24 hours',
  event: _event of_class 'TWS_Job_Abend'
    where [
      status: equals 'OPEN',
      hostname: _hostname,
      source: _source,
      origin: _origin,
      job_name: _job_name,
      job_cpu: _job_cpu,
      schedule_name: _schedule_name,
      job_pid: _job_pid
    ],
  reception_action: (
    atomconcat(['Multiple failures of Job ', _schedule_name, '#', _job_name,
                ' in 24 hour period'], _fail_msg),
    first_duplicate(_event,
      event: _duplicate
        where [],
      _event - 86400 - 86400),
    generate_event('TWS_Job_Repeated_Failure',
      [
        severity='CRITICAL',
        source=_source,
        origin=_origin,
        hostname=_hostname,
        job_name=_job_name,
        job_cpu=_job_cpu,
        schedule_name=_schedule_name,
        msg=_fail_msg
      ])
  ),
  reception_action: (
    /* sends an email message to tech support and attaches stdlist */
    exec_program(_event,
      '/tivoli/scripts/alarmpoint/send_ap_action.sh', 'EMAIL', [], 'NO')
  )
).

rule: link_established: (
  description: 'Clear TWS_Link_Dropped and TWS_Link_Failed events for this \
    hostname cpu pair',
  event: _event of_class 'TWS_Link_Established'
    where [
      status: outside ['CLOSED'],
      hostname: _hostname,
      to_cpu: _to_cpu
    ],
  reception_action: close_all_link_dropped_failed: (
    all_instances(
      event: _link_down_event of_class _Class within
        ['TWS_Link_Dropped', 'TWS_Link_Failed']
        where [
          hostname: equals _hostname,
          to_cpu: equals _to_cpu,
          status: outside ['CLOSED']
        ]),
    set_event_status(_link_down_event, 'CLOSED'),
    set_event_status(_event, 'CLOSED')
  )
).

rule: job_events_mgmt: (
  description: 'sets time posix var for events from 101 to 118',
  event: _event1 of_class within
    ['TWS_Job_Abend', 'TWS_Job_Failed', 'TWS_Job_Launched', 'TWS_Job_Done',
     'TWS_Job_Suspended', 'TWS_Job_Submit', 'TWS_Job_Cancel', 'TWS_Job_Ready',
     'TWS_Job_Hold', 'TWS_Job_Restart', 'TWS_Job_Cant', 'TWS_Job_SuccP',
     'TWS_Job_Extern', 'TWS_Job_INTRO', 'TWS_Job_Stuck', 'TWS_Job_Wait',
     'TWS_Job_Waitd', 'TWS_Job_Sched']
    where [
      job_dead_eval: _job_dead_eval,
      job_estst_eval: _job_estst_eval,
      job_estdur_eval: _job_estdur_eval,
      job_effst_eval: _job_effst_eval,
      schedule_name: _schedule_name,
      job_name: _job_name,
      job_cpu: _job_cpu
    ],
  reception_action: job_estdur_issuing: (
    /* Est Duration */
    convert_local_time(_job_estdur_eval, _est_dur_locale),
    convert_ascii_time(_est_dur_locale, _est_dur_locale_ascii),
    atompart(_est_dur_locale_ascii, _est_dur_time, 12, 8),
    bo_set_slotval(_event1, estimated_duration, _est_dur_time)
  ),
  reception_action: job_dead_issuing: (
    /* deadline */
    _job_dead_eval > 0x0,
    convert_local_time(_job_dead_eval, _dead_locale),
    convert_ascii_time(_dead_locale, _dead_ascii),
    bo_set_slotval(_event1, deadline, _dead_ascii)
  ),
  reception_action: job_estst_issuing: (
    /* Estimated Start Time */
    _job_estst_eval \== 0x0,
    convert_local_time(_job_estst_eval, _est_start_locale),
    convert_ascii_time(_est_start_locale, _est_start_ascii),
    bo_set_slotval(_event1, estimated_start_time, _est_start_ascii)
  ),
  reception_action: job_effst_issuing: (
    /* Effective Start Time */
    _job_effst_eval > 0x0,
    convert_local_time(_job_effst_eval, _eff_start_locale),
    convert_ascii_time(_eff_start_locale, _eff_start_ascii),
    bo_set_slotval(_event1, effective_start_time, _eff_start_ascii)
  )
).

rule: job_events_warnings: (
  description: 'evaluate time values vs deadline for 3 events',
  event: _event2 of_class within
    ['TWS_Job_Launched', 'TWS_Job_Hold', 'TWS_Job_Ready']
    where [
      job_dead_eval: _job_dead_eval outside [0x0],
      status: outside ['CLOSED'],
      job_estst_eval: _job_estst_eval,
      job_estdur_eval: _job_estdur_eval,
      job_effst_eval: _job_effst_eval,
      schedule_name: _schedule_name,
      job_name: _job_name,
      job_cpu: _job_cpu
    ],
  reception_action: job_estst_evaluation: (
    /* Estimated Start Time */
    _job_estst_eval \== 0x0,

    /* foreseen duration */
    pointertoint(_job_estdur_eval, _int_estdur_eval),
    _job_foreseen =? _job_estst_eval + _int_estdur_eval,

    /* evaluation duration vs deadline */
    pointeroffset(_job_dead_eval, _time_diff, _job_foreseen),
    _time_diff > 0,
    set_event_message(_event2, 'Job %s.%s on CPU %s, could miss its deadline',
      [_schedule_name, _job_name, _job_cpu]),
    set_event_severity(_event2, 'WARNING')
  ),
  reception_action: job_effst_evaluation: (
    _job_estst_eval == 0x0,
    _job_effst_eval > 0x0,

    /* foreseen duration */
    pointertoint(_job_estdur_eval, _int_estdur_eval),
    _job_foreseen =? _job_effst_eval + _int_estdur_eval,

    /* evaluation duration vs deadline */
    pointeroffset(_job_dead_eval, _time_diff, _job_foreseen),
    _time_diff > 0,
    set_event_message(_event2, 'Job %s.%s on CPU %s, could miss its deadline',
      [_schedule_name, _job_name, _job_cpu]),
    set_event_severity(_event2, 'WARNING')
  )
).

/* ************************************************************************ */
/* ***************************   TIMER RULES   *************************** */
/* ************************************************************************ */

rule: job_ready_open: (
  description: 'Start a timer rule for ready',
  event: _event of_class 'TWS_Job_Ready'
    where [
      status: outside ['CLOSED'],
      schedule_name: _schedule_name,
      job_cpu: _job_cpu,
      job_name: _job_name
    ],
  reception_action: (
    set_timer(_event, 600, 'ready event')
  )
).

timer_rule: timer_job_ready_open: (
  description: 'Generate a warning on the event if job ready event \
    is still waiting for its job launched event',
  event: _event of_class 'TWS_Job_Ready'
    where [
      status: _status outside ['CLOSED'],
      schedule_name: _schedule_name,
      job_cpu: _job_cpu,
      job_name: _job_name
    ],
  timer_info: equals 'ready event',
  action: (
    set_event_message(_event, 'Start delay of Job %s.%s on CPU %s',
      [_schedule_name, _job_name, _job_cpu]),
    set_event_severity(_event, 'WARNING')
  )
).

rule: job_launched_open: (
  description: 'Start a timer rule for launched',
  event: _event of_class 'TWS_Job_Launched'
    where [
      status: outside ['CLOSED'],
      schedule_name: _schedule_name,
      job_estdur_eval: _job_estdur_eval,
      job_cpu: _job_cpu,
      job_name: _job_name
    ],
  reception_action: (
    pointertoint(_job_estdur_eval, _int_estdur_eval),
    set_timer(_event, _int_estdur_eval, 'launched event')
  )
).

timer_rule: timer_job_launched_open: (
  description: 'Generate a warning on the event if job launched event \
    is still waiting for its job done event',
  event: _event of_class 'TWS_Job_Launched'
    where [
      status: _status outside ['CLOSED'],
      schedule_name: _schedule_name,
      job_cpu: _job_cpu,
      job_name: _job_name
    ],
  timer_info: equals 'launched event',
  action: (
    set_event_message(_event, 'Long duration for Job %s.%s on CPU %s',
      [_schedule_name, _job_name, _job_cpu]),
    set_event_severity(_event, 'WARNING')
  )
).


Script for performing long-term switch

#!/bin/ksh

# This script will perform the tasks required to perform a TWS long term switch.
# The short term switch MUST be performed prior to this script being run.

export PATH=$PATH:/usr/local/tws/maestro:/usr/local/tws/maestro/bin
export BKDIR=/usr/local/tws/maestro/TWS_BACKUPS
export DBDIR=/usr/local/tws/maestro/mozart
export SCDIR=/usr/local/tws/maestro/scripts
export NEWMASTER=BMDM_workstation_name
export NPW=`cat $BKDIR/user_info`
export DATE=`date +%d_%m_%y_%H.%M`
export TIME=`date +%H%M`
export LOGFILE=/usr/local/tws/maestro/lts_log.$DATE

# The file 'user_info' contains the password that will be used for all windows users.
# This file is owned and is read/writable ONLY by the root user.
# The variable $NPW will be set to the windows password.

# Set the NEWMASTER variable to the workstation name of the agent that will take over as the
# MDM. This allows this script to be used on multiple workstations with minimal amendment.

# ------------ Define Functions ---------------

function confswch
{
CHECK=n
TYPE=X
DRSWITCH=n
tput clear
echo You are about to perform a Long-Term Switch
echo of the TWS Production Master Workstation.
echo $NEWMASTER will take over as the MDM upon completion of this script
echo
echo Ensure that the TWS Database backup and copy scripts have run on the
echo Current MDM before you continue with this switch.
echo
echo These are run automatically between the MDM and the BMDM
echo at the start of each day.
echo However, if a Long Term switch has already taken place and this is a
echo return back to the master during the same processing day then the
echo backup and copy scripts will need to be run on the system that you are
echo switching back from, before you run this script.
echo This will ensure that any database updates that have been made during the
echo switch are maintained.
echo
echo Also, a Short Term Switch needs to be performed before continuing.
echo
echo "***********************************************************"
echo "*                                                         *"
echo "*   Y) Perform Long Term Switch to BMDM.                  *"
echo "*                                                         *"
echo "*   X) Exit from this script.                             *"
echo "*                                                         *"
echo "***********************************************************"
echo
printf "Please select the Y to continue with the switch or X to exit.(Y/X): "
read TYPE
echo "Please select the Y to continue with the switch or X to exit.(Y/X): " >> $LOGFILE
echo Operator Reply to Type of switch menu is :$TYPE >> $LOGFILE
if [ $TYPE = "Y" ]
then
   echo The Long-Term switch process to
   echo Backup Master Domain Manager.
   echo Please Wait..........
   echo The switch will create a logfile. Please check this upon completion.
   echo Logfile name is $LOGFILE
else
   if [ $TYPE = "X" ]
   then
      echo Long-Term switch process is stopping.
      echo Re-run when ready to do this.
      echo The Long Term Switch script was exited at `date +%d_%m_%y_%H:%M.%S` >> $LOGFILE
      exit 1
   else
      confswch
   fi
fi
}

echo Long Term Switch script started at `date +%d_%m_%y_%H:%M.%S` > $LOGFILE
echo This script will perform the tasks required to perform a TWS long term switch
echo so that $NEWMASTER will become the master.

echo This script will perform the tasks required to perform a TWS long term switch >> $LOGFILE
echo so that $NEWMASTER will become the master. >> $LOGFILE

confswch

# Copy the globalopts file for the BMDM to mozart directory.
# Required so that the BMDM can assume the full role of the MDM.

echo Copy the globalopts file for the BMDM to mozart directory. >> $LOGFILE
echo Required so that the BMDM can assume the full role of the MDM. >> $LOGFILE

cp $BKDIR/globalopts.bkup $DBDIR/globalopts

# Remove any TWS DataBase files prior to load of latest copy.
# Note: These should be cleared down after switch of Master if possible.

echo Remove any TWS DataBase files prior to load of latest copy. >> $LOGFILE
echo Note: These should be cleared down after switch of Master if possible. >> $LOGFILE

cd $DBDIR

rm calendars*
rm job.sched*
rm jobs*
rm mastsked*
rm prompts*
rm resources*

cd ../unison/network

rm cpudata*
rm userdata*

# runmsgno file copied to mozart directory at start of day.
echo runmsgno file copied to mozart directory at start of day. >> $LOGFILE

# Update the users.txt file with the password as file created with string of ****
echo Update the users.txt file with the password as file created with string of **** >> $LOGFILE

cat $BKDIR/users.txt | sed "s/\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*/$NPW/g" >> $BKDIR/users.tmp

# Issue TWS DataBase replace commands to build the TWS DataBases on the BMDM.
echo Issue TWS DataBase replace commands to build the TWS DataBases on the BMDM. >> $LOGFILE

composer replace $BKDIR/cpus.txt
composer replace $BKDIR/calendars.txt
composer replace $BKDIR/prompts.txt
composer replace $BKDIR/parms.txt
composer replace $BKDIR/users.tmp
composer replace $BKDIR/resources.txt
composer replace $BKDIR/jobs.txt
composer replace $BKDIR/scheds.txt

# Remove the file created that contains the windows users and passwords so that it is not
# left on the system where it could be read by other users.

rm $BKDIR/users.tmp

# Replace Final JobStream for BMDM to run everyday and Final JobStream for MDM
# to run on request.
echo Replace Final JobStream for BMDM to run everyday and Final JobStream for MDM >> $LOGFILE
echo to run on request. >> $LOGFILE

# The final_jss file contains the FINAL job stream definitions for the MDM and BMDM
# with the run cycles set so that FINAL on the BMDM will be selected and that FINAL on the
# MDM will not.

echo Replacing the FINAL job streams for the MDM and BMDM with the correct Run Cycles.

echo Replacing the FINAL job streams for the MDM and BMDM with the correct Run Cycles. >> $LOGFILE

composer replace $BKDIR/final_jss

# Cancel any Final JobStreams in the PLAN as they are no longer needed.
echo Cancel any Final JobStreams in the PLAN as they are no longer needed. >> $LOGFILE

conman "cs @#@FINAL@;noask"

# Submit FINAL JobStream for BMDM as it is now needed.
echo Submit FINAL JobStream for BMDM as it is now needed. >> $LOGFILE

# The FINAL job stream will be submitted with a time to make it unique in order to allow for
# multiple switches during a production day, if required.

conman "sbs $NEWMASTER#FINAL;alias=FINAL_$TIME;noask"

# Submit TWS DataBase backup / copy JobStream for BMDM
# as it is now needed.

echo Submit TWS DataBase backup / copy JobStream for BMDM as it is now needed. >> $LOGFILE

conman "sbs $NEWMASTER#TWS_MAINT@DBASE_BKUP_COPY;noask"

echo Long Term Switch script finished at `date +%d_%m_%y_%H:%M.%S`. >> $LOGFILE
echo "*****************************************************************"
echo "*   Long Term switch of MDM to $NEWMASTER is now complete.     *"
echo "*****************************************************************"


Appendix B. Additional material

This redbook refers to additional material that can be downloaded from the Internet as described below.

Locating the Web material

The Web material associated with this redbook is available in softcopy on the Internet from the IBM Redbooks Web server. Point your Web browser to:

ftp://www.redbooks.ibm.com/redbooks/SG246628

Alternatively, you can go to the IBM Redbooks Web site at:

ibm.com/redbooks

Select the Additional materials and open the directory that corresponds with the redbook form number, SG246628.

Using the Web material

The additional Web material that accompanies this redbook includes the following files:

File name       Description
SG246628.zip    Zipped Code Samples


System requirements for downloading the Web material

The following system configuration is recommended:

Hard disk space:    10 MB minimum
Operating system:   Windows/UNIX

How to use the Web material

Create a subdirectory (folder) on your workstation, and unzip the contents of the Web material zip file into this folder.
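For example, on a UNIX workstation you could download and extract the samples along these lines (a minimal sketch; the target directory and the exact file location under the Redbooks FTP directory are assumptions):

# Create a working folder for the additional material
mkdir -p /tmp/sg246628
cd /tmp/sg246628

# Download the zipped code samples (path under the SG246628 directory is assumed)
wget ftp://www.redbooks.ibm.com/redbooks/SG246628/SG246628.zip

# Unzip the samples into the current folder
unzip SG246628.zip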


Abbreviations and acronyms

AIX Advanced Interactive Executive

APAR authorized program analysis report

API Application Program Interface

BDM Backup Domain Manager

BMDM Backup Master Domain Manager

CA Certificate Authority

CLI command line interface

CORBA Common Object Request Broker Architecture

cpu central processing unit

CPU ITWS workstation

CRL Certificate Revocation List

CSR Certificate Signing Request

DES Data Encryption Standard

DHCP Dynamic Host Configuration Protocol

DM Domain Manager

DMTF Desktop Management Task Force

DNS Domain Name System

DSA Digital Signature Algorithm

ETL extract, transform, and load

FTA Fault Tolerant Agent

FTP File Transfer Protocol

GID Group Identification Definition

HACMP high-availability cluster multiprocessing

IBM International Business Machines Corporation

IP Internet Protocol

ITEC IBM Tivoli Enterprise Console


ITSO International Technical Support Organization

ITWS IBM Tivoli Workload Scheduler

JCL job control language

JSC Job Scheduling Console

JSS Job Scheduling Services

JVM Java Virtual Machine

MAC message authentication code

MDM Master Domain Manager

MIB Management Information Base

OLAP online analytical processing

OPC Operations, Planning and Control

PERL Practical Extraction and Report Language

PID process ID

PTF program temporary fix

RAM random access memory

RC return code

RC-4 Rivest's Cipher-4

RSA Rivest-Shamir-Adleman

RTM Recovery and Terminating Manager

SA Standard Agent

SAF System Authorization Facility

SHA Secure Hash Algorithm

SMS Storage Management Subsystem

SNMP Simple Network Management Protocol

SPB Software Package Block

SSL Secure Socket Layer


STLIST standard list

TCP Transmission Control Protocol

TMA Tivoli Management Agent

TMR Tivoli Management Region

X-agent Extended Agent


Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks

For information on ordering these publications, see “How to get IBM Redbooks” on page 392. Note that some of the documents referenced here may be available in softcopy only.

- IBM Tivoli Monitoring Version 5.1.1 Creating Resource Models and Providers, SG24-6900

- Integrating IBM Tivoli Workload Scheduler and Content Manager OnDemand to Provide Centralized Job Log Processing, SG24-6629

Other publications

These publications are also relevant as further information sources:

- Tivoli Workload Scheduler Job Scheduling Console User’s Guide, SH19-4552

- Tivoli Workload Scheduler Version 8.2, Error Message and Troubleshooting, SC32-1275

- IBM Tivoli Workload Scheduler Version 8.2, Planning and Installation, SC32-1273

- Tivoli Workload Scheduler Version 8.2, Reference Guide, SC32-1274

- IBM Tivoli Workload Scheduler Version 8.2 Plus Module User’s Guide, SC32-1276

- Tivoli Management Framework Maintenance and Troubleshooting Guide, GC32-0807

- IBM Tivoli Workload Scheduler for Applications User Guide, SC32-1278

- IBM Tivoli Workload Scheduler Release Notes, SC32-1277


Online resources

These Web sites and URLs are also relevant as further information sources:

- FTP site for downloading Tivoli patches

  ftp://ftp.software.ibm.com/software/tivoli_support/patches/patches_1.3/

- HTTP site for downloading Tivoli patches

  http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches_1.3/

- Google Web site

  http://www.google.com

- Invoqsystems company Web site

  http://www.invoqsystems.com

How to get IBM Redbooks

You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:

ibm.com/redbooks


Index

Symbols_uninst directory 137

AABEND 152access method 14Action list view 19adapter 228adapter configuration file 228Adapter Configuration Profile 230Adding Server ID 262adhoc 272adhoc jobs 246Admintool 148agent down 224agents unlinked 224AIX 149AIX 5L 258AIX 5L Version 5.1 32AlarmPoint 225, 236, 240AlarmPoint Java Client 233altpass 344anonymous FTP 120archiver process 5Authentication 177Authentication methods

caonly 179cpu 180string 180

Autotrace 8, 346AWSDEB045E 203

BBackup Domain Manager 267, 273Backup Master 34, 249–250Backup Master Domain Manager 244, 273Bad data 343Baltimore 180BAROC 223BAROC files 230bash 252basicConstraint 183


batch work 255batchman 57, 79, 208Batchman LIVES 55, 249BDM

See Backup Domain Managerbehindfirewall attribute 173, 323bm check deadline 208, 300bm check file 299bm check status 299bm check until 208BMDM 250–251, 256

See Backup Master Domain Managerbmevents file 214BmEvents.conf 225, 229build command options 339Bulk Data Transfer service 206

CCA 178

See certificate authorityca_env.sh 184cacert.pem 195Cancel option 215Cannot create Jobtable 343caonly 179CARRYFORWARD schedule 345central data warehouse 2central repository 273centralized security 205cert 181Certificate Authority 178, 180certificate request 191Certificate Revocation Lists 182certs directory 194Change control 277Choosing platforms 258chown 340, 342Cluster environments 272Column layout customization 25command line 246Commit 127Compiler 283components file 34


composer 199, 261composer build 260composer command line 249composer create 254, 276composer replace 254, 276compressed form 297compression 297concurrency 260config_tme_logadapter.sh 229configuration options 228Configure TME Logfile Adapter task 228Configuring Non-TME Adapters 228Configuring TME Adapters 228conman 55, 78conman canceljob 159conman CLI 249conman enhancement 157conman show 285conman showjobs 157Connect command 292contents list 33continual scheduling availability 244Continue option 212, 214corrupted data 341CPU hardware crash 343CPU information 102cpu-intensive application 258Creating certificates 188Creating private keys 188CRL 182crldays 182cryptographic keys 178cryptography 178current production day 250Customizing timeouts 297

Ddata mining 3dead zone 305DEADLINE 219decision making process 252decompression 297default JSC list 310default_bits 185default_crl_days 182default_days 182default_keyfile 185default_md 182, 185

dependency resolution 270DES 177digital certificate 177–178direct IP connection 278directory structure 315Disk space 260Distinguished Name 178, 185distinguished_name 185DM

See Domain Managerdomain hierarchy 175Domain Manager 259, 267, 270Domain Manager infrastructure 271drop duplicate events 231dummy job 303duplicate events 231

Eelectronic ID card 177encrypt 191Encryption 177end to end scheduling 296Event Console 223Event Definition 223Event Server 222event.log 229Events 223evtsize 245, 342exchange resources 292Explorer View 22Extended Agent 14–15Extended Autotrace Feature 7extra mailman process 263

Ffacilitate global resources 272failover scenario 231faster dependency resolution 270fatal severity 233fault tolerance 268Fault Tolerant Agent 12–13, 34, 78, 173, 204, 246, 261, 265, 268, 272, 274, 299, 319, 326, 330FENCE 256, 344file checksum 12file dependency resolution 299File Status list 304file system 260, 340File system full 343


file system space 286file system syncronization level 296FINAL job stream 251finding answers 348fire proof 253Firewall Support 176firewalls 172, 202, 206, 268Fix Pack 01 134, 162, 230, 328flat file 250–251format files 230Framework

administration 244administrators 244binaries 338database 146network 244products 244See also Tivoli Management Framework

Free inodes 286freshInstall.txt 133

customized file 365sample file 359

Full status 254Full Status option 231

Ggarbage collected heap 314garbage collector 309generating a certificate 189Global option 284globalopts 249, 256globalopts.mstr 250grep 341

HHardware considerations 258hardware upgrade 299hexidecimal number 182high availability 258historical reports 286HKEY_LOCAL_MACHINE 148hostname 279HP 9000 K 258HP 9000 N 258HP ServiceGuard 273HP-UX 148HP-UX 11 258HP-UX 11i 258

html report 347

II/O bottleneck 259IBM DB2 Data Warehouse Center 2IBM HACMP 273IBM pSeries systems 258IBM Tivoli Business Systems Manager 286IBM Tivoli Configuration Manager 4.2 15IBM Tivoli Data Warehouse 286IBM Tivoli Enterprise Console 232–233, 236, 286IBM Tivoli Monitoring 224, 286–287IBM Tivoli Workload Scheduler 258, 286

Adding a new feature 56central repositories 273Common installation problems 136Connector 56FINAL schedule 55Installation CD layout 32Installation overview 32Installation roadmap 35Installing the Job Scheduling Console 109Installing using the twsinst script 130InstallShield MultiPlatform installation 35Jnextday script 55Launching the uninstaller 137Parameters files 276Promoting an agent 78schedlog directory 332scripts files 274Security files 275Silent Install using ISMP 132Troubleshooting the installation 135TWSRegistry.dat 34Uninstalling 137Useful commands 149

IBM Tivoli Workload Scheduler 8.1 296IBM Tivoli Workload Scheduler Client list 229IBM Tivoli Workload Scheduler V8.2 323Ignore box 280increase the uptime 273Information column 211inode 260inode consumption 260Inodes 260installation language 74Installation log files 135

tivoli.cinstall 135


tivoli.sinstall 135TWS_platform_TWSuser^8.2.log 135TWSInstall.log 135TWSIsmp.log 135

installation wizard language 80installed repository 178Installing Perl5 133InstallShield Multiplatform 15Intel 345Interconnected TMRs 292IP address 278–279IP spoofing 177IP-checking mechanism 177

JJCL field 155Jnextday 258, 260, 269, 284, 344Jnextday job 245, 250JNEXTDAY script 251Job FOLLOW 344job invocation 177Job Scheduling Console 55, 121Job Scheduling Console Fix Pack 120, 127Job Scheduling Services 145, 278Job Stream Editor 309jobinfo 9joblog 323JOBMAN 57, 79jobman 57, 79jobmanrc 166JOBMON 79Jobtable 343JSC 1.2 startup script 314JSC 1.3 startup script 314JSC launcher 314

Kkill 341killproc 341

Llag time 255Language Packs 56Latest Start Time 209, 211, 217leaf node FTA 278least impact 255least scheduling activity 284

Legacy entries 148legacy Maestro terminology 304LIMIT 256linking process 279Linux 148, 258Linux xSeries cluster 273listproc 341listsym 285live systems 256load average 286local copy of plan 274local SSL context 203localopts 104, 188, 251, 293log switch 326logman 283LogSources option 239long-term switch 245Low end machines 272ls command 120, 196

MMAC 177MAC computation 177maestro.fmt 229maestro.rls 235maestro_dm.rls 235maestro_mon.rls 231maestro_nt.rls 235maestro_plus.rls 236maestro_plus.rls rule set 240MaestroDatabase 292MaestroEngine 292maestront.fmt 229MaestroPlan 292mailman 286, 341mailman cache 296–297main TMR 244Major modifications 34Master 34Master Domain Manager 78, 244, 259, 269master option 249MASTERDM Domain 246maximum event size 279MCMAGENT 14MD5 177MDM 249, 251

See Master Domain Managermerge stdlists 204


Message file sizes 279Metronome 347Metronome script 347Metronome.pl 133Microsoft Cluster Services 273mm cache mailbox 297mm cache size 297mm response 298mm retrylink 298mm unlink 298Monitoring 286mount 94mozart directory 249, 251multiple domain architecture 267mvsca7 14mvsjes 14mvsopc 14

Nname resolution 278named pipes 270National Language Support 14NetConf 34, 286netman 340netman process 91NetReq.msg 34netstat 342network availability 251network card 342network performance 302Network status 286nm port 199nm SSL port 199nm tcp timeout 11node field 278node name field 278Non-TME Adapters 228nosetuid 343Not Checked status 304notification server 222

Ood utility 196odadmin 144odadmin set_bdt_port 206odadmin single_port_bdt 206Off-site disaster recovery 252off-site location 254

off-site systems 255On request 251online analytical processing 3On-site disaster recovery 245ONUNTIL 209ONUNTIL CANC 216ONUNTIL SUPPR 209OPENS dependency 304OPENS file 344OpenSSL 181–182, 184Oracle E-Business Suite Access Method 14Oracle E-Business Suite access method 14OutofMemoryError 315overhead 297owner’s distinguished name 178owner’s public key 178

Ppage out 224parms 166parms command 276parms utility 169passphrase 191, 196patch 278pending jobs 137PeopleSoft access method 14–15Perl interpreter 135Perl5 133Perl5 interpreter 133physical database files 250physical defects 342physical disk 258Plan 5, 9, 309Plan run number 273point of failure 254Power outage 343pre-defined message formats 223pre-defined rules 224preferences.xml 312pristine 338pristine Tivoli Framework environment 338private key 178, 191process id 341Processes 259prodsked 283production control database 246Production Control file 284production day 246


Production Plan 5progress bar 74, 105Progressive message numbering 24prompt 185propagating changes in JSC 309ps command 341psagent 14pSeries 258public key 178, 191public mailing list 348Public-key cryptography 178

Qqualified file dependency 305

RR/3 access method 14r3batch 14RAID array 258RAID technologies 259RAID-0 259RAID-1 259RAID-5 259RC4 177recipient 178recovery job 162Red Hat 258Red Hat Linux Enterprise Server 2.1 32Redbooks Web site 392

Contact us xiiiRegatta servers 258regedit.exe 148registry updates 338remote Master 292remove programs 137rep11 225rep1-4b 225rep7 225rep8 225, 283Report tasks 225reptr 225, 282RERUN 161Resource Model 224response file 132–133restart the TMR 245retcod 157Return Code condition 152Return Codes 152, 156

adding to a job definition 152Boolean expression 154Comparison expression 153conman enhancement 157example 157jobinfo enhancement 160Jobinfo example 161monitoring 155overview 152parent job 163RCCONDSUCC keyword 152retcod 157Return Code Mapping field 152rstrt_retcode option 160

rmstdlist 260Rollback 127root CA 178router 342RSA 177rule base 228rule-based event management 222runmsgno file 250

SSAM application 148SAP BC-XBP 2.0 14schedlog 260, 334schedlog directory 332schedulers 256Schedulr 283Secure Port 200secureaddr 199Security file 206, 275securitylevel 199self-signed digital certificate 178self-signed root certificate 181sendmail method 240serial number 182Server ID 262setsym 285setuid bit 343Setup EventServer for TWS 227SETUP.bin 79, 94SETUP.exe 136setup_env.sh 143SHA 177short-term switch 245SHOWDOMAINS command 249

398 IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices

Page 415: Tivoli Workloud Scheduler Guide

showjobs 9shutdown 246signature 178signed digital certificate 178silent mode 132Sinfonia 269, 297, 345Sinfonia distribution 270SLL connection 178slow initializing FTAs 266slow link 266Software Distribution process 135software package block 277Solaris 148, 258SPB 277SSL auth string 180SSL session 178SSL support 177stageman 79, 282–283, 344standalone TMR 244Standard Agent 78, 272standard reports 225start of day 306start the adapter 228StartUp 8StartUp command 340stdlist directory 260, 328stop command 175stop the adapter 228submit additional work 245submit job 164SUCC 152successful interconnection 293Sun Cluster Software 273SunBlade systems 258Suppress option 210SuSE Linux 258swap space 224, 286switching 231switchmgr 188, 249switchover 250Symnew 282Symphony file 246, 269, 320system reconfiguration 299

Ttar file 121target workstation 78Task Assistant 23

TEC 3.8 Fixpack 01 230tecad_nt 229Termination Deadline 212, 218terminology changes 208test command 305text copies of the database files 337Tier 1 platforms 32Tier 2 platforms 33time stamp 345time-stamped database files 254Tivoli desktop 292Tivoli Distributed Monitoring 224Tivoli Endpoints 244Tivoli Enterprise Event Management 224Tivoli Framework 224

backup procedure 338directory 338environment 338See also Tivoli Management Framework

Tivoli Job Scheduling Services 1.3 109Tivoli managed environment 228Tivoli Management Framework 3.7.1 109Tivoli Management Framework 4.1 109Tivoli Management Region 292Tivoli Plus Module 223Tivoli Workload Scheduler

auditing log files 330TME Adapters 228TMF_JSS 143TMR failure 245tokensrv 79, 93tomaster.msg 245trace snap file 8Troubleshooting

batchman down 342Byte order problem 345compiler processes 345evtsize 342FTA not linked 345FTAs not linking to the master 340Jnextday in ABEND 345Jnextday is hung 344Jobs not running 343missing calendars 345missing resources 345multiple netman processes 341TWS port 342writer process down 341

two-way communication link 292


two-way TMR communication 292TWS Distributed Monitors 3.7 224TWS Plus for Tivoli collection 226TWS_Base 229tws_launch_archiver.pl 133TWS_NT_Base 229TWSCCLog.properties 280TWSConnector 143TWShome/network 273twsinst program 131twsinst script 130TWSPlus event group 230TWSRegistry 34TWSRegistry.dat 34, 147type of switchover 245

UUltrastar 258uninstaller 138UNIX 258UNIX logfile adapter 230unmount 77untar 338UNTIL 209Useful commands 149user TMR Role 290Userjob 220

VVeriSign 180Visual Basic 134vpd.properties 148

Wwait time 299warm start 255wchkdb 146wconsole 231Windows 2000 adapter 229Windows NT adapter 229winstall process 135wizard installation 132Wlookup 293Work with engines pane 20Workaround 230Workbench 224workstation definition 254

wr read 298wr unlink 298writer 57, 79, 286wuninstall 144

XX.509 certificate 178XML 233xSeries 258

Zz/OS access method 14




SG24-6628-00 ISBN 0738453129

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices

A first look at IBM Tivoli Workload Scheduler Version 8.2

Learn all major new functions and practical tips

Experiment with real-life scenarios

IBM Tivoli Workload Scheduler, Version 8.2 is IBM’s strategic scheduling product that runs on many different platforms, including the mainframe. This IBM Redbook covers the new features of Tivoli Workload Scheduler 8.2, focusing specifically on the Tivoli Workload Scheduler 8.2 Distributed product. In addition to new features and real-life scenarios, you will find a whole chapter on best practices (mostly version independent) with lots of tips for fine-tuning your scheduling environment. For this reason, even if you are using a version of Tivoli Workload Scheduler earlier than 8.2, you will benefit from this redbook.

Some of the topics that are covered in this book are:

- Return code management
- Late job handling
- Security enhancements
- Disaster recovery
- Job Scheduling Console enhancements
- IBM Tivoli Enterprise Console integration
- Tips and best practices

Customers and Tivoli professionals who are responsible for installing, administering, maintaining or using Tivoli Workload Scheduler 8.2 will find this redbook a major reference.
