DB2 Data Warehousing at Toyota Motor Europe — GSE Young Professionals
TRANSCRIPT
Agenda Overview of Toyota Motor Europe (TME) Our DataWarehouse environment Adding data into our DWH Maintaining our DWH Conclusions The challenges and the future of our DWH
Toyota – in the World
Established in 1937 77 manufacturing companies in 27 countries Vehicles sold in more than 170 countries worldwide Over 8.4 million vehicles sold worldwide in 2010 Market share: 47% in Japan (2009), 15% in US (2010), 4.4% in Europe (2010) Over 3 million cumulative hybrid sales €142 billion net revenue in FY 2008-09 Approx. 318,000 employees worldwide
Regional Headquarters
Toyota Motor Corporation
(TMC)
Toyota Motor Asia Pacific Pte Ltd.
(TMAP-MS)
Toyota Motor Europe NV/SA (TME)
Toyota Motor North America, Inc. (TMA)
Toyota – in Europe
Began selling cars in 1963 9 manufacturing plants in 7 countries Over €7 billion invested since 1990 Approx. €5 billion spent with European-based suppliers per year 808,320 vehicles sold in 2010 Over 300,000 hybrid vehicles sold in Europe to date 4.4 % market share in 2010
30 NMSCs 56 Countries 272 Lexus retailers 2,856 Toyota retailers
National Marketing & Sales Companies
Not shown: Toyota Caucasus LLP, Union Motors Ltd (Israel), Toyota Motor Kazakhstan LLP
14 Parts Logistics Centres 11 Vehicle Logistics Centres
Logistic Companies
Zeebrugge Vehicle Centre
Toyota Parts
Centre Europe
TMIP - Engine Plant
Toyota Caetano Portugal Commercial vehicles
TMUK - Engine Plant
TPCA - Aygo
TMMF - Yaris
TMMP - Engine & Transmission Plant
TMMR - Camry
TMMT - Auris & Verso
TMUK - Auris & Avensis
Manufacturing Facilities
Agenda Overview of Toyota Motor Europe (TME) Our DataWarehouse environment Adding data into our DWH Maintaining our DWH Conclusions The challenges and the future of our DWH
Our z/OS Environment. Two z196 with 4 zIIPs & 4 IFLs. z/OS V1R12, 5 LPARs. DB2 V9, data sharing. 1 billion SQL statements per day. WebSphere MQ V6, IMS DC V11, CICS 2.3, ISPW (SCM), Jetty, … IBM System Storage DS8300 models.
[Diagram: DB2 environments across LPARs — TEST LPAR: development (DBT, DB2D, DBZB/DBZF/DBZT) and acceptance (DB2A, DBQB/DBQF/DBQT) subsystems; PROD LPAR: UK prod & DWH (DBB, Prod DB + Dwh DB); PRD0/PRD1/PRD2 LPARs: production data sharing group DB2P (members DB0P, DB1P) with coupling facility (CF), Prod DB and Dwh DBs.]
Our z/OS DWH Environment. PRD0 PRD1 PRD2
[Diagram: production databases (xxxP, PROD.*, EUR.*) on PRD0/PRD1/PRD2 are replicated to DWH databases (xxxO, DAxP, DWHP.O*); Business Intelligence and Datastage consume the DWH data.]
ETL tools: Cobol applications & WebSphere Datastage (running on Sun Blades). Cognos Cubes for data consolidation. 4 Terabytes of data in the DWH DB2 databases. LOBs are only replicated on demand. Weekly cleanup of the audit tables. Archiving tables for the DWHP tables. Weekly backup, reorg & runstats. Batch & online replication using InfoSphere Q Replication and Event Publisher 10.
– 2000 tables replicated.
Agenda Overview of Toyota Motor Europe (TME) Our DataWarehouse environment Adding data into our DWH Maintaining our DWH Conclusions The challenges and the future of our DWH
Before Online Replication. Before: Batch Replication.
[Diagram: CRUD activity on the production DB2 tables; 5 scan applications read the DB2 logs, and 4 apply applications plus a CSV file fed to Datastage load the Data Warehouse DB2 tables for BI.]
Problems:
– The same DB2 production logs are read again & again.
– Time dependent => delays.
– One chain in the scheduler per business application.
– Can't be integrated with other WebSphere tools.
– For RDMC: Unload – Reload ;-(
Why Online Replication? Batch replication: only 1 price simulation per day.
[Timeline: Day 1 – 11:00 simulation in PROD, 23:00 batch replication to the DWH; Day 2 – 10:00 analysis, 11:00 next simulation. The business must WAIT a full day between a simulation and its analysis.]
Online replication: several price simulations per day.
Why IBM Q-Replication & Data Event Publisher ? • z/OS tool. • Allows Online and … Batch replication. • Can also produce CSV or XML documents in output.
From Batch to Online Replication.
[Diagram, before: 5 scan applications read the production DB2 logs and 4 apply applications plus a CSV file/Datastage feed the DWH tables. After: Q Capture reads the logs once into buffers and puts the changes on MQ; over an MQ channel, Q Apply (SUB) maintains the DWH DB2 tables while Event Publisher (PUB) produces XML/CSV files for Datastage and BI.]
Before : Batch Replication
After : Q-REP Online Replication
Unidirectional Q-Replication Setup
Set up a new Q-REP application: 1. WebSphere MQ – define channels & queues. 2. Q-Replication – define the application queue maps.
• Example of communication with the started tasks:
asnqacmd apply_server=QREP status show details
/F QCAPPRD0,status show details
[Diagram: the Q Capture program on the source server reads the log and, via a replication queue map (send queue → receive queue, plus an administration queue), feeds the Q Apply program on the target server; a Q subscription pairs one source table with one target table; both programs are driven by their control tables and the asnqccmd/asnqacmd (Modify) commands.]
Unidirectional Q-Replication Setup
Capture side:
• Define sender channel QREP.NPAP.NPAO
• Define receiver channel QREP.MQRW.MQXP
• Define remote queue QREP.NPAPNPAO.SENDQ
• Define local queue QREP.ADMINQ
Apply side:
• Define receiver channel QREP.NPAP.NPAO
• Define sender channel QREP.MQRW.MQXP
• Define local queue QREP.NPAPNPAO.RECVQ1
• Define remote queue QREP.ADMINQ
INSERT INTO QREP.IBMQREP_SENDQUEUES (SENDQ = 'QREP.NPAPNPAO.SENDQ1', RECVQ = 'QREP.NPAPNPAO.RECVQ1')
• INSERT INTO QREP.IBMQREP_RECVQUEUES (SENDQ = 'QREP.NPAPNPAO.SENDQ1', RECVQ = 'QREP.NPAPNPAO.RECVQ1', ADMINQ = 'QREP.ADMINQ')
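The MQ definitions above can be sketched in MQSC. A minimal, hedged version of the capture side (the apply side mirrors it); the channel and queue names come from the slide, while the transmission-queue name, host and port are assumptions:

```
* Capture side (hedged sketch; XMITQ name, host and port are assumptions)
DEFINE QLOCAL(QREP.XMITQ) USAGE(XMITQ)
* Sender channel towards the DWH queue manager
DEFINE CHANNEL(QREP.NPAP.NPAO) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('dwh-host(1414)') XMITQ(QREP.XMITQ)
* Receiver channel for the return path (administration messages)
DEFINE CHANNEL(QREP.MQRW.MQXP) CHLTYPE(RCVR) TRPTYPE(TCP)
* Remote queue that Q Capture writes to; resolves to the RECVQ on MQRW
DEFINE QREMOTE(QREP.NPAPNPAO.SENDQ) RNAME(QREP.NPAPNPAO.RECVQ1) +
       RQMNAME(MQRW) XMITQ(QREP.XMITQ)
* Local admin queue; persistent, as recommended later in the talk
DEFINE QLOCAL(QREP.ADMINQ) DEFPSIST(YES)
```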
[Diagram: //QCapture writes through IBMQREP_SENDQUEUES and //QApply reads through IBMQREP_RECVQUEUES; the replication queue map ties the send queue to the receive queue, with the administration queue defined as a remote AdminQ on the apply side.]
Q-Replication definitions (queue maps)
WebSphere MQ definitions (channels & queues)
Add a table to your Q-Replication environment.
INSERT INTO QREP.IBMQREP_SIGNAL (SIGNAL_TYPE, SIGNAL_SUBTYPE, SIGNAL_INPUT_IN)
VALUES ('CMD', 'CAPSTART', 'ACCDLR_TBL0001');
INSERT INTO QREP.IBMQREP_SRC_COLS (SUBNAME = 'ACCDLR_TBL0001', SRC_COLNAME = 'BODY_NO', IS_KEY = 1)
INSERT INTO QREP.IBMQREP_SUBS (NAME = 'ACCDLR_TBL0001', SOURCE = 'NPAP.ACCDLR_TBL', TARGET = 'NPAO.ACCDLR_TBL', SENDQ = 'QREP.NPAPNPAO.SENDQ1', HAS_LOADPHASE = 'I'/'N')
INSERT INTO QREP.IBMQREP_TARGETS (NAME = 'ACCDLR_TBL0001', SOURCE = 'NPAP.ACCDLR_TBL', TARGET = 'NPAO.ACCDLR_TBL', SENDQ = 'QREP.NPAPNPAO.SENDQ1', HAS_LOADPHASE = 'I'/'N')
INSERT INTO QREP.IBMQREP_TRG_COLS (SUBNAME = 'ACCDLR_TBL0001', SOURCE_COLNAME = 'BODY_NO', TARGET_COLNAME = 'BODY_NO')
For each table: (a subscription = 1 source table + 1 target table).
For each column of the table.
Filter at source & target: WHERE COL C = 'ABC'
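The inserts above are schematic slide shorthand; a hedged, syntactically valid sketch for one subscription follows (control-table schema QREP and names from the slide; the column lists are reduced — the real control tables require many more columns):

```sql
-- Hedged sketch: register one subscription (reduced column lists)
INSERT INTO QREP.IBMQREP_SUBS
       (SUBNAME, SOURCE_OWNER, SOURCE_NAME, SENDQ, HAS_LOADPHASE)
VALUES ('ACCDLR_TBL0001', 'NPAP', 'ACCDLR_TBL',
        'QREP.NPAPNPAO.SENDQ1', 'I');

INSERT INTO QREP.IBMQREP_SRC_COLS (SUBNAME, SRC_COLNAME, IS_KEY)
VALUES ('ACCDLR_TBL0001', 'BODY_NO', 1);

-- ... matching rows go into IBMQREP_TARGETS and IBMQREP_TRG_COLS
--     on the apply side ...

-- Finally, tell Q Capture to activate the subscription
INSERT INTO QREP.IBMQREP_SIGNAL (SIGNAL_TYPE, SIGNAL_SUBTYPE, SIGNAL_INPUT_IN)
VALUES ('CMD', 'CAPSTART', 'ACCDLR_TBL0001');
```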
Initial Load ?
[Diagram: with HAS_LOADPHASE='I' in IBMQREP_SUBS and IBMQREP_TARGETS, starting the subscription triggers an initial load of ABC_TBL from source to target, while //QCAPTURE keeps reading the log and //QAPPLY parks incoming changes; the admin and receive queues coordinate the handshake.]
X-LOAD: Q Apply calls the stored procedure SYSPROC.DSNUTILS (WLM application environment with NUMTCB=1, utility ID //DWPSYSX1):
EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM TMMEDB2P.NPAP.ABC ENDEXEC
LOAD DATA LOG NO REPLACE NOCOPYPEND INCURSOR C1 INTO TABLE NPAO.ABC
While the load runs, captured CRUD changes are parked in the spill queue (message ASN7010I).
No disruption at the source! If there is RI at the target, the RI is dropped & saved in SAVERI before the load, and put back at the end. No more init load? Update IBMQREP_SUBS & QREP.IBMQREP_TARGETS:
SET HAS_LOADPHASE = 'N' WHERE SUBNAME = 'ABC';
Initial Load II. Check the initial load results: -DIS UTILITY(*)
Q-Replication for multiple applications.
[Diagram: one Q Capture program and one Q Apply program serve three applications; the source log feeds three queue maps (A, B, C), each with its own send queue → receive queue pair, while the administration queue is shared; the Q subscriptions map source tables A/B/C to target tables A'/B'/C' through separate apply agents.]
One Q-Capture & one Q-Apply started task for all applications. 1 queue map per application.
Apply can be started at different times, depending on the SLAs. Administration & restart queues are shared. All messages are persistent: DEFPSIST(YES).
Use the Modify command or asnqccmd/asnqacmd:
Q-Capture:
• prune, qryparms, reinit, status show details, stop — /F QCAPPRD0,qryparms
• startq, stopq, reinitq, startq all — /F QCAPPRD0,startq=QREP.NPAPNPAO.SENDQ1
• chgparms — /F QCAPPRD0,chgparms autostop=y
Q-Apply:
• prune, qryparms, reinit, status show details, stop — /F QAPPWRH0,status show details
• startq, stopq, reinitq, startq all — /F QAPPWRH0,stopq=QREP.NPAPNPAO.RECVQ1
• chgparms — /F QAPPWRH0,chgparms deadlock_retries=20
Data Event Publisher.
Delimited (CSV) format. Code page of the published message: IBMQREP_SENDQUEUES.MESSAGE_CODEPAGE
XML format. Always in Unicode.
LOB data supported. Example of published delimited records:
2009-10-08-07.09.16.652664,"VECP","SINSTHIS_TBL","C4E776CF1F95","I"," ",1001
2009-10-08-07.09.16.652712,"VECP","SUBINSTR_TBL","C4E776CF1F95","U","B",1001
2009-10-08-07.09.16.652712,"VECP","SUBINSTR_TBL","C4E776CF1F95","U","A",1001
2009-10-08-07.09.25.605943,"VECP","SUBINSTR_TBL","C4E776D82259","I"," ",1008
Q-REP tools.
IBM Replication Center
• Used to generate Q-REP definitions.
IBM Q-Replication Dashboard
• Installed on a server and distributed to end users (Viewer role).
TME in-house ISPF replication dashboard
• Used for daily Q-REP administration.
About WebSphere MQ — see Annexe 1.
• High Availability in MQ: Queue Sharing Groups vs Shared Disks.
• Tips: WebSphere MQ & DB2 recommendations; WebSphere MQ useful tools & utilities.
– SupportPacs » MQ Explorer: free Eclipse plug-in.
Problem 1: Simulate a batch replication.
[Diagram, repeated per step: Q Capture on production (DB2P, queue manager MQXP) reads the log into a send queue; an MQ channel moves the messages through the transmission queue (Xmit Q) to the receive queue on the DWH side (DW2P, queue manager MQRW), where Q Apply maintains the target tables.]
Initial situation: the MQ channel is stopped; captured changes accumulate in the XmitQ.
1. %PDK$QMN job: StartQ the Q-map on the receive queues: start of the « Apply » phase.
2. Start the MQ channel: the accumulated messages flow to the RecvQ and are applied.
3. %PDK$QDP job: monitor queue depth till = 0.
4. %PDK$QMF job: StopQ the Q-map on the receive queues.
5. %PDK$CHF job: stop the channel: this creates a consistency point — we are back to the initial situation.
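The batch-window jobs above can be sketched as z/OS console commands; a hedged example (the started-task, queue and channel names reuse those shown elsewhere in the talk, and the command prefixes =MQXP/=MQRW are assumptions):

```
/F QAPPWRH0,startq=QREP.NPAPNPAO.RECVQ1                <- %PDK$QMN: start the Q-map (Apply phase)
=MQXP START CHANNEL(QREP.NPAP.NPAO)                    <- drain the XmitQ towards the DWH
=MQRW DISPLAY QLOCAL(QREP.NPAPNPAO.RECVQ1) CURDEPTH    <- %PDK$QDP: loop until CURDEPTH(0)
/F QAPPWRH0,stopq=QREP.NPAPNPAO.RECVQ1                 <- %PDK$QMF: stop the Q-map
=MQXP STOP CHANNEL(QREP.NPAP.NPAO)                     <- %PDK$CHF: consistency point
```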
Problem 2: From BMC to IBM Q-REP audit tables. For every update in production, before & after images are recorded in the audit tables. Problem: BMC & IBM audit tables have a different layout.
– An update on a BMC audit table generates 2 records.
– An update on a Q-REP audit table generates 1 record.
More columns on the target table => longer records => switch from 4K to 8K bufferpools for some tablespaces. We don't want to modify our DWH applications.
[Diagram: before, the BMC-layout audit tables fed BI directly; after, Q-REP maintains Q-REP-layout audit tables and two AFTER INSERT triggers bridge the layouts so BI is unchanged.]
Trigger 1: AFTER INSERT into SOURCE, INSERT Before_Values (X* Columns) into TARGET
Trigger 2: AFTER INSERT into SOURCE, INSERT Before_Values (* Columns) into TARGET
QA$P.* Source Table
DA$P.* Target Table
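A minimal DB2 for z/OS sketch of the second trigger, assuming hypothetical table and column names (the real audit tables carry far more columns, and Trigger 1 would insert only the before-value subset):

```sql
-- Hedged sketch: propagate each new Q-REP audit row into the BMC-layout table
-- (QA$P/DA$P qualifiers from the slide; table and column names are invented)
CREATE TRIGGER DA$P.AUD_TRG2
  AFTER INSERT ON QA$P.ACCDLR_AUD
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    INSERT INTO DA$P.ACCDLR_AUD (BODY_NO, AUDIT_TS, AUDIT_OP)
    VALUES (N.BODY_NO, N.AUDIT_TS, N.AUDIT_OP);
  END
```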
Problem 2: From BMC to IBM Q-REP Audit tables.
Batch replication: performance.
Max 50.000 messages processed per minute. Max 200.000 SQL statements applied per minute. (Performance graph of batch replication.)
MSU consumption: replacement of 18 daily BMC Scan (LogMaster) & 18 BMC Apply+ applications:
– 18 BMC Scan applications → 2 Q-Capture started tasks.
– 6 Apply+ applications on CCD tables → 1 Q-Apply stc.
– 12 Apply+ applications on shadow tables → 1 Q-Apply stc.
– No MQ before → 2 queue managers now.
Agenda Overview of Toyota Motor Europe (TME) Our DataWarehouse environment Adding data into our DWH Exploiting the data in our DWH Maintaining our DWH Conclusions The challenges and the future of our DWH
Data Model Changes via Q-REP sub-commands. ADDCOL to a subscription:
INSERT INTO QREP.IBMQREP_SIGNAL (SIGNAL_TIME, SIGNAL_TYPE, SIGNAL_SUBTYPE, SIGNAL_INPUT_IN, SIGNAL_STATE)
VALUES (CURRENT TIMESTAMP, 'CMD', 'ADDCOL', 'ACCDLR_TBL0001:COL8', 'P')
Restrictions:
– New columns must be NULLable or defined NOT NULL WITH DEFAULT.
– Columns renamed or enlarged, new primary key?
– Tables renamed, column filters?
– In some cases, the source table must be reorganized.
– Only 20 columns can be added during a single MQ commit interval.
Advantages:
– Dynamic: no need to stop the subscription.
Data Model Changes at TME:
1. Restrict access to the tables to be modified. Apply all pending changes through the SrceN1AODS#TrgtQ application.
2. Rexx/DB2 program to stop the impacted subscriptions/publications. Rexx/DB2 program to delete the impacted subscriptions/publications.
3. Data model change of the production DB. Data model change of the DWH DB.
4. Rexx/DB2 program to re-create the subscriptions/publications. Rexx/DB2 program to start the subscriptions/publications.
5. Warn the operators. Restart production activity.
Agenda Overview of Toyota Motor Europe (TME) Our DataWarehouse environment Adding data into our DWH Exploiting the data in our DWH Maintaining our DWH Conclusions The challenges and the future of our DWH
Conclusion IBM Q-Replication & Data Event Publisher
– Batch Replication – Online Replication – Capture SQL statements in CSV or XML files
– Flexible & Open
• Managed by ASNCLP commands or DB2 SQL statements. • Use IBM tools or your own customized tools.
– Many other functions
• Bidirectional replication, peer-to-peer replication • Together with Federation
– Capture from all DBMSs (SQL Server excepted). – Apply to all DBMSs.
Documentation WMQ Series.
– WMQ Library • http://www-01.ibm.com/software/integration/wmq/library/v60/zos-specific_books.html
– WMQ Support Packs • http://www-01.ibm.com/support/docview.wss?rs=977&uid=swg27007205
WMQ Redbooks
• http://www.redbooks.ibm.com/redbooks/pdfs/sg246864.pdf
Q-Replication & Data Event Publisher. – QREP developersWorks
• http://www.ibm.com/developerworks/data/roadmaps/qrepl-roadmap.html#configuring
Q-REP Redbooks • http://www.redbooks.ibm.com/redbooks/pdfs/sg247215.pdf • http://www.redbooks.ibm.com/redbooks/pdfs/sg246487.pdf • http://www.redbooks.ibm.com/redbooks/pdfs/sg247637.pdf
Other documents • http://www.gsebelux.com/?q=node/60 • http://www-01.ibm.com/support/docview.wss?uid=swg21177206
Agenda Overview of Toyota Motor Europe (TME) Our DataWarehouse environment Adding data into our DWH Exploiting the data in our DWH Maintaining our DWH Conclusions The challenges and the future of our DWH
The challenges and the future of our DWH 1. Use ASNCLP to manage our Q-REP environment.
– Replace SQL scripts by ASNCLP scripts. – Extract ASNCLP scripts from UAT and promote them to Prod.
2. Move Datastage to a z/Linux partition – Replace Datastage 8.0.1 on Sun Solaris by Datastage 8.7 on z/Linux.
Annexe 1
Queue Sharing Group ?
[Diagram: production (DB2P data sharing group DB0P/DB1P on the PRD0/PRD1 LPARs, queue managers MQXP/MQYP in queue sharing group MQRP) and Data Warehouse (DW2P, MQRW on PRD2); two Q-Capture instances (//QCapture 1, //QCapture 2) share the XmitQ, SendQ, RestartQ and AdminQ stored in a CF structure, over shared channels; //QApply runs on the DWH side with its own logs and PSIDs.]
Queues are shared and stored in a structure in the CF. Shared channels. Capture can be started alternately on each LPAR. Problem:
– If the sender channel is stopped, the XmitQ structure is full in just 2 minutes
⇒ Q Capture stops ⇒ the structure must be recovered.
Shared Disks ?
[Diagram: same production/DWH layout, but queue manager MQXP — with its XMITQ, logs & PSIDs on shared disks — and //QCapture can run on either PRD0 or PRD1; //QApply and the RECVQ stay on MQRW/DW2P; no CF structures are used.]
MQXP & Q Capture can both be started on PRD0 or PRD1 => 2 groups defined in Netview-SA.
No shared queues or shared channels. No structures in the CF.
If the sender channel stops, messages are stored in the XMITQ in the PSIDs => just restart the channel => no recover => Q Capture does not stop.
Shared Disks Implementation.
/RO *ALL,F RMF,S III → ERB105I III: DATA GATHERER ACTIVE
/RO *ALL,F RMF,P III → ERB803I III: MONITOR III TERMINATED
Small impact for a DSG with 2 members.
MQ & Q-REP applications are defined on both PRD0 & PRD1 in SA-Netview. DWH-to-PROD channel > connect to the shared disks.
Sliding job class 'G' for jobs running on PRD0.
[Diagram: the shared-disk setup in detail — //QCapture (XMITQ) on MQXP, //QApply (RECVQ) on MQRW/DW2P; Q Capture reads the DB2 log output buffer via IFCID 306; XCF coordinates across the LPARs.]
http://www.redbooks.ibm.com/abstracts/SG246864.html
WebSphere MQ & DB2 recommendations. Strong naming convention for MQ channels & queues.
– Fewer RACF profiles to be defined.
Backup your MQ definitions in a dataset. Monitor the MQ channels (stop & start).
– Trap CSQX528I, CSQX599E messages => alerts.
MQ Zparms to be adapted:
– CTHREAD=1300
– IDBACK=600
– IDFORE=600
Queues must be indexed.
– e.g.: DEFINE … QLOCAL … INDXTYPE(MSGID)
Pay attention to MAXDEPTH & MAXMSGL (must be > the Q-rep message length).
– DEFINE … QLOCAL … MAXDEPTH(999999999) … MAXMSGL(4194304)
WebSphere MQ & DB2 recommendations. Pay attention to MAXUMSGS if you have long-running threads.
• ALTER QMGR MAXUMSGS(200000) on MQXP
Don't mix Q-rep objects & MQ admin objects in the same pageset.
• E.g.: if a common PSID is full => you won't be able to pass commands to MQ!
MQ PSIDs must be multivolume. Use compression for your MQ PSIDs:
• Adapt your ACS routines to use:
CLUSTER ------- MQRW.PSID00
STORAGECLASS --- DBPRDG > with Guaranteed Space Yes
DATACLASS ------ DBEXADD2 > with Dsname Type Extended Required > with Compaction Yes
• Save 75% of space!
WebSphere MQ & DB2 recommendations. DB2 and MQ active log striping => parallel I/Os to the active logs.
• Adapt your ACS routines to use:
CLUSTER ------- MQRW.LOGCOPY1.DS01
STORAGECLASS --- SCDBLOGG > with Guaranteed Space Yes > with Data Rate MB/sec 12 => 12/4 = 3 stripes
DATACLASS ------ DBEXADD > with Dsname Type Extended Required
DATA ------- MQRW.LOGCOPY1.DS01.DATA
ATTRIBUTES STRIPE-COUNT 3: volumes PL0044 / PL0045 / PL0046 = stripes 1 / 2 / 3
• The 3 volumes must be on 3 different Logical Control Units (LCUs).
• Log N and Log N+1 not on the same volume.
• Different storage groups for active logs, archive logs & page sets.
• Single logging.
WebSphere MQ & DB2 recommendations. Monitor queue depth.
Monitor bufferpool usage: BP > 85% full => DWT (Deferred Write Task) > 0 and pages are off-loaded to the page sets.
– BP0: Page Set 0
– BP1: system queues
– BP2: SYSTEM.COMMAND, SYSTEM.ADMIN, DLQ
– BP3, …: application data
Checkpoints every 15 minutes (LOGLOAD = 500.000) – impact on MQ restart time after an abend. Trap the CSQY220I message to monitor MQ memory usage. SMF records 115 & 116 + SupportPacs MP1E (V6) & MP1B (V7). As from MQ V6, most Zparms can be adapted dynamically using MQ commands:
SET SYSTEM, SET LOG or SET ARCHIVE
WebSphere MQ & DB2 recommendations. Define a Dead Letter Queue on the target MQ subsystem.
Trap the CSQX548E message > generate an alert > call the DBA.
[Diagram: channels N and V (XMITQ N / XMITQ V) from MQXB to MQRW carry QREP.NPAP.NPAO.SENDQ2 → RECVQ2 and QREP.VECP.VECO.SENDQ4 → RECVQ4; when a receive queue is FULL, the MQPUT lands on the DEAD LETTER QUEUE and message CSQX548E is issued; the CSQUDLQH utility then processes the DLQ.]
Use the Dead-Letter Queue Handler utility (CSQUDLQH) to put the messages back into the original queues.
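CSQUDLQH is driven by a small rules table; a hedged sketch (the queue-manager name comes from the diagram, while the DLQ name, retry count and interval are assumptions):

```
* Control line: which dead-letter queue to process, on which queue manager
 INPUTQM(MQRW) INPUTQ(SYSTEM.DEAD.LETTER.QUEUE) RETRYINT(60) WAIT(NO)
* Rule: messages rejected because the target queue was full -> retry the put
 REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(5)
```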
WebSphere MQ useful tools & utilities.
WebSphere MQ SupportPacs: http://www-01.ibm.com/support/docview.wss?rs=977&uid=swg27007205
– MA18/MA95 SupportPacs: Rexx interface to WebSphere MQ
– MO71 SupportPac: WebSphere MQ for Windows – GUI administrator
BMC Mainview for MQ Series (+ BMC AutoOperator for MQ). PQEdit: ISPF editor for MQ Series. Replication Center V9.7.
– As from V9.7, MQ objects can be generated via the Replication Center.
IBM WebSphere MQ Explorer
• Free Eclipse plug-in for WebSphere MQ.
Special Case: Local Online Replication on UK. Replaces the old BMC Scan-Apply and the Unload-Reload jobs. Online replication, 1 subscription per table; X-load used at subscription start-up. One Q-Capture STC (QCAPBR1), one Q-Apply STC (QAPPBR1). 1 queue map per business application; Admin Q & Restart Q shared by all applications.
One single queue manager, MQRB: no MQ channels.
[Diagram: on the UK PROD LPAR, queue manager MQRB hosts both Q Capture and Q Apply; Q Capture reads the DBB logs and Q Apply maintains the Dwh DB next to the Prod DB, with shared Admin Q / Restart Q and a Q-Rep DB — all inside one queue manager, with no channels.]