SQL Server Capture Reference


© 2019 SQData Corporation All rights reserved.

SQL Server Capture Reference

A Syncsort company

Version 4

Contents

Chapter 1 SQL Server Data Capture Introduction
   SQL Server Data Capture Summary
   Organization
   Terminology
   Related Documentation
Chapter 2 SQL Server Capture Considerations
   SQL Server Authorizations and Roles
   Configuration of native SQL Server replication
   Configuration of the SQL Server Log Backup
   Unit-of-Work Integrity
   Storage and Publishing
Chapter 3 SQL Server Log Reader Capture
   Implementation Checklist
   Prepare Environment
      Identify Source and Target System and Datastores
      Identify/Authorize/Prepare Operating User(s) (DB Server)
      Create SQData Variables Directories
      Reserve TCP/IP Ports (DB Server)
      Generate Public / Private Keys and Authorized Key File
      Configuring SQL Server Replication (DB Server)
      Configuring SQL Server Backup
   Setup CDCStore Storage Agent
      Size Transient Storage Pool
      Create CDCStore CAB file
   Setup Log Reader Capture Agent
      Confirm SQL Server Publications
      Create ODBC System Data Source (DSN)
      Create SQL Server Capture CAB file
      Prepare Log Reader Capture Batch File
   Setup Capture Controller Daemon
      Create Access Control List
      Create Agent Configuration File
      Configure Controller Daemon Batch Script
      Register SQDaemon as a Service
      Verify Controller Daemon Install
   Configure Engine
      Create Application Directory Structure
      Generate Engine Public / Private Key
      Specify SQL Server Source Datastore in Engine Script
      Configure Engine Batch Script
      Configure Engine Controller Daemon
   Component Verification
      Start Controller Daemon
      Start SQL Server Log Reader Capture Agent
      Start Engine
      SQData SQL Test Transactions
   Operating Scenarios
      Capture New SQL Server Data
      Send Existing SQL Server Data to New Target
      Filter Captured Data
Chapter 4 SQL Server Straight Replication
   Target Implementation Checklist
   Create Target Tables
   Generate Engine Public / Private Keys
   Create Straight Replication Script
   Prepare Engine Batch Script
   Verify Straight Replication
Chapter 5 SQL Server Active/Active Replication
Chapter 6 SQL Server Engine Considerations
   SQL Server DDL
   SQL Server Table Names
Chapter 7 SQL Server Operational Issues
   Starting the Capture Agent
   Determining the Initial Start Point
   Executing the fn_dblog Command
   Manually Setting the Capture Start Point
   Displaying Capture Agent Status and Statistics
   Displaying Storage Agent Statistics
   Stopping the Capture Agent
   Manual Log Truncation
Rights, Marks and Notices
Index


Chapter 1 SQL Server Data Capture Introduction

SQData's enterprise data integration platform includes change data capture agents for the leading source data repositories, including:

· SQL Server on Windows

This document is a reference manual for the configuration and operation of this capture agent, including the transient storage and publishing of captured data to Apply Engines running on Windows and other platforms. Included in this reference is an example of Simple Replication of the source datastore. Apply Engines can also perform complex replication to nearly any form of structured target data repository, utilizing business rule driven filters, data transformation logic and code page translation. We like to call our product "The Swiss Army Knife of Data Integration Tools".

The remainder of this section:

· Summarizes features and functions of the SQL Server change data capture agent

· Describes how this document is organized

· Defines commonly used terms

· Identifies other useful complementary documents


SQL Server Data Capture Summary

SQData provides one capture agent for SQL Server, Log Reader, which provides for the following:

Attribute: SQL Server Log Reader

Data Capture Latency: Near-Real-Time or Asynchronous

Capture Method: Log Reader with SQL Server Replication component

Unit-of-Work Integrity: Committed Only

Output Datastore Options: TCP/IP

Runtime Parameter Method: SQDCONF

Auto-Disable Feature: Yes

Auto-Commit Feature: Yes

Multi-Target Assignment: Yes

Include/Exclude Filters: Yes – through a filter in the Engine

Transaction Include/Exclude: Yes – through a filter in the Engine


Organization

The following sections provide a detail-level reference to the installation, configuration and operation of the SQData Capture Agents for SQL Server:

· SQL Server Capture Considerations

· SQL Server Log Reader Capture

· SQL Server Straight Replication

· SQL Server Active/Active Replication

· SQL Server Operational Issues

See the Change Data Capture Guide for an overview of the role capture plays in SQData's enterprise data integration product, the common features of the capture agents and the transient storage and publishing of captured data to SQData Apply Engines running on all platforms.


Terminology

Terms commonly used when discussing the Change Data Capture Agents are described below:

Term - Meaning

Agent - Individual components of SQData's architecture are often referred to as Agents.

CDC - An abbreviation for Changed Data Capture.

Datastore - An object that contains data such as a hierarchical or relational database, VSAM file, flat file, queue, etc.

Exit - A classification for changed data capture components where the implementation utilizes a subsystem exit in IMS, CICS, etc.

File - Refers to a sequential (flat) file.

JCL - An abbreviation for Job Control Language that is used to execute z/OS processes.

Platform - Refers to an operating system instance.

Queue - WebSphere MQ. NOTE: Support is being deprecated.

Record - A record within a relational database table. Used interchangeably with row and message.

Source - A datastore monitored for content changes by the Capture Agent.

SQDCONF - A Utility that manages configuration parameters used by some data capture components.

SQDXPARM - A Utility that manages a set of parameters used by some IMS and VSAM changed data capture components.

Table - Used interchangeably with relational datastore. A table represents a physical structure that contains data within a relational database management system.

Target - A datastore to which information is being updated/written.


Related Documentation

Installation Guide - This publication describes the installation and preventive maintenance procedures for the SQData for z/OS and SQData for Multiplatforms products.

Data Capture Guide - This publication provides an overview of the role capture plays in SQData's enterprise data integration product, the common features of Capture and the methods supported for store and forward transport of captured data to SQData Apply Engines running on all platforms.

Engine Reference - This document is a detail-level reference that describes the operation and command language of the SQData Apply Engine component, which supports target datastores on z/OS and most other platforms.

Secure Communications Guide - This publication describes the Secure Communications architecture and the process used by SQData to authenticate client-server connections.

Utility Guides - These publications describe each of the SQData utilities such as SQDCONF, SQDMON, SQDUTIL and the z/OS Master Controller.

Messages and Codes - This publication describes the messages and associated codes issued by the SQData Parser, Apply Engine, Capture, Publisher, Storage agents and Utilities in all operating environments including z/OS, UNIX, and Windows.

Quickstart Guides - Tutorial style walk-throughs for some common configuration scenarios including Capture and Replication. z/OS Quickstarts make use of the SQData ISPF interface. While each Quickstart can be viewed in WebHelp, you may find it useful to print the PDF version of a Quickstart Guide to use as a checklist.

Chapter 2 SQL Server Capture Considerations

While the configuration and operation of the SQData Integration Engine is virtually identical on all platforms, the SQData Capture Agents operate under constraints that vary from one platform to another.

SQL Server is somewhat unique in that while its internals, like those of other database managers, are proprietary, the specifications for its log are also unpublished and considered proprietary. The consequence is that some portions of SQL Server's own replication framework must be used for third party tools to gain access to log data. The other factor that complicates use of SQL Server's transaction logs is that unlike other database managers, the logs are not archived in a manner that makes it possible to re-capture changed data from any point in time. Using SQData's SQL Server Capture component requires special consideration be given in five areas:

· SQL Server Authorizations and Roles

· Configuration of native SQL Server replication

· Configuration of the SQL Server Log Backup

· Unit-of-Work Integrity

· Storage and Publishing


SQL Server Authorizations and Roles

SQL Server supports two different user Authentication Modes:

· Windows Authentication - This mode enables Windows User Authentication and disables SQL Server User Authentication

· Mixed Authentication - This mode enables both Windows User Authentication and SQL Server User Authentication

Microsoft's best practice recommendation is to use Windows Authentication mode for SQL Server whenever possible. This mode allows centralization of account administration for the entire enterprise in a single place, Active Directory, dramatically reducing the chance of error or oversight. While SQData can be configured to operate with either mode, we highly recommend the use of Windows Authentication as it eliminates the specification of both user_name and password from both the Capture configuration and SQL Server Apply Engine scripts. This is particularly important when performing Remote Capture. The following scenario best illustrates the case for Windows Authentication:

"A trusted database administrator leaves the organization on unfriendly terms. Windows authentication mode revokes that user's access when you disable or remove the DBA's Active Directory account. Mixed authentication mode requires the DBA's accounts to be disabled on each database server to ensure that no local accounts exist where the DBA may know the password. That's a lot of work!"
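The configured mode can also be checked from a query window. This is a standard T-SQL server property lookup and assumes nothing beyond a connection to the instance:

```sql
-- Returns 1 when the instance accepts Windows Authentication only,
-- 0 when Mixed Mode (Windows and SQL Server authentication) is enabled.
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_auth_only;
```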


Configuration of native SQL Server replication

SQL Server supports three types of replication: Transactional, Merge and Snapshot. Configuration of Microsoft SQL Server for SQData Capture requires Transactional Replication and the following components of the native replication framework:

· Publisher - a server that makes data available (publishes) for replication to other servers.

· Subscribers - processes that receive replicated data and apply (replicate) that data to a second database. Depending on the type of replication chosen, Subscribers can also act as Publishers.

· Distributor - a server that manages the flow of data through the replication system by storing replication status data, metadata about the publication, and in some cases, acts as a queue for data moving from Publishers to the Subscribers. Often, a single database server instance acts as both the Publisher and the Distributor. This is known as a local Distributor. When the Publisher and the Distributor are configured on separate database server instances, the Distributor is known as a remote Distributor. Database servers that use the SQData SQL Server Capture Agent must be configured as a Local Distributor.
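Whether an instance is already configured as its own (local) Distributor can be confirmed with the standard replication procedure sp_get_distributor; a minimal check might look like:

```sql
-- Reports whether the Distributor is installed on this instance and,
-- if so, which server is acting as the Distributor for this Publisher.
EXEC sp_get_distributor;
```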


Configuration of the SQL Server Log Backup

SQL Server backup supports three recovery models:

· Full - Contains all the data in a specific database and also enough log to allow for recovering that data to a point in time.

· Bulk-logged - Contains all the data in a specific database, but logs contain only minimal information about certain "bulk" operations so full recovery to the point of a failure is not possible.

· Simple - Contains only the committed data in a specific database at a point in time. Logs are not backed up so any changes to the data since the last backup are lost.

SQData's Change Data Capture utilizes the Transaction Log as its source of changed data and therefore requires use of the Full Recovery Model. Additionally, the SQL Server Log Backup must be configured to ensure that the Logs are not truncated prematurely.

Refer to the section below, Managing Transaction Log Truncation, and Microsoft SQL Server documentation for further information.
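The current recovery model can be verified, and switched to Full if necessary, with standard T-SQL; the database name AdventureWorks follows the example used later in this manual:

```sql
-- Show the recovery model currently in effect for the database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'AdventureWorks';

-- Switch to the Full Recovery Model required by SQData Change Data Capture.
ALTER DATABASE AdventureWorks SET RECOVERY FULL;
```

Note that after switching models, a full database backup must be taken before transaction log backups can begin.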


Unit-of-Work Integrity

Commit and rollback information is managed for each Unit-of-Work by the CDCStore transient Storage Manager and Publisher. Downstream Apply Engines process only committed UOW changed data.

The Publisher keeps track of the in-flight transactions and knows, at all times, the safe restart point, that is, the LSN of the oldest record of the oldest in-flight transaction. This guarantees a safe restart without risk of losing part of a transaction but is only effective if the archived logs are available.

If on restart the Log Reader is unable to access the required log, an error message will be passed on to the Publisher, which will stop until archived logs have been restored. In a normal production environment, where the Capture Agent is regularly monitored, this kind of extreme situation is very unlikely and would only occur if a capture agent remains in a stopped state for a significant amount of time.


Storage and Publishing

The SQData enterprise data integration product utilizes a four part framework: Capture, transient Storage, Publisher and the Apply Engine. SQL Server's native replication mechanism is used to make SQL Server changed data available for capture in the Transaction Log. SQL Server changed data is then retrieved from the transaction log by the Log Reader based on requests from the Publisher. Memory mapped storage is then used by the Store and the Publisher to avoid "landing" the data. The Publisher uses TCP/IP to transport captured data and ensures that the Store "discards" data upon confirmation that a Unit-of-Work has been committed by all subscribing Apply Engines.

Note, the Log Reader cannot mark the transaction log for Truncation after a UOW has been captured and published. That responsibility rests with a separate sp_repldone script provided by SQData. See Managing Transaction Log Truncation.

Chapter 3 SQL Server Log Reader Capture

SQData's SQL Server Log Reader Capture is multi-threaded and comprised of three components within the SQDMSSQLC module: the Log Reader based Capture agent and the CDCStore multi-platform transient Storage Manager and Publisher. The Storage Manager and Publisher together maintain both transient storage and UOW integrity. Only Committed Units-of-Work are sent by the Publisher to Engines via TCP/IP.


Implementation Checklist

This checklist covers the tasks required to prepare the operating environment and configure the SQL Server CDCStore Data Capture Agent. Before beginning these tasks however, the base SQData product must be installed. Refer to the SQData Installation Guide for an overview of the entire product and the installation instructions and pre-requisites.

Note, some organizations do not permit third party software to be installed on the SQL Server platform. SQData supports remote capture of SQL Server from a Windows APP Server using essentially the same configuration described below. The configuration tasks that must be performed on the SQL Server platform are identified below.

# Task

Prepare Environment

1 Identify Source and Target System and Datastores

2 Identify/Authorize/Prepare Operating User(s) (DB Server)

3 Create SQData Variables Directories

4 Reserve TCP/IP Ports

5 Configure SQL Server Replication (DB Server)

6 Configure SQL Server Backup (DB Server)

7 Generate Public / Private keys

Environment Preparation Complete

Setup CDCStore Storage Agent

1 Size the Storage Pool

2 Create the CDCStore CAB file

CDCStore Storage Agent Setup Complete

Setup Log Reader Capture Agent

1 Confirm SQL Server Publications (DB Server)

2 Create ODBC System Data Source (DSN) (DB Server)

3 Create SQL Server Capture CAB file

4 Prepare Log Reader Capture Batch file

Capture Agent Setup Complete

Setup Controller Daemon

1 Generate Public / Private keys

2 Create Authorized Key File

3 Create Access Control List

4 Create Agent Configuration File

5 Configure Controller Daemon Batch Script

6 Register SQDaemon as a Service

7 Verify Controller Daemon Install

Controller Daemon Setup Complete

Configure Engine

1 Generate Public / Private Keys


2 Specify Source Datastore in Engine Script

3 Configure Engine Batch Script

Engine Configuration Complete

Component Verification

1 Start the Controller Daemon

2 Start the Capture Agent

3 Start the Apply engine

4 Execute Test Transactions

Verification Complete


Prepare Environment

Implementation of the SQL Server Log Reader Capture agent requires a number of environment specific activities that often involve people and resources from different parts of an organization. This section describes those activities so that the internal procedures can be initiated to complete those activities prior to the actual setup and configuration of the SQData capture components.

· Identify Source and Target System and Datastores

· Identify/Authorize Operating Users

· Create SQData Variable directories

· Reserve TCP/IP Ports

· Generate Public / Private Keys and Authorized Key File

· Configure SQL Server Replication (DB Server)

· Configure SQL Server Backup

Identify Source and Target System and Datastores

Configuration of the Capture Agents, Apply Engines and their Controller Daemons requires identification of the system and type of datastore that will be the source of and target for the captured data. Once this information is available, requests for ports, accounts and the necessary file and database permissions for the Apply Engines that will run on each system should be submitted to the responsible organizational units.

Identify/Authorize/Prepare Operating User(s) (DB Server)

Configuration of SQL Server for SQData Capture requires a SYSADMIN account because it includes configuration of portions of both SQL Server Replication and Backup. Operation of the SQData SQL Server Capture Agent requires a user-id granted Db_owner for the database to be captured and Db_datareader for both the database to be captured and for the "Master" database.
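Assuming Windows Authentication, the required roles can be granted with T-SQL along these lines; the login DOMAIN\sqdata and the database name AdventureWorks are illustrative only:

```sql
-- Hypothetical login and database names; substitute your own.
CREATE LOGIN [DOMAIN\sqdata] FROM WINDOWS;

-- Db_owner on the database to be captured.
USE AdventureWorks;
CREATE USER [DOMAIN\sqdata] FOR LOGIN [DOMAIN\sqdata];
ALTER ROLE db_owner ADD MEMBER [DOMAIN\sqdata];

-- Db_datareader on the Master database.
USE master;
CREATE USER [DOMAIN\sqdata] FOR LOGIN [DOMAIN\sqdata];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\sqdata];
```

Membership in db_owner already includes read access on the captured database itself.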

Check the current authentication mode using the SQL Server Management Studio (SSMS).


Right click on the Server node in SSMS and click on Properties to see the Server authentication mode.

Refer to the SQL Server documentation for further discussion regarding the advantages of Windows Authentication mode for SQL Server as well as the steps to follow when changing this configuration.

Note, unless SQL Server is actually configured to use Windows authentication mode, it will view a SQL Server user with the same name as a Windows user as two different, unrelated users. This can be verified through a query of the sys.server_principals view, which is located in System Databases/Master/Views/System Views, using a query similar to the following:

SELECT TOP 1000 [name]
      ,[principal_id]
      ,[sid]
      ,[type]
      ,[type_desc]
      ,[is_disabled]
      ,[create_date]
      ,[modify_date]
      ,[default_database_name]
      ,[default_language_name]
      ,[credential_id]
FROM [master].[sys].[server_principals]

Create SQData Variables Directories

Once source and target systems and datastores have been identified, the configuration of the Capture Agents, Apply Engines and their Controller Daemons can begin. That will require the creation of directories and files for variable portions of the configuration. At this point we assume the base SQData product has already been installed according to the instructions in the SQData Installation Guide and the Operating Systems specific $Start_Here_<operating_systems>.pdf. The recommended location and Environment Variable values for this static data are:

<SQDATA_DIR>=/opt/sqdata or /home/<sqdata_user>/sqdata

Controller Daemons and Capture Agents require the creation of directories and files for variable portions of their configurations. Just as the location of the base product installation can be modified, the location of variable directories can be adjusted to conform to the operating system and to accommodate areas of responsibility, including the associated "application" and optionally Testing or Production environments. This document will refer to the location and Environment Variable value most commonly used on Linux, AIX and Windows:

<SQDATA_VAR_DIR>=/var/opt/sqdata[/<application>[/<environment>]] or

<SQDATA_VAR_DIR>=/home/<sqdata_user>/sqdata[/<application>[/<environment>]] or simply

<SQDATA_VAR_DIR>=/home/sqdata[/<application>[/<environment>]]

While only the base variable directory is required and the location of the daemon and agent directories is optional, we recommend the structure described below:

<SQDATA_VAR_DIR>/daemon - The working directory used by sqdaemon, the controller daemon.

<SQDATA_VAR_DIR>/daemon/cfg - The sqdaemon configuration directory that will contain two files.

<SQDATA_VAR_DIR>/daemon/logs - The sqdaemon logs directory, though not required, is suggested to store log files used by the controller daemon. Its location must match the file locations specified in the Global section of the sqdagents.cfg file.


<SQDATA_VAR_DIR>/cdcstore - The working directory used by the capture agent, if any.

<SQDATA_VAR_DIR>/cdcstore/data - Files will be allocated in this directory as needed by the CDCStore Storage Agent when transient data exceeds allocated in-memory storage. The location must match the "<data_path>" specified in the Storage agent configuration (.cab file). While not generally critical in a test environment, a dedicated File System is recommended in production with this directory as the "mount point".

Note, the User-ID(s) under which the CDCStore capture and the Controller Daemon will run must be authorized for Read/Write access to these directories.

The following commands will create the directories described above:

$ mkdir -p <SQDATA_VAR_DIR>/daemon/cfg --mode=775
$ mkdir -p <SQDATA_VAR_DIR>/daemon/logs --mode=775
$ mkdir -p <SQDATA_VAR_DIR>/cdcstore/data --mode=775

Reserve TCP/IP Ports (DB Server)

TCP/IP ports are required by the Controller Daemons on source systems and are referenced by the Apply Engines on the target system(s) where captured Change Data will be applied. Once the source systems are known, request port number assignments for use by SQData on those systems. SQData defaults to port 2626 if not otherwise specified.

Generate Public / Private Keys and Authorized Key File

The Controller Daemon uses a Public / Private key mechanism to ensure component communications are valid and secure. A key pair must be created for the sqdaemon process User-ID and the User-IDs of all the Agent Jobs that interact with the Controller Daemon. By default on UNIX, the private key is generated in ~/.nacl/id_nacl and the public key in ~/.nacl/id_nacl.pub. These two files will be used by the daemon in association with a sequential file containing a concatenated list of the Public Keys of all the Agents allowed to interact with the Controller Daemon. The Authorized Key File must contain, at a minimum, the public key of the sqdaemon process User-ID and is usually named nacl_auth_keys and placed in the <SQDATA_VAR_DIR>/daemon directory.

The file will also include the Public keys of Capture Agents running on the same platform as the Controller Daemon and the Apply Engine, which may be running on another platform. The Authorized Key File is usually maintained by a Systems Administrator.

Note:

1. Since the Daemon, Capture Agent and Apply Engine may be running in the same system, they frequently run under the same User-ID; in that case they would share the same public/private key pair.

2. Changes are not known to the daemon until the configuration file is reloaded, using the SQDMON Utility, or the sqdaemon process is stopped and started.

The keygen command of the sqdutil utility program is used to generate the necessary keys. The command must be run under the User-ID that will be used to run the Controller Daemon process.

$ sqdutil keygen
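Building the Authorized Key File is then just a concatenation of public keys. A sketch of the idea, using a scratch directory and placeholder key contents so it is self-contained; in a real installation the inputs are each User-ID's ~/.nacl/id_nacl.pub produced by sqdutil keygen and the target directory is <SQDATA_VAR_DIR>/daemon:

```shell
# Scratch directory standing in for <SQDATA_VAR_DIR>; substitute your own.
SQDATA_VAR_DIR=$(mktemp -d)
mkdir -p "$SQDATA_VAR_DIR/daemon"

# Placeholder public keys; real ones come from each user's ~/.nacl/id_nacl.pub.
printf 'daemon-user-public-key\n'  > "$SQDATA_VAR_DIR/daemon_id_nacl.pub"
printf 'capture-user-public-key\n' > "$SQDATA_VAR_DIR/capture_id_nacl.pub"

# The Authorized Key File is simply the concatenated public keys, daemon first.
cat "$SQDATA_VAR_DIR/daemon_id_nacl.pub" \
    "$SQDATA_VAR_DIR/capture_id_nacl.pub" \
    > "$SQDATA_VAR_DIR/daemon/nacl_auth_keys"
```

Remember that the daemon will not see the change until the configuration is reloaded with the SQDMON Utility or sqdaemon is restarted.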


Configuring SQL Server Replication (DB Server)

SQData will replace most of the native SQL Server replication framework with SQData's SQL Server Capture Agent, Storage Agent and Apply Engine. The operating environment must satisfy three primary prerequisites:

1. SQL Server's native replication feature must be installed. If it has not been, or if you have received the Microsoft SQL Server Error 21028 [Replication components are not installed on this server.], run SQL Server Setup again and select the option to install replication.

2. The systems running the Publisher and the Capture should always be operational and have network connectivity.

3. At least one (1) table with at least one key field defined must have been created before it can be configured for Replication.
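Since only tables with primary keys can be published, it can be useful to list any that lack one before starting the Publication Wizard; a query along these lines, run against the source database, does that:

```sql
-- Tables in the current database that cannot be published for
-- transactional replication because they have no primary key.
SELECT t.name AS table_without_primary_key
FROM sys.tables AS t
WHERE NOT EXISTS (SELECT 1
                  FROM sys.indexes AS i
                  WHERE i.object_id = t.object_id
                    AND i.is_primary_key = 1);
```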

SQData requires configuration changes to the following SQL Server functions and only a user with SYSADMIN privileges can make these changes:

· Publication - Transactional Publication will be selected as the Publication type and configured to minimize the supplemental logging required by the SQData Capture Agent.

· Log Reader - This component of Transactional Replication will be turned off.

· Backup - Full Recovery Model will be selected and both Database and Log Backups will be configured.


Configure Publication

Configuration of SQL Server Publication, aka Replication, is done through the SQL Server Management Studio (SSMS).

Right click on the Replication tree node in SQL Server Management Studio (SSMS) and click on New and then the Publication menu item to bring up the New Publication Wizard.

Note, it is also possible to select an existing publication and re-create it using the following instructions.

The New Publication Wizard will walk you through each step of the configuration.


Click Next to start the Wizard. In this example we will use a server named "W7" and the Database named "AdventureWorks".

Choose the Database to Publish (In this example, AdventureWorks) and click Next.


You must choose type Transactional Publication (aka Replication). Click Next.

Click the Check box next to Tables and Click Article Properties. Note, SQL Server can only publish tables for replication that have primary keys. Expand the list of tables to ensure that no tables exist for which a primary key has not been defined.


In the example above the table Culture is marked with a red NOT sign indicating that it does not have a primary key defined and therefore cannot be published for replication.

Confirm all tables to be replicated have Primary keys. Cancel the Publication Wizard to make corrections if necessary. When ready, Click Article Properties and then, from the drop down, choose Set Properties of All Table Articles.


The following window will open and display the Default Settings:

Changes must be made to minimize the SQL Server configuration. Remember that SQData will be processing the SQL Server transaction logs, relieving SQL Server of all replication responsibilities other than transaction logging:


Summary of changes to all properties:

Under Copy Objects and Settings to Subscriber:
· All settings = FALSE

Under Destination Object:
· Action if name is in use = Keep existing object unchanged
· All other settings = FALSE

Under Statement Delivery:
· INSERT delivery format = Do not replicate INSERT statements
· UPDATE delivery format = Do not replicate UPDATE statements
· DELETE delivery format = Do not replicate DELETE statements

Click OK, and then Next.

No need to filter rows, Click Next.


Check the Create a snapshot immediately box. If Replication is not working, it may be that the Snapshot was not created. The Snapshot must be run once and then can be deleted. Click Next to continue.

At this screen Click Security Settings.


SQData recommends selecting the SQL Server Agent service account because it will only be used once and that will occur during this publication process. Click OK.

Confirm both Agents are configured to use the SQL Server Agent Account, then Click Next.


SQData recommends checking both boxes. While the script will generally not be used again, it can be used for Disaster Recovery purposes and should be saved and passed on to the responsible party.

Click Next to continue.

Check the Overwrite the existing file and Windows text (ANSI) boxes. Click Next.


Name your Publication. In our example we have named it "AWrepl" for "AdventureWorks Replication" but you should use your installation's naming standards, which should include the database name. Finally, verify your other choices and Click Finish.

The window above will open for you to monitor the Publication process. Note, if any tables are open or other processes are using the tables, the database must be brought offline and the publication script must be re-run. If the snapshot agent doesn't start, use a different user_account with the correct privileges.


Confirm each Publication step was successful and then click Close; the Wizard is finished.

Note, though the Snapshot Agent may have been successfully started, the Snapshot itself may in fact have failed. The status of the Snapshot will be displayed in the next section including the remedy for a failed Snapshot.

Using the Object Explorer, expand Replication / Local Publications and you will see the publication for your database listed.

Replication Setup is Complete.


Stopping Log Reader Agent

SQData SQL Server Capture uses the transaction logs created by components of the native SQL Server Replication process. As stated in the previous section, the objective is to minimize the SQL Server infrastructure required to support the generation of the transaction logs. One additional component of the SQL Server infrastructure must be turned off, the Log Reader Agent. This process will have been started automatically at the conclusion of the previous Configure Publication step.

The following instructions describe how to stop the active agent and also how to prevent it from starting automatically in the future. Note, once these instructions have been followed, the Log Reader Agent should remain stopped. If, however, the SP_ReplDone job fails due to "Log Agent already Running", it will be necessary to repeat the steps in this section.

The Replication Monitor in the SQL Server Management Studio is used to turn off the Log Reader. Right click on the Replication tree node in SQL Server Management Studio and click on the Launch Replication Monitor menu item.

This is the Replication Monitor.

Navigate the display tree until you reach the Publication you are interested in stopping, in this case AdventureWorks: AWrepl.


Select the Agents tab. It should look like this:

If, however, you find that the Snapshot Agent encountered an error during the configuration of the publication, it is most likely that access to the default path for the Snapshot was denied.

It will be necessary to change the path for the one-time snapshot. Right click on the new Publisher (AWrepl in this example) and select Properties. Select the Snapshot page and specify an alternate location for the snapshot files.


Then Right click on the Publisher and select Generate Snapshot.

Next, on the Agents tab select the running log reader agent.

Right Click on the Log Reader Agent and select Stop Agent from the menu.


Note, the log reader agent is now "Not running".

Now, to ensure that the Log Reader Agent will not be automatically restarted, Right Click again on the Log Reader Agent and select Properties from the menu.

The Log Reader job will be displayed:


Un-Check the Enabled box for the LogReader. This will disable the job and prevent it from restarting.

This is how the Replication Monitor will look once the Log Reader Agent is stopped.


Verify Capture Configuration

The final installation activity is to verify the configuration. That is accomplished through a query of the database log using the fn_dblog function, performed through the SQL Server Management Studio. A thorough test would require insert, update and delete activity against every table being replicated. For the purposes of this section we will assume that the tester has knowledge of the tables and the applications that update those tables. The queries constructed will serve to validate all tables and types of transactions. The queries can be modified by the tester to facilitate validation as required.

Start the SQL Server Management Studio

Begin by Right Clicking on your database in the Object Explorer and selecting New Query.


The following are sample queries against fn_dblog that can be used to validate the results of testing as well as determine the LSN to use as the starting point for point-in-time captures.

select * from fn_dblog (null,Null);

select * from fn_dblog (null,Null) where description='REPLICATE';

select "Current LSN", operation, description, allocunitname from fn_dblog (null,Null);

select "Current LSN", operation, description, allocunitname from fn_dblog (null,Null) where description='REPLICATE';

In this example, the fourth query above was used to first verify that no replication log entries existed prior to performing any database updates.

Next, a single row in the Person.Contact table was updated using the table editing feature of the SQL Server Management Studio. After first confirming that the content of the table itself has changed, we will now look at the contents of the Log.


The same log query was executed again and the resulting log records that were created are displayed. Note that three records were created after updating a single column in the row. SQL Server creates a DELETE record containing the before image of the row, an INSERT record containing the after image, and a third record of the COMMIT.
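The three-record pattern described above can also be checked programmatically once the query results have been fetched (for example via an ODBC client). The following Python sketch is illustrative only: the helper name and the sample rows are hypothetical, though the operation codes (LOP_DELETE_ROWS, LOP_INSERT_ROWS, LOP_COMMIT_XACT) are the standard fn_dblog operation values.

```python
# Expected fn_dblog operation sequence for one replicated single-row update:
# before-image DELETE, after-image INSERT, then the COMMIT record.
EXPECTED_UPDATE_PATTERN = ["LOP_DELETE_ROWS", "LOP_INSERT_ROWS", "LOP_COMMIT_XACT"]

def is_single_row_update(operations):
    """Return True if the captured operations match the DELETE/INSERT/COMMIT
    triple that a single replicated update should produce."""
    return operations == EXPECTED_UPDATE_PATTERN

# Illustrative rows as they might come back from the fourth sample query.
rows = [
    ("00000030:000001a8:0002", "LOP_DELETE_ROWS", "REPLICATE"),
    ("00000030:000001a8:0003", "LOP_INSERT_ROWS", "REPLICATE"),
    ("00000030:000001a8:0004", "LOP_COMMIT_XACT", "REPLICATE"),
]
ops = [op for _lsn, op, _desc in rows]
print(is_single_row_update(ops))  # True
```

A check like this can be scripted into a regression test that exercises each replicated table once and verifies the expected record triple appears in the log.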


Configuring SQL Server Backup

The Full Recovery Model must be selected for each SQL Server database containing Tables to be captured so that Logs will NOT be truncated. In our experience, if the database was not originally configured with this mode it is best to drop and re-add the database with the parameter set for Full Recovery. The remainder of this section describes the changes required to ensure proper log truncation behavior for both cases.

Database Backup (DB Server)

Configuration of SQL Server backup is done through the SQL Server Management Studio (SSMS).

Right click on the database node in SQL Server Management Studio (SSMS).


Then select the Properties menu item.

Choose the Options page and then confirm that Recovery Model is set to Full, changing it if necessary, and then click OK.

Transaction Log Backup (DB Server)

The SQData Capture Agent for SQL Server utilizes the SQL Server Transaction Log as its source of changed data. While SQData's use of the transaction log in no way impacts the native Backup and Restore functions of SQL Server, there are two (2) key areas to consider when capturing changed data from the SQL Server transaction log.

· Transaction Log Backup

· Transaction Log Sizing

Note, SQData, like SQL Server's native replication function, does not process archived logs. Should there be a failure involving either the data store containing the captured data or the platform hosting the replicated target database, it may be impossible to recapture from the source transaction log unless it is large enough to withstand the period of time required to recover from the failure.


Configuring Transaction Log Backup

Configuration of Transaction Log backup is also done through the SQL Server Management Studio (SSMS).

Right click again on the database node in SQL Server Management Studio (SSMS).

Then select Tasks and then Backup from the cascading menu.


Choose the General page and then verify that Backup type is set to Transaction Log and then click OK. If you receive an error that Backup Failed, it is likely because the database is new and there is no current database backup. In that case it will be necessary to run an initial backup.


Choose the Options page and then verify that Transaction log is set to Truncate the transaction log, and then either select the Script option at the top of the window to save the generated script or click OK to run the Transaction Log Backup.

It may be necessary to select the second option, to back up to a new media set, and run a full (not transaction-only) backup if no backup has previously been performed for the database.


Transaction Log Sizing

SQData recommends that the active transaction log be sized as large as practically possible, to allow for an extended outage of the SQData Capture Agent components and/or to provide an acceptable window for recapture, if required. Generally, the larger the retention window the better. However, for databases with high transaction volumes, you may only be able to keep a few hours of activity before the log fills and can no longer simply wrap, causing the system to fail and issue the SQL Server 9002 error. The retention window should be sized to best fit your environment while leaving enough room for outages and recapture.
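The retention-window reasoning above amounts to simple arithmetic. A minimal sketch, with placeholder figures in place of your own measured log size and change-generation rate:

```python
def retention_hours(log_size_gb, gen_rate_gb_per_hour):
    """Estimate how many hours of change activity the active transaction log
    can retain before filling (risking the 9002 error) if truncation stops."""
    return log_size_gb / gen_rate_gb_per_hour

# Placeholder figures: a 200 GB active log and roughly 8 GB of logged changes
# per hour would give about 25 hours of outage/recapture headroom.
print(retention_hours(200, 8))  # 25.0
```

Substituting your peak-hour generation rate rather than the average gives a more conservative estimate of the window.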

Managing Transaction Log Truncation

Once the Database has been configured for Transactional Replication and the database configured for a FULL Backup Model, the behavior of the Transaction Log backup will change. SQL Server transaction logs are truncated by a BACKUP DATABASE and/or a BACKUP LOG command based on the existence of a record inserted into the log by the stored procedure sp_repldone, which identifies the last captured and successfully replicated transaction. While Transaction Log backups may run on their normal schedule, the active transaction log will not have been truncated until it has been marked for truncation by the sp_repldone command.

The default log truncation behavior depends on whether or not native SQL Server replication has previously been in use. It is therefore very important to review and, if necessary, update the configuration, because under native SQL Server Replication the Distributor regularly notifies the Publisher of the last log sequence number (LSN) successfully processed, and the Publisher marks the transaction log as truncatable. This configuration can be left unchanged, so long as the SQL Server Log Reader Agent has been permanently stopped. If the Log Reader were to restart and managed to communicate with a still running Distributor, it could cause a portion of the log to be truncated before the SQData Capture Agent has consumed the data from the log.

If you are not currently using native SQL Server replication, it is likely that backup was originally configured either with a Simple Backup model and no Log backups, or with a Full Backup model which initiates transaction log Backups, truncating the log in the process based on other factors including the state of the most recent transaction commit points.

When using SQData for replication, the log must not be truncated until the SQData SQL Server Capture Agent has mined the changed data that is subject to the truncation, because the SQData Capture cannot read archived logs. SQData provides a Batch File (.bat) that executes the sp_repldone command, which marks the transaction log as truncatable up to the Safe Restart Point. It is important for the DBA to determine how long data will remain in the active log for remine purposes.
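The safety rule above, never mark the log truncatable past data the Capture has not yet mined, reduces to an LSN comparison. This is a purely illustrative sketch (the helper names are hypothetical; the supplied batch file performs the real work) treating LSNs as the colon-delimited hex strings fn_dblog displays:

```python
def lsn_to_int(lsn):
    """Convert a colon-delimited hex LSN such as '00000030:000001a8:0004'
    into a single comparable integer."""
    return int(lsn.replace(":", ""), 16)

def safe_to_mark_done(candidate_lsn, remine_lsn):
    """A transaction may be marked done (truncatable) only if it does not
    pass the Capture's remine point, i.e. everything up to and including
    it has already been mined."""
    return lsn_to_int(candidate_lsn) <= lsn_to_int(remine_lsn)

print(safe_to_mark_done("00000030:000001a8:0002", "00000031:00000010:0001"))  # True
print(safe_to_mark_done("00000032:00000001:0001", "00000031:00000010:0001"))  # False
```

The supplied SQD_sp_repldone.bat applies the same comparison, additionally backing off by a configurable number of minutes to preserve a remine window.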

SP_ReplDone Procedure

Log truncation will be based on the most recent sp_repldone, which marks transactions that have been committed by the SQData Apply Engine rather than simply committed to the local database. The sp_repldone procedure may be run locally on the SQL Server database server or remotely from a Windows Application server. Because it requires access to the SQData Capture configuration CAB file, it is most often run on the same system running the SQData SQL Server Capture. The procedure may be run manually or scheduled to mark the log for truncation using a script or Batch file similar to the SQD_sp_repldone.bat file supplied with the distribution and displayed below:

Rem Execute sp_repldone based on time interval specified
@echo off
SETLOCAL ENABLEDELAYEDEXPANSION
SETLOCAL ENABLEEXTENSIONS
rem Set Parms (as required for local or remote execution)
set DBsys=<sql_server_host_name>
set DBname=<odbc_dsn>
set DBuser=<sql_server_user_id>
set DBpsw=<sql_server_user_password>
set Cabfile=<path_to/capture_agent.cab>

rem Set time delay for repldone. Specify the number of minutes before the last
rem replicated transaction or current time, as a negative integer.
rem For example, to mark done only transactions completed [replicated] more than
rem 24 hours ago, set Minutes=-1440
rem The script will mark done the youngest transaction completed that is older than
rem a) the current time -24 hrs and older than b) capture remine point -24 hrs
set /a Minutes=-1440
set SQL=select [Current LSN], operation, description, [Transaction Begin], [Transaction ID], [End Time] from fn_dblog (null,Null) where description='REPLICATE' and Operation = 'LOP_COMMIT_XACT';

rem Cleanup previous runs
del sqd_display.txt
del output.txt

rem Get restart point lsn
sqdconf display %Cabfile% --sysprint sqd_display.txt
for /F "tokens=1,2,3,4,5,6* delims= " %%A in (sqd_display.txt) DO (
  if "%%A" == "SQDF908I" (
    set Last_lsn=%%F
    set Last_lsn=!Last_lsn::=!
    call :UpCase Last_lsn
  )
  if "%%A" == "SQDF981I" (
    set Restart_lsn=%%F
    set Test_restart=%%F
    set Test_restart=0x!Test_restart!
    set Restart_lsn=!Restart_lsn::=!
    call :UpCase Restart_lsn
  )
  if "%%A" == "SQDF986I" (
    set Remine_lsn=%%F
    set Test_remine=%%F
    set Test_remine=0x!Test_remine!
    set Remine_lsn=!Remine_lsn::=!
    call :UpCase Remine_lsn
    echo Remine lsn is !Remine_lsn!
    if "!Remine_lsn!" == "0x0" (
      echo No valid Remine lsn
      goto:eof
    )
  )
  if "%%A" == "SQDF987I" (
    set Last_commit_time=%%F
    set Last_commit_time=!Last_commit_time:-=/!
    set Last_commit_time=!Last_commit_time! %%G
    set Last_commit_time=!Last_commit_time:~0,23!
    set Last_commit_time=!Last_commit_time:.=:!
    echo Last Commit Time = !Last_commit_time!
  )
)
echo Restart= !Restart_lsn!
echo ReMine= !Remine_lsn!

rem get date for remine lsn
sqlcmd -S %DBsys% -d %DBname% -U %DBuser% -P %DBpsw% -Q "set NOCOUNT on;select TOP 1 [Current LSN], operation, description, [Transaction Begin], [Transaction ID], [End Time] from fn_dblog ('!Test_remine!',Null) where description='REPLICATE' and Operation = 'LOP_COMMIT_XACT'; " -h -1 -o output.txt
for /F "tokens=1,2,3,4,5,6* delims= " %%A in (output.txt) DO (
  set Remine_time=%%F %%G
  echo Remine_time = !Remine_time!
)

rem get youngest commit/abort before interval start
sqlcmd -S %DBsys% -d %DBname% -U %DBuser% -P %DBpsw% -Q "set NOCOUNT on;select [Current LSN], operation, description, [Transaction Begin], [Transaction ID], [End Time] from fn_dblog (null,'!Test_remine!') where description='REPLICATE' and Operation = 'LOP_COMMIT_XACT' and [End Time] <= DATEADD(minute,!Minutes!,'!Remine_time!') and [End Time] <= DATEADD(minute,!Minutes!,GETDATE()); " -h -1 -o output.txt

rem Match to restart lsn
for /F "tokens=1,2,3,4,5,6* delims= " %%A in (output.txt) DO (
  set xdesid=%%D
  set xdesid=!xdesid::=!
  set xdesid=0x!xdesid!
  set seqno=%%A
  set seqno=!seqno::=!
  set seqno=0x!seqno!
)

rem execute sp_repldone to truncate log, then exit
sqlcmd -S %DBsys% -d %DBname% -U %DBuser% -P %DBpsw% -Q "set NOCOUNT on;exec sp_replflush"
sqlcmd -S %DBsys% -d %DBname% -U %DBuser% -P %DBpsw% -Q "exec sp_repldone @xactid='!xdesid!', @xact_seqno='!seqno!';"
goto:eof

:UpCase
for %%i in ("a=A" "b=B" "c=C" "d=D" "e=E" "f=F") DO CALL SET "%1=%%%1:%%~i%%"
goto:eof

endlocal


Setup CDCStore Storage Agent

The SQL Server Log Reader Capture utilizes the CDCSTORE Storage Agent, similar in function to SQL Server's Distributor, to manage the transient storage of both committed and in-flight or uncommitted units-of-work using auxiliary storage. The Storage Agent must be set up before configuring the Capture Agent.

Size Transient Storage Pool

The CDCStore Storage Agent utilizes a memory mapped storage pool to speed captured change data on its way to Apply Engines. It is designed to do so without "landing" the data again after it has been mined from a database log. Configuration of the Storage Agent requires the specification of both the memory used to cache changed data as well as the disk storage used if not enough memory can be allocated to hold large units of work and other concurrent workload.

Memory is allocated in 8MB blocks with a minimum of 4 blocks allocated, or 32MB of system memory. The disk storage pool is allocated in files made up of 8MB blocks. While ideally the memory allocated would be large enough to maintain the log generated by the longest running transaction AND all other transactions running concurrently, that will most certainly be impractical if not impossible.

Ultimately, there are two situations that must be avoided which govern the size of the disk storage pool:

Large Units of Work - While never advisable, some processes, usually run in batch, may update very large amounts of data before committing the updates. Often such large units of work may be unintentional or even accidental but must still be accommodated. The storage pool must be able to accommodate the entire unit of work or a DeadLock condition will be created.

Archived Logs - Depending on workload, data logged will eventually be archived, at which point the data remains accessible to the Capture Agent but at a higher cost in terms of CPU and I/O. Under normal circumstances, captured data should be consumed by Apply Engines in a timely fashion, making the CDCStore FULL condition one to be aware of but not necessarily concerned about. If however the cause is a stopped Engine, the duration of the outage could result in un-captured data being archived.

The environment and workload may make it impossible to allocate enough memory to cache a worst case or even the average workload; therefore we recommend two methods for sizing the storage pool, based on the availability of logging information.

If detailed statistics are available:

1. Gather information to estimate the worst case log space utilization (longest running SQL Server transaction AND all other SQL Server transactions running concurrently) - We will refer to this number as MAX.

2. Gather information to estimate the log space consumed by an "Average size" SQL Server transaction and multiply by the number of average concurrent transactions - We will refer to this number as AVG.

3. Plan to allocate disk files in your storage pool as large as the Average (AVG) concurrent transaction Log space consumed. Divide the value of AVG by 8MB - This will give you the number of Blocks in a single file, which we will refer to as B.


4. Divide the value of MAX by 8MB and again by B to calculate the number of files to allocate, which we will refer to as N. Note, dividing the value of MAX by AVG and rounding to the nearest whole number should result in the same value for N.

In summary:

B = AVG / 8MB

N = MAX / 8MB / B, or MAX / AVG

If detailed statistics are NOT available:

1. SQData recommends using a larger number of small disk files in the storage pool and suggests beginning with 256MB files. Dividing 256MB by the 8MB block size gives the number of Blocks "B" in a single file, 32.

2. SQData recommends allocating a total storage pool of 2GB as the starting point. Divide that number by 256MB to calculate the number of files "N" required to hold 2GB of active LOG. "N" would have the value 8.

In summary:

B = 256MB / 8MB or 32

N = 2048MB / 256MB or 8
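Both sizing methods reduce to the same two formulas; the arithmetic can be sketched as follows (the 8MB block size comes from the text above, and the sample inputs are the suggested starting values, not prescriptions):

```python
BLOCK_MB = 8  # CDCStore allocates transient storage in 8 MB blocks

def pool_geometry(avg_mb, max_mb):
    """Return (blocks_per_file, number_of_files) from the AVG (per-file size)
    and MAX (total pool size) estimates described above."""
    blocks_per_file = avg_mb // BLOCK_MB   # B = AVG / 8MB
    number_of_files = max_mb // avg_mb     # N = MAX / AVG
    return blocks_per_file, number_of_files

# Without detailed statistics: 256 MB files in a 2 GB (2048 MB) pool.
print(pool_geometry(256, 2048))  # (32, 8)
```

The resulting B and N map directly to the --number-of-blocks and --number-of-logfiles parameters described in the next section.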

Use these values to configure the CDCStore Storage Agent in the next section.

Notes:

1. Remember that it is possible to adjust these values once experience has been gained and performance observed. See the section "Display Storage Agent Statistics" in the Operations section below.

2. Think of the value for N as a file Extent, in that another file will be allocated only if the MEMORY cache is full and all of the Blocks "B" in the first file have been used and none are released before additional 8MB Blocks are required to accommodate an existing incomplete unit of work or other concurrent units of work.

3. While the number of blocks "B" and files "N" can be dynamically adjusted, they will apply only to new files allocated. It will be necessary to stop and restart the Storage Agent for changes to MEMORY.

4. Multiple Directories can also be allocated, but this is only practical if the File system itself fills and a second directory becomes necessary.

Create CDCStore CAB file

The CDCStore Storage Agent configuration (CAB) file is a binary file created and maintained by the SQDCONF utility. While this section focuses primarily on the initial configuration of the Storage Agent, sequences of SQDCONF commands to create and configure the storage agent can/should be stored in scripts. See the SQDCONF Utility Reference for a full explanation of each command, their respective parameters and the utility's operational considerations. The SQDCONF create command will be used to configure the CDCStore Storage agent.

Syntax

create <cab_file_name>
--type=store
--alias=<storage_agent_alias>
--data-path=<directory_name>
[--number-of-blocks=<blocks_per_file>]
[--number-of-logfiles=<number_of_files>]

Keyword and Parameter Descriptions

<cab_file_name> - Configuration file for the Storage Agent, including its path. There is a one to one relationship between the CDCStore Storage Agent and Capture Agent. SQData recommends including the Capture Agent alias as the first node of the file name. In our example, db2cdc_store.cab, oracdc_store.cab, etc. In a Windows environment the .cfg extension may be used, since .cab files have a special meaning.

--type=store - Agent type for the Storage Agent.

--alias=<storage_agent_alias> - The Alias name of the storage agent. We recommend using "cdcstore".

--data-path=<directory_name> - Directory, including its path, where transient data files for the storage agent will be created. The directory must exist and the user-id associated with the agent must have the right to create and delete files in that directory. Multiple --data-path(s) can be specified in the same create statement; each will add an entry in the storage agent for the specified data-path.

[--number-of-blocks=<blocks_per_file> | -b <blocks_per_file>] - The number of 8MB blocks that will be allocated for each File defined for transient CDC storage. If this parameter is not specified, a default value of 32 is used.

[--number-of-logfiles=<number_of_files> | -n <number_of_files>] - The number of files that can be allocated in a data-path. Files will be allocated on an as-needed basis, one full file at a time, during the storage agent operation. File blocks are recycled when possible, and recycled blocks are reused before new storage is allocated. If this parameter is not specified, a default value of 8 is used.

Notes:

1. The SQDCONF create command defines the .cab file name and the location and size of the transient data store. Once created, this command should never be run again unless the storage agent is being recreated.

2. Unlike the Capture/Publisher configuration files, changes to the CDCStore configuration file take effect immediately and do not require the usual --stop/apply/start sequence.

3. The Directory path reference, in our example /home/sqdata/cdcstore/data, can be modified to conform to the operating environment but must match the SQData Variable Directory created in the Prepare Environment Section above.

Example

Create the SQData CDCSTORE Storage Agent for a SQL Server Capture using the following SQDCONF command:

$ sqdconf create <SQDATA_VAR_DIR>/cdcstore/mssqlcdc_store.cab --type=store --alias=cdcstore --number-of-blocks=32 --number-of-logfiles=8 --data-path=<SQDATA_VAR_DIR>/cdcstore/data

Display the content of the Storage Agent .CAB file using the following SQDCONF command:

$ sqdconf display <SQDATA_VAR_DIR>/cdcstore/mssqlcdc_store.cab --details


Setup Log Reader Capture Agent

The SQL Server Capture agent actually performs two functions: Mining the SQL Server Log and Publishing captured data, managed by the Storage agent. The Publisher pushes committed data directly to Engines using TCP/IP. The TCP/IP Publishing function manages the captured data until it has been transmitted and consumed by SQData Apply Engines, ensuring that captured data is not lost until the Engines, which may operate on other platforms, signal that data has been applied to their target datastores.

Setup and configuration of the Capture Agent include:

· Confirm SQL Server Table Publications

· Create ODBC System Data Source (DSN)

· Create SQL Server Capture Agent CAB file

· Prepare Log Reader Capture Batch file

Confirm SQL Server Publications

Use of the SQL Server Publication portion of SQL Server Replication often results in all tables within a Database being configured for Full Transaction Logging. SQData recommends confirmation of each table's publication status before proceeding with the Capture Agent configuration, to ensure none have been added since the Publication was created. Review the Local Publications using SQL Server Management Studio.

Create ODBC System Data Source (DSN)

Access to the SQL Server Log is through ODBC and a table-valued function known as fn_dblog. It is necessary to create an ODBC System Data Source (DSN) for each SQL Server Database to be captured. Use the ODBC Data Source Administrator to create the DSN. There are two important parameters that must be set for Capture to work correctly:

DSN Name - Should be the same as the SQL Server Database name

Default database - Should be changed to match the SQL Server Database name whose tables are to be captured.

In addition to these parameters it is important to know if your environment utilizes Windows or SQL Server authentication. While both Microsoft and SQData generally recommend using Windows Authentication, your organization may utilize one or the other or even both.

If SQL Server Authentication is used, it is important to make note of the User-ID and Password specified in the ODBC DSN. Both of those parameters will have to be supplied at "run time" in the Capture configuration (CAB) file when the SQData Capture agent initiates its ODBC connection.
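The authentication choice determines what must be supplied at connection time. A minimal sketch of the two ODBC connection-string forms, using the standard DSN, UID, PWD and Trusted_Connection keywords (the DSN and credential values below are placeholders, and the helper is illustrative, not part of the SQData tooling):

```python
def odbc_connection_string(dsn, user=None, password=None):
    """Build an ODBC connection string for a System DSN. With no credentials,
    Windows (trusted) authentication is assumed; with credentials, SQL Server
    authentication is used and UID/PWD must be supplied."""
    if user is None:
        return f"DSN={dsn};Trusted_Connection=yes"
    return f"DSN={dsn};UID={user};PWD={password}"

print(odbc_connection_string("AdventureWorks"))
# DSN=AdventureWorks;Trusted_Connection=yes
print(odbc_connection_string("AdventureWorks", "sqdata", "secret"))
# DSN=AdventureWorks;UID=sqdata;PWD=secret
```

The second form corresponds to the case described above where the User-ID and Password from the DSN must also appear in the Capture configuration (CAB) file.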

Create SQL Server Capture CAB file

The SQL Server Capture Agent configuration (CAB) file is created and maintained by the SQDCONF utility. While this section focuses primarily on the initial configuration of the Capture Agent, sequences of SQDCONF commands to create and configure the capture agent can/should be stored in batch scripts. See the SQDCONF Utility Reference for a full explanation of each command, their respective parameters and the utility's operational considerations.

Syntax

$ sqdconf create <cab_file_name> --type=mssql --database=<database_name> --user=<sqdata_user> --password=<sqdata_user_password> [--encryption] [--auth-keys-list="<nacl.auth.keys.file>"] --store=<SQDATA_VAR_DIR>/cdcstore/<store_cab_file_name>

Keyword and Parameter Descriptions

<cab_file_name>= This is where the Capture Agent configuration file, including its path, is first created. There is only one CAB file per Capture Agent. In our example, <SQDATA_VAR_DIR>/cdcstore/mssqlcdc.cab

<type>= Agent type, in the case of the SQL Server Log Reader Capture, is "mssql"

<database>= The SQL Server database name

<sqdata_user>= The user_name is required only if specified in the ODBC DSN

<sqdata_user_password>= The user_password is required only if specified in the ODBC DSN

[--encryption] - Rarely used, specifies that published CDC record payload will be encrypted.

[--auth-keys-list="<nacl.auth.keys.file>"] - Rarely used, required for encrypted CDC record payload. File name must be enclosed in quotes and must contain public key(s) of subscribing Engines.

<store_cab_file_name>= The CAB file name, including its path, previously defined for the Storage Agent. In our example, <SQDATA_VAR_DIR>/cdcstore/mssqlcdc_store.cab

Next, the configuration file must be updated by adding an entry for each SQL Server Table to be captured, using the add command. Note, only one command can be executed at a time. SQData highly recommends keeping a Recover/Recreate configuration script available should circumstances require recovery from a specific Start LSN.

Syntax

add --active | --inactive --key=<schema_name>.<table_name> --datastore=cdc:////<engine_agent_alias> <capture_cab_file_name>

Keyword and Parameter Descriptions

--active | --inactive - This parameter marks the added source active for capture when the change is applied and the agent is (re)started. If this parameter is not specified, the default is --inactive and capture will not be initiated in spite of changes to the agent being applied and started.

--key= Specifies Source object where:

<schema_name> SQL Server schema name. In our example the schema is the SQL Server default, dbo.

<table_name> SQL Server table name. In our example the first table is EMP.

--datastore= URL formatted parameter that specifies the transient datastore where captured data is placed for a consuming engine, where:

<engine_agent_alias>= Also known as the Engine name. The <engine_agent_alias> provided here does not need to match the one specified in the sqdagents.cfg file; however, there is no reason to make them different. This defines a grouping of keys that will be served to the consuming engine. In our example we have used SQLTOSQL.

<cab_file_name>= Must be specified and must match the name specified in a previous create command.

Example

In the following example CDCStore is used as the transient datastore for two tables. The final step executes the display command to output the current content of the configuration file:

$ sqdconf create <SQDATA_VAR_DIR>/cdcstore/mssqlcdc.cab --type=mssql --database=<database_name> --store=<SQDATA_VAR_DIR>/cdcstore/mssqlcdc_store.cab

$ sqdconf add <SQDATA_VAR_DIR>/cdcstore/mssqlcdc.cab --key=dbo.EMP --datastore=cdc:////SQLTOSQL --active

$ sqdconf add <SQDATA_VAR_DIR>/cdcstore/mssqlcdc.cab --key=dbo.DEPT --datastore=cdc:////SQLTOSQL --active

$ sqdconf display <SQDATA_VAR_DIR>/cdcstore/mssqlcdc.cab

Notes:

1. The sqdconf create command defines the location of the Capture agent's configuration file. Once created, this command should never be run again unless you want to destroy and recreate the Capture agent.

2. Destroying the Capture agent cab file means that the current position in the log and the relative location of each engine's position in the Log will be lost. When the Capture agent is brought back up it will start from the beginning of the oldest active log and will resend everything. After initial configuration, changes in the form of add and modify commands should be used instead of the create command. Note: You cannot delete a cab file if the Capture is mounted, and a create on an existing configuration file will fail.

3. There must be a separate ADD command executed for every source table to be captured.

4. The command will fail if the same table is added more than one time for the same Target Datastore/Engine. See the section below "Adding/Removing Output Datastores".


5. If TCP/IP is used for communication with SQData Engines, the target datastore will be specified as: --datastore=cdc://[localhost]//<engine_agent_alias>

6. The <engine_agent_alias> is case sensitive in that all references should be either upper or lower case. Because references to the "Engine" in z/OS JCL must be upper case, references to the Engine in these examples are all in upper case for consistency.

7. The display command, when run against an active configuration (CAB) file, will include other information including:

· The current status of the table (i.e. active, inactive)

· The starting and current point in the log where data has been captured

· The number of inserts, updates and deletes for the session (i.e. the duration of the capture agent run)

· The number of inserts, updates and deletes since the creation of the configuration file
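Because a separate add command is needed per source table (Note 3), the adds are often scripted. The sketch below is a dry run: it only prints each command (via echo) rather than executing it, and the cab path and table list are illustrative assumptions, not prescribed values.

```shell
#!/bin/sh
# Dry run: emit one 'sqdconf add' per source table to be captured.
# CAB path and table names are illustrative placeholders.
CAB=/var/sqdata/cdcstore/msqlcdc.cab
cmds=""
for table in dbo.EMP dbo.DEPT; do
  cmd="sqdconf add $CAB --key=$table --datastore=cdc:////SQLTOSQL --active"
  echo "$cmd"              # replace 'echo "$cmd"' with 'eval "$cmd"' to execute
  cmds="$cmds$cmd
"
done
```

Removing the echo (or substituting eval) turns the dry run into the live command sequence shown above.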

Prepare Log Reader Capture Batch File

Once the CDCStore configuration (CAB) file has been created, the SQDCONF utility is used to Mount, Apply and Start the Log Reader Capture Agent process.

Example

$ sqdconf --mount <SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab
$ sqdconf --apply <SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab
$ sqdconf --start <SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab

Notes:

1. One or more batch scripts are commonly created containing commands for this purpose.

2. The sqdconf utility is used to create the Capture Agent configuration and perform most of the other management and control tasks associated with Agents, including the function of the MOUNT command.

3. The first time this command script is run you may choose to include both the sqdconf apply and sqdconf start commands. After the initial creation, apply should not be used in this script unless all changes made since the agent was last stopped are intended to take effect immediately upon the start. The purpose of apply is to make it possible to add to or modify the configuration while preparing for an implementation of changes, without affecting the current configuration. Note, apply and start can, and frequently will, be separated into different SQDCONF command scripts.

4. The Controller Daemon uses a Public / Private key mechanism to ensure component communications are valid and secure. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by a Log Reader Capture agent may be the same pair used by its Controller Daemon.
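The mount/apply/start sequence above can be wrapped in one script. The sketch below is a dry run (each command is printed, not executed) with an illustrative cab path; a production version would typically keep apply in a separate script, per Note 3.

```shell
#!/bin/sh
# Dry run of the three-step Capture startup: mount, apply, start.
# CAB path is an illustrative assumption.
CAB=/var/sqdata/cdcstore/msqlcdc.cab
out=""
for step in mount apply start; do
  line="sqdconf --$step $CAB"
  echo "$line"             # drop the echo to issue the commands for real
  out="$out$line
"
done
```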


Setup Capture Controller Daemon

The Controller Daemon enables command and control features as well as use of the browser-based Control Center. The Controller Daemon plays a special role on platforms running Data Capture Agents, managing secure communication between Capture, Storage and Publisher Agents and Engines usually running on another platform. See the SQData Secure Communications Guide for more details regarding the Controller Daemon's role in security and how it is accomplished.

The primary difference between Controller Daemons on Capture platforms and Engine-only platforms is that the Authorized Key File of the Capture Controller Daemon must contain the Public keys of Engines that will be requesting connections to its Storage/Publisher agents. Setup and configuration of the Capture Controller Daemon, SQDAEMON, includes the following steps:

· Create the Access Control List

· Create the Agent Configuration File

· Configure Controller Daemon Shell Script

Create Access Control List

The Controller Daemon requires an Access Control List (ACL) that assigns privileges (admin, query) by user or group of users associated with a particular client / server on the platform. While the ACL file name is not fixed, it is typically named acl.conf or acl.cfg, and it must match the name specified in the sqdagents.cfg file by acl=<location/file>. The file contains 3 sections. Each section consists of key-argument pairs. Lines starting with # and empty lines are interpreted as comments. Section names must be bracketed while keywords and arguments are case-sensitive:

Syntax

Global section - not identified by a section header and must be specified first.

allow_guest=no - Specify whether guest is allowed to connect. Guests are clients that can process a NaCl handshake, but whose public key is not in the server's authorized_keys_list file. If guests are allowed, they are granted at least the right to query. The default value is no.

guest_acl=none - Specify the ACL for a guest user. This should be specified after the allow_guest parameter, otherwise guests will be able to query regardless of this parameter. The default value is "query".

default_acl=query - Specify the default ACL which is used for authenticated clients that do not have an ACL rule explicitly associated to them, either directly or via a group.

Group section - allows the definition of groups. The only purpose of the group section is tosimplify the assignment of ACLs to groups of users.

[groups]

<group_name>=<sqdata_user>[,user_name…] - Defining an ACL associated with the group_name in the ACLS section will propagate that ACL to all users in the group. Note, ACLs are cumulative, that is, if a user belongs to two or more groups or has an ACL of its own, the total ACL of the user is the union of the ACLs of itself and all the groups to which the user belongs.

ACLS section - assigns one or more "rights" to individual users or groups and has the following syntax:

[acls]

<sqdata_user> | <group_name>=acl_list - When an acl_list is assigned to a group_name, the list will propagate to all users in the group. The acl_list is a comma separated list composed of one or more of the following terms:

none | query | read | write | exec | sudo | admin

1. If none is present in the ACL list, then all other elements are ignored. The terms query, read, write and exec grant the user the right to respectively query, read from, write to, and execute agents.

2. If admin is used it grants admin rights to the user. Sudo will allow a user with admin rights to execute a request in the name of some other user. In such a case the ACL of the assumed user is tested to determine if the requested action is allowable for the assumed user. Note, this functionality is present for users of the SQData Control Center. See the SQData Control Center documentation for more information.

3. If sudo is granted to an admin user, then that user can execute a command in the name of another user, regardless of the ACL of the assumed user.
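The cumulative-ACL rule (a user's effective rights are the union of its own ACL and those of all its groups) can be illustrated with a small shell computation. The user and group ACL values here are illustrative only.

```shell
#!/bin/sh
# Sketch: effective ACL = union of the user's own ACL and the ACLs of
# the groups it belongs to. Values below are illustrative.
user_acl="query,read"          # ACL assigned directly to the user
group_acl="query,read,write"   # ACL inherited from a group such as 'cntl'
effective=$(printf '%s\n%s\n' "$user_acl" "$group_acl" |
            tr ',' '\n' | sort -u | paste -sd, -)
echo "$effective"              # query,read,write
```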

Example

allow_guest=yes
guest_acl=none
default_acl=query

[groups]
admin=<sqdata_user>
cntl=<user_name1>,<user_name2>
status=<user_name3>,<user_name4>

[acls]
admin=admin,sudo
cntl=query,read,write
status=query,read

Note: Changes are not known to the daemon until the configuration file is reloaded, using the SQDMON Utility, or the sqdaemon process is stopped and started.

Create Agent Configuration File

The Agent Configuration File lists alias names for agents and provides the name and location of agent configuration files. It can also define startup arguments and output file information for agents that are managed by the Controller Daemon. The sqdagents.cfg file begins with global parameters followed by sections for each SQData agent controlled by the daemon.


Syntax

Global section - not identified by a section header and must be specified first.

acl= Location (fully qualified path or relative to the working directory) and name of the acl configuration file to be used by the Controller Daemon. While the actual name of this file is user defined, we strongly recommend using the file name acl.cfg.

authorized_keys= (Non-z/OS only) Location of the authorized_keys file to be used by the Controller Daemon. On z/OS platforms, specified at runtime by DD statement.

identity= (Non-z/OS only) Location of the NaCl private key to be used by the Controller Daemon. On z/OS platforms, specified at runtime by DD statement.

message_level= Level of verbosity for the Controller Daemon messages. This is a numeric value from 0 to 8. Default is 5.

message_file= Location of the file that will accumulate the Controller Daemon messages. If no file is specified, either in the config file or from the command line, then messages are sent to the syslog.

service= Port number or service name to be used for the Controller Daemon. Service can be defined using the SQDPARM DD on z/OS, on the command line starting sqdaemon, in the config file described in this section or, on some platforms, as the environment variable SQDAEMON_SERVICE, in that order of priority. Absent any specification, the default is 2626. If for any reason a second Controller Daemon is run on the same platform, they must each have a unique port specified.
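The service-port precedence just described (command line, then config file, then SQDAEMON_SERVICE, then the 2626 default) can be sketched as a shell fallback chain. All three overrides are left empty here so the default wins; the variable names are illustrative, not part of the product.

```shell
#!/bin/sh
# Sketch of the port resolution order for the Controller Daemon.
cli_port=""    # e.g. from: sqdaemon --service=9999
cfg_port=""    # e.g. from a service= line in sqdagents.cfg
env_port=""    # e.g. from: SQDAEMON_SERVICE=8000
port="${cli_port:-${cfg_port:-${env_port:-2626}}}"
echo "$port"   # 2626, since no override is set
```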

Agent sections - Each section represents an individual agent in square brackets and heads a block of properties for that agent. Section names must be alphanumeric and may also contain the underscore "_" character.

[<capture_agent_alias>] | [<publisher_agent_alias>] Must be unique in the configuration file of the daemon on the same machine as the Capture / Publisher process. Will be referenced in the Engine script. Must be associated with the cab=<*.cab> file name specified in the sqdconf create command for the capture or publisher Agent setup in the previous section.

[<engine_agent_alias>] Only present in the configuration file of the daemon on the same machine as the apply Engine process. Also known as the Engine name, the <engine_agent_alias> provided here does not need to match the one specified in the sqdconf add command --datastore parameter; however, there is no reason to make them different. Will also be used by sqdmon agent management and display commands. In our example we have used SQLTOSQL.

[<program_alias>] | [<process_alias>] Only present in the configuration file of the daemon on the same machine where the program or process associated with the alias will execute. Any string of characters may be used; examples include process names and table names.


type= Type of the agent. This can be engine, capture or publisher. It is not necessary to specify the type for Engines, programs, scripts or batch files.

program= The name of the load module (or *nix shell script or Windows batch file) to invoke in order to start an agent. This can be a full path or a simple program name. In the latter case, the program must be accessible via the PATH of the sqdaemon context. The value must be "sqdata" for Engines but may also be any other executable program, shell script (*nix) or batch file (Windows).

args= Parameters passed on the command line to the program=<name> associated with the agent on startup. In the case of an SQData Engine, it must be the "parsed" Engine script name, i.e. <engine.prc>. This is valid for program= entries only.

working_directory= Specify the working directory used to execute the agent.

cab=<*.cab> Location and name of the configuration (.cab) file for capture and publisher agent entries. The file is created by sqdconf and required by sqdconf commands. In a Windows environment .cfg may be substituted for .cab since ".cab" files have special meaning.

stdout_file= File name used for stdout for the agent. If the value is not a full path, it is relative to the working directory of the agent. The default value is agent_name.stdout. Using the same file name for stdout_file and stderr_file is recommended and will result in a concatenation of the two results, for example <engine_name.rpt>. This is valid for engine entries only.

stderr_file= File name used for stderr for the agent. If the value is not a full path, it is relative to the working directory of the agent. The default value is agent_name.stderr. Using the same file name for stdout_file and stderr_file is recommended and will result in a concatenation of the two results, for example <engine_name.rpt>. This is valid for engine entries only.

report= Synonym of stderr_file. If both are specified, report takes precedence.

comment= User specified comment associated with the agent. This is only used for displaypurposes.

auto_start= A boolean value (yes/no/1/0), indicating if the associated agent should be automatically started when sqdaemon is started. This also has an impact on the return code reported by sqdmon when an agent stops with an error. If an agent is marked as auto_start and it stops unexpectedly, this will be reported as an Error in the sqdaemon log; otherwise it is reported as a Warning. This is valid for engine entries only.

Notes:

1. Directories and paths specified must exist before being referenced. Relative names may be included and are relative to the working directory of the sqdaemon "-d" parameter or as specified in the file itself.

2. While message_file is not a required parameter, we generally recommend its use or all messages, including authentication and connection errors, will go to the system log. On z/OS however, the system log may be preferable since other management tools used to monitor the system use the log as their source of information.

3. All references to .cab file names must be fully qualified.

Example

A sample sqdagents.cfg file for the Controller Daemon follows. Changes are not known to the daemon until the configuration file is reloaded (see SQDMON Utility) or the daemon process is stopped and started.

acl=<SQDATA_VAR_DIR>/daemon/cfg/acl.cfg
authorized_keys=<SQDATA_VAR_DIR>/daemon/nacl_auth_keys
identity=<SQDATA_VAR_DIR>/id_nacl
message_level=5
message_file=../logs/daemon.log
service=2626

[msqlcdc]
type=capture
cab=<SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab

Configure Controller Daemon Batch Script

Starting the Controller Daemon requires only the SQDAEMON command. A batch script can be created with the optional parameters.

$ sqdaemon --service=2626 --tcp-buffer-size=262144 [--ipv4 | --ipv6] -d <SQDATA_VAR_DIR>/daemon
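A start script might also verify required environment variables before launching, since (as noted later) sqdaemon takes a copy of its environment at startup and never reacquires it. The sketch below is a dry run that only prints the command; the daemon directory path is an assumed example.

```shell
#!/bin/sh
# Dry run of a Controller Daemon start script. The echo keeps this a
# sketch; the directory default is an illustrative assumption.
SQDAEMON_DIR="${SQDAEMON_DIR:-/var/sqdata/daemon}"
CMD="sqdaemon --service=2626 --tcp-buffer-size=262144 -d $SQDAEMON_DIR"
echo "would run: $CMD"       # drop the echo wrapper to start the daemon
```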

Register SQDaemon as a Service

A program to register the Controller Daemon (sqdaemon) as a Windows service is included in the distribution. This is the preferred method for starting/stopping the sqdaemon on Windows. When adding the daemon as a service, you need to use an elevated administrator command prompt. Open a command window by right-clicking on a Command Prompt shortcut and choosing "Run as Administrator":

· To add sqdaemon as a service:

> sqdsvc.exe [-install | /install]

· To remove sqdaemon as a service:

> sqdsvc.exe [-remove | /remove]

· To specify a log level for the daemon to run with:

> sqdsvc.exe [-install | /install] --log-level=8

After -install or -remove, verify the service has truly been installed/removed by trying to locate the sqdaemon process in the "Services" list. If the install/remove command did not work properly, verify you have logged on with the "ADMINISTRATOR" account.

· The suggested method for running sqdaemon as a service is to use a real user instead of the SYSTEM user.


– In the "Services" window, find "sqdaemon" in the list. "Services" are usually found under "Administrative Tools".

– Right click on "sqdaemon" and choose "Properties".

– Find the tab that lets you define the user to "log on as" for that process.

– Enter the username/password for the user that you want the daemon to run as. This user must have admin privileges. If you try to run with a user with lesser privileges, you will get the message "Error: Access is Denied". When the user has been successfully accepted, a message will pop up saying the user has been granted "Log on as a Service" privileges.

· Starting and stopping the sqdaemon.

– In the "Services" list window, find "sqdaemon" in the list.

– Right click on "sqdaemon" and choose start/stop. The daemon is now running.

Notes:

1. Environment variables that are needed to run any engine under the daemon, like PATH, ORACLE_HOME, and SQDAEMON_DIR, need to be set up before the sqdaemon is started.

2. SQDaemon gets a copy of the environment variables at startup and never reacquires them without bouncing the daemon.

3. The program used to register sqdaemon as a service (sqdsvc) will not install the daemon as a service if SQDAEMON_DIR is not present in the environment variables.

Verify Controller Daemon Install

Communication with the Controller Daemon (sqdaemon) process is done through the sqdmon utility. After the sqdaemon process has been started, you can verify it is running by issuing a sqdmon inventory command. If you get an unauthorized key message, verify that the public key displayed is the one that is in your "id_nacl.pub" file. If it is not, then sqdmon could be having problems locating your keys. If one is not found, then a public key is generated on the fly to give the user the potential of being authorized as a guest. If the key matches your "id_nacl.pub" file, then your key may not be in the "authorized_keys" file the daemon is using, or the daemon could be having problems parsing the "authorized_keys" file.

Sqdaemon will need to be reloaded if the sqdagents.cfg or authorized_keys list changes. Starting/stopping/reloading the sqdaemon will not affect engines running under the daemon. If an engine needs the changes, it must first be stopped then restarted for the changes to take effect.

To reload both the sqdagents.cfg configuration file and the authorized_keys list, issue the sqdmon reload command. Reload can only be executed by a user with admin privileges. Since a reload is run as a user, it only works after the daemon has been started with at least one user added to the authorized_keys list. Either do not start the daemon until this user is properly set up, or manually restart the sqdaemon service after this user is configured.


Configure Engine

The function of an Engine may be one of simple replication, data transformation, event processing or a more sophisticated active/active data replication scenario. The actions performed by an Engine are described by an Engine Script, the complexity of which depends entirely on the intended function and the business rules required to describe that function. Engines may receive data from a variety of sources for application to target datastores. SQData utilizes your existing native TCP/IP network for publishing data captured on one platform to Engines running on any other platform.

The following tasks must be performed to configure Engines for input via TCP/IP initiated through communication with the Controller Daemon on the platform where the data capture takes place:

· Generate Engine Public / Private keys

· Specify Source Datastore

· Create Engine Script

· Prepare Engine JCL or Shell Script

· Configure optional z/OS Master Controller (If applicable)

· Configure optional Engine Controller Daemon

Note, see the Engine Reference for a full explanation of the capabilities provided by Engine scripts.

Create Application Directory Structure

While the SQData Variable Directory <SQDATA_VAR_DIR> location works fine for Capture Agents and the Controller Daemon, Apply Engine Script development also requires a structure accommodating similar items from dissimilar platforms, such as DDL from DB2 and Oracle. For that reason, the following directory nodes are recommended at the next level for script development and parts management:

./<directory_name>   Description

ENGINE               Main Engine scripts
CDCPROC              CDC Engine Called Procedures referenced by #INCLUDE
LOADPROC             Load (UnLoad) Engine Called Procedures referenced by #INCLUDE
DSDEF                Datastore Definitions referenced by #INCLUDE
<TYPE>DDL            RDBMS specific DDL, e.g. DB2DDL, ORADDL, MSQLDDL, etc.
IMSSEG               IMS Segment Copybooks
IMSDBD               IMS DBDs
<TYPE>COB            System specific COBOL copybooks, e.g. VSAMCOB, SEQCOB (sequential files)
XMLDTD               XML Document Type Definitions that will be used in a DESCRIPTION command
<TYPE>CSR            RDBMS specific Cursors, e.g. DB2CSR, ORACSR, etc.
<TYPE>LOAD           RDBMS specific Load Control, e.g. DB2LOAD, ORALOAD, etc.

Notes:

1. While it may be more convenient to use lower case directory names, if your environment includes the z/OS Platform, consideration should be given to reusability as some z/OS references must be in upper case.

2. Engine scripts are typically Platform specific in that they cannot be used on another type of Platform, e.g. z/OS and UNIX, without at least minor modification.

3. Called Procedures can frequently be used with little or no changes on another platform, even when they contain platform specific Functions, unless they require direct access to a datastore on another platform, an atypical requirement.

4. Throughout the remainder of this document, part locations will usually refer only to the last node of standard z/OS Partitioned Datasets and the UNIX or Windows directory hierarchy.

Unzip the $SQData_Apply_Engine_Parts.zip file to create the full structure along with sample parts and shell scripts.

Alternatively, commands similar to the following may be used to create the recommended directory structures.

$ mkdir -p <SQDATA_VAR_DIR>/DB2DDL --mode=664
$ mkdir -p <SQDATA_VAR_DIR>/ORADDL --mode=664
$ mkdir -p <SQDATA_VAR_DIR>/IMSDBD --mode=664
$ mkdir -p <SQDATA_VAR_DIR>/IMSSEG --mode=664
$ mkdir -p <SQDATA_VAR_DIR>/ENGINE --mode=664
$ mkdir -p <SQDATA_VAR_DIR>/CDCPROC --mode=664
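The per-directory commands above can also be issued in a single loop. This sketch assumes SQDATA_VAR_DIR is set (falling back to a scratch path so it runs anywhere) and covers the full recommended node list; add the --mode option from the commands above if your site requires it.

```shell
#!/bin/sh
# Sketch: create the recommended Engine parts directories in one pass.
BASE="${SQDATA_VAR_DIR:-/tmp/sqdata_var}"   # assumed base location
for d in ENGINE CDCPROC LOADPROC DSDEF DB2DDL ORADDL MSQLDDL \
         IMSSEG IMSDBD XMLDTD; do
  mkdir -p "$BASE/$d"
done
ls "$BASE"
```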

Generate Engine Public / Private Key

As previously mentioned, Engines usually run on a different platform than the Data Capture Agent. The Controller Daemon on the Capture platform manages secure communication between Engines and their Capture/Publisher Agents. Therefore a Public / Private Key pair must be generated for the Engine on the platform where the Engine is run. The SQDUTIL program must be used to generate the necessary keys and must be run under the user-id that will be used by the Engine.

Syntax

$ sqdutil keygen

On z/OS, JCL similar to the sample member NACLKEYS included in the distribution executes the SQDUTIL program using the keygen command and generates the necessary keys.

The Public key must then be provided to the administrator of the Capture platform so that it can be added to the nacl_auth_keys file used by the Controller Daemon.
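On the capture side, that hand-off usually amounts to appending the Engine's public key to the daemon's authorized key file. The sketch below uses a placeholder string in place of the real id_nacl.pub contents, and the paths mirror (but only assume) the examples in this guide.

```shell
#!/bin/sh
# Sketch: append an Engine's public key to the capture-side authorized
# key file. Key content and paths are illustrative placeholders.
VAR="${SQDATA_VAR_DIR:-/tmp/sqdata_var}"
mkdir -p "$VAR/daemon"
echo "nacl-public-key-placeholder" > /tmp/id_nacl.pub  # stands in for the real key
cat /tmp/id_nacl.pub >> "$VAR/daemon/nacl_auth_keys"
```

After the append, reload or restart the Controller Daemon so the new key takes effect.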

Note, there should also be a Controller Daemon on the platform running Engines to enable command and control features and the browser-based Control Center. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by an Engine may be the same pair used by its Controller Daemon.

Specify SQL Server Source Datastore in Engine Script

The actions performed by an Engine are described by an Engine Script. In this example the script might specify straight replication of the captured CDC records from SQL Server on one system to a target SQL Server database on another system. The script must contain a DATASTORE specification similar to the one below, which describes the source of the CDC records:

Syntax

DATASTORE cdc://<host><:port>/<capture_agent_alias>/<engine_agent_alias>

Keyword and Parameter Descriptions

<host> Location of the Controller Daemon managing the Capture agent.

<:port> Optional, required only if a non-standard port is specified by the service parameter in the Controller Daemon configuration.

<capture_agent_alias> Must match the alias specified in the capture Controller Daemon agents configuration file. The engine will connect to the Controller Daemon on the specified host and request to be connected to that agent.

<engine_agent_alias> Must match the alias provided in the "sqdconf add" command "--datastore" parameter when the Capture agent was configured. Once connected to the Capture agent, the engine requests records from the <engine_agent_alias> group.

Example

DATASTORE cdc://<host><:port>/msqlcdc/SQLTOSQL
          OF UTSCDC
          AS CDCIN
          DESCRIBED BY <schema_name>.<table_name>

Notes:

1. The <engine_agent_alias> is case sensitive in that all references should be either upper or lower case. Because references to the "Engine" in z/OS JCL will be upper case, references to the Engine in these examples are all in upper case for consistency.

2. Engine scripts require a physical description of target datastores. SQData has been designed to reuse existing data description information. The same SQL DDL used by a DBA to create SQL Server tables can be used by the script to define the database tables and columns of both source and target datastores. It is also possible to use dynamically generated Relational DML.

3. Fully qualified table names may contain very long fully qualified schemas. In order to shorten the name when it is used to qualify columns in SQData scripts, an ALIAS is often used, e.g.: HumanResources.EmployeeDepartmentHistory_StartDate AS HR.Dept_StartDate

4. Additional information about creating SQData scripts can be found in the Engine Reference.
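For orientation, a straight-replication script built around this source DATASTORE might be organized roughly as follows. Only the source DATASTORE line uses the syntax documented above; the JOBNAME, target DATASTORE and REPLICATE lines are illustrative placeholders, so verify the exact commands against the Engine Reference before use.

```
-- SQLTOSQL.sqd - illustrative outline only; confirm all command
-- syntax against the Engine Reference before use.
JOBNAME SQLTOSQL;

DATASTORE cdc://<host><:port>/msqlcdc/SQLTOSQL
          OF UTSCDC
          AS CDCIN
          DESCRIBED BY <schema_name>.<table_name>;

DATASTORE <target_datastore_url>        -- placeholder target
          AS TARGET
          DESCRIBED BY <schema_name>.<table_name>;

PROCESS INTO TARGET
SELECT { REPLICATE(TARGET) }
FROM CDCIN;
```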


Configure Engine Batch Script

Parsing and Starting the Engine is a two step process, each consisting of a single command that can be run at the command line or, more frequently, using two separate batch scripts.

The SQData Parser (SQDPARSE) creates a compiled or parsed command script from a source script file.

Syntax

SQDPARSE <engine>.sqd <engine>.prc [LIST=ALL|SCRIPT] [<parm1> <parm2> … <parmn>] [> <engine>.rpt]

Example

A batch script containing a single line similar to the following is frequently used to execute the parser:

sqdparse ./ENGINE/SQLTOSQL.sqd ./ENGINE/SQLTOSQL.prc ENGINE=SQLTOSQL HOST=winsvr1 PORT=2626 > ./ENGINE/SQLTOSQL.rpt

Note: In the Parser example the working directory is one level up from the directory containing the actual engine script, "ENGINE". References in the script to the location of SQL Server DDL table descriptions, called procedures, etc. must be sensitive to their location in the directory hierarchy, i.e. ./<directory_name>/<file_name>. In this case probably ./MSQLDDL (note the single dot).

The SQData Engine uses the parsed command script. The engine can be executed directly at the command line, or using a batch script, or started using the Daemon.

Syntax

SQDATA <engine>.prc > <engine>.out

or using the sqdmon start command along with a properly configured daemon:

SQDMON start ///<engine_alias> --service=2626

Example

A simple batch script is frequently used to execute the engine:

sqdata ./ENGINE/SQLTOSQL.prc > SQLTOSQL.out

or using the sqdmon start command along with a properly configured daemon:

sqdmon start ///SQLTOSQL --service=2626

The corresponding entry in the Controller Daemon's sqdagents.cfg file for this Engine is:

[SQLTOSQL]
type=engine
program=sqdata.exe
args=./ENGINE/SQLTOSQL.prc
working_directory=c:/sqdata/demo
stdout_file=/SQLTOSQL.out
stderr_file=/SQLTOSQL.out
auto_start=no


Note: In the Engine example the working directory, c:/sqdata/demo, is one level up from the directory containing the actual engine script, "ENGINE". References to the parsed script must be to the location of the parsed script.

Configure Engine Controller Daemon

The Controller Daemon manages secure communication between SQData components running on other platforms, enabling command and control features as well as use of the browser-based Control Center. See the SQData Secure Communications Guide for more details regarding the Controller Daemon's role in security and how it is accomplished.

The primary difference between a Controller Daemon on Capture platforms and Engine-only platforms is that the Authorized Key File of the Engine Controller Daemon need only contain the Public keys of the Control Center and/or users of the SQDMON utility on other platforms. Setup and configuration of the Engine Controller Daemon, SQDAEMON, includes:

· Generate Public / Private keys

· Creating the Authorized Key File

· Create the Access Control List

· Create the Engine Agent Configuration File

· Preparing the Controller Daemon JCL or shell script

Example

A sample sqdagents.cfg file for a Controller Daemon containing the Engine DB2TOORA follows. Changes are not known to the daemon until the configuration file is reloaded, using the SQDMON Utility, or the sqdaemon process is stopped and started.

acl=<SQDATA_VAR_DIR>/daemon/cfg/acl.cfg
authorized_keys=<SQDATA_VAR_DIR>/daemon/nacl_auth_keys
identity=<SQDATA_VAR_DIR>/id_nacl
message_file=../logs/daemon.log
service=2626

[DB2TOORA]
type=engine
program=sqdata
args=DB2TOORA.prc
working_directory=<SQDATA_VAR_DIR>
message=<SQDATA_VAR_DIR>
stderr_file=<SQDATA_VAR_DIR>/DB2TOORA.rpt
stdout_file=<SQDATA_VAR_DIR>/DB2TOORA.rpt
auto_start=yes

See the Setup Capture Controller Daemon section for a detailed description of these activities.


Component Verification

This section describes the steps required to verify that the SQData SQL Server Log Reader Data Capture Agent is working properly. If this is your first implementation of the SQData SQL Server Capture, we recommend a review with SQData support before commencing operation.

Start Controller Daemon

Starting the Controller Daemon requires only the execution of the sqdaemon program or the shell script configured earlier. Implementing changes made to any of the Controller Daemon's configuration files (acl.cfg, sqdagents.cfg, nacl_auth_keys) can be accomplished using the SQDMON Utility reload command without killing and re-starting the Controller Daemon.

Start SQL Server Log Reader Capture Agent

The Capture agent is both configured and started using the SQDCONF program. The following commands Mount (execute), apply all changes made to the configuration (CAB) file, and start the actual capture.

$ sqdconf --mount <SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab
$ sqdconf --apply <SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab
$ sqdconf --start <SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab

It is important to realize that the message from SQDCONF indicating that the start command was executed successfully does not necessarily mean that the agent is still in started state. It only means that the start command was accepted by the capture agent and that the initial setup steps necessary to launch a capture thread were successful. These preparation steps involve connecting to SQL Server and setting up the necessary environment to start a log mining session.

The capture agent runs as a daemon, so it does not have a terminal window to emit warnings or errors. Such messages are instead posted in the system log. The daemon name for the SQL Server capture is sqdmssqlc. If there is a mechanism in place to monitor the system log, it is a good idea to include the monitoring of sqdmssqlc messages. This will allow you to detect when a capture agent is mounted, started, or stopped, normally or because of an error. It will also contain, for most usual production error conditions, some additional information to help diagnose the problem.
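A simple way to add that monitoring is to filter the system log for the capture daemon's name. In this runnable sketch the log lines are fabricated placeholders written to a temporary file; on a live system, point LOG at the actual syslog destination (for example /var/log/messages) and the message format will differ.

```shell
#!/bin/sh
# Sketch: scan a copy of the system log for capture-agent messages.
# Sample lines below are fabricated placeholders, not real output.
LOG=$(mktemp)
printf '%s\n' \
  'May 01 12:00:01 host sqdmssqlc[4242]: capture started' \
  'May 01 12:00:05 host sshd[99]: session opened' > "$LOG"
matches=$(grep -c 'sqdmssqlc' "$LOG")
grep 'sqdmssqlc' "$LOG"
rm -f "$LOG"
```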

Start Engine

Starting the Engines on the target platform requires only the execution of the SQDATA program with the appropriate parsed Engine script, in our example SQLTOSQL.prc. Alternatively, the Engine can be started using the Controller Daemon.

$ sqdata SQLTOSQL.prc

SQData SQL Test Transactions

1. Execute an online transaction or SQL Server SQL statement using SQL Server Management Studio, updating the candidate tables that are to be captured.

2. Execute a program, updating the candidate tables that are to be captured.

3. Examine the results in the target datastore using the appropriate tools.


Operating Scenarios

This section covers several operational scenarios likely to be encountered after the initial SQL Server Capture has been placed into operation, including changes in the scope of data capture and additional use or processing of the captured data by downstream Engines.

One factor to consider when contemplating or implementing changes in an operational configuration is the implementation sequence. In particular, processes that will consume captured data must be tested, installed and operational before initiating capture in a production environment. This is critical because the volume of captured data can overwhelm transient storage if the processes that will consume the captured data are not enabled in a timely fashion.

While the examples in this section will generally proceed from capture of changed data to the application at the target by an Engine, it is essential to know what the expected results are before configuring the data capture. For example:

1. A new column populated in an existing table by an existing Engine from an existing SQL Server table; all the changes would be made at the target side where the Engine runs.

2. New tables to be populated from new SQL Server tables maintained by new transactions will require configuration changes from one end to the other.

When working through the examples below, be prepared to carry ideas from one example to the next when thinking about how to implement your own new scenario.

Some common scenarios encountered after an initial implementation has proven successful include:

· Capture New SQL Server Data

· Send Existing Data to New Target

· Filter Captured Data

· Straight Replication

· Active/Active Replication

Capture New SQL Server Data

Whether initiating Change Data Capture for the first time, expanding the original selection of data or including a new source of data from the implementation of a new application, the steps are very similar. The impact on new or existing Capture and Apply processes can be determined once the source of the data is known: precisely what data is required from the source, whether business rules require filters or data transformations, and where the target of the captured data will reside.

Example:

Our example starts with the addition of a new table to an existing SQL Server database that will be a new source for an existing Engine.

Solution:

In order to capture the changes for SQL Server tables, each table must be configured for capture. Review the steps required to configure the table for Publication.


Next, the Capture configuration must be updated to include the new table. The SQDCONF utility will be used to add a table to the capture that will be sent to the existing target Engine.

$ sqdconf add /<SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab --key=<new_table_name> --datastore=cdc:////SQLTOSQL --active

Finally, determine how the new data will affect the existing Engine and modify the Engine script accordingly. See the Engine Reference for all the options available through Engine Script commands and functions.

Note:

Whenever a new source is to be captured for the first time, consideration must be given to the existing state of the source datastore when capture is first initiated. The most common situation is that the source already contains data that would have qualified to be captured and applied to the target, if the CDC and Apply process had already been in place.

Depending on the type of source and target datastore, there are two solutions that can ensure source and target are in sync when Change Data Capture is implemented:

1. While utilities may be available to unload the source datastore and load the target datastore, they will generally be restricted to source and target datastores of the same type (RDBMS, IMS, etc.). Those utilities generally also require the source and target datastores to have identical structure (columns, fields, etc.). SQData recommends the use of utility programs if those two constraints are acceptable.

2. If, however, the source and target are not identical, SQData recommends that a special version of the already tested Apply Engine script be used for the initial load of the target datastore. This approach has the additional benefit of providing a mechanism for "refreshing" target datastores if for some reason an "out of synchronization" situation occurs because of an operational problem or a business rule change affecting filters or transformations. Contact SQData support to discuss both the benefits and techniques for implementing, and perhaps more importantly maintaining, a load/refresh Engine solution.

Send Existing SQL Server Data to New Target

Our example began with the addition of a new SQL Server table to the data capture process and sending the captured data to an existing Engine. Often however, a change results from recognition that a new downstream process or application can benefit from the ability to capture changes to existing data. Whether the scenario is event processing or some form of straight replication, the implementation process is essentially the same.

Our example continues with the addition of a new Engine (SQLTOORA) that will populate relational tables with columns corresponding to one or more fields from the same SQL Server source. The new Engine will be running on a second platform, Linux, not previously used by an SQData Agent, and will be updating Oracle tables.

While no changes are required to the Storage agent to support the new Engine, the Capture agent will require configuration changes and the new Engine must be added.


Reconfigure Log Reader Capture Agent

One or more output Datastores, also referred to as Subscriptions, may be specified for each table in the configuration file. Once the initial configuration file has been created, Datastores can be added or removed using the SQDCONF modify command.

The following example adds a subscription for a second Target Engine (SQLTOORA) for changes to the sqdata.dept table using TCP/IP for communication.

$ sqdconf modify /<SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab --key=<sqdata.dept> --datastore=cdc:////SQLTOORA --active

Note, the configuration file changes must be followed by an apply in order for the capture agent to recognize the updated configuration file.

Add New Engine

Adding a new Engine on the target platform requires only the sqdmon start command once the platform's Engine Controller Daemon has also been configured. In our example the new Engine is named SQLTOORA.

The actions performed by an Engine are described by an Engine Script. In this example the script will specify only simple mapping of columns in a DDL description to columns in the target relational table. See the Engine Reference for all the options available through Engine Script commands and functions.

While not as simple as straight replication, due to potentially different names for corresponding columns, the most important aspect of the script will be the DATASTORE specification for the source CDC records:

DATASTORE cdc://<host><:port>/<capture_agent_alias>/<engine_agent_alias>

Keyword and Parameter Descriptions

<host> Location of the Capture Controller Daemon.

<:port> Optional, required only if a non-standard port is specified by the service parameter in the Controller Daemon configuration.

<capture_agent_alias> Must match the [<capture_agent_alias>] section defined in the Controller Daemon sqdagents.cfg configuration file. The Capture agent manages transient data routing for multiple SQL Server tables. The capture_agent_alias will be used again in the Engine script to specify the source DATASTORE and also on sqdmon agent display commands. In our example we have used msqlcdc.

<engine_agent_alias> Must match the alias provided in the "sqdconf add" command "--datastore" parameter when the Storage agent is configured to support the new target Engine.

Example

DATASTORE cdc://<host><:port>/msqlcdc/SQLTOORA OF UTSCDC AS CDCIN
          DESCRIBED BY <schema_name>.<new_table_name>
;
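The cdc:// URL pattern above is easy to mistype by hand. As an illustration only (a hypothetical helper, not part of SQData), the following sketch assembles a source DATASTORE URL from its parts, following the <host><:port>/<capture_agent_alias>/<engine_agent_alias> pattern described above. Note how omitting the host and capture alias yields the cdc:////<engine_agent_alias> form used in sqdconf --datastore parameters.

```python
def cdc_url(capture_alias, engine_alias, host="", port=None):
    """Assemble a datastore URL per the documented pattern
    cdc://<host><:port>/<capture_agent_alias>/<engine_agent_alias>.
    host and port are optional; empty components collapse into the
    shorter cdc:///... and cdc:////... forms seen in the manual."""
    authority = host or ""
    if port is not None:
        authority += ":%d" % port
    return "cdc://%s/%s/%s" % (authority, capture_alias, engine_alias)

# Full form, as used in an Engine script (host/port are assumptions):
url = cdc_url("msqlcdc", "SQLTOORA", host="dbhost", port=2626)
```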

Update Capture Controller Daemon

In our example, the Data Capture Agent and its controlling structures existed prior to the addition of this new Engine. Consequently, the only modification required to the Capture Controller Daemon is the addition of the new Engine's Public Key to the Authorized Key File.

Applying the Configuration File changes

Once again, changes made to the Agent Configuration (CAB) file are not effective until they are applied. For example, let's imagine that the new SQL Server tables will be rolled out next weekend:

If changes were effective immediately, or automatically at the next start, then these changes could not be performed until the production capture is stopped for the migration during the weekend. Otherwise, the risk exists that the capture agent may go down for unrelated production issues, and the new change would be activated prematurely.

Forcing a distinct and explicit apply step ensures that such changes can be planned and prepared in advance, without putting the current production replication in jeopardy. This allows capture agent maintenance to be done outside of the critical upgrade path.

In order to apply changes, the agent must first be stopped. This operation in effect pauses the agent task and permits the additions and/or modifications to the configuration to be applied. Once the agent is restarted, the updated configuration will be active.

$ sqdconf stop /<SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab
$ sqdconf apply /<SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab
$ sqdconf start /<SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab
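The stop/apply/start ordering matters: apply is only honored while the agent is stopped. A hypothetical wrapper (illustrative Python, not SQData code) that enforces the sequence and aborts at the first failing step; the command runner is injected so the sequence can be exercised without a live sqdconf installation.

```python
import subprocess

def apply_config(cab_file, run=None):
    """Stop the capture, apply pending CAB-file changes, then restart.
    Returns (failed_step, exit_code); (None, 0) means all succeeded.
    `run` defaults to invoking the real sqdconf; inject a stub to test."""
    if run is None:
        run = lambda args: subprocess.call(args)  # returns the exit code
    for action in ("stop", "apply", "start"):
        rc = run(["sqdconf", action, cab_file])
        if rc != 0:
            return action, rc  # stop the sequence at the first failure
    return None, 0
```

The injected runner is a deliberate design choice: it lets an operations team rehearse the stop/apply/start choreography in tests before pointing it at a production CAB file.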

Filter Captured Data

The introduction of Data Capture necessarily adds some overhead to the processing of the originating transaction by the source database / file manager. For that reason it is customary to perform as little additional processing of the data during the actual capture operation as possible. Filtering data from the capture process is therefore broken into two types:

Capture Side Filters

In addition to controlling which tables are captured, it is also possible to add and remove items to be excluded from capture based on other parameters, including User. The basic syntax varies slightly based on the current state of the configuration and can be specified one or more times per command line:

SQDCONF Create state syntax: --exclude-user=<variable>

SQDCONF Modify state syntax: --add-excluded-user=<variable> | --remove-excluded-user=<variable>

The only filter available in the SQL Server capture is the value of the user-id associated with the transaction or program making SQL Server data changes.
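Because the flag name depends on whether the configuration is being created or modified, scripted setups sometimes generate the arguments. A sketch (hypothetical helper, not SQData code) mapping the two states to the flags documented above:

```python
def exclude_user_flags(users, state="create", remove=False):
    """Build sqdconf user-exclusion flags for a list of user-ids.
    'create' uses --exclude-user; 'modify' uses --add-excluded-user
    (or --remove-excluded-user when remove=True)."""
    if state == "create":
        flag = "--exclude-user"
    elif state == "modify":
        flag = "--remove-excluded-user" if remove else "--add-excluded-user"
    else:
        raise ValueError("state must be 'create' or 'modify'")
    return ["%s=%s" % (flag, u) for u in users]
```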

The following example adds a user exclusion for all Engines to prevent cyclic updates in an Active/Active Replication configuration:

$ sqdconf modify /<SQDATA_VAR_DIR>/cdcstore/msqlcdc.cab --add-excluded-user=<sqdata_user>

Notes:

· The modification to the configuration file must be applied to have the capture agent recognize the updated configuration.

· In the case where active/active replication is to be deployed, there must be an --exclude-user=<sqdata_user> where <sqdata_user> must be the SQL Server user under which the Engine runs.

Engine Side Filters

Engine side filters support both record and field level evaluation of data content, including cross-reference to external files that were not part of the original data capture. See the SQData User Guide for all the options available through Engine Script commands and functions.

SQL Server Straight Replication

Simple replication is often used when a read-only version of an existing datastore is needed or a remote hot backup is desired. The SQData Apply Engine provides an easy to implement simple replication solution requiring very few instructions. It will also automatically detect out-of-sync conditions that have occurred due to issues outside of SQData's control and perform compensation by converting updates to inserts (if the record does not exist in the target), inserts to updates (if the record already exists in the target), and dropping deletes if the record does not exist in the target.

Note, this section assumes two things: First, that the environment on the target platform fully supports the type of datastore being replicated. Second, that an SQData Change Data Capture solution for the source datastore type has been selected, configured and tested.


Target Implementation Checklist

This checklist covers all the tasks required to prepare the Target operating environment and configure Straight SQL Server Replication. It assumes two things: first, that a SQL Server system exists on the target platform; second, that a SQL Server Log Reader Capture has been configured and tested on the Source platform, see that Implementation Checklist.

# Task

Prepare Environment

1 Perform the base product installation on the Target System

2 Identify/Authorize/Prepare Operating User(s)

3 Verify Execution Authorization of SQData components

4 Create SQData Variables Directories

Environment Preparation Complete

Engine Configuration Tasks

1 Create Application Directory Structure

2 Collect DDL for tables to be replicated and Create Target tables

3 Generate Public/Private Keys for Engine, Update Auth Key File on Source System

4 Configure and Parse Straight Replication Script

5 Prepare Engine Batch Script

Engine Configuration Complete

Verification Tasks

1 Start the SQL Server Capture agent and SQDAEMON on Source System

2 Start the Engine on Target System

3 Apply changes to the source tables using SQL Server Mgmt. Studio or other means.

4 Verify that changes were captured and processed by Engine

Verification Complete

The following sections focus on the Engine Configuration and communication with the Source platform Controller Daemon. Detailed descriptions of the other steps required to prepare the environment for SQData operation are described in previous sections.


Create Target Tables

Using SQL Server Management Studio or other means, create duplicates of the Source tables on the Target system.


Generate Engine Public / Private Keys

As previously mentioned, Engines usually run on a different platform than the Data Capture Agent. The Controller Daemon on the Capture platform manages secure communication between Engines and their Capture/Publisher Agents. Therefore a Public / Private Key pair must be generated for the Engine on the platform where the Engine is run. The SQDUTIL program must be used to generate the necessary keys and must be run under the user-id that will be used by the Engine.

Syntax

$ sqdutil keygen

On z/OS, JCL similar to the sample member NACLKEYS included in the distribution executes the SQDUTIL program using the keygen command and generates the necessary keys.

The Public key must then be provided to the administrator of the Capture platform so that it can be added to the nacl.auth.keys file used by the Controller Daemon.

Note, there should also be a Controller Daemon on the platform running Engines to enable command and control features and the browser based Control Center. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by an Engine may be the same pair used by its Controller Daemon.


Create Straight Replication Script

A Simple Replication script requires DESCRIPTIONs for each Source and Target table as well as either a straight mapping procedure for each table or use of the REPLICATE command, as shown in the sample script below. In the example, DDL is provided through external files for each table referenced as a GROUP. In the sample script, a CDCSTORE type capture and TCP/IP are used to capture and transport data to the target Apply Engine. The Main Select section contains only references to the Source and Target Datastore aliases and the REPLICATE command. Individual mapping procedures are not required in this case. See the Engine Reference for more details regarding the use of the REPLICATE command.

The sample script, SQLTOSQL, listed below together with DDL created by SQL Server Management Studio can be parsed and executed.

If you choose to exercise this script, which is based on two simple SQL Server tables, it will be necessary to create two copies of the DEPT and EMP tables as referenced in the script on the target system. Once that is complete, the script can be parsed and exercised.

--------------------------------------------------------------
-- SQL Server REPLICATION SCRIPT FOR ENGINE: SQLTOSQL
--------------------------------------------------------------
-- SUBSTITUTION VARS USED IN THIS SCRIPT:
--   %(ENGINE) - ENGINE / REPORT NAME
--   %(HOST)   - HOST OF Capture
--   %(PORT)   - TCP/IP PORT SQDAEMON
--   %(AGNT)   - Capture Agent alias in sqdagents.cfg
--------------------------------------------------------------
-- CHANGE LOG:
--------------------------------------------------------------
-- 2018/01/01: INITIAL RELEASE
--------------------------------------------------------------
RDBMS ODBC DEMO;
OPTIONS CDCOP('I','U','D');
--------------------------------------------------------------
-- DATA DEFINITION SECTION
--------------------------------------------------------------
---------------------------------
-- Source Descriptions
---------------------------------
BEGIN GROUP SOURCE_DDL;
DESCRIPTION MSSQL ./MSSQL_DDL/EMP AS S_EMP;
DESCRIPTION MSSQL ./MSSQL_DDL/DEPT AS S_DEPT;
END GROUP;
---------------------------
-- Target Data Descriptions
---------------------------
-- None required for Straight Replication
---------------------------
-- Source Datastore(s)
---------------------------
DATASTORE cdc://%(HOST):%(PORT)/%(AGNT)/%(ENGINE)
          OF UTSCDC
          AS CDCIN
          RECONNECT
          DESCRIBED BY GROUP SOURCE_DDL
;
---------------------------
-- Target Datastore(s)
---------------------------
DATASTORE RDBMS
          OF RELATIONAL
          AS TARGET
          FORCE
          QUALIFIER TGT
          DESCRIBED BY GROUP SOURCE_DDL FOR CHANGE
;
---------------------------
-- Variables
---------------------------
-- None required for Straight Replication
---------------------------
-- Procedure Section
---------------------------
-- None required for Straight Replication
--------------------------------------------------
-- Main Section - Script Execution Entry Point
--------------------------------------------------
PROCESS INTO TARGET
SELECT
{
--  OUTMSG(0,STRING(' TABLE=',CDC_TBNAME(CDCIN)
--                 ,' CHGOP=',CDCOP(CDCIN)
--                 ,' TIME=' ,CDCTSTMP(CDCIN)))
--  Source and Target Datastores must have the same table names
    REPLICATE(TARGET)
}
FROM CDCIN;


Prepare Engine Batch Script

The parsed replication Engine script SQLTOSQL will run on a Windows platform. A simple batch script can be created:

$ sqdata ./ENGINE/SQLTOSQL.prc > SQLTOSQL.out


Verify Straight Replication

Verification begins with the Capture Agent, and the specific steps depend on the type of Capture being used. Follow the verification steps described previously depending on which Capture has been implemented. Then start the Engine on the target system.

Using SQL Server Management Studio or other means, perform a variety of insert, update and delete activities against source tables. Then on the Target system, again using SQL Server Management Studio or other means, verify that the content of the target tables matches the source.

SQL Server Active/Active Replication

An overview of Active/Active Replication is provided in the Change Data Capture Guide. Implementing such a configuration for SQL Server is a two step process:

1. Creation of a single Simple Replication Engine script that can be reused on each system.

2. Creation of Capture configurations on each of the systems.

sqdconf add <cab_file_name> --key=<TABLE_NAME> --datastore=cdc:////<engine_agent_alias> --active | --inactive

Note: The SQL Server Log Reader Capture, by default, excludes from capture all updates made by an SQData Apply Engine. The reason for this is to avoid circular replication.
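The need for that exclusion can be pictured with a toy model (plain Python, unrelated to SQData internals): each captured change carries the user-id that made it, and the capture-side filter drops changes written by the Apply Engine's own SQL Server user, which breaks the replication loop. The user-ids and table name below are made up for the illustration.

```python
def capture(changes, excluded_users):
    """Toy capture-side filter: keep only changes NOT made by an
    excluded user-id (e.g. the Apply Engine's SQL Server user)."""
    return [c for c in changes if c["user"] not in excluded_users]

# Changes logged on system A: one from a real application user, and
# one written by the local Apply Engine replicating from system B.
log_a = [
    {"table": "sqdata.dept", "op": "U", "user": "appuser"},
    {"table": "sqdata.dept", "op": "I", "user": "sqdata_user"},
]

# Only the application change is re-captured and sent back to B; the
# engine's own write is filtered out, so it cannot cycle endlessly.
to_b = capture(log_a, excluded_users={"sqdata_user"})
```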


SQL Server Engine Considerations

This section describes some common considerations that apply to SQData Integration Engines.


SQL Server DDL

SQData Apply Engine scripts require a physical description of target datastores. While each platform and target datastore type has unique characteristics, SQData has been designed to reuse existing data description information. In the z/OS IMS database world that generally consists of copybooks containing the COBOL descriptions of the data. In DB2 and Oracle, the same SQL DDL used by a DBA to create a table can be used to define the database tables and columns of both source and target datastores. It is also possible to dynamically generate the DDL from Relational database management systems, ensuring that SQData Scripts use current table descriptions. SQL Server, however, has some unique characteristics that must be taken into consideration when generating the DDL for use by SQData.

The SQL Server Management Studio (SSMS) can be used to generate properly formed DDL that can be saved and used by SQData Scripts. The following screens walk through the process.

Begin by right-clicking on your database in the Object Explorer.


Select Tasks and then Generate Scripts from the cascading menus.

Choose the Database to Publish (In this example, AdventureWorks) and click Next.


Change all script options to False except the following, which should be set to True:

1. Script Create

2. Script Owner

Click Next.


Select Tables and click Next.


Select the tables for which you want to generate DDL; in this case all the Person related tables were selected. Click Next.

Specify the name of the script, in this example script.sql, select Single file and ANSI text. Click Next.

Confirm your selections and click Finish.


A progress window will open for you to monitor the script generation process.

Confirm that DDL creation for each object was successful and then click Close; the DDL has been created. Open the file created, in this example script.sql, using any editor.


The DDL should be saved to a file so that it can later be used in SQData Studio for constructing the replication script.


SQL Server Table Names

Fully qualified table names on SQL Server frequently contain very long fully qualified schemas. In order to shorten the name when it is used to qualify columns in SQData scripts, an ALIAS is often used.

In the AdventureWorks database, for example, one table is named HumanResources.EmployeeDepartmentHistory, which contains the column StartDate. The ALIAS parameter can be used to shorten the table name to something more manageable using the syntax below:

DESCRIPTION MSSQL ALIAS(
HumanResources.EmployeeDepartmentHistory_StartDate AS HR.Dept_StartDate
HumanResources.EmployeeDepartmentHistory_EmployeeID AS HR.Dept_EmployeeID
HumanResources.EmployeeDepartmentHistory_DeptID AS HR.Dept_DeptID
HumanResources.EmployeeDepartmentHistory_ShiftID AS HR.Dept_ShiftID
)

Then in a subsequent procedure statement in the script a reference to StartDate could look like this:

If HR.Dept_StartDate < V_Current_Year

Rather than:

If HumanResources.EmployeeDepartmentHistory_StartDate < V_Current_Year
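The effect of the ALIAS mapping can be pictured as a simple rename table applied to column references (illustrative Python only; the real substitution is performed by the SQData script parser):

```python
# Alias table, mirroring the DESCRIPTION MSSQL ALIAS(...) block above.
ALIASES = {
    "HumanResources.EmployeeDepartmentHistory_StartDate":  "HR.Dept_StartDate",
    "HumanResources.EmployeeDepartmentHistory_EmployeeID": "HR.Dept_EmployeeID",
}

def shorten(column_ref):
    """Return the short alias for a fully qualified column reference,
    or the reference unchanged if no alias is defined."""
    return ALIASES.get(column_ref, column_ref)
```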

SQL Server Operational Issues

This section describes some common operational issues that may be encountered while using the SQL Server capture agent.

· Starting the capture agent

· Determining and manually setting the Initial Starting point of the capture agent

· Displaying the status of the capture agent

· Displaying the status of the storage agent

· Stopping the capture agent

· Manual Log Truncation

If this is your first implementation of the SQData SQL Server Capture Agent, we recommend a review with SQData support before commencing operation.


Starting the Capture Agent

The first time the Capture Agent is started, both the sqdconf apply and sqdconf start commands should be issued. After the initial creation, apply should not be used unless all changes made since the agent was last stopped are intended to take effect immediately upon the start. The purpose of apply is to make it possible to add/modify the configuration while preparing for an implementation of changes without affecting the current configuration.

Note, apply and start can, and frequently will, be separated into different SQDCONF command scripts.

With the agent mounted, the changes would be applied, and then the capture agent could be started.

Syntax

sqdconf --apply | --start <capture CAB file name>

Example

sqdconf c:/sqdata/cdcstore/mssqlcdc.cab/mssqlcdc.cab --apply
sqdconf c:/sqdata/cdcstore/mssqlcdc.cab/mssqlcdc.cab --start

It is important to realize that the return code and message from sqdconf indicating that the start command was executed successfully does not necessarily mean that the agent is still in started state. It only means that the start command was accepted by the capture agent and that the initial setup necessary to launch a capture thread was successful. These preparation steps involve connecting to SQL Server and setting up the necessary environment to start a log mining session.

The capture agent runs as a daemon, so it does not have a terminal window to emit warnings or errors. Such messages are instead posted in the system log. The daemon name for the SQL Server capture is sqdmssqlc, NOT sqdconf. If there is a mechanism in place to monitor the system log, it is a good idea to include the monitoring of sqdmssqlc messages. This will allow you to detect when a capture agent is mounted, started, or stopped, normally or because of an error. It will also contain, for most usual production error conditions, some additional information to help diagnose the problem.


Determining the Initial Start Point

When the capture agent is started for the very first time, it will start mining from the beginning of the active transaction log. It is often preferable, however, to specify the point-in-time at which capture starts. That is accomplished by querying the transaction log using fn_dblog and then selecting a starting LSN based on the Begin Time of a logged transaction, to ensure that the capture agent starts on a transaction boundary that does not contain any in-flight units-of-work.

Executing the fn_dblog Command

If you choose to manually set the start LSN, the recommended approach is to query fn_dblog (the contents of the active transaction log) to determine the LSN where you want to start the capture:

select "Current LSN", Operation, AllocUnitName, "Begin Time", Left(Description,25)
from fn_dblog(null,null)
where Operation = 'LOP_BEGIN_XACT'
   OR (Description = 'REPLICATE' OR Description = 'COMPENSATION');

You can also specify the table name(s) you want to capture to determine a more precise starting point:

select "Current LSN", Operation, AllocUnitName, "Begin Time", Left(Description,25)
from fn_dblog(null,null)
where Operation = 'LOP_BEGIN_XACT'
   OR (Description = 'REPLICATE' OR Description = 'COMPENSATION'
       AND (allocunitname = 'SQDATA.DEPT_TGT' OR allocunitname = 'dbo.table_2'));

The results of the sample query are shown below. If you wanted to start capture at 6:00 PM (18:00:00) on 2016/04/29, you would set the start LSN to 00000064:0000017d:0001.

Current LSN             Operation        AllocUnitName    Begin Time               (No column name)
00000064:00000166:0002  LOP_BEGIN_XACT   NULL             2016/04/29 15:32:17:447  user_transaction;0x010500
00000064:00000166:0003  LOP_DELETE_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000166:0004  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000166:0005  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:0000016c:0001  LOP_BEGIN_XACT   NULL             2016/04/29 16:17:38:763  user_transaction;0x010500
00000064:0000016c:0002  LOP_DELETE_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:0000016c:0005  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:0000016e:0004  LOP_BEGIN_XACT   NULL             2016/04/29 16:18:11:037  user_transaction;0x010500
00000064:0000016e:0005  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:0000016e:0006  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:00000170:0001  LOP_BEGIN_XACT   NULL             2016/04/29 16:28:21:617  user_transaction;0x010500
00000064:00000170:0002  LOP_DELETE_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000170:0003  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000170:0004  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:00000176:0001  LOP_BEGIN_XACT   NULL             2016/04/29 17:06:20:037  user_transaction;0x010500
00000064:00000176:0002  LOP_DELETE_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000176:0003  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000176:0004  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:00000178:0001  LOP_BEGIN_XACT   NULL             2016/04/29 17:17:59:373  user_transaction;0x010500
00000064:00000178:0002  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000178:0003  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:00000179:0001  LOP_BEGIN_XACT   NULL             2016/04/29 17:47:15:783  user_transaction;0x010500
00000064:00000179:0002  LOP_DELETE_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000179:0003  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:00000179:0004  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:0000017b:0001  LOP_BEGIN_XACT   NULL             2016/04/29 17:47:23:623  user_transaction;0x010500
00000064:0000017b:0002  LOP_DELETE_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:0000017b:0003  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:0000017b:0004  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
00000064:0000017d:0001  LOP_BEGIN_XACT   NULL             2016/04/29 18:02:49:753  user_transaction;0x010500
00000064:0000017d:0002  LOP_DELETE_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:0000017d:0003  LOP_INSERT_ROWS  SQDATA.DEPT_TGT  NULL                     REPLICATE
00000064:0000017d:0004  LOP_COMMIT_XACT  NULL             NULL                     REPLICATE
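LSNs of the form xxxxxxxx:xxxxxxxx:xxxx are triplets of hexadecimal numbers (VLF sequence, log block, slot) and compare component-wise. The manual selection described above can be sketched in plain Python (an illustration only, with the timestamp format taken from the sample output): parse the LSNs and pick the first LOP_BEGIN_XACT at or after the target time.

```python
def parse_lsn(lsn):
    """Split an LSN such as '00000064:0000017d:0001' into a tuple of
    ints; the three parts are hexadecimal, so tuples compare in true
    log order."""
    return tuple(int(part, 16) for part in lsn.split(":"))

def pick_start_lsn(rows, not_before):
    """From (lsn, operation, begin_time) rows, return the LSN of the
    first LOP_BEGIN_XACT whose Begin Time is at or after not_before,
    so capture starts on a transaction boundary."""
    for lsn, op, begin_time in sorted(rows, key=lambda r: parse_lsn(r[0])):
        if op == "LOP_BEGIN_XACT" and begin_time and begin_time >= not_before:
            return lsn
    return None

# Three rows taken from the sample fn_dblog output above; fixed-width
# timestamps compare correctly as strings.
rows = [
    ("00000064:0000017b:0001", "LOP_BEGIN_XACT", "2016/04/29 17:47:23:623"),
    ("00000064:0000017d:0001", "LOP_BEGIN_XACT", "2016/04/29 18:02:49:753"),
    ("00000064:0000017d:0002", "LOP_DELETE_ROWS", None),
]
start = pick_start_lsn(rows, "2016/04/29 18:00:00:000")
```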

Manually Setting the Capture Start Point

Once the desired LSN has been determined, the capture can be started (re-started) from the selected point in time.

Syntax

Using sqdconf, Stop the capture (if it is running)

sqdconf stop <capture CAB file name>

Using sqdconf, Re-Start the capture using the --safe-restart parameter

sqdconf start <capture CAB file name> --safe-restart=<lsn>

Example

sqdconf start c:/sqdata/cdcstore/mssqlcdc.cab/mssqlcdc.cab --safe-restart=00000064:0000017d:0001


Displaying Capture Agent Status and Statistics

Capture agents keep track of statistical information for the last session and for the lifetime of the configuration file. These statistics can be accessed with the display action of sqdconf.

Syntax

sqdconf display <capture CAB file name>

Example

sqdconf display c:/sqdata/cdcstore/mssqlcdc.cab

SQDF901I Configuration file : c:\sqdata\cdcstore\mssqlcdc.cab
SQDF902I Status : Not Mounted
SQDF903I Configuration key : cab_E2FF1C6452827D70
SQDF904I Allocated entries : 31
SQDF905I Used entries : 3
SQDF906I Active Database : GFS
SQDF907I Start Log Point : 0x0
SQDF908I Last Log Point : 0000002d:0000009c:0004
SQDF940I Last Log Timestamp :
SQDF987I Last Commit Time : 2016-04-09 08:55:49.556666 (1e0148)
SQDF981I Safe Restart Point : 0000002d:0000009c:0004
SQDF986I Safe Remine Point : 0x0
SQDF910I Active User : (null)
SQDF913I Fix Flags : RETRY
SQDF914I Retry Interval : 30
SQDF919I Active Flags : CDCSTORE,RAW LOG
SQDF915I Active store name : c:\sqdata\cdcstore\mssqlcdc_store.cab
SQDF916I Active store id : cab_EB9774D952F8C632

SQDF920I Entry : # 0
SQDF930I Key : SQDATA.DEPT_TGT
SQDF923I Active Flags : ACTIVE
SQDF928I Last Log Point : 0000002d:0000009a:0004
SQDF950I session # insert : 3687
SQDF951I session # delete : 3673
SQDF952I session # update : 1855
SQDF960I cumul # of insert : 14748
SQDF961I cumul # of delete : 14692
SQDF962I cumul # of update : 7411
SQDF925I Active Datastore : cdc:///mssqlcdc/MSQLAPPL

SQDF920I Entry : # 1
SQDF930I Key : SQDATA.EMP_TGT
SQDF923I Active Flags : ACTIVE
SQDF928I Last Log Point : 0000002d:0000009a:0004
SQDF950I session # insert : 0
SQDF951I session # delete : 0
SQDF952I session # update : 0
SQDF960I cumul # of insert : 0
SQDF961I cumul # of delete : 0
SQDF962I cumul # of update : 0
SQDF925I Active Datastore : cdc:///mssqlcdc/MSQLAPPL


If a captured log record belongs to a transaction for which the begin-transaction record has not been seen by the capture agent, the record is counted as an orphan record. If the transaction was committed before the start LSN point, or if the transaction is rolled back, the orphaned records of that transaction are voided (that is, no longer counted as orphan records). Therefore it is possible that the statistics, immediately after a restart, show some orphan records but do not show them later.

SQDF920I Entry : # 2
SQDF930I Key : cdc:///mssqlcdc/MSQLAPPL
SQDF842I Is connected : No
SQDF932I Ack Log Point : 0000002d:0000009a:0004
SQDF843I Last Connection : 2016-04-09 13:54:56
SQDF955I session # bytes : 165
SQDF953I session # records : 1
SQDF954I session # txns : 1
SQDF965I cumul # bytes : 165
SQDF963I cumul # records : 1
SQDF964I cumul # txns : 1

SQDC017I sqdconf(pid=23380) terminated successfully

The Last Log Point for each individual entry of the configuration indicates the LSN of the commit point of the most recent transaction that impacted that table. The rest of the statistics are fairly self-explanatory.
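Because the per-table counters shown above are cumulative, the apply rate for a table can be derived by differencing two display snapshots taken some seconds apart. A hedged sketch (the counter names follow the sample output; the sampling approach is ours):

```python
def change_rate(prev, curr, seconds):
    """Rows/second applied to a table between two snapshots of its
    cumulative insert/delete/update counters from sqdconf display."""
    keys = ("cumul # of insert", "cumul # of delete", "cumul # of update")
    delta = sum(curr[k] - prev[k] for k in keys)
    return delta / seconds

# e.g. two snapshots of SQDATA.DEPT_TGT taken 60 seconds apart
prev = {"cumul # of insert": 14748, "cumul # of delete": 14692,
        "cumul # of update": 7411}
curr = {"cumul # of insert": 15348, "cumul # of delete": 14692,
        "cumul # of update": 7711}
print(change_rate(prev, curr, 60.0))  # 15.0 rows/second
```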


Displaying Storage Agent Statistics

The storage agent maintains statistics about the raw logs it has mined. These statistics can be very useful to determine if the storage agent is sized correctly. If the capture agent is mounted, the storage statistics can be obtained with the following command:

Syntax

sqdconf display <cdcstore CAB file name>

Example

sqdconf display c:/sqdata/cdcstore/mssqlcdc_store.cab

SQDF801I Configuration file : mssqlcdc_store.cab
SQDF802I Configuration key : cab_5C720E2301C259F0
SQDF820I High Water Mark : 49133512
SQDF821I Low Water Mark : 46137544
SQDF850I Session Statistics -
SQDF851I Txn Max Record : 99033
SQDF852I Txn Max Size : 42386124
SQDF853I Txn Max Log Range : 1673
SQDF855I Max In-flight Txns : 0
SQDF856I # Txns : 2321
SQDF857I # Effective Txns : 1
SQDF861I # Commit Records : 1
SQDF858I # Rollbacked Txns : 0
SQDF859I # Data Records : 99033
SQDF860I # Orphan Data Records : 0
SQDF862I # Rollbacked Records : 0
SQDF863I # Compensated Records : 0
SQDF864I # Spilled Txns : 22
SQDF865I # Re-Spilled Txns : 21
SQDF866I # Orphan Txns : 0
SQDF870I Life Statistics -
SQDF871I Max Txn Record : 99033
SQDF872I Max Txn Size : 42386124
SQDF873I Max Txn Log Range : 1673
SQDF875I Max In-flight Txns : 0
SQDF876I # Txns : 2333
SQDF877I # Effective Txns : 2
SQDF881I # Commit Records : 1
SQDF878I # Rollbacked Txns : 0
SQDF879I # Data Records : 198066
SQDF880I # Orphan Data Records : 0
SQDF882I # Rollbacked Records : 0
SQDF883I # Compensated Records : 0
SQDF884I # Spilled Txns : 44
SQDF885I # Re-Spilled Txns : 42
SQDF886I # Orphan Txns : 0
SQDC017I sqdconf terminated successfully

Some fields give an indication of the storage needed for your workload.

· Txn Max Record: This indicates the maximum number of records contained in any given transaction. Here the biggest transaction had 99033 records.


· Txn Max Size: This is the maximum size of the payload associated with any given transaction. Here the total amount of data carried by the biggest transaction we’ve seen was a little more than 40MB.

· Txn Max Log Range: This indicates the largest difference in LSN from the start to the end of a transaction.
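To evaluate these sizing fields programmatically, the `name : value` lines of the display output can be parsed into a dictionary. A sketch under the assumption that every statistic line follows the `SQDFnnnI name : value` shape shown above (the function name is ours):

```python
import re

def parse_sqdconf_stats(output):
    """Map each 'SQDFnnnI name : value' line to {name: value},
    converting purely numeric values to int."""
    stats = {}
    for line in output.splitlines():
        m = re.match(r"SQDF\d+I\s+(.+?)\s*:\s*(.*)$", line)
        if m:
            name, value = m.group(1).strip(), m.group(2).strip()
            stats[name] = int(value) if value.isdigit() else value
    return stats

sample = """SQDF871I Max Txn Record : 99033
SQDF872I Max Txn Size : 42386124
SQDF873I Max Txn Log Range : 1673"""
stats = parse_sqdconf_stats(sample)
print(stats["Max Txn Size"])  # 42386124
```

Comparing Max Txn Size against the transient storage pool size configured for CDCStore is one way to check whether the pool is large enough to hold your biggest transaction.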


Stopping the Capture Agent

The SQData SQL Server Capture Agent is stopped with the sqdconf program. A capture agent must be mounted and started before it can be stopped.

Syntax

sqdconf stop <capture CAB file name>

Example

sqdconf stop c:/sqdata/cdcstore/mssqlcdc.cab/mssqlcdc.cab

To terminate a capture agent that is stopped but still mounted, the command is:

Syntax

sqdconf unmount <capture CAB file name>

Example

sqdconf unmount c:/sqdata/cdcstore/mssqlcdc.cab/mssqlcdc.cab


Manual Log Truncation

While log truncation in a production environment should never occur until after the SQData SQL Server Capture Agent has captured the changed data AND it has been consumed by the target Integration Engine, there may be a reason to force truncation in a test scenario.

Note: This procedure should be used with extreme care. To quote Microsoft, "If you execute sp_repldone manually, you can invalidate the order and consistency of delivered transactions. sp_repldone should only be used for troubleshooting replication as directed by an experienced replication support professional."

More can be read about this and related commands by searching for "SQL Server sp_repldone" and at: http://msdn.microsoft.com/en-us/library/ms173775.aspx

Example

Using the same results seen in the section Verify Capture Configuration, we decide that we wish to truncate the transaction log following the INSERT at LSN 00000064:00000178:0002.

The SQL Server sp_repltrans stored procedure is executed at the Publisher on the publication database. In our example the command is entered as follows:

EXEC sp_repltrans;

The result looks like the following:

xdesid                 xact_seqno
0x00000064000001660002 0x00000064000001660005
0x000000640000016c0001 0x000000640000016c0005
0x000000640000016e0004 0x000000640000016e0006
0x00000064000001700001 0x00000064000001700004
0x00000064000001760001 0x00000064000001760004
0x00000064000001780001 0x00000064000001780003
0x00000064000001790001 0x00000064000001790004
0x000000640000017b0001 0x000000640000017b0004
0x000000640000017d0001 0x000000640000017d0004

While the LSNs are presented in a slightly different format, missing the colons and prefixed with "0x", the transaction we identified above falls in the range at:

0x00000064000001780001 0x00000064000001780003

Next, we construct and execute the following sp_repldone statement.

EXEC sp_repldone @xactid = 0x00000064000001780001, @xact_seqno = 0x00000064000001780003, @numtrans = 0, @time = 0, @reset = 0;

A subsequent sp_repltrans procedure execution results in the following, indicating that data up to and including 0x00000064000001780003 will indeed be truncated once the next transaction log backup is run:

xdesid                 xact_seqno
0x00000064000001790001 0x00000064000001790004
0x000000640000017B0001 0x000000640000017B0004
0x000000640000017D0001 0x000000640000017D0004
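The two LSN notations above (colon-delimited from sqdconf, 0x-prefixed from sp_repltrans) can be converted mechanically by joining or splitting the 8+8+4 hex-digit fields. A hedged sketch; the helper names are ours:

```python
def lsn_colon_to_hex(lsn):
    """'00000064:00000178:0002' -> '0x00000064000001780002'"""
    part1, part2, part3 = lsn.split(":")
    return "0x" + part1.zfill(8) + part2.zfill(8) + part3.zfill(4)

def lsn_hex_to_colon(lsn):
    """'0x00000064000001780002' -> '00000064:00000178:0002'"""
    h = lsn.removeprefix("0x").zfill(20)
    return f"{h[:8]}:{h[8:16]}:{h[16:]}"

print(lsn_colon_to_hex("00000064:00000178:0002"))  # 0x00000064000001780002
```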


Index


A
Active Directory 19
Active/Active 86
Active/Active Replication 57
affinity 86
ALIAS 95
--alias 53, 54
Apply 76

C
cdcstore 53, 54, 57
compensate 86
concurrency 86
Controller Daemon 61
cyclic update 86

D
daemon 72, 97
Datastores 75
DDL 88
Distributor 8

E
--exclude 76

F
fn_dblog 98, 100
Full Recovery Model 23

L
Log Reader 23
LSN 57, 100

M
markdone 57
Mixed Authentication 8
modify 75
mount 105

N
NACLKEYS 22

P
Private 22
Public 22
Public / Private key 22
Publication 24
Publisher 8

R
Recovery Model 8, 44

S
Safe Restart Point 57
Scripts 88
Sizing 45
Smart Apply 86
snapshot 24
SQDAEMON 61, 65
SQDATA 72
sqdconf 53, 54, 57, 72, 97, 101, 103, 105
sqdmssqlc 97
SQL Server 9002 error 50
start 72, 97
stop 105
Store and Forward 15
Subscriber 8

T
Transactional Publication 24
Transactional Replication 23, 24
Truncation 45
--type 53, 54

W
WebSphere MQ 53, 54
Windows Authentication 8, 19


Rights, Marks and Notices


Copyright SQData Corporation

This manual describes proprietary software features of SQData, version 3 (V3), © 2008 - 2019 SQData Corporation.

This printed material and the subject matter it presents are the property of SQData Corporation, and may not be reproduced in any form without prior authorization requested and received in writing from SQData Corporation.

Readers’ comments may be sent by e-mail to [email protected]

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to SQData Corporation, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written.

These examples have not been thoroughly tested under all conditions. SQData Corporation, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks and Service Marks

SQData is a trademark of the SQData Corporation in the United States and/or other countries.

The following terms are trademarks of the IBM Corporation in the United States, other countries, or both:

AIX

DB2 / DB2 LUW

IMS, IMS z/OS

iSeries i5/OS

WebSphere MQ

z/OS

Microsoft, Windows, Microsoft Access, SQL Server and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries.

UNIX is a defined trademark in the United States and/or other countries licensed exclusively through X/Open Company Limited.

Oracle is a trademark of the Oracle Corporation in the United States and/or other countries.

Java and all Java-based trademarks and logos are trademarks of Oracle Corporation in the United States and/or other countries.

Apache JSON and AVRO are Open systems projects supported by the Apache Software Foundation

Confluent Schema Registry is part of the Confluent Platform from Confluent Corporation

Other company, product or service names may be trademarks or service marks of others.


Notices

This information was developed for products and services offered in the U.S.A. SQData may not offer the products, services, or features discussed in this document in other countries. Contact the SQData Corporation for information on the products and services currently available in your geographic location. SQData Corporation may have patents or pending patent applications covering subject matter described in this manual. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

SQData Director of Licensing
SQData Corporation
4620 Sunbelt Drive Suite 202
Addison, Texas 75001
U.S.A.

The following paragraph does not apply to the UK or any other country where such provisions are inconsistent with local law:

SQDATA CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. SQData Corporation may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

SQData Director of Licensing
SQData Corporation
4620 Sunbelt Drive Suite 202
Addison, Texas 75001
U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases,payment of a fee.

The licensed program described in this information and all licensed material available for it are provided by SQData Corporation under terms of the SQData Program License Agreement or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-SQData products was obtained from the suppliers of those products, their published announcements or other publicly available sources. SQData has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-SQData
