

H2R User's Guide


Table Of Contents

Scope of the product
    Supported platforms
General concepts
    Aim of the product
    Unchanged user interface
    Performance
    Utilities
    Integration
    Automatic migration
    New structures automatic definition and creation
    Multiplatform usage
    Forced modularity
Preliminary elaboration area
    Internal tables
    Copies analyzing
    DBD/PSB analyzing
    IO programs generation
    User tables generation
    Conversion rules and load programs generation
    Data Unload Utility and data elaboration
    User programs elaboration
    IMS format compiling
Runtime execution area
    Kernel
    IO programs
H2R flow design
IMS flow design
Prerequisites
Installation
Environment creation
    Prerequisites
    UNIX User creation
    Environment definition
    Internal h2r tables creation
Internal h2r tables loading
    DBD and PSB analyzing
    Copies analyzing
Components generations
    IO programs generation
    Users tables scripts generation
    Conversion program generation
    Conversion rules definition
    Load programs generation
    User program elaboration
    IMS formats compiling
Data management
    Data unloading
    Data conversion
    User tables creation and data loading
User guide
    Flow
    DL/1 IMS/DB configuration
    DL1 BATCH runtime switches
H2R utilities
    UNIX scripts
    SQL scripts
    Ant utilities
    Miscellanea
Internal H2R tables description
    DBD_SEGM
    DBD_FIELD
    DBD_XDFLD
    DBD_LIST
    DBD_COPY
    PSB_PCB
    PSB_SENSEG
    COPY_FIELDS
Logical DBD elaboration: example
    Implosion
    Explosion
How to deal with a new PSB?
How to deal with a new physical DBD?
How to deal with a new search field, a new segment, a new secondary index?


Introduction

Scope of the product

The H2R product permits:

    migration of data from a hierarchical structure to a relational one; this may permit replacing the mainframe with UNIX systems

    development of new applications with the standard mainframe tools (such as CICS, IMS, etc.)

    development of applications on low-cost systems so that they can later be put into production on the mainframe

Supported platforms

    HP-UX      HP-UX 11.0, HP-UX 11i, HP-UX 11i v2 (11.23)    PA-RISC, Itanium2
    AIX        AIX 4, AIX 5.x                                  PowerPC
    Solaris    Solaris 7, Solaris 8, Solaris 9                 Sparc
    Linux      SuSE, RedHat                                    Intel x86


Product overview

General concepts

Aim of the product

The aim of the H2R product is to provide users of IBM's DL1/IMS database with an interface to a relational database, keeping the data in the new organization while still accessing it in the traditional manner.

The most important points are:

    The same preexisting user interfaces to access data in transactional or batch mode.
    Performance at least the same as, if not better than, in the previous situation.
    Surrounding utilities.
    Possibility to integrate the new database with new applications.
    Automatic data migration and conversion.
    Automatic definition and creation of the new structures.
    Multiplatform usage.
    High modularity that allows easy personalization in case of particular requirements and/or incongruences.

Unchanged user interface

In order to obtain this functionality, two specific modules with two specific functions are provided:

The first module, with the standard entry point names (CBLTDLI, PLITDLI), is used for the CALL level interface. It analyses the parameters passed in the call and normalizes them for the H2R KERNEL.

The second module, for the High Level Program Interface, performs the static transformation of the EXEC DLI commands into a CALL-type syntax.

Moreover, from the general point of view, the same RETURN-CODES are maintained and are passed to the user programs in the same manner and in the same location; the entire concept of DLI scheduling is also reproduced.

Batch program execution is performed as in the original environment, with the same execution utility name and the same number of parameters with the same values.

Performance

Despite the fundamental differences between the IMS and H2R data structures, the relational form of the H2R structure allows faster and easier searches that were previously very hard or impossible. However, some particular types of database scrolling may turn out to be more expensive in the new format.


The IMS pointer structure makes it possible to determine, without a specific database access, whether a child segment exists, whereas splitting the data into the different tables that compose the whole DBD does not offer the same possibility.

The creation of specific indexes, and the possibility of satisfying commands with a direct read instead of a sequential one, allow this problem to be worked around efficiently.

The transformation of the data from a hierarchical structure to a relational one simplifies the resolution of the logical structures, which can be scrolled directly using the specific indexes.

The possibility of creating CONSTRAINT restrictions between the connected tables guarantees the integrity of the data.

Utilities

All recovery, backup, reorganization and optimization routines are those of the target relational DB itself.

Specific utilities are provided in order to allow simplified access to the data following the IMS logic, and in order to measure the number and type of accesses to the DB.

Integration

The data resident in the RDBS are immediately accessible to both old and new programs in relational mode.

Both the old IMS-mode access and standard SQL data access can be used in the same programs, so that both the integration of the old data with new information bases and more efficient access to the old information can be obtained.

When accessing the RDBS image of the old IMS data in insert/update mode directly via SQL, all the H2R rules must be respected; for this reason this type of data access is considered inadvisable.

Automatic migration

H2R is supplied with an analyzer of the old structures (PSB and DBD), which allows the collection of all the necessary information about the old database.

By means of the standard IMS data export program, and based on the information captured by the analyzer (see above), H2R automatically generates the import programs for the relational databases, taking into consideration all the differences that must be managed, such as field typology, possible redefinitions, dirty fields, etc.

Regarding the EBCDIC-ASCII conversion, these programs allow keeping some data in the original EBCDIC format in order to maintain the same elaboration sequences; such data are then converted automatically at runtime at the moment of every access.


New structures automatic definition and creation

Again based on the information collected by the analyzer, H2R generates everything that is necessary for the creation of the new tables, indexes, constraints, etc.

The information reported by the analyzer is stored in a number of relational tables that can be easily accessed both by the users for consultation and by H2R itself during the elaboration phase.

Multiplatform usage

In order to enable the usage of H2R in various environments, it is implemented principally in three languages: C, Perl and COBOL.

The parts of the product that operate just once, during the migration and installation phase, are written mostly in C and Perl and can run on both UNIX and Linux environments.

All runtime operative parts, such as the generated programs, are written in COBOL.

As for the language level, COBOL Level II, currently available on all PC and mainframe platforms, is used.

The data access utilities for the users are also written in COBOL.

Forced modularity

In order to modulate the H2R functions and to be able to intervene precisely on the data access of a particular IMS DB, the approach of generating many COBOL programs, one for each physical DBD, logical DBD and index access, was chosen.

In this manner, modifications aimed at increasing performance can also be applied, taking advantage of additional information supplied by the customer but not explicitly present in the data definition.

Furthermore, in this way any problems with one of the databases do not interfere with the others and can be tested and diagnosed separately.


Product logical structure

Preliminary elaboration area

The H2R activity can be logically divided into two main phases: the preliminary elaboration phase and the runtime execution phase. From this point of view, two corresponding main areas of the product can be distinguished. On the flow diagram below, the runtime execution phase is indicated with the thin double arrows (upper right corner of the drawing).

Internal tables

The H2R functionality is based on the use of eight internal RDBS tables that contain all the necessary information about the original DL1 structure. The tables have standard definitions and are automatically created at the beginning of the migration project. The tables are loaded step by step on the basis of the information obtained during the analysis of the original copies, DBD and PSB (described below). The information stored in the tables is then widely used both in the preliminary and in the execution phases.

(see the Appendixes, Internal H2R tables description, for details)

Copies analyzing

Takes the information from the COBOL copybooks corresponding to the DL1 segments and stores it in the COPY_FIELDS internal RDBS table. This table represents just one side of the global DL1 view and is taken into consideration during the creation of the final DBD_COPY internal table, together with the other kinds of information described further on.

DBD/PSB analyzing

Analyzes the DBD sources supplied by the customer and loads the DBD-related internal tables. The obtained information is compared with the COPY_FIELDS contents (see above) and the result is written into the final DBD_COPY internal table.

The same utility also analyzes the PSB sources and stores the result in the PSB-related tables, which are used only at runtime.

IO programs generation

Creates the COBOL IO programs using the information stored in the DBD-related internal tables. Exactly one IO program corresponds to each DBD (logical, physical and index). The programs can be adjusted manually in order to provide better performance for particular cases of IO operations; in this case the customer should supply further information about the internal application logic.


User tables generation

Creates the user RDBS tables composing the new relational DB that will substitute the old IMS base. One table corresponds to one segment. The table structure, the primary and secondary keys, the possible constraints and the relations between the different tables are designed automatically, based on the information stored in the internal DBD-related tables.

Conversion rules and load programs generation

The layout structures describing the records to be converted from EBCDIC to ASCII are created automatically; the conversion rules are defined based on the segment name. When more than one layout structure can correspond to one segment, the customer is asked to provide the additional recognition rules.

The programs that load the data are created based on the information in the internal DBD-related tables.

Data Unload Utility and data elaboration

Data are unloaded by means of the standard COBOL program DBLOADY, converted using the conversion programs and rules, and then loaded by the load programs (see above).

User programs elaboration

The user programs are slightly modified in order to bring the mainframe COBOL syntax to the Microfocus one. The tp programs are also precompiled, and the EXEC DLI formalism is expanded into a CALL level one.

IMS format compiling

The IMS part is managed through the CICS system, so each XIMS terminal should have a corresponding XCICS terminal logical name (see the DL1 Configuration paragraph below). The main DL1 entry point CBLTDLI recognizes the type of the request by analyzing the type of the corresponding PCB, and invokes the IMSKERN or DL1KERN program for an IMS or a DL1 request respectively (see IMS flow design below).

The IMS formats are precompiled; for each format, one mapset file and one or more message files are created.


Runtime execution area

Kernel

The H2R Kernel is the central part of the execution phase, a kind of dispatcher that passes each call to the IO program corresponding to the requested DBD. It decides where the current DL1 operation requested by the user program is to be directed and how it is to be performed. In most cases the operation can be executed directly by the IO program (see below).

IO programs

IO programs are application-dependent modules, generated automatically in the preliminary phase; one IO program corresponds to each DBD. The structure of each IO program reflects the corresponding DBD structure and contains a number of sections describing all the possible DLI operations to execute.

Usually the operation can be performed by means of a statically precompiled EXEC SQL statement predisposed in the IO program. The more complex cases are recognized by the Kernel, which composes the relevant SQL condition and passes it as a parameter to the IO program. In this case a dynamic SQL statement is executed by the IO program.

The IO programs corresponding to logical DBDs also resolve the logical relations between the segments.


H2R flow design


IMS flow design


H2R installation

Prerequisites

For a correct use of H2R, the following software products should be installed on your computer:

1. XFRAME
2. A correct XFRAME+H2R license obtained from H.T.W.C. S.r.l.
3. RDBMS
4. Microfocus ServerExpress 2.x
5. Perl + the latest version of the DBI and DBD-RDBMS drivers (you can obtain them from the CPAN site)

Installation

    Log in as the xframe UNIX user.
    Make sure that the variables XFRAMEHOME and ORACLE_HOME (when Oracle is used) are set and point to the right directories.
    Put the h2r.tar.gz file in the home directory of XFRAME (where XFRAMEHOME points).
    Issue the following UNIX commands:

        cd $XFRAMEHOME
        gzip -dc h2r.tar.gz | tar xvf -
        cd h2r
        ksh install.ksh

    This creates the h2r directory and installs all the H2R components there.
    Create the Oracle user for internal use of H2R (an example follows below).
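A minimal sketch of this last step, assuming an Oracle RDBMS; the user name, password and tablespace names (h2radm, users, temp) are placeholders chosen for this example and are not values mandated by H2R:

# create a hypothetical Oracle user for internal H2R use (names and passwords are examples only)
sqlplus / as sysdba <<'EOF'
CREATE USER h2radm IDENTIFIED BY h2radm
  DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
GRANT CONNECT, RESOURCE TO h2radm;
EOF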


Using H2R preliminary phase

Environment creation

As mentioned above, the H2R activity includes two main phases: the preliminary elaboration phase and the runtime execution phase. The first one is carried out just once, during the migration from the host machine to UNIX. The second one, instead, is involved in each DL1 operation. This chapter describes the phases step by step.

Prerequisites

For a correct installation, the following software products should be installed on your computer:

    XFRAME and its standard software components
    H2R
    A correct XFRAME + H2R license obtained from H.T.W.C. S.r.l.
    RDBS
    Microfocus ServerExpress 2.x
    Perl + the latest version of the Perl-RDBS connection drivers (DBD and DBI in the case of an Oracle RDBS), which you can obtain from the CPAN site

UNIX User creation

    Create a UNIX user, e.g. 'xprod', with the same group as the XFRAME user.
    Create an RDBS user for the data tables.
    Copy the file 'xframelocal.conf' from the home directory of XFRAME to the home directory of xprod and change its permissions to make it writable.
    Edit .profile and comment out the instruction

        set -u

    if you find it.
    Add the following lines to your .profile:

        # these three lines only if you have xframe compiled with java extensions
        export JAVA_HOME=<path to java>
        export SHLIB_PATH=$JAVA_HOME/jre/lib/PA_RISC/hotspot:$SHLIB_PATH
        export SHLIB_PATH=$JAVA_HOME/jre/lib/PA_RISC:$SHLIB_PATH
        export XFRAMEHOME=<path to xframe>
        export XKEYHOME=$XFRAMEHOME/xkey
        export XFRAME_ARCH=32
        . $XFRAMEHOME/xframeenv.conf
        export SHLIB_PATH=$XFRAMEHOME/lib:$HOME/lib:$SHLIB_PATH
        . ./xframelocal.conf
        echo "XFRAMEHOME=$XFRAMEHOME"
        export ORACLE_HOME=<path to RDBS>
        export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin


Environment definition

    Create the RDBMS user for internal H2R use.
    Log in as the xprod UNIX user.
    Edit the xframelocal.conf file, adding the following settings:

        # H2R Settings
        export H2RHOME=$XFRAMEHOME/h2r
        export H2RPERL=$H2RHOME/perl
        export H2RSCRIPTS=$H2RHOME/sql
        export SHLIB_PATH=$SHLIB_PATH:$H2RHOME/lib
        export H2RSQLID=<RDBS connection string>   # connection with RDBS for internal h2r use
        export COBPATH=$COBPATH:$H2RHOME/release
        export PATH=$PATH:$H2RHOME/bin:$H2RHOME/etc
        export DL1PSBLIST=$HOME/etc/psblist        # the name of the PSB list for XCICS startup preloading
        export DL1SHMEMID=$HOME/DL1_SHMEMID        # the name of the temporary XCICS shared memory handler
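For instance, with an Oracle RDBS the connection string normally has the user/password@SID form; the values below are placeholders for illustration only:

        # hypothetical internal connection string (user, password and SID are examples only)
        export H2RSQLID=h2rint/h2rintpwd@ORCL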

N.B.: The value of the H2RHOME environment variable depends on the type of H2R installation. Normally there are two types: in the first, H2R is installed as a subdirectory of the XFRAME product; in the second, H2R is installed as a separate UNIX user. In either case, the H2RHOME variable should point to the appropriate directory of the product installation.

    Log in again as the xprod UNIX user to make the settings effective.
    Issue the UNIX command:

h2r_create_environment

to create (if not existing) the following directories:

src # original programs (<program-name>.pre)

src/tp # online programs

src/bt # batch programs

bms # original mapsets (<mapset>.pre)

sdf # compiled mapsets

cpy # copies

etc # configuration files

objs # compiled

objs/gnt # compiled for production

objs/int # compiled for animation

tmp # temporary files, temporary storage

data # VSAM data

logs # XCICS logs

bin # product utilities


h2r/src # original sources

h2r/src/dbd # original DBD sources (<dbd_name>.pre)

h2r/src/psb # original PSB sources (<psb_name>.pre)

h2r/tables # sql scripts for internal h2r use

h2r/fields # copy fields description

h2r/fields/cpy # original copies

h2r/dbd # compiled DBD (<dbd_name>.sql)

h2r/psb # compiled PSB (<psb_name>.sql)

h2r/cpy # copy for IMS/DB segments (one for segment: <dbd_name>_<segment_name>.cpy)

h2r/load # general load data utilities

h2r/load/logs # load logs

h2r/load/sql # sql scripts for data tables generation

h2r/load/prog # data loading programs

h2r/import # general data conversion directory

h2r/import/ebcdic # original data files (one for DBD: <dbd_name>)

h2r/import/ascii # converted data files (one for DBD: <dbd_name>.ASC)

h2r/import/diction # copy for data conversion (one for DBD)

h2r/import/diction/errc # conversion logs

h2r/prog_io # H2R I-O management programs

Internal h2r tables creation

Log in as the xprod UNIX user and issue the UNIX command:

sqlplus $H2RSQLID @$H2RSCRIPTS/tables.sql

to create the H2R internal tables:

DBD_LIST # DBD list

DBD_SEGM # segments

DBD_FIELD # fields

DBD_XDFLD # secondary indexes

DBD_COPY # combined DBD and copy fields description

PSB_PCB # PCB of PSB


PSB_SENSEG # senseg of PSB

COPY_FIELDS # fields from copy description
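As an optional check (a sketch only, assuming an Oracle RDBS), the newly created internal tables can be listed through the same connection:

# list the internal H2R tables just created (Oracle data dictionary view)
sqlplus -s $H2RSQLID <<'EOF'
select table_name from user_tables order by table_name;
EOF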


Internal h2r tables loading

DBD and PSB analyzing

Load all the original IMS/DB DBDs into the $HOME/h2r/src/dbd directory, renaming them as "<dbd_name>.dbd". Each physical DBD should be loaded together with all its corresponding secondary indexes.

Load all the original IMS/DB PSBs into the $HOME/h2r/src/psb directory, renaming them as "<psb_name>.psb".

In order to load the data into the internal H2R tables, the following sequence of ant commands should be executed from the $HOME/h2r directory. Each ant command produces and then executes the SQL scripts that load the data into the various tables, as follows:

ant dbd

For each DBD source <dbd_name> in the $HOME/h2r/src/dbd directory, this command produces the corresponding SQL script <dbd_name>.sql in the $HOME/h2r/dbd directory and executes it, loading the DBD_SEGM, DBD_FIELD, DBD_XDFLD and DBD_LIST tables.

ant psb

For each PSB source <psb_name> in the $HOME/h2r/src/psb directory, this command produces the corresponding SQL script <psb_name>.sql in the $HOME/h2r/psb directory and executes it, loading the PSB_PCB and PSB_SENSEG tables.

ant dbd_list

Produces the "dbd_list.lst" flat file under the $HOME/h2r/dbd directory. It is a reference guide file describing all the DBDs.
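Putting the three commands together, a typical loading session looks like the following sketch (it assumes the DBD and PSB sources are already in place):

cd $HOME/h2r
ant dbd        # loads DBD_SEGM, DBD_FIELD, DBD_XDFLD, DBD_LIST
ant psb        # loads PSB_PCB, PSB_SENSEG
ant dbd_list   # writes the reference file $HOME/h2r/dbd/dbd_list.lst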

As for the DBD_COPY and COPY_FIELDS tables, loading them requires a special procedure that is described below.

Copies analyzing

In this step, the DBD segment copies are analyzed both for the definition of the user's RDBS tables and for the future data conversion.

For each physical DBD <dbd_name>, a COPY <dbd_name>.cpy should be composed in the $HOME/h2r/import/diction directory from the relative segment copies, in the following form:

01 <segment1_name>.
   02 field1.
   02 field2.
01 <segment2_name>.
   02 field1.
   02 field2.


where 01 levels describe the relative segments’ COPY.

The lists of the segments corresponding to every DBD can be found with the sqlplus query:

select dbd_name, segm_name from dbd_segm
 where dbd_name in ('<dbd_name1>', '<dbd_name2>', ... , '<dbd_nameN>')

The list of physical DBD names can be obtained by taking the records with the initial P flag from the $HOME/h2r/dbd/dbd_list.lst file.
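A possible one-liner for this step, assuming that the access flag ("P" for physical) is the first character of each record of dbd_list.lst (an assumption, since the exact record layout is not documented here):

# keep only the records of physical DBDs (layout assumption: flag in column 1)
grep '^P' $HOME/h2r/dbd/dbd_list.lst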

The easiest way to create the DBD COPY is to execute a UNIX "cat" command, pasting together the relative SEGMENT COPIES:

cat "segment1_copy" "segment2_copy" "segmentN_copy" > $HOME/h2r/import/diction/dbd_name.cpy

For example:

cat PGSATR.cpy PGSCSD.cpy PGSCUM.cpy > h2r/import/diction/TESTDL1.cpy

TESTDL1.cpy:

01 PGSATR.
   02 TCSATR-DATA PIC X(6).
   02 TTCODICE-1 PIC X(9).
   02 TTCODICE-2 PIC X(9).
   02 TCSATR-FILLER-1 PIC X(1).
01 PGSCSD.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPROGRESSIVO PIC 9(4).
   03 TMPBIN PIC S9(8) COMP REDEFINES TMPROGRESSIVO.
   02 TCSCSD-FILLER-1 PIC X(4).
01 PGSCUM.
   02 TCSCUM-TEYCUM PIC X(24).
   02 TUPNUM1 PIC 9(5) COMP-3.
   02 TUPNUM2 PIC 9(5) COMP-3.
   02 TUPNUM3 PIC 9(5) COMP-3.
   02 TUPNUM4 PIC 9(5) COMP-3.
   02 TUPNUM5 PIC 9(5) COMP-3.

The segment name coincides with a 01 level field name, and its structure defines a particular record description to use for the record conversion and for the relative RDBS table definition. You have as many record descriptions as there are DLI segments present in the specified DBD.

When one of the COPY segments contains a REDEFINES instruction, it should be appropriately split into two or more different 01 levels. The names of the new 01 level fields should have the following format: <segment_name>-<suffix>. For example,

01 PGSCSD.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPCHAR PIC X(4).
   02 TMPBIN PIC S9(8) COMP REDEFINES TMPCHAR.
   02 TCSCSD-FILLER-1 PIC X(4).

should become

01 PGSCSD-XXX.


   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPCHAR PIC X(4).
   02 TCSCSD-FILLER-1 PIC X(4).
01 PGSCSD-YYY.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPBIN PIC S9(8) COMP.
   02 TCSCSD-FILLER-1 PIC X(4).

(the relative suffixes XXX and YYY will be used during the data conversion; see the Conversion rules definition paragraph below).

Then for each DBD <dbd_name> the following sequence of commands should be executed from $HOME/h2r/import/diction directory:

cpy2xml -o dbd_name.xml dbd_name.cpy

produces a dbd_name.xml intermediate xml-format file under the $HOME/h2r/import/diction current directory.

xmlconverter -h2r -p -n -o dbd_name.cbl dbd_name.xml

produces a dbd_name.cbl COBOL program for the data conversion (see also the Conversion rules definition and Data conversion paragraphs below) under the $HOME/h2r/import/diction directory.

When a multirecord segment is present in the DBD (the REDEFINES case described above), this command also needs a -c <copy_rules> option:

xmlconverter -h2r -p -n -c copy_rule -o dbd_name.cbl dbd_name.xml

where <copy_rules> is the name of the COPY file describing the relative data conversion management (see the Conversion rules definition and Data conversion paragraphs below).

xmlconverter -db -o dbd_name.sql dbd_name.xml

produces a dbd_name.sql SQL script for RDBS tables creation under the $HOME/h2r/import/diction directory. This last command can be executed for all DBDs together by means of the "ant" utility.
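For a single DBD without multirecord segments, the whole sequence therefore reduces to three commands; the sketch below reuses the TESTDL1 example name from this chapter:

cd $HOME/h2r/import/diction
cpy2xml -o TESTDL1.xml TESTDL1.cpy                    # COPY -> intermediate XML
xmlconverter -h2r -p -n -o TESTDL1.cbl TESTDL1.xml    # XML  -> COBOL conversion program
xmlconverter -db -o TESTDL1.sql TESTDL1.xml           # XML  -> SQL script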

When adjusting a DBD COPY with multirecord segments, the first layout (01 level) of the segment also describes the corresponding RDBS table. Therefore, of all the possible segment layouts, the most generic one should be chosen, taking into consideration that the PIC X COBOL type is in this case "good" for the other types as well. The chosen layout should be placed first in the 01 level sequence.

For example:

01 PGSCSD.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPBIN PIC S9(8) COMP.
   02 TMPCHAR PIC X(4) REDEFINES TMPBIN.
   02 TCSCSD-FILLER-1 PIC X(4).

becomes

01 PGSCSD-XXX.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPCHAR PIC X(4).
   02 TCSCSD-FILLER-1 PIC X(4).


01 PGSCSD-YYY.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPBIN PIC S9(8) COMP.
   02 TCSCSD-FILLER-1 PIC X(4).

In this case the layout with the TMPCHAR field of type PIC X(4) is chosen as the generic one and put in first place. The RDBS table TESTDL1_PGSCSD will have the following structure:

TCSCSD_TEYCSD    CHAR(22)
TMPCHAR          CHAR(4)
TCSCSD_FILLER_1  CHAR(4)

If there is no preexisting layout that describes all the others, one should be created manually by changing the conflicting fields into a new generic PIC X type. For example,

01 PGSCSD.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPBIN PIC S9(8) COMP.
   02 TMPNUM PIC 9(4) REDEFINES TMPBIN.
   02 TCSCSD-FILLER-1 PIC X(4).

becomes

01 PGSCSD-XXX.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPCHAR PIC X(4).
   02 TCSCSD-FILLER-1 PIC X(4).

01 PGSCSD-YYY.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPBIN PIC S9(8) COMP.
   02 TCSCSD-FILLER-1 PIC X(4).

01 PGSCSD-ZZZ.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPNUM PIC 9(4).
   02 TCSCSD-FILLER-1 PIC X(4).

Or, in a more complicated situation:

01 PGSCSD.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMP-1.
      03 FIELD-1 PIC S9(8) COMP.
      03 FIELD-2 PIC X(4).
   02 TMP-2 REDEFINES TMP-1.
      03 FIELD-3 PIC 9(5).
      03 FIELD-4 PIC S9(5) COMP-3.
   02 TCSCSD-FILLER-1 PIC X(4).

becomes

01 PGSCSD-XXX.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMP-GEN PIC X(8).
   02 TCSCSD-FILLER-1 PIC X(4).

01 PGSCSD-YYY.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMP-1.


      03 FIELD-1 PIC S9(8) COMP.
      03 FIELD-2 PIC X(4).
   02 TCSCSD-FILLER-1 PIC X(4).

01 PGSCSD-ZZZ.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMP-2.
      03 FIELD-3 PIC 9(5).
      03 FIELD-4 PIC S9(5) COMP-3.
   02 TCSCSD-FILLER-1 PIC X(4).

Now the COPY_FIELDS internal table should be loaded:

ant copy_fields

For each xml file <dbd_name>.xml created previously in the $HOME/h2r/import/diction directory (see the Copies analyzing paragraph above), this command produces the corresponding SQL script <dbd_name>.sql in the same $HOME/h2r/import/diction directory. These scripts should then be executed manually:

sqlplus -s $H2RSQLID @$HOME/h2r/import/diction/<dbd_name>.sql

This loads the COPY_FIELDS internal table.
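As an optional check (a sketch only; plain SQL executed through sqlplus on the internal H2R connection), you can verify that COPY_FIELDS now contains rows for the analyzed segments:

# count the COPY_FIELDS rows per analyzed object
sqlplus -s $H2RSQLID <<'EOF'
select nome_oggetto, count(*) from copy_fields group by nome_oggetto;
EOF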

Copies analyzing for logical DBD

The processing of logical DBDs requires manual personalization in order to take into consideration the different logical connections between the segments. See the Appendixes for examples of such intervention. Alternatively, the corresponding elaboration of your logical DBDs can be provided directly by HTWC.

DBD and COPIES cross analyzing

For each physical DBD, execute the

h2r_generate_dbd <dbd_name>

command.

For every DBD dbdname, this command compares the information stored in the COPY_FIELDS and DBD_FIELD tables and loads the result into the DBD_COPY internal table. It also creates the following objects for every DBD:

The segments copies dbdname_segname.cpy in $HOME/h2r/cpy directory

The user RDBS tables’ creation scripts in the $HOME/h2r/load/sql directory

The data load program in the $HOME/h2r/load/prog directory

The IO program in the $HOME/h2r/prog_io directory

The programs are also compiled.

The compilation of some IO programs may require additional copybooks; you will notice it in the COBOL compiler error messages. That means that the corresponding physical DBD has a logical relationship and that some virtual segment virtualsegmname is


present in the DBD. In this case the following COPYs should be manually created as empty dummies in the $HOME/h2r/prog_io directory:

dbdname_DL1_EVALUATE.cpy
dbdname_DL1_WORKING.cpy
dbdname_virtualsegmname.cpy

while the COPY dbdname_PAIRED_virtualsegmname.cpy should just contain the following instructions:

virtualsegmname-PROCEDURE.
    continue.
virtualsegmname-PROCEDURE-EX.
    exit.
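As a concrete sketch, using the HTDBDDA/PNIVP names from the example in the Appendixes in place of dbdname and virtualsegmname, the copybooks could be prepared as follows:

cd $HOME/h2r/prog_io
# empty dummy copybooks
touch HTDBDDA_DL1_EVALUATE.cpy HTDBDDA_DL1_WORKING.cpy HTDBDDA_PNIVP.cpy
# PAIRED copybook with the two dummy procedures
cat > HTDBDDA_PAIRED_PNIVP.cpy <<'EOF'
       PNIVP-PROCEDURE.
           continue.
       PNIVP-PROCEDURE-EX.
           exit.
EOF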

To verify the result, make sure that the following objects were created:

In $HOME/h2r/load/sql directory:

dbdname_CREATE_FOREIGN.sql
dbdname_CREATE_KEYS.sql
dbdname_CREATE_TABLES.sql
dbdname_CREATE_TRIGGERS.sql
dbdname_CREATE_XKEYS.sql
dbdname_DROP_FOREIGN.sql
dbdname_DROP_KEYS.sql
dbdname_DROP_TABLES.sql
dbdname_DROP_TRIGGERS.sql
dbdname_DROP_XKEYS.sql

In $HOME/h2r/load/prog directory:

dbdname_LOAD.pco
dbdname_LOAD.cob
dbdname_LOAD.gnt

In $HOME/h2r/cpy directory:

dbdname_segname.cpy

for every segment segname of DBD dbdname

In the $HOME/h2r/prog_io directory:

dbdname.pco
dbdname.cob
dbdname.gnt


Components generations

IO programs generation

Every DBD has an associated IO program that manages read and write accesses for that DBD. These COBOL programs are automatically generated in the $HOME/h2r/prog_io directory during the DBD analyzing phase (see above), when the h2r_generate_dbd command is executed.

In order to create and compile the programs corresponding to one physical DBD, the h2r_prog_dbd <dbd_name> script should be executed.

Users tables scripts generation

A user RDBS table corresponds to each DBD segment. The tables are created by means of SQL scripts that are automatically generated in the $HOME/h2r/load/sql directory during the DBD analyzing phase (see above), when executing the

h2r_generate_dbd <dbd_name>

command.

For each DBD dbdname there are 10 table creation and drop scripts:

dbdname_CREATE_TABLES.sql creates tables for user data

dbdname_CREATE_KEYS.sql creates primary keys

dbdname_CREATE_XKEYS.sql creates secondary indexes

dbdname_CREATE_TRIGGERS.sql creates triggers for secondary keys management

dbdname_CREATE_FOREIGN.sql creates FOREIGN KEY constraints for parentage integrity conservation

dbdname_DROP_FOREIGN.sql drops FOREIGN KEY constraints

dbdname_DROP_TRIGGERS.sql drops triggers

dbdname_DROP_XKEYS.sql drops secondary indexes

dbdname_DROP_KEYS.sql drops primary keys

dbdname_DROP_TABLES.sql drops user tables

If necessary, each of the scripts can also be executed separately.
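For instance (a sketch only; <data_user_connection> is a placeholder for the connection string of the RDBS user that owns the data tables, and TESTDL1 is the example DBD name used in this guide):

cd $HOME/h2r/load/sql
sqlplus -s <data_user_connection> @TESTDL1_CREATE_TABLES.sql
sqlplus -s <data_user_connection> @TESTDL1_CREATE_KEYS.sql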

Conversion program generation

In the copies analyzing phase (see above), the COBOL conversion program is generated automatically in the $HOME/h2r/import/diction directory, taking into consideration the internal DBD COPY structure. The segment name of a DBD COPY coincides with a 01 level field name, and its structure defines a particular record description to use for the record conversion and for the relative RDBS table definition. For example,


TESTDL1.cpy:

01 PGSATR.
   02 TCSATR-DATA PIC X(6).
   02 TTCODICE-1 PIC X(9).
   02 TTCODICE-2 PIC X(9).
   02 TCSATR-FILLER-1 PIC X(1).
01 PGSCSD.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPROGRESSIVO PIC 9(4).
   03 TMPBIN PIC S9(8) COMP REDEFINES TMPROGRESSIVO.
   02 TCSCSD-FILLER-1 PIC X(4).
01 PGSCUM.
   02 TCSCUM-TEYCUM PIC X(24).
   02 TUPNUM1 PIC 9(5) COMP-3.
   02 TUPNUM2 PIC 9(5) COMP-3.
   02 TUPNUM3 PIC 9(5) COMP-3.
   02 TUPNUM4 PIC 9(5) COMP-3.
   02 TUPNUM5 PIC 9(5) COMP-3.

This DBD COPY describes three different segments with three different layouts to distinguish during the conversion of a single DBD record. You have as many record descriptions as there are DLI segments. Each record of unloaded data has a 16-byte header where the DBD name and the segment name are stored. The segment name is therefore translated from EBCDIC to ASCII and then tested in order to use the corresponding record description to convert the area.

Conversion rules definition

When a multirecord segment is present in a DBD, the related record has more than one conversion layout. Which layout should be used during the conversion of a single record is determined by internal application rules regarding the value of one or more fields of the record. For example,

01 PGSCSD-XXX.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPCHAR PIC X(4).
   02 TCSCSD-FILLER-1 PIC X(4).

01 PGSCSD-YYY.
   02 TCSCSD-TEYCSD PIC X(22).
   02 TMPBIN PIC S9(8) COMP.
   02 TCSCSD-FILLER-1 PIC X(4).

defines two different layouts of the record. To choose the "right" one, we may have for example the following rule: when TCSCSD-FILLER-1 is filled with spaces, the first layout (with the suffix XXX) should be taken; otherwise the second one (with the suffix YYY) is used.

In order to manage the multirecord conversion, the relative COPY should be manually created for each multirecord segment. This COPY is automatically included in the corresponding COBOL conversion program; the name of the COPY is decided during the creation of that program (see the Copies analyzing paragraph above). In the COPY, the conversion rules should be described by means of appropriate COBOL instructions. After the recognition of the rule, the segment layout suffix ('XXX' or 'YYY' in our case)


should be moved to the special buffer field with the fixed name RECORD-XX. For example,

IF TCSCSD-FILLER-1 = X"40404040" THEN
    MOVE 'XXX' TO RECORD-XX
ELSE
    MOVE 'YYY' TO RECORD-XX
END-IF

The test of the value (for character fields) should be executed in EBCDIC; otherwise the field should be moved to the fixed buffer ASCII-BUF and converted to ASCII.

If you have more than one multirecord segment in one DBD, the segment name should also be specified in the rule testing, as follows:

IF XCONV-SEGNAME = 'PGSCSD' THEN
    IF TCSCSD-FILLER-1 = X"40404040" THEN
        MOVE 'XXX' TO RECORD-XX
    ELSE
        MOVE 'YYY' TO RECORD-XX
    END-IF
END-IF

where XCONV-SEGNAME is a special internal field that always contains the current segment name in ASCII.

Load programs generation

This group of programs is necessary for loading the converted data into the RDB. One load program corresponds to every physical DBD, under the $HOME/h2r/load/prog directory. You need no execution scripts to create them; they have already been created automatically during the previous "DBD and COPIES cross analyzing" phase by the h2r_generate_dbd command.

User program elaboration

User programs are elaborated with the standard XFRAME xcob utility with the additional "-k" option, in order to handle the BLL pointer management. The online DLI programs also need a "-O EXECDLI" parameter, which invokes the dl1exec precompiler that expands the EXEC DLI formalism (if present) into a CALL level one.

The batch DLI programs need a "-O DL1PRE" parameter, which invokes the dl1pre precompiler that handles the DLITCBL entry-point management by adding a GOBACK target after the PROCEDURE DIVISION.

IMS formats compiling

The IMS formats should be unloaded from the host into the $HOME/fmt UNIX directory. In order to compile an IMS format <format_name>, go to the $HOME/fmt directory and execute the following command:

fmt2bms <format_name>.ims

This command will produce the <format_name>.bms SDF mapset file and a number of <message_name1>.msg … <message_nameN>.msg message files.


Data management

Data unloading

Data unload is executed on the HOST machine by the standard DBDLOADY program. For each physical DBD, this program should be appropriately adapted and then executed. A sequential fixed-length file is created for every DBD. The file name should coincide with the physical DBD name.

The record format is the following: DBD_NAME (8 bytes), SEGMENT_NAME (8 bytes), record contents in binary format. So for each DBD, the record length of the file should be calculated as the maximum segment length of the DBD plus the 16 bytes of the header. After the appropriate adaptation regarding the record length and the DBD name, the program scrolls the entire DBD reading the segments with the GN command, storing them and filling the possible remainder of a record with spaces up to the maximum length. The result of the unloading should be transferred in binary format to the UNIX machine and put into the $HOME/h2r/import/ebcdic directory.

Data conversion

Put the unloaded IMS/DB original archives into the $HOME/h2r/import/ebcdic directory with the name "<dbd_name>"; ftp should be executed in binary mode without any type of conversion. The format of the file is described above in the "Data unloading" section. Naturally the child-segment records should be positioned directly below the corresponding parent-segment record. Go to the conversion directory:

cd $HOME/h2r/import/diction

Compile a conversion program:

cob -u <dbd_name>.cbl

Fulfill the conversion:

xvsamRts <dbd_name> ../ebcdic/<dbd_name> ../ascii/<dbd_name>.ASC
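For the TESTDL1 example DBD used earlier, the conversion step would therefore look like this sketch (it assumes the unloaded file $HOME/h2r/import/ebcdic/TESTDL1 is already in place):

cd $HOME/h2r/import/diction
cob -u TESTDL1.cbl                                          # compile the generated conversion program
xvsamRts TESTDL1 ../ebcdic/TESTDL1 ../ascii/TESTDL1.ASC     # EBCDIC input -> ASCII output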

User tables creation and data loading

A "dbd_len" file should be created in the $HOME/h2r/load directory. This file should contain the list of physical DBDs; for each DBD, the maximum segment size of that DBD should be calculated and 16 bytes added. The result is to be written in "dbd_len" after the DBD name, as follows:

CT9589 2064 …

where CT9589 is the physical DBD name, 2048 is this DBD's largest segment length, and 2064 = 2048 + 16.
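One possible way to obtain the largest segment length for each DBD is to query the internal tables (a sketch only; restrict the output afterwards to the physical DBDs listed in dbd_list.lst):

# maximum segment length + 16-byte header for each DBD known to H2R
sqlplus -s $H2RSQLID <<'EOF'
select dbd_name, max(bytes_max) + 16 from dbd_segm group by dbd_name;
EOF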

Open the $HOME/h2r/load/loadall.ksh file and check the value of the XRUN_LIBRARIES environment variable on the line


export XRUN_LIBRARIES=$H2RHOME/lib/libdl1.so

Make sure that the shared library extension corresponds to your system's conventions. If not, change it.

Execute

ksh loadall.ksh

command from the $HOME/h2r/load directory to start the data loading. The log files "<dbd_name>.log" will be placed in the $HOME/h2r/load/logs directory.

The duration of this operation depends on the size of the IMS/DB archives and may take a long time. In this case, execution in background mode is advised:

nohup ksh loadall.ksh > loadall.log &

User program elaboration

User programs are elaborated with the standard XFRAME xcob utility with the additional "-k" option, in order to handle the BLL pointer management. To compile the online DLI program <program_name>, enter

xcob -ksua <program_name>

The online DLI programs with EXEC DLI instructions also need a "-O EXECDLI" parameter, which invokes the dl1exec precompiler that expands the EXEC DLI formalism into a CALL level one:

xcob -ksua -O EXECDLI <program_name>

The batch DLI programs need a "-O DL1PRE" parameter, which invokes the dl1pre precompiler that handles the DLITCBL entry-point management by adding a GOBACK target after the PROCEDURE DIVISION:

xcob -kbsua -O DL1PRE <program_name>


Using H2R Run-time phase

User guide

Flow

DL/1 IMS/DB configuration

Edit the XCICS configuration file $HOME/etc/xcics.conf and put in the right values for the RDBS connection, taking into consideration your system's shared library extension:

load library=$H2RHOME/lib/libdl1.so;
define dbc name=DL1, database=..., user=..., password=...;
bind dbc=DL1 default;

Set the use_dli and use_rdbms flags to yes.

set use_rdbms=yes;
set use_dli=yes;


IMS/DC configuration

Create the $HOME/etc/psblist file and put there the list of all the PSBs used by the online programs of your application, as follows:

<psb_name1>
<psb_name2>
<psb_name3>

The first time CICS starts, it creates a shared memory area containing all the PSB information; this will take some time. Subsequently, in order to optimize startup performance, CICS will execute a "warm" start using the existing shared memory. If for some reason (for example, a PSB has been added) you need to recreate and reload the shared memory, execute the following command before starting your CICS session:

h2r_reload_psb

CICS should be down at this moment.

Also edit your xjobinit.csh batch configuration file in the $HOME/etc directory, adding the h2r library setting and taking into consideration your system's shared library extension:

setenv XRUN_LIBRARIES "$H2RHOME/lib/libdl1.so"

If you work with IMS, add the following configuration in the $HOME/etc/xcics.conf file:

load library=$H2RHOME/lib/libims.so;
set use_xims=yes;
set xims_format_path=$HOME/fmt;

Then define a set of IMS terminals mapped to CICS logical terminals, as follows:

define ims_terminal name=….., cics=….;

where name is an IMS internal name of up to 8 characters, and the cics parameter value defines the external logical name of the terminal that will be recognized by XCICS.

Define a set of IMS transactions, as follows:

define ims_transaction name=……, program=……, spa=…., psb=……;

where each transaction name is associated with its program, its PSB and the SPA length.

For example,

define ims_terminal name=TN00IMS0, cics=TN00; define ims_transaction name=IVTCB, program=DFSIVA34, spa=80, psb=DFSIVP34;

DL1 BATCH runtime switches

A DL1 batch program is called from a job using a standard header module, such as DFSRRC00, DLZRRC00 or DLZMPI00. These modules are put under the $HOME/bin directory and can be personalized for the needs of the application. These csh scripts read the first string

DLI,<PROGRAM>,<PSB>

from the standard input and recognize the parameters.


In order to call the corresponding batch program the following command is performed:

xvsamRts DL1DSPT $SQLID <PROGRAM> <PSB> [additional parameters]

The additional parameters may have one or more of the following values (a usage example follows the list):

-T application program with DLITCBL entry point

-t SQL trace activation

-s prints execution DL1 statistics on exit

-b program with the first PCB of BMP type

-a COBOL animation switch

-L<COB|PL1> defines original program language

-l<00-15> defines debug level for log file

$XVSAM/<PROGRAM>.log

00 – no log file created

15 – full logging level
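A hypothetical invocation combining the header module and some of the switches above (MYPROG01 and MYPSB01 are example names, and the exact way a site's header script forwards the additional parameters may differ):

# run a batch DLI program through the header script (assumes $HOME/bin is on the PATH)
echo "DLI,MYPROG01,MYPSB01" | DFSRRC00
# which results in a call of the form:
xvsamRts DL1DSPT $SQLID MYPROG01 MYPSB01 -T -s -LCOB -l05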

Appendixes H2R utilities UNIX scripts

h2r_create_environment

This command creates the H2R environment directories, if not existing.

h2r_dbd <dbd_name>

This command analyses a DBD source $HOME/h2r/src/dbd/<dbd_name>.dbd and loads the result into the DBD-related tables DBD_LIST, DBD_SEGM, DBD_FIELD and DBD_XDFLD.

h2r_psb <psb_name>

This command analyses a PSB source $HOME/h2r/src/psb/<psb_name>.psb and loads the result into the PSB-related tables PSB_PCB and PSB_SENSEG.

h2r_create_dbd_list

Creates a guide list of DBDs: $HOME/h2r/dbd/dbd_list.lst

h2r_generate_dbd <dbd_name>

For a physical DBD <dbd_name>, this command compares the information stored in the COPY_FIELDS and DBD_FIELD internal tables and loads the result into the DBD_COPY internal table. It also creates for this DBD the following objects (the programs are also compiled):

    the segment copies dbdname_segname.cpy in the $HOME/h2r/cpy directory
    the user RDBS tables' creation scripts in the $HOME/h2r/load/sql directory
    the data load program in the $HOME/h2r/load/prog directory
    the IO program in the $HOME/h2r/prog_io directory
    the IO programs for all corresponding secondary indexes in the $HOME/h2r/prog_io directory

h2r_generate_all_dbd

Launches the h2r_generate_dbd script for each physical DBD from dbd_list.lst.

h2r_prog_dbd <dbd_name>

This command creates and compiles the programs corresponding to the physical DBD <dbd_name> and its secondary indexes.

h2r_prog_all_dbd

Launches the h2r_prog_dbd script for each physical DBD from dbd_list.lst.

h2r_reload_psb

This command reloads the shared memory containing all the PSB information for CICS usage.


load.ksh <dbd_name> <length>

This command loads the user data. It should be executed from the $HOME/h2r/load directory. The log files <dbd_name>.log will be placed in the $HOME/h2r/load/logs directory.

fmt2bms <format_name>.ims

From the IMS format <format_name>.ims, this command produces the <format_name>.bms SDF mapset file and a number of <message_name1>.msg … <message_nameN>.msg message files.

SQL scripts

sqlplus -s $H2RSQLID @$H2RSCRIPTS/tables.sql

creates the H2R internal tables.

sqlplus -s $H2RSQLID @$HOME/h2r/import/diction/<dbd_name>.sql

Loads the COPY_FIELDS internal table for a DBD <dbd_name>.

Ant utilities

ant dbd

For each DBD source <dbd_name> in the $HOME/h2r/src/dbd directory, this command produces the corresponding SQL script <dbd_name>.sql in the $HOME/h2r/dbd directory and executes it, loading the DBD_SEGM, DBD_FIELD, DBD_XDFLD and DBD_LIST tables.

ant psb

For each PSB source <psb_name> in the $HOME/h2r/src/psb directory, this command produces the corresponding SQL script <psb_name>.sql in the $HOME/h2r/psb directory and executes it, loading the PSB_PCB and PSB_SENSEG tables.

ant copy_fields

For each xml file <dbd_name>.xml in the $HOME/h2r/import/diction directory, this command produces the corresponding SQL script <dbd_name>.sql in the same $HOME/h2r/import/diction directory.

ant dbd_list

Produces the dbd_list.lst flat file under the $HOME/h2r/dbd directory. It is a reference guide file describing all the DBDs.

Miscellanea

The following utilities should be started from the $HOME/h2r/import/diction directory:

cpy2xml -db -o dbd_name.xml dbd_name.cpy

Produces a .xml copy description.

xmlconverter -h2r -p -n -o dbd_name.cbl dbd_name.xml

Produces a COBOL data conversion program.

xmlconverter -h2r -p -n -c copy_rule -o dbd_name.cbl dbd_name.xml


Produces a COBOL data conversion program for a multirecord case.

xmlconverter -db -o dbd_name.sql dbd_name.xml

Produces a .sql script for COPY_FIELDS loading.


Internal H2R tables description

DBD_SEGM

DBD_NAME      CHAR(8) NOT NULL
SEGM_NAME     CHAR(8) NOT NULL
PARENT_NAME   CHAR(8)
PARENT_TYPE   CHAR(4)
LPARENT_NAME  CHAR(8)
LPARENT_DBD   CHAR(8)
BYTES_MIN     INT
BYTES_MAX     INT
SEGM_POINTER  CHAR(8)
RELATI_RULES  CHAR(8)
INSERT_RULES  CHAR(8)
COMPRTN_NAME  CHAR(8)
COMPRTN_TYPE  CHAR(1)
COMPRTN_INIT  CHAR(4)
SOURCE_SEGM   CHAR(8)
SOURCE_TYPE   CHAR(1)
SOURCE_DBD    CHAR(8)
LSOURCE_SEGM  CHAR(8)
LSOURCE_TYPE  CHAR(1)
LSOURCE_DBD   CHAR(8)
USED          CHAR(1)
PRIMARY KEY (DBD_NAME, SEGM_NAME)

DBD_FIELD

DBD_NAME     CHAR(8) NOT NULL
SEGM_NAME    CHAR(8) NOT NULL
FIELD_NAME   CHAR(8) NOT NULL
FIELD_SEQ    CHAR(1)
FIELD_BYTES  INT
FIELD_START  INT
FIELD_TYPE   CHAR(1)
PRIMARY KEY (DBD_NAME, SEGM_NAME, FIELD_NAME)

DBD_XDFLD

DBD_NAME   CHAR(8) NOT NULL
SEGM_NAME  CHAR(8) NOT NULL
PROCSEQ    CHAR(8) NOT NULL
KEY_NAME   CHAR(8)
SEGMENT    CHAR(8)
SRCH       VARCHAR2(1024)
SUBSEQ     VARCHAR2(1024)
NULLVAL    CHAR(5)
EXTRTN     CHAR(8)
PRIMARY KEY (DBD_NAME, SEGM_NAME, PROCSEQ)

DBD_LIST

DBD_NAME    CHAR(8) NOT NULL
DBD_ACCESS  CHAR(8)
PRIMARY KEY (DBD_NAME)

DBD_COPY

DBD_NAME         CHAR(8)
SEGM_NAME        CHAR(8)
FIELD_NAME       VARCHAR2(100)
FIELD_LEVEL      NUMBER
FIELD_START      NUMBER
FIELD_LENGTH     NUMBER
FIELD_TYPE       CHAR(1)
FIELD_DEC        NUMBER
FIELD_SIGN       NUMBER
FIELD_USE        CHAR(1)
BODY_START       NUMBER
H2R_TYPE         CHAR(1)
KEY_SEQ          CHAR(1)
SEGM_LEVEL       NUMBER
CONCAT_START     NUMBER
REFERENCE_NAME   VARCHAR2(1000)
REFERENCE_START  NUMBER
PRIMARY KEY (DBD_NAME, SEGM_NAME, FIELD_START)

PSB_PCB

PSB_NAME     CHAR(8) NOT NULL
DBD_NAME     CHAR(8)
PCB_NUM      INT NOT NULL
PCB_TYPE     CHAR(2)
PCB_PROCOPT  CHAR(8)
PCB_KEYLEN   INT
PCB_PROCSEQ  CHAR(8)
PCB_POS      CHAR(1)
PCB_LTERM    CHAR(8)
PCB_NAME     CHAR(8)
PCB_ALTRESP  CHAR(1)
PCB_SAMETRM  CHAR(1)
PCB_MODIFY   CHAR(1)
PCB_EXPRESS  CHAR(1)
PCB_PCBNAME  CHAR(8)
PCB_LIST     CHAR(1)
PRIMARY KEY (PSB_NAME, PCB_NUM)


PSB_SENSEG

PSB_NAME      CHAR(8)
PCB_NUM       INT
SEGM_NUM      INT
SEGM_NAME     CHAR(8)
PARENT_NAME   CHAR(8)
SEGM_PROCOPT  CHAR(4)
PRIMARY KEY (PSB_NAME, PCB_NUM, SEGM_NUM)

COPY_FIELDS

NOME_OGGETTO  CHAR(8)
CAMPO         VARCHAR2(100)
CAMPO01       VARCHAR2(100)
REDEFINE      VARCHAR2(100)
STRUTTURA     NUMBER(1)
LIVELLO       NUMBER(2)
OCCURS        NUMBER
TIPO          CHAR(1)
BYTES         NUMBER
DECIMALI      NUMBER
SEGNATO       NUMBER
USO           CHAR(1)


Logical DBD elaboration: example

Let us consider the following physical DBD HTDBDDA:

DBD      NAME=HTDBDDA,                             *
               ACCESS=HIDAM
DATASET  DD1=PNDATA,                               *
               DEVICE=3390,SIZE=6144,              *
               FRSPC=(10,10),SCAN=19
SEGM     NAME=PNITM,                               *
               BYTES=120,                          *
               PARENT=0,                           *
               PTR=TB,                             *
               RULES=PPV
LCHILD   NAME=(PNIND,HTDBDIN),                     *
               PTR=INDX
LCHILD   NAME=(PNIPT,HTDBDDA),                     *
               PTR=DBLE,                           *
               PAIR=PNIVP
FIELD    NAME=(PNITMPN,SEQ,U),                     *
               BYTES=19,                           *
               START=1
FIELD    NAME=PNITMCP,                             *
               BYTES=10,                           *
               START=22
SEGM     NAME=PNIVP,                               *
               PARENT=PNITM,                       *
               SOURCE=((PNIPT,,HTDBDDA)),          *
               PTR=PAIRED
FIELD    NAME=(PNIVPKEY,SEQ,U),                    *
               BYTES=54,                           *
               START=1
SEGM     NAME=PNPST,                               *
               BYTES=50,                           *
               PARENT=PNITM,                       *
               PTR=TB
FIELD    NAME=(PNPSTPN,SEQ,U),                     *
               BYTES=35,                           *
               START=1
SEGM     NAME=PNIPT,                               *
               BYTES=19,                           *
               PARENT=((PNPST),(PNITM,V,HTDBDDA)), *
               PTR=(T,LTB),                        *
               RULES=PVV
DBDGEN
FINISH
END

This physical DBD contains two normal segments, PNITM and PNPST, one virtual segment PNIVP and one pointer segment PNIPT. The COPY of this DBD in the $HOME/h2r/import/diction directory follows:

01  PNITM.
    02 PNITMPN   PIC X(19).
    02 FILLER-1  PIC X(2).
    02 PNITMCP   PIC X(10).
    02 FILLER-2  PIC X(1).
    02 ITMUMC    PIC X(2).
    02 PARTPN    PIC X(86).


01  PNPST.
    02 PNPSTPN   PIC X(35).
    02 SSTRVAL2  PIC X(4).
    02 SSTRQTA   PIC 9(9) COMP-3.
    02 SSTRCALT  PIC X(1).
    02 SSTRKIT   PIC X(1).
    02 SSTRTC    PIC X(1).
    02 SSTRPROV  PIC X(3).
01  PNIPT.
    02 PNWUI     PIC X(19).

Related to this DBD there are two logical DBDs: HTDBDEX for the explosion and HTDBDIM for the implosion:

DBD      NAME=HTDBDEX,ACCESS=LOGICAL
DATASET  LOGICAL
SEGM     NAME=PNITM,PARENT=0,SOURCE=((PNITM,,HTDBDDA))
SEGM     NAME=PNPST,PARENT=PNITM,SOURCE=((PNPST,,HTDBDDA))
SEGM     NAME=PNEXP,PARENT=PNPST,                         *
               SOURCE=((PNIPT,,HTDBDDA),(PNITM,,HTDBDDA))
DBDGEN
FINISH
END

DBD      NAME=HTDBDIM,ACCESS=LOGICAL
DATASET  LOGICAL
SEGM     NAME=PNITM,PARENT=0,SOURCE=((PNITM,,HTDBDDA))
SEGM     NAME=PNIMP,PARENT=PNITM,                         *
               SOURCE=((PNIVP,,HTDBDDA),(PNPST,,HTDBDDA))
SEGM     NAME=PNWUI,PARENT=PNIMP,SOURCE=((PNITM,,HTDBDDA))
DBDGEN
FINISH
END

In order to elaborate these DBDs, you should create the corresponding COPY files under $HOME/h2r/import/diction:

HTDBDEX:

01  PNITM.
    02 PNITMPN   PIC X(19).
    02 FILLER-1  PIC X(2).
    02 PNITMCP   PIC X(10).
    02 FILLER-2  PIC X(1).
    02 ITMUMC    PIC X(2).
    02 PARTPN    PIC X(86).
01  PNPST.
    02 PNPSTPN   PIC X(35).
    02 SSTRVAL2  PIC X(4).
    02 SSTRQTA   PIC 9(9) COMP-3.
    02 SSTRCALT  PIC X(1).
    02 SSTRKIT   PIC X(1).
    02 SSTRTC    PIC X(1).
    02 SSTRPROV  PIC X(3).
01  PNEXP.
    02 PNITMPN   PIC X(19).
    02 FILLER-1  PIC X(2).
    02 PNITMCP   PIC X(10).
    02 FILLER-2  PIC X(1).
    02 ITMUMC    PIC X(2).
    02 PARTPN    PIC X(86).

HTDBDIM:

01  PNITM.
    02 PNITMPN   PIC X(19).
    02 FILLER-1  PIC X(2).
    02 PNITMCP   PIC X(10).
    02 FILLER-2  PIC X(1).
    02 ITMUMC    PIC X(2).
    02 PARTPN    PIC X(86).
01  PNIMP.
    02 PNITMKEY  PIC X(19).
    02 PNPSTKEY  PIC X(35).
    02 PNPSTPN   PIC X(35).
    02 SSTRVAL2  PIC X(4).
    02 SSTRQTA   PIC 9(9) COMP-3.
    02 SSTRCALT  PIC X(1).
    02 SSTRKIT   PIC X(1).
    02 SSTRTC    PIC X(1).
    02 SSTRPROV  PIC X(3).
01  PNWUI.
    02 PNITMPN   PIC X(19).
    02 FILLER-1  PIC X(2).
    02 PNITMCP   PIC X(10).
    02 FILLER-2  PIC X(1).
    02 ITMUMC    PIC X(2).
    02 PARTPN    PIC X(86).

Create a pared.sql file in the $HOME/h2r/dbd directory, or edit an existing one, adding the following information:

delete from DBD_FIELD
 where DBD_NAME = 'HTDBDDA'
   and SEGM_NAME = 'PNIPT'
   and FIELD_NAME = 'PNWUI';
insert into DBD_FIELD values
 ( 'HTDBDDA', 'PNIPT', 'PNWUI', 'S', 19, 1, 'C' );
commit;
quit;

and execute it:

sqlplus $H2RSQLID @$HOME/h2r/dbd/pared.sql

Manually create the following copies under the $HOME/h2r/prog_io directory:

HTDBDDA_DL1_EVALUATE.cpy – empty dummy

HTDBDDA_DL1_WORKING.cpy – empty dummy

HTDBDDA_PNIVP.cpy – empty dummy

HTDBDDA_PAIRED_PNIVP.cpy –


PNIVP-PROCEDURE.
    continue.
PNIVP-PROCEDURE-EX.
    exit.

HTDBDIM_DL1_EVALUATE.cpy –

IF SEGM-NAME = "PNITM" THEN MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1) END-IF. HTDBDIM_DL1_WORKING.cpy

01 HT-APPO PIC X(54).
EXEC SQL INCLUDE HTDBDDA_PNPST.cpy END-EXEC.

HTDBDIM_PAIRED_PNWUI.cpy –

PNWUI-PROCEDURE.
    EVALUATE COP
        WHEN "READFRST"
            PERFORM PNWUI-READ-FIRST THRU PNWUI-READ-FIRST-EX
        WHEN "READNEXT"
            PERFORM PNWUI-READ-NEXT THRU PNWUI-READ-NEXT-EX
        WHEN "READCOND"
            PERFORM PNWUI-READ-COND THRU PNWUI-READ-COND-EX
        WHEN "READPATH"
            PERFORM PNWUI-READ-COND THRU PNWUI-READ-COND-EX
        WHEN "ISRT "
            PERFORM PNWUI-ISRT THRU PNWUI-ISRT-EX
        WHEN "DLET "
            PERFORM PNWUI-DLET THRU PNWUI-DLET-EX
        WHEN "REPL "
            PERFORM PNWUI-REPL THRU PNWUI-REPL-EX
    END-EVALUATE.
PNWUI-PROCEDURE-EX.
    EXIT.
PNWUI-READ-FIRST.
    move KT-KEY-CHAR (1) (100 : 19) to PNITM-PNITMPN.
    IF FLAG-HOLD = 'Y' THEN
        EXEC SQL
            SELECT *
              INTO :PNITM
              FROM HTDBDDA_PNITM
             WHERE DBD_PNITMPN = :PNITM-PNITMPN
               FOR UPDATE
        END-EXEC
    ELSE
        EXEC SQL
            SELECT *
              INTO :PNITM
              FROM HTDBDDA_PNITM
             WHERE DBD_PNITMPN = :PNITM-PNITMPN
        END-EXEC
    END-IF.
    IF SQLCODE = 0 THEN


        MOVE PNITM-PNITMPN (1 : 19) TO KT-KEY-CHAR (3) (1 : 19)
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 38 TO DL1-FB-KEY-LEN
        MOVE PNITM-PNITMPN (1 : 19) TO DL1-FB-KEY-ARR (20 : 19)
        MOVE PNITM (1 : 120) TO DL1-IO-AREA (1 : 120)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
    END-IF.
    MOVE SQLCODE TO RETCODE.
PNWUI-READ-FIRST-EX.
    EXIT.
PNWUI-READ-NEXT.
    MOVE 1000100 TO RETCODE.
PNWUI-READ-NEXT-EX.
    EXIT.
PNWUI-READ-COND.
    PERFORM PNWUI-READ-FIRST THRU PNWUI-READ-FIRST-EX.
PNWUI-READ-COND-EX.
    EXIT.
PNWUI-ISRT.
    MOVE 1000102 TO RETCODE.
PNWUI-ISRT-EX.
    EXIT.
PNWUI-DLET.
    MOVE 1000102 TO RETCODE.
PNWUI-DLET-EX.
    EXIT.
PNWUI-REPL.
    MOVE 1000102 TO RETCODE.
PNWUI-REPL-EX.
    EXIT.

HTDBDIM_PAIRED_PNIMP.cpy –

PNIMP-PROCEDURE.
    EVALUATE COP
        WHEN "READFRST"
            PERFORM PNIMP-READ-FIRST THRU PNIMP-READ-FIRST-EX
        WHEN "READNEXT"
            PERFORM PNIMP-READ-NEXT THRU PNIMP-READ-NEXT-EX
        WHEN "READCOND"
            PERFORM PNIMP-READ-COND THRU PNIMP-READ-COND-EX
        WHEN "READPATH"
            PERFORM PNIMP-READ-COND THRU PNIMP-READ-COND-EX
        WHEN "ISRT "
            PERFORM PNIMP-ISRT THRU PNIMP-ISRT-EX
        WHEN "DLET "
            PERFORM PNIMP-DLET THRU PNIMP-DLET-EX
        WHEN "REPL "
            PERFORM PNIMP-REPL THRU PNIMP-REPL-EX
    END-EVALUATE.
PNIMP-PROCEDURE-EX.


    EXIT.
PNIMP-READ-FIRST.
    MOVE KT-KEY-CHAR (1) (1 : 19) TO PNIPT-PNWUI
    IF FLAG-HOLD = 'Y' THEN
        EXEC SQL
            SELECT *
              INTO :PNIPT
              FROM HTDBDDA_PNIPT
             WHERE ROWNUM = 1
               AND DBD_PNWUI = :PNIPT-PNWUI
             ORDER BY PNPST_PNPSTPN, CURR_KEY
               FOR UPDATE
        END-EXEC
    ELSE
        EXEC SQL
            SELECT *
              INTO :PNIPT
              FROM HTDBDDA_PNIPT
             WHERE ROWNUM = 1
               AND DBD_PNWUI = :PNIPT-PNWUI
             ORDER BY PNPST_PNPSTPN, CURR_KEY
        END-EXEC
    END-IF.
    IF SQLCODE = 0 THEN
        MOVE PNIPT-CURR-KEY TO KT-KEY-NUM (2)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO KT-KEY-CHAR (2) (1 : 35)
        MOVE PNIPT-PNWUI TO KT-KEY-CHAR (1) (1 : 19)
        move PNIPT-PNITM-PNITMPN TO KT-KEY-CHAR (1) (100 : 19)
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 'Y' TO KT-KEY-CHAR (1) (200 : 1)
        MOVE PNIPT-PNWUI (1 : 19) TO DL1-IO-AREA (1 : 19)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO DL1-IO-AREA (20 : 35)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
        MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1)
    END-IF.
    MOVE SQLCODE TO RETCODE.
    IF SQLCODE = 0 THEN
        PERFORM READ-PNPST-PNIPT THRU READ-PNPST-PNIPT-EX
    END-IF.
PNIMP-READ-FIRST-EX.
    EXIT.
READ-PNPST-PNIPT.
    MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO PNPST-PNPSTPN (1 : 35)
    MOVE PNIPT-PNITM-PNITMPN (1 : 19) TO PNPST-PNITM-PNITMPN (1 : 19)
    IF FLAG-HOLD = 'Y' THEN


        EXEC SQL
            SELECT DBD_PNPSTPN, SSTRVAL2, SSTRQTA, SSTRCALT,
                   SSTRKIT, SSTRTC, SSTRPROV, PNITM_PNITMPN
              INTO :PNPST
              FROM HTDBDDA_PNPST
             WHERE DBD_PNPSTPN = :PNPST-PNPSTPN
               AND PNITM_PNITMPN = :PNPST-PNITM-PNITMPN
               FOR UPDATE
        END-EXEC
    ELSE
        EXEC SQL
            SELECT DBD_PNPSTPN, SSTRVAL2, SSTRQTA, SSTRCALT,
                   SSTRKIT, SSTRTC, SSTRPROV, PNITM_PNITMPN
              INTO :PNPST
              FROM HTDBDDA_PNPST
             WHERE DBD_PNPSTPN = :PNPST-PNPSTPN
               AND PNITM_PNITMPN = :PNPST-PNITM-PNITMPN
        END-EXEC
    END-IF.
    IF SQLCODE = 0 THEN
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 'Y' TO KT-KEY-CHAR (1) (200 : 1)
        MOVE PNPST-PNPSTPN TO KT-KEY-CHAR (2) (1 : 35)
        MOVE 54 TO DL1-FB-KEY-LEN
        MOVE PNPST-PNPSTPN TO DL1-FB-KEY-ARR (20 : 35)
        MOVE PNPST-PNITM-PNITMPN TO DL1-FB-KEY-ARR (1 : 19)
        MOVE PNPST (1 : 50) TO DL1-IO-AREA (55 : 50)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
        MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1)
    END-IF.
    MOVE SQLCODE TO RETCODE.
READ-PNPST-PNIPT-EX.
    EXIT.
PNIMP-READ-NEXT.
    MOVE KT-KEY-CHAR (1) (1 : 19) TO PNIPT-PNWUI
    MOVE KT-KEY-CHAR (2) (1 : 35) TO PNIPT-PNPST-PNPSTPN


    MOVE KT-KEY-NUM (2) TO PNIPT-CURR-KEY
    MOVE PNIPT-PNPST-PNPSTPN TO HT-APPO (20 : 35).
    MOVE KT-KEY-CHAR (1) (100 : 19) TO HT-APPO (1 : 19).
    IF FLAG-HOLD = 'Y' THEN
        EXEC SQL
            SELECT *
              /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */
              INTO :PNIPT
              FROM HTDBDDA_PNIPT
             WHERE DBD_PNWUI = :PNIPT-PNWUI
               and PNPST_PNPSTPN = :PNIPT-PNPST-PNPSTPN
               and CURR_KEY > :PNIPT-CURR-KEY
               AND ROWNUM = 1
             ORDER BY DBD_PNWUI, PNPST_PNPSTPN, CURR_KEY
               FOR UPDATE
        END-EXEC
        if SQLCODE = +1403 then
            EXEC SQL
                SELECT *
                  /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */
                  INTO :PNIPT
                  FROM HTDBDDA_PNIPT
                 WHERE DBD_PNWUI = :PNIPT-PNWUI
                   and PNPST_PNPSTPN > :PNIPT-PNPST-PNPSTPN
                   AND ROWNUM = 1
                 ORDER BY DBD_PNWUI, PNPST_PNPSTPN
                   FOR UPDATE
            END-EXEC
        end-if
    ELSE
        EXEC SQL
            SELECT *
              /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */
              INTO :PNIPT
              FROM HTDBDDA_PNIPT
             WHERE DBD_PNWUI = :PNIPT-PNWUI
               and PNPST_PNPSTPN = :PNIPT-PNPST-PNPSTPN
               and CURR_KEY > :PNIPT-CURR-KEY
               AND ROWNUM = 1
             ORDER BY DBD_PNWUI,


                   PNPST_PNPSTPN, CURR_KEY
        END-EXEC
        if SQLCODE = +1403 then
            EXEC SQL
                SELECT *
                  /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */
                  INTO :PNIPT
                  FROM HTDBDDA_PNIPT
                 WHERE DBD_PNWUI = :PNIPT-PNWUI
                   and PNPST_PNPSTPN > :PNIPT-PNPST-PNPSTPN
                   AND ROWNUM = 1
                 ORDER BY DBD_PNWUI, PNPST_PNPSTPN
            END-EXEC
        end-if
    END-IF.
    IF SQLCODE = 0 THEN
        MOVE PNIPT-CURR-KEY TO KT-KEY-NUM (2)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO KT-KEY-CHAR (2) (1 : 35)
        MOVE PNIPT-PNWUI TO KT-KEY-CHAR (1) (1 : 19)
        move PNIPT-PNITM-PNITMPN TO KT-KEY-CHAR (1) (100 : 19)
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 'Y' TO KT-KEY-CHAR (1) (200 : 1)
        MOVE PNIPT-PNWUI (1 : 19) TO DL1-IO-AREA (1 : 19)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO DL1-IO-AREA (20 : 35)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
        MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1)
    END-IF.
    MOVE SQLCODE TO RETCODE.
    IF SQLCODE = 0 THEN
        PERFORM READ-PNPST-PNIPT THRU READ-PNPST-PNIPT-EX
    END-IF.
PNIMP-READ-NEXT-EX.
    EXIT.
PNIMP-READ-COND.
    if KT-KEY-CHAR (1) (200 : 1) = 'N' then
        perform PNIMP-READ-FIRST thru PNIMP-READ-FIRST-EX
    else
        perform PNIMP-READ-NEXT thru PNIMP-READ-NEXT-EX
    end-if.
PNIMP-READ-COND-EX.
    EXIT.
PNIMP-ISRT.
    MOVE 1000102 TO RETCODE.
PNIMP-ISRT-EX.


    EXIT.
PNIMP-DLET.
    MOVE 1000102 TO RETCODE.
PNIMP-DLET-EX.
    EXIT.
PNIMP-REPL.
    MOVE 1000102 TO RETCODE.
PNIMP-REPL-EX.
    EXIT.

Now execute the following ksh script:

#!/usr/bin/ksh
#
for i in HTDBDDA HTDBDEX HTDBDIM
do
    cd $HOME/h2r/src/dbd
    h2r_dbd $i.dbd
    sqlplus $H2RSQLID @pared.sql
    cd $HOME/h2r/dbd
    sqlplus $H2RSQLID @$H2RHOME/sql/create_dbd_list.sql
    cd $HOME/h2r/import/diction
    cpy2xml -o $i.xml $i
    xmlconverter -db -o $i.sql $i.xml
    sqlplus $H2RSQLID @$i.sql
done
xmlconverter -p -n -h2r -o HTDBDDA.cbl HTDBDDA.xml
cob -u HTDBDDA.cbl
h2r_generate_dbd HTDBDDA
h2r_prog_dbd HTDBDEX
h2r_prog_dbd HTDBDIM

This script creates the whole H2R environment for the physical DBD HTDBDDA and for both logical DBDs HTDBDEX and HTDBDIM. The implosion logical resolution rules are contained in the manually created copies HTDBDIM_PAIRED_PNIMP.cpy and HTDBDIM_PAIRED_PNWUI.cpy. The other manually created copies are working copies and do not contain important information.

Notice that the physical DBD HTDBDDA has the following structure:

--               +-----------+
--               |   PNITM   |
--               +-----------+
--                     |
--          ---------------------
--          |                   |
-- +..............+       +-----------+
-- .    PNIVP     .       |   PNPST   |
-- +..............+       +-----------+
--                              |
--                              |
--                        +***********+
--                        *   PNIPT   *
--                        +***********+

Implosion

-- +..............+
-- .    PNITM     .
-- +..............+
--        |
--        |
-- +..............+
-- .    PNIMP     .
-- +..............+
--        |
--        |
-- +..............+
-- .    PNWUI     .
-- +..............+

There are two manually created copies that define the logical resolution in this case:

The copy HTDBDIM_PAIRED_PNWUI.cpy defines the reading mode of the virtual segment PNWUI through the virtual access field. It defines only the PNWUI-READ-FIRST routine, since only one virtual segment can be defined for a given implosion key.

The copy HTDBDIM_PAIRED_PNIMP.cpy has the paragraph READ-PNPST-PNIPT that defines the reading mode for the segment PNIMP. Logically, this segment is the union of a PNITM key and a PNPST physical segment. The program retrieves the PNIPT virtual segment after the PNWUI segment has been read.

Both copies use free space in the KT-KEY-CHAR (1) field to store critical information between calls.

KT-KEY-CHAR (1) (100 : 19) is used by HTDBDIM_PAIRED_PNWUI.cpy to pass the virtual key of the PNWUI segment, while KT-KEY-CHAR (1) (200 : 1) is used by HTDBDIM_PAIRED_PNIMP.cpy to decide whether it is the first or a following retrieval call.

Explosion

-- +..............+
-- .    PNITM     .
-- +..............+
--        |
--        |
-- +..............+
-- .    PNPST     .
-- +..............+
--        |
--        |
-- +..............+
-- .    PNEXP     .
-- +..............+

The explosion logical resolution is automatically created. The procedure READ-PNITM-PNIPT is called after PNEXP-READ to obtain the data pointed to by the PNEXP segment.


FAQ

How to deal with a new PSB?

If you have added a new PSB <psb_name> or changed an old one, you should execute the following actions to process it:

Put the PSB source into the $HOME/h2r/src/psb directory, renaming it "<psb_name>.psb".

Execute the following command:

h2r_psb <psb_name>

See also Internal h2r tables loading, DBD and PSB analyzing paragraph.

If the PSB <psb_name> is a new one, you should also add it to the psblist file under the $HOME/etc directory, if not already present.

Now shut down your CICS application, if active, and execute the following script:

reload_psb.sh

This script removes the shared memory containing the old PSB configuration, so the next CICS session will execute a cold start, recreating the shared memory for the new PSB configuration.
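Putting the steps together, a sketch for a hypothetical PSB MYPSB (adapt the source location and the CICS shutdown procedure to your installation):

# 1. install the source
cp MYPSB.psb $HOME/h2r/src/psb/
# 2. analyze it and load the internal tables
h2r_psb MYPSB
# 3. make sure MYPSB is listed in the psblist file under $HOME/etc,
#    shut down CICS, then refresh the PSB shared memory
reload_psb.sh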


How to deal with a new physical DBD?

If you have added a new DBD <dbd_name> or changed an old one, you should execute the following actions to process it:

Put the DBD source into the $HOME/h2r/src/dbd directory, renaming it "<dbd_name>.dbd".

Execute the following commands:

h2r_dbd <dbd_name>
h2r_create_dbd_list

See also Internal h2r tables loading, DBD and PSB analyzing paragraph.

Create or appropriately modify the corresponding COPY <dbd_name>.cpy in the $HOME/h2r/import/diction directory and process it as usual (see Internal h2r tables loading, Copies analyzing paragraph).

Execute the following command:

h2r_generate_dbd <dbd_name>

If the DBD <dbd_name> is a new one, the relative data should be unloaded, converted and loaded, taking into consideration the multirecord management if present (see Conversion rules definition). Create the related PSB and process it as usual (see the How to deal with a new PSB? paragraph above).

If you have only changed an existing DBD, the need to reload the data depends on the changes; in some cases it is enough to adjust the configuration RDBS tables without reloading the data (for example, when an empty field is added to the end of the table corresponding to one of the segments).
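Putting the steps together, a sketch for a hypothetical new DBD MYDBD (the COPY preparation, data unload, conversion and load steps are manual and not shown):

# 1. install the source
cp MYDBD.dbd $HOME/h2r/src/dbd/
# 2. analyze it and refresh the DBD list
h2r_dbd MYDBD
h2r_create_dbd_list
# 3. after preparing $HOME/h2r/import/diction/MYDBD.cpy and loading COPY_FIELDS,
#    generate the tables, load programs and IO programs
h2r_generate_dbd MYDBD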


How to deal with a new search field, a new segment, a new secondary index?

If you have added a new segment, a new search field or a new secondary index for one of your DBDs, the whole DBD should be upgraded (see the How to deal with a new physical DBD? paragraph above).