
Prepared by
Stephen A. Brobst
sbrobst@alum.mit.edu
(617) 422-0800

Copyright © 2000, 2001. Stephen A. Brobst. Do not duplicate or distribute without written permission.

High Performance Data Warehouse Design and Construction

ETL Processing


ETL Processing

[Diagram: Operational Data -> Data Transformation -> Enterprise Warehouse and Integrated Data Marts -> Replication -> Dependent Data Marts or Departmental Warehouses; serving IT Users and Business Users.]


Data Acquisition from OLTP Systems

Why is it hard?
- Multiple source system technologies.
- Inconsistent data representations.
- Multiple sources for the same data element.
- Complexity of required transformations.
- Scarcity and cost of legacy cycles.
- Volume of legacy data.


Data Acquisition from OLTP Systems

Many possible source system technologies:

* Flat files          * Excel      * Model 204
* VSAM                * Access     * DBF Format
* IMS                 * Oracle     * RDB
* IDMS                * Informix   * RMS
* DB2 (many flavors)  * Sybase     * Compressed
* Adabas              * Ingres     * Many others...


Data Acquisition from OLTP Systems

Inconsistent data representation: same data, different domain values...

Examples (a normalization sketch follows below):

Date value representations:
- 1996-02-14
- 02/14/1996
- 14-FEB-1996
- 960214
- 14485 (a serial day count)

Gender value representations:
- M/F
- M/F/PM/PF
- 0/1
- 1/2
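To make the cleanup concrete, here is a minimal normalization sketch in Python. The format list, the serial-date epoch (1960-01-01), and the per-source gender maps are illustrative assumptions, not from the deck.

```python
from datetime import date, datetime, timedelta

# Candidate source date formats, tried in order (illustrative list).
DATE_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d-%b-%Y", "%y%m%d"]

def normalize_date(value, serial_epoch=date(1960, 1, 1)):
    """Map any known source representation onto a single target form."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            pass
    # Fall back to a serial day count (e.g. 14485); the epoch is
    # source-specific, and 1960-01-01 here is only an assumption.
    return serial_epoch + timedelta(days=int(value))

# Per-source gender code maps drive conversion to one target domain.
GENDER_MAPS = {
    "source_a": {"M": "M", "F": "F"},
    "source_b": {"0": "M", "1": "F"},
    "source_c": {"1": "M", "2": "F"},
}

def normalize_gender(source, value):
    return GENDER_MAPS[source][value]
```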


Data Acquisition from OLTP Systems

Multiple sources for the same data element:

- Need to establish precedence between source systems on a per-data-element basis.
- Take the data element from the source system with the highest precedence where the element exists.
- Must sometimes establish "group precedence" rules to maintain data integrity.

(A precedence-resolution sketch follows below.)
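As a sketch of what precedence resolution can look like; the system names, element names, and record shapes here are hypothetical:

```python
# Highest-precedence source listed first, per data element.
PRECEDENCE = {
    "address": ["crm", "billing", "legacy"],
    "balance": ["billing", "legacy", "crm"],
}

def resolve(element, sources):
    """Take the element from the highest-precedence source where it exists."""
    for system in PRECEDENCE[element]:
        value = sources.get(system, {}).get(element)
        if value is not None:
            return value
    return None

def resolve_group(group, sources):
    """Group precedence: all elements must come from one source so that
    related fields (e.g. the parts of one address) stay consistent."""
    for system in PRECEDENCE[group[0]]:
        record = sources.get(system, {})
        if all(record.get(e) is not None for e in group):
            return {e: record[e] for e in group}
    return {}
```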


Data Acquisition from OLTP Systems

Complexity of required transformations (a sketch follows below):

- Simple scalar transformations.
  – 0/1 => M/F
- One-to-many element transformations.
  – 6x30 address field => street1, street2, city, state, zip
- Many-to-many element transformations.
  – Householding and individualization of customer records
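A sketch of the first two classes; the field layout and names are assumptions, and householding is far too involved to show in a few lines:

```python
def gender_from_flag(code):
    """Simple scalar transformation: 0/1 => M/F."""
    return {"0": "M", "1": "F"}[code]

def split_address(raw, width=30, lines=6):
    """One-to-many: a fixed 6x30 address block => discrete fields.
    Real address parsing needs much more care; this only shows the shape."""
    parts = [raw[i * width:(i + 1) * width].strip() for i in range(lines)]
    return {"street1": parts[0], "street2": parts[1],
            "city": parts[2], "state": parts[3], "zip": parts[4]}
```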


Data Acquisition from OLTP Systems

Scarcity and cost of legacy cycles:

- Generally want to off-load transformation cycles to an open systems environment.
- Often requires new skill sets.
- Need an efficient and easy way to deal with mainframe data formats such as EBCDic and packed decimal (see the sketch below).
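For example, Python can decode both formats off-host; a minimal sketch, assuming EBCDIC code page 037 and standard COMP-3 packed decimal:

```python
def ebcdic_to_str(raw):
    """EBCDIC bytes to text; Python ships mainframe code pages."""
    return raw.decode("cp037")

def unpack_comp3(raw, scale=0):
    """IBM packed decimal (COMP-3): two digits per byte, with the sign
    in the low nibble of the final byte (0xD means negative)."""
    digits = []
    for b in raw[:-1]:
        digits += [b >> 4, b & 0x0F]
    digits.append(raw[-1] >> 4)
    value = int("".join(map(str, digits)))
    if raw[-1] & 0x0F == 0x0D:
        value = -value
    return value / (10 ** scale)

assert unpack_comp3(b"\x12\x34\x5C", scale=2) == 123.45
```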


Data Acquisition from OLTP Systems

Volume of legacy data:

- Need lots of processing and I/O capacity to handle large data volumes effectively.
- The 2GB file limit in older versions of UNIX is not acceptable for handling legacy data - need a full 64-bit file system.
- Need efficient interconnect bandwidth to transfer large amounts of data from legacy sources.


Data Acquisition from OLTP Systems

What does the solution look like?
- Meta data driven transformation architecture.
- Modular software solutions with component building blocks.
- Parallel software and hardware architectures.


Data Acquisition from OLTP Systems

Meta data driven transformation architecture:

- Need multiple meta data structures:
  – Source meta data
  – Target meta data
  – Transformation meta data
- Must avoid "hard coding" for maintainability.
- Automatic generation of transformations from meta data structures (a sketch follows below).
- Meta data repository ideally accessible by APIs and end user tools.
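A minimal sketch of the idea: the mapping below is metadata, and the row-transform function is generated from it, so adding a field is a metadata change rather than a code change. All field and rule names here are hypothetical.

```python
# Reusable transformation rules, referenced by name from metadata.
RULES = {
    "upper": str.upper,
    "gender_0_1": {"0": "M", "1": "F"}.get,
    "identity": lambda v: v,
}

# Transformation metadata: (source field, target field, rule name).
MAPPING_META = [
    ("CUST_NM", "customer_name", "upper"),
    ("SEX_CD",  "gender",        "gender_0_1"),
    ("CITY",    "city",          "identity"),
]

def build_transform(meta):
    """Generate a row-transform function directly from the metadata."""
    def transform(row):
        return {target: RULES[rule](row[source])
                for source, target, rule in meta}
    return transform

etl_row = build_transform(MAPPING_META)
print(etl_row({"CUST_NM": "smith", "SEX_CD": "1", "CITY": "Boston"}))
```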


Data Acquisition from OLTP Systems

Modular software structures with component building blocks:

- Want a data flow driven transformation architecture that supports multiple processing steps.
- Meta data structures should map inputs and outputs between each transformation module.
- Leverage pre-packaged tools for transformation steps wherever possible.

(A pipeline sketch follows below.)
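A sketch of the data-flow idea, with each module a generator that consumes and produces a stream of rows; in a real tool the wiring between steps would itself come from the metadata. The record layout is illustrative.

```python
def extract(lines):
    """Parse delimited source records into row dictionaries."""
    for line in lines:
        yield dict(zip(("id", "sex_cd", "amount"), line.split("|")))

def clean(rows):
    """One modular transformation step; more steps compose the same way."""
    for row in rows:
        row["sex_cd"] = {"0": "M", "1": "F"}.get(row["sex_cd"], "U")
        yield row

def pipeline(source, *steps):
    """Chain component steps into one data-flow pipeline."""
    stream = source
    for step in steps:
        stream = step(stream)
    return stream

for row in pipeline(["1|0|9.95", "2|1|3.50"], extract, clean):
    print(row)
```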


Data Acquisition from OLTP Systems

Parallel software and hardware architectures:

- Use data parallelism (partitioning) to allow concurrent execution of multiple job streams.
- Software architecture must allow efficient re-partitioning of data between steps in the transformation process.
- Want powerful parallel hardware architectures with many processors and I/O channels.

(A partitioning sketch follows below.)
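A sketch of the partitioning idea using Python's multiprocessing; the hash partitioner, pool size, and row shape are illustrative:

```python
from multiprocessing import Pool

def partition(rows, key, n):
    """Hash-partition rows on a key into n independent streams."""
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[key]) % n].append(row)
    return parts

def transform_partition(rows):
    """Each partition is transformed concurrently with the others."""
    return [{**r, "amount": float(r["amount"])} for r in rows]

if __name__ == "__main__":
    rows = [{"id": str(i), "amount": str(i * 1.5)} for i in range(8)]
    with Pool(4) as pool:
        results = pool.map(transform_partition, partition(rows, "id", 4))
    # Re-partitioning between steps repeats partition() on a new key.
```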


A Word of Warning

The data quality in the source systems will be much worse than what you expect.

Must allocate explicit time and resources to facilitate data clean-up.

Data quality is a continuous improvement process - must institute a TQM program to be successful.

Use “house of quality” technique to prioritize and focus data quality efforts.


ETL Processing

It is important to look at the big picture.

Data acquisition time may include:
- Extracts from source systems.
- Data movement.
- Transformations.
- Data loading.
- Index maintenance.
- Statistics collection.
- Summary data maintenance.
- Data mart construction.
- Backups.


Loading Strategies

Once we have transformed data, there are three primary loading strategies:

1. Full data refresh with “block slamming” into empty tables.

2. Incremental data refresh with “block slamming” into existing (populated) tables.

3. Trickle feed with continuous data acquisition using row level insert and update operations.


Loading Strategies

We must also worry about rolling off “old” data as its economic value drops below the cost for storing and maintaining it.

[Diagram: new data rolls into the warehouse as old data rolls off.]


Loading Strategies

Choice in loading strategy depends on tradeoffs in data freshness and performance, as well as data volatility characteristics.

What is the goal?
- Increased data freshness.
- Increased data loading performance.

[Tradeoff spectrum: Real-Time Availability (vs. Delayed Availability) <-> Minimal Load Time; Low Update Rates <-> High Update Rates.]


Loading Strategies

Should consider:

- Data storage requirements.
- Impact on query workloads.
- Ratio of existing to new data.
- Insert versus update workloads.


Loading Strategies

Tradeoffs in data loading with a high percentage of data changes per data block:

[Chart: Rows/CPU/Sec (0 to 40,000) versus Rows/DB affected (0 to 600) for Table Copy (Full Refresh), Shadow Table + Insert-Select, Incremental Insert, Incremental Update, and Trickle Feed.]


Loading Strategies

Tradeoffs in data loading with a low percentage of data changes per data block:

[Chart: Rows/CPU/Sec (0 to 500) versus Rows/DB affected (0 to 2) for Table Copy, Shadow Table + Insert-Select, Incremental Insert, Incremental Update, and Trickle Feed.]


Full Refresh Strategy

Completely re-load table on each refresh.

Step 1: Load table using block slamming.

Step 2: Build indexes.

Step 3: Collect statistics.

This is a good (simple) strategy for small tables, or when a high percentage of rows in the table changes on each refresh (greater than 10%) - e.g., reference lookup tables, or account tables where balances change on each refresh. (A sketch follows below.)
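A sketch of the three-step cycle through a generic DB-API connection. The SQL dialect, table and index names are illustrative; a real deployment would use the RDBMS's bulk loader for step 1, with the row loop below only standing in for block slamming.

```python
def full_refresh(conn, load_file):
    cur = conn.cursor()
    cur.execute("DELETE FROM ref_lookup")        # start from an empty table
    with open(load_file) as f:                   # step 1: load (a bulk-load
        for line in f:                           # utility in practice)
            code, descr = line.rstrip("\n").split("|")
            cur.execute("INSERT INTO ref_lookup VALUES (?, ?)", (code, descr))
    cur.execute("CREATE INDEX ix_ref_descr ON ref_lookup (descr)")  # step 2
    cur.execute("ANALYZE ref_lookup")            # step 3: collect statistics
    conn.commit()
```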


Full Refresh Strategy

Performance hints:

Remove referential integrity (RI) constraints from table definitions for loading operations.

– Assume that data cleansing takes place in transformations.

Remove secondary index specifications from table definition.

– Build indices after table has been loaded.

Make sure target table logging is disabled during loads.


Full Refresh Strategy

Consider using “shadow” tables to allow refresh to take place without impacting query workloads.

1. Load shadow table.

2. Replace-view operation to direct queries to the refreshed table and make the new data visible.

Trades storage for availability.
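A sketch of the pattern; table and view names are hypothetical, and the exact view-replacement statement varies by RDBMS. Queries always read through the view, so the switch makes the refreshed data visible without blocking the query workload.

```python
def refresh_via_shadow(conn, bulk_load, active, shadow, view="account_v"):
    cur = conn.cursor()
    cur.execute(f"DELETE FROM {shadow}")          # step 1: empty, then
    bulk_load(cur, shadow)                        # block-slam the shadow table
    cur.execute(f"DROP VIEW IF EXISTS {view}")    # step 2: repoint the view
    cur.execute(f"CREATE VIEW {view} AS SELECT * FROM {shadow}")
    conn.commit()
    return shadow, active     # the tables swap roles for the next refresh
```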


Incremental Refresh Strategy

Incrementally load new data into existing target table that has already been populated from previous loads.

Two primary strategies:

1. Incremental load directly into target table.

2. Use shadow table load followed by insert-select operation into target table.


Incremental Refresh Strategy

Design considerations for incremental load directly into the target table using RDBMS utilities:

- Indices should be maintained automatically.
- Re-collect statistics if table demographics have changed significantly.
- Typically requires a table lock to be taken during the block slamming operation. Do you want to allow "dirty" reads?
- Logging behavior differs across RDBMS products.


Incremental Refresh Strategy

Design considerations for shadow table implementation:

- Use block slamming into an empty "shadow" table having an identical structure to the target table.
- Staging space is required for the shadow table.
- An insert-select operation from the shadow table to the target table will preserve indices.
- Locking will normally escalate to a table level lock.
- Beware of log file size constraints.
- Beware of performance overhead for logging.
- Beware of rollbacks if the operation fails for any reason.

(A sketch of the pattern follows below.)
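A sketch of the staging pattern (table names are hypothetical): the bulk load hits an empty, index-free shadow table, and a single set-oriented INSERT...SELECT maintains the target's indices.

```python
def incremental_via_shadow(conn, bulk_load):
    cur = conn.cursor()
    cur.execute("DELETE FROM txn_stage")   # empty shadow table, same structure
    bulk_load(cur, "txn_stage")            # block-slam the new rows
    # One set-oriented statement; watch log sizing and possible rollback.
    cur.execute("INSERT INTO txn_target SELECT * FROM txn_stage")
    conn.commit()
```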


Incremental Refresh Strategy

Both incremental load strategies described preserve index structures during the loading operation. However, there is a cost to maintaining indexes during the loads...

Rule-of-thumb: Each secondary index maintained during the load costs 2-3 times the resources of the actual row insertion of data into the table.

Rule-of-thumb: Consider dropping and re-building index structures if the number of rows being incrementally loaded is more than 10% of the size of the target table.

Note: Drop and re-build of secondary indices may not be acceptable due to availability requirements of the DW.


Trickle Feed

Acquire data on a continuous basis into RDBMS using row level SQL insert and update operations.

Data is made available to DW “immediately” rather than waiting for batch loading to complete.

Much higher overhead for data acquisition on a per record basis as compared to batch strategies.

Row level locking mechanisms allow queries to proceed during data acquisition.

Typically relies on Enterprise Application Integration (EAI) for data delivery.


Trickle Feed

A tradeoff exists between data freshness and insert efficiency:

Buffering rows for insertion allows for fewer round trips to RDBMS...

… but waiting to accumulate rows into the buffer impacts data freshness.

Suggested approach: Use a threshold that buffers up to M rows, but never waits more than N seconds before sending a buffer of data for insertion.
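A minimal sketch of that policy; the thresholds and the send target are illustrative, and a production version would also flush from a timer, since this one checks the clock only when a new row arrives.

```python
import time

class TrickleBuffer:
    """Buffer up to max_rows, but never hold a row longer than max_wait_s."""
    def __init__(self, send, max_rows=500, max_wait_s=2.0):
        self.send, self.max_rows, self.max_wait_s = send, max_rows, max_wait_s
        self.rows, self.oldest = [], None

    def add(self, row):
        if not self.rows:
            self.oldest = time.monotonic()
        self.rows.append(row)
        if (len(self.rows) >= self.max_rows or
                time.monotonic() - self.oldest >= self.max_wait_s):
            self.flush()

    def flush(self):
        if self.rows:
            self.send(self.rows)        # e.g. one multi-row INSERT round trip
            self.rows, self.oldest = [], None

buf = TrickleBuffer(send=lambda rows: print(f"insert {len(rows)} rows"))
for i in range(1200):
    buf.add({"id": i})
buf.flush()                             # drain the remainder at shutdown
```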


ELT versus ETL

There are two fundamental approaches to data acquisition:

ETL is extract, transform, load, in which transformation takes place on a transformation server using either an "engine" or generated code.

ELT is extract, load, transform in which data transformations take place in the relational database on the data warehouse server.

Of course, hybrids are also possible...


ETL Processing

ETL processing performs the transform operations prior to loading data into the RDBMS.

1. Extract data from the source systems.

2. Transform data into a form consistent with the target tables.

3. Load the data into the target tables (or to shadow tables).


ETL Processing

ETL processing is typically performed using resources on the source systems platform(s) or a dedicated transformation server.

[Diagram: Source Systems (pre-transformations) -> Transformation Server -> Data Warehouse.]


ETL Processing

Perform the transformations on the source system platform if available resources exist and there is significant data reduction that can be achieved during the transformations.

Perform the transformations on a dedicated transformation server if the source systems are highly distributed, lack capacity, or have high cost per unit of computing.


ETL Processing

Two approaches for ETL processing:

1. Engine: ETL processing using an interpretive engine for applying transformation rules based on meta data specifications.

- e.g., Ascential, Informatica

2. Code Generation: ETL processing using code generated from the meta data specifications.

- e.g., Ab Initio, ETI


ELT Processing

First, load “raw” data into empty tables using RDBMS block slamming utilities.

Next, use SQL to transform the “raw” data into a form appropriate to the target tables.

– Ideally, the SQL is generated using a meta data driven tool rather than hand coding.

Finally, use insert-select into the target table for incremental loads, or view switching if a full refresh strategy is used. (A sketch of the flow follows below.)
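A sketch of the whole ELT flow; the tables and SQL are hypothetical, and ideally the SQL would be generated from metadata rather than hand coded:

```python
ELT_STEPS = [
    # Transform "raw" rows into conformed form, entirely inside the RDBMS.
    """CREATE TABLE txn_conformed AS
       SELECT id,
              CASE sex_cd WHEN '0' THEN 'M' WHEN '1' THEN 'F' END AS gender,
              CAST(amount AS DECIMAL(12,2)) AS amount
       FROM txn_raw""",
    # Set-oriented insert-select into the target table.
    "INSERT INTO txn_target SELECT * FROM txn_conformed",
]

def run_elt(conn):
    """Assumes txn_raw was already bulk-loaded by the RDBMS utility."""
    cur = conn.cursor()
    for stmt in ELT_STEPS:
        cur.execute(stmt)
    conn.commit()
```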


ELT Processing

The DW server is the transformation server for ELT processing.

[Diagram: Source Systems -> flat files delivered over channel or network -> Teradata FastLoad -> Data Warehouse.]


ELT Processing

ELT processing obviates the need for a separate transformation server.
– Assumes that spare capacity exists on the DW server to support transformation operations.

ELT leverages the built-in scalability and manageability of the parallel RDBMS and HW platform.

Must allocate sufficient staging area space to support the load of raw data and execution of the transformation SQL.

Works well only for batch oriented transforms because SQL is optimized for set processing.


Bottom Line

• ETL is a significant task in any DW deployment.

• Many options for data loading strategies: need to evaluate tradeoffs in performance, data freshness, and compatibility with source systems environment.

• Many options for ETL/ELT deployment: need to evaluate tradeoffs in where and how transformations should be applied.