
Nicholas Collins
Clinical Analytics and Informatics
24 September 2013

A Warehouse and Reporting Architecture for Healthcare using Oracle Technologies [CON3926]

1. About MD Anderson (and me!)

2. The Future of Cancer Treatment and Research

3. Oracle at MD Anderson

4. Our Warehouse and Reporting Architecture

5. Implementation Conclusions

Topics

2

About MD Anderson

1

3

Non-profit cancer hospital and research institution, founded in 1941 as part of The University of Texas System

Named after Monroe Dunaway Anderson (a banker and cotton trader, not an MD)

“Making Cancer History” – our mission is to eradicate cancer

Consistently ranked as the #1 hospital for cancer care

About MD Anderson

4

About MD Anderson

5

About MD Anderson

6

Over 19,000 employees, majority in the Houston area

Occupying over 20 buildings in the Texas Medical Center

The Texas Medical Center has over 50 member institutions, together over 100,000 employees

Began working at MD Anderson while an undergraduate at Rice University (across the street)

Currently in the Clinical Analytics and Informatics (CAI) Department, but before that was in HR Information Management (HRIM), working with PeopleSoft and other HR apps

While in HRIM, built a custom HR data warehouse and reporting system using a combination of Microsoft and Oracle technologies

After hours, a professional stage actor/director in Houston

About Me

7

The Future of Cancer Treatment and Research

2

8

“The Time is Now. Together we will end cancer.”

Target six cancers: Breast/Ovarian, Leukemia (AML/MDS & CLL), Melanoma, Lung, Prostate

Clear focus on the concept that the answer to curing cancer lies in both clinical and genomic data

MD Anderson Moon Shots Program

9

MD Anderson Moon Shots Program

10

http://www.cancermoonshots.org

It’s in the Data!

11

MD Anderson Moon Shots Platforms

12

Massive Data Analytics – An infrastructure for complex analytics and clinical decision support using integrated patient information, including clinical and research data

Big Data – An Information Technology infrastructure/environment that enables centralization, integration and secured access of patient and research data and analytical results

It’s in the Genes!

13

MD Anderson Moon Shots Platforms

14

Clinical Genomics – Clinical gene sequencing infrastructure, including centralized bio-specimen repository and processing

Omics – Bioinformatics – A high-throughput infrastructure for generation and standardization of large-scale “omic” data, including genomics, proteomics and immune profiling

Adaptive Learning in Genomic Medicine – A framework for bringing clinical medicine and genomic research together to enable rapid learning to improve patient management using Clinical Genomics, Omics-Bioinformatics and Massive Data Analytics platforms within the Big Data environment

Genomics in the News

15

Oracle at MD Anderson

3

16

Oracle Healthcare Data Warehouse Foundation (HDWF)

Oracle Healthcare Analytics Data Integration (OHADI)

Oracle TRC (Translational Research Center) Cohort Explorer

Oracle TRC Omics Data Bank (ODB)

Oracle Health Sciences Products at MD Anderson

17

Oracle Database 11gR2

Oracle Exadata (x3)

Oracle Business Intelligence (OBIEE)

Oracle GoldenGate*

Oracle Technology at MD Anderson

18

*Oracle GoldenGate was used to demonstrate replication capabilities in a significant POC, but has not been purchased or put into production. Informatica is commonly used at MD Anderson for data integration; ODI is not currently in use at the institution.

Oracle Healthcare Data Warehouse Foundation (HDWF)

19

(Diagram: HDI → HDM)

Oracle Healthcare Analytics Data Integration (OHADI)

20

(Diagram: HDI → OHADI → HDM)

Integration code that maps from the interface tables (HDI) to the warehouse tables (HDM)

Available as either Informatica or ODI mappings

Oracle Cohort Explorer

21

(Diagram: CDM → Cohort Explorer)

Oracle Cohort Explorer

22

Oracle Omics Data Bank (ODB)

23

24

Review of Oracle Health Sciences Products

25

(Diagram: HDI → OHADI → HDM → CDM/ODB → Cohort Explorer)

Our Warehouse and Reporting Architecture

4

26

FIRE - Federated Institutional Reporting Environment

A program-level initiative, with many projects and products involved, to provide a unified BI/Reporting solution for all of MD Anderson

Managed by the Clinical Analytics and Informatics (CAI) Department, part of the Oracle SDP (Strategic Development Partner) Program

The MD Anderson FIRE Program

27

FIRE Program Team Structure (Early Proposal)

28

FIRE Program Team Structure (Current)

29

Custom-built OBIEE Dashboard for orders data, pulling data from HDM, with HDI populated from the GE Centricity source

Dimensional model with orders as the core fact

Included a smaller pre-release of HR data for staff details and testing the FIRE Architecture

Purchased HDWF and OHADI in May 2012, aiming for a fall go-live for our first FIRE release

First FIRE Release – Pharmacy Dashboard

30

Pharmacy Dashboard (OBIEE)

31

Oracle Health Sciences GBU provides products that are a part of an overall solution; the rest is organization-specific

We needed a way to effectively get data into HDWF and deliver it to custom Data Marts

Having a pre-built warehouse model helps with speed of delivery, but not necessarily with source-system mapping and integration

Unifying the Solution

32

Bring all the data processing together on a single Oracle instance for performance benefits of local movement and transformation

Abstract across all commonalities and patterns to the largest extent possible, avoiding needless one-off solutions, use code generation and automation

Ideal candidate for a later “forklift” to Exadata

Architectural Concept

33

The FIRE Architecture

34

Currently no centralized EHR solution in place; instead, a best-of-breed model with many disparate source systems

Data currently brought together for patient-care clinical use in a single UI by a SOA-based custom .NET app called ClinicStation

In July 2013, the institution announced its intention to migrate to Epic’s EHR solution

Source Systems at MD Anderson

35

The FIRE Architecture

36

(Diagram: SR → SI → HDI/HDM → UI → UD)

Stores replicated data from source systems in the consolidated warehouse environment; provides a buffer from sources and their technology; also allows custom indexing and partitioning if needed

Replication can be accomplished in a variety of ways, using a “bag of dirty tricks” to get the data into relational form in Oracle

GoldenGate proposed as the standard tool for replicating in from relational sources; transparent gateways and Informatica/ODI are other options

Replication done at table level for consistency and ease of change
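As one illustration of those tricks (all names here are hypothetical – this is a sketch, not our production setup), a scheduled materialized view over a database link can serve as simple table-level replication:

  -- One low-tech SR replication option: a complete-refresh materialized
  -- view over a database link, refreshed hourly.
  CREATE MATERIALIZED VIEW sr_centricity.orders
    BUILD IMMEDIATE
    REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1/24
  AS SELECT * FROM orders@centricity_link;

  -- Because SR holds a local copy, custom indexing is possible:
  CREATE INDEX sr_centricity.orders_patient_ix
    ON sr_centricity.orders (patient_id);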

SR (Staging Replica) Layer

37

Pull data directly from the corresponding SR schema (e.g. the SI_CENTRICITY schema pulls data from the SR_CENTRICITY schema)

Contains views that match the target HDI tables, one-to-one, same column names and data types

Accomplishes selection of the appropriate data, and any necessary pre-transformation

In complex cases, can materialize pre-processing data in tables or materialized views; in practice this meets the 80/20 rule
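A minimal sketch of one such view, assuming a hypothetical SR_CENTRICITY table and simplified column names (the real HDI targets are Oracle-defined and far wider):

  -- SI view: one-to-one with its target HDI table – same column names
  -- and data types – handling selection and light pre-transformation.
  CREATE OR REPLACE VIEW si_centricity.encounter_sv AS
  SELECT o.order_id                    AS encounter_id,
         TRIM(o.patient_mrn)           AS patient_id,
         CAST(o.order_dt AS TIMESTAMP) AS encounter_start_dt,
         'CENTRICITY'                  AS source_system_cd
  FROM   sr_centricity.orders o
  WHERE  o.order_status <> 'CANCELLED';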

SI (Staging Interface) Layer

38

Oracle-defined HDWF interface tables

A “landing zone” schema, can be used as a Persistent Staging Area (PSA)

Conceptually, where you place unrefined and unvalidated data to be processed by OHADI before it goes into the HDM (warehouse tables)

Tables are designed to be insert-only (source-change dated)
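The actual HDI tables are Oracle-defined; the sketch below (hypothetical columns) only illustrates the insert-only, source-change-dated pattern:

  CREATE TABLE hdi_example.encounter (
    encounter_id       VARCHAR2(64) NOT NULL,
    patient_id         VARCHAR2(64),
    encounter_start_dt TIMESTAMP,
    source_system_cd   VARCHAR2(30),
    src_changed_on_dt  TIMESTAMP    NOT NULL  -- source-change date
  );

  -- Loads only append; no row is ever updated in place.
  INSERT INTO hdi_example.encounter
    (encounter_id, patient_id, encounter_start_dt,
     source_system_cd, src_changed_on_dt)
  SELECT sv.encounter_id, sv.patient_id, sv.encounter_start_dt,
         sv.source_system_cd,
         SYSTIMESTAMP  -- in practice, the change date from the source
  FROM   si_centricity.encounter_sv sv;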

HDI Layer

39

Oracle-defined HDWF warehouse tables

HDM stands for Healthcare Data Model

Keys and Referential Integrity (RI) in place for this layer, but RI is disabled by default

Conceptually, where all your data is persistently stored, though reloads from HDI are possible

Can be configured for effective-dating or only current state
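For illustration only (these are not actual HDWF object names), RI can be toggled per constraint:

  -- Enable without validating existing rows, or disable for bulk loads.
  ALTER TABLE hdm_example.encounter
    ENABLE NOVALIDATE CONSTRAINT encounter_patient_fk;

  ALTER TABLE hdm_example.encounter
    DISABLE CONSTRAINT encounter_patient_fk;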

HDM Layer

40

Similar in concept to the SI Layer, but pulling data from HDM for use in the UD layer

Contains views that match the target UD tables, one-to-one, same column names and data types

Accomplishes selection of the appropriate data, and any necessary pre-transformation (like SI)

In complex cases, can materialize pre-processing data in tables or materialized views; in practice this meets the 80/20 rule (like SI)

UI (User Interface) Layer

41

Pull data directly from the corresponding UI schema (e.g. the UD_RX schema pulls data from the views in the UI_RX schema)

Contains the user-layer target tables that are used in the dimensional (star schema) models

Can be used directly by OBIEE or schemas can be replicated to a separate user database for isolated/off-loaded dashboard processing

Data-wise, it’s the end result of the warehouse pipeline
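A hedged sketch of the dimensional shape, reusing the UD_RX schema name from above with hypothetical tables:

  -- Star schema: an orders fact surrounded by conformed dimensions.
  CREATE TABLE ud_rx.order_fact (
    order_key      NUMBER NOT NULL,
    patient_key    NUMBER NOT NULL,  -- joins patient_dim
    drug_key       NUMBER NOT NULL,  -- joins drug_dim
    order_date_key NUMBER NOT NULL,  -- joins date_dim
    ordered_qty    NUMBER
  );

OBIEE (or a replicated copy for off-loaded dashboards) queries this model directly.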

UD (User Data Mart) Layer

42

Data Movement

43

There was a desire from our integration team to use Informatica for ETL because of the experience base on the team; there was not much PL/SQL or ODI knowledge

The architecture proposed abstracted code generation via the Informatica APIs, used jointly with the push-down optimization option for all non-OHADI internal data movement (i.e. SI to HDI, UI to UD)

Data Movement (Planned)

44

Our integration team initially indicated that code generation with Informatica (or other tools) could not be done on account of complexity, and that the push-down optimization option was too expensive

To demonstrate the feasibility, I programmed a PL/SQL-based version of the code generation as proposed in the FIRE Architecture documentation; we used this code in the first release

Data Movement (Actual)

45

Data Movement Code Generation

46

Procedure iv_tv_ip_gen(name_of_sv_view) for SI layer, generates objects for change detection and movement from SR to HDI

Procedure iv_uv_dv_ip_gen(name_of_sv_view) for UI layer, generates objects for change detection and movement from HDM to UD

All that is needed for generation is the SV view, which conforms to the HDI-based structure; data in certain standard HDI columns determines the action

A benefit of the generated views is the ability to see what will happen during the next run, without actually running anything
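A minimal sketch of the idea (not the actual FIRE procedures): given an SV view whose columns match its target one-to-one, dynamic SQL can emit a change-detection view that can be queried to preview the next run:

  CREATE OR REPLACE PROCEDURE iv_gen_example (name_of_sv_view IN VARCHAR2) IS
    -- Derive the target table name by naming convention (assumption).
    l_target VARCHAR2(128) := REPLACE(UPPER(name_of_sv_view), '_SV', '');
  BEGIN
    -- Generated view: source rows not yet present in the target.
    -- Selecting from it shows what the next run would move.
    EXECUTE IMMEDIATE
      'CREATE OR REPLACE VIEW ' || l_target || '_IV AS ' ||
      'SELECT sv.* FROM ' || name_of_sv_view || ' sv ' ||
      'MINUS ' ||
      'SELECT t.* FROM ' || l_target || ' t';
  END iv_gen_example;
  /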

PL/SQL Procedures for Code Generation

47

Had approximately three months to implement; the process was difficult, but in the end everything worked and we went to production with the first FIRE release in November 2012

OHADI was slower than expected but got the job done (Informatica version used) – might it be faster with ODI?

Integration team wanted a second chance to get code generation going for Informatica, and wanted more Informatica and less SQL and PL/SQL

Committee voted to try Informatica alternatives for the next release

Results/Next Steps

48

In this release, there were four total projects under the FIRE Program:

(1) Second Pharmacy Release, (2) Cohort Explorer and ODB (Moon Shot Analytics), (3) Exadata Implementation for Cohort Explorer and ODB, and (4) OBIEE Infrastructure Upgrade

Beginning to use Omics Data Bank; data volumes required Exadata

License restrictions and no budget yet to put HDWF on Exadata

Second FIRE Release – Moon Shot Analytics and Pharmacy Dashboard

49

Second Release

50

(Diagram: CDM and ODB → Cohort Explorer)

Architectural Changes

51

Informatica code generation using Java to generate Informatica objects, so the PL/SQL code generation with SI and UI views was not used for this release

Integration team wanted an instantiated SI Layer and UI Layer for Informatica-based code generation instead of views in the SI and UI Layer

Architectural Changes

52

Informatica Code Generation with Java

53

Successfully implemented all projects within six months, Cohort Explorer and ODB now live, along with the new Pharmacy Dashboard

Informatica code generation worked successfully, but performance issues surfaced, particularly with the new SI and UI instantiated data

The old Pharmacy code (views) was easy to change/update and the integration team did not want to convert old code to the new Informatica-based methodology; hence we have a hybrid model in place currently

Some of the Informatica code had to be abandoned close to go-live, replaced by quickly created SQL in views (materialized views)

Results

54

Had to tinker a little bit with the Exadata install of ODB and CDM; ended up dropping all indexes and implementing some materialized views for improved performance, now being incorporated into the tool via our SDP partnership

We noticed that Oracle was now using a view-based methodology for the CDM ETL: the Informatica mappings use views off of HDM, which differs from OHADI and is more similar to our original architecture

Indexing, partitioning, and SQL tuning were especially necessary in working with our conventional HDWF environment, Exadata can help in the future

Aggregation was important to getting OBIEE to perform well with ROLAP
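For example (hypothetical objects, reusing the UD sketch from earlier), a summary materialized view with query rewrite lets OBIEE's generated SQL hit pre-aggregated data transparently:

  CREATE MATERIALIZED VIEW ud_rx.order_fact_by_drug_mv
    ENABLE QUERY REWRITE
  AS
  SELECT drug_key,
         COUNT(*)         AS order_cnt,
         SUM(ordered_qty) AS total_qty
  FROM   ud_rx.order_fact
  GROUP  BY drug_key;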

More Results

55

56

(Diagram: CDM and ODB)

Exadata Implementation for Cohort Explorer and Omics Data Bank (Moon Shot Analytics)

(Diagram: Exadata architecture – database clients and an administrator connect to the database servers, which reach the storage servers over an InfiniBand switch)

Exadata Architecture

57

Full Rack – 8 DB Servers, 14 Storage Servers

Half Rack – 4 DB Servers, 7 Storage Servers

Quarter Rack – 2 DB Servers, 3 Storage Servers

Eighth Rack – 2 DB Servers*, 3 Storage Servers**

Rack Configurations (x3-2)

* The eighth rack’s DB servers each have one processor disabled via software.
** The eighth rack’s storage servers have half the drives and half the flash cards.

58

Exadata x3-2 Capabilities

59

Smart Scans (Query Offloading)

Storage Indexes

Hybrid Columnar Compression

Exadata Smart Flash Cache

Exadata Database Features

These are software-based “Exadata-only” features.
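As a small illustration (table names hypothetical), Hybrid Columnar Compression is declared at the table or partition level and takes effect on Exadata storage:

  -- Warehouse-oriented HCC level; ARCHIVE levels trade more CPU for
  -- higher compression ratios.
  CREATE TABLE odb_stage.variant_result_hcc
    COMPRESS FOR QUERY HIGH
  AS SELECT * FROM odb_stage.variant_result;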

60

(Diagram: Standard Database I/O vs. Exadata I/O – in the standard path, a client sends a select to the database server, which issues I/O requests against conventional storage; in the Exadata path, a client sends a select to the Exadata DB server, which issues Smart Scan requests to the Exadata storage servers)

61

CDM/ODB Implementation – Exadata Equipment Purchased

Development/Test Environment

Production Environment

62

Oracle Exadata Readiness Checklist

Prepare Network and Power Connections

Shipping needs to align with Sun technician arrival

SFP Modules – be sure to order if needed

Training from Enkitec

Exadata Pre-installation Activities

63

Photo shown courtesy of Mr. Robert Jeffries, Project Manager

64

MD Anderson CAI “War Room” Exadata Implementation Team

January 2013

65

Scripts to verify functionality

The Eighth Rack scripts expected a Quarter Rack configuration

Most documentation was still for x2 equipment; most likely updated by now

Exadata Validation Scripts

66

Crucial to plan storage allocation in advance

Recommended DATA/RECO disk group splits: OLTP – 60%/40%, DW – 80%/20%; we did a 70%/30% split

DB instances share these disk groups per rack

Implemented DBFS (Oracle Database File System) for use with the ODB Loaders; omics data files are large, particularly genomic reference data
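A hedged sketch of one DBFS setup path (names and sizing illustrative; dbfs_create_filesystem.sql is the Oracle-supplied script):

  -- Tablespace to back the file system (assumes OMF/ASM, as on Exadata).
  CREATE BIGFILE TABLESPACE dbfs_ts DATAFILE SIZE 500G;

  -- Build the file system in that tablespace (run as the DBFS owner):
  --   @?/rdbms/admin/dbfs_create_filesystem.sql dbfs_ts odb_files
  -- Mounted on the DB servers with dbfs_client, it lets the ODB loaders
  -- read large omics files as ordinary files.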

Exadata Storage Allocation

67

Performed after storage allocation

We installed two separate instances on the production rack, four separate instances on the dev/test rack

All instances are clustered via RAC; ODB and CDM are in the same database instance

Instance creation went fairly quickly for us, but that would not have been the case for a consolidation project

Exadata Instance Creation

68

Backup – IBM Tivoli (non-Oracle)

Anti-virus Software (non-Oracle)

Oracle Enterprise Manager 12c

Oracle Platinum Gateway

Additional Components

69

Oracle Enterprise Manager 12c

70

71

Single points of contact are ideal (Oracle, Enkitec, Internal Departments)

Early engagement project planning important

“War Room” concept very effective

Some documentation for new products hard to find (e.g. x3, OEM 12c)

Enkitec training was fantastic, but probably should have happened earlier than the delivery; we were on an accelerated implementation schedule

Exadata Implementation Lessons Learned

72

Assess GoldenGate’s viability toward MD Anderson Use Cases requiring Heterogeneous Data Replication, Flexible Data Deployment and Continuous Availability

Determine ease-of-configuration, deployment, manageability and reliability of GoldenGate as implemented within the use cases

“Should GoldenGate handle these use cases convincingly, the POC will be considered as successful”

GoldenGate POC at MD Anderson – July 2012

73

Use Case #1 - Heterogeneous Data Replication from multiple Source Databases, namely, PICIS (CareSuite), OR Manager (Security, Surgery, Interface DBs), and Sybase (RADDATA) to an Oracle 11g R2 Target Database (staging area for HCM Data Warehouse)

Use Case #2 - Heterogeneous Data Replication from multiple Source Databases, namely PICIS (CareSuite), OR Manager (Security, Surgery, Interface DBs), and Sybase (RADDATA) to a SQL Server 2008 R2 Target Database (Operations Reporting)

Use Case #3 - Flexible Data Deployment topology to merge activity data from multiple Source Tables (Sybase) to a Single SQL Server Target Table (Operations Reporting Database)

Use Case #4 - Architecture capable of handling schema differences across platforms (Audit Reporting Columns added at Oracle Target Database)

GoldenGate POC Use Cases

74

75

GoldenGate POC Architecture

GoldenGate POC Results

76

Implementation Conclusions

5

77

The HDWF, OHADI, Cohort Explorer, and Omics Data Bank products helped us deliver a lot of functionality very quickly, in only one year – our president was very impressed with Cohort Explorer when coupled with ODB, calling it a “game changer”

Having a warehouse model in place helps you avoid a lot of the headaches and time that could be lost to developing your own intermediary models and related enforcement of standards

OHADI gives you a good starting point for the HDI to HDM ETL so you can effectively generate your RI relationships and manage exceptions

Taking some time to think about our overall architecture and try out some new techniques paid off in the long run; it is still important to deliver functionality, of course

Implementation Conclusions

78

The HDM tables cover a large swath of clinical concepts, but sometimes things might not fit perfectly, working with Oracle as an SDP has its advantages, as does understanding how to customize the model

OHADI is supposed to be the step in the process where data is cleansed and validated, but a lot of customization would be required to do so thoroughly; Oracle will be working to improve this portion of the product – until then, you can cleanse pre-HDI if OOTB functionality is not there yet

The Informatica version of OHADI had some performance issues on our environments, but I would predict the ODI (Oracle Data Integrator) version would run faster

Having a good vocabulary/terminology approach in place will help tremendously with your implementation of code systems in the model, Oracle is beginning to integrate OHADI with HLI

Implementation Conclusions

79

The Cohort Explorer application aims to deliver the exact functionality that will be needed in cancer research – merging the clinical and genomic data – but the application still needs some more maturity and better use of HDWF structures; we do not want the tail to wag the dog

Omics Data Bank helps to centralize a variety of genomics data using set standards from the academic and scientific community, this concept seems to work well in practice

The complexity of the queries on ODB and CDM was sometimes challenging from a performance perspective – it is clear that Exadata helps tremendously in this area and we would not have been nearly as successful without it

Be sure you have a knowledgeable SME resource for these Oracle technologies to assist you through implementation, they would be difficult to implement on your own

Implementation Conclusions

80

Get HDWF on Exadata; two new eighth racks already ordered

Upgrade Cohort Explorer and ODB to newer versions, when released

Focus on architectural refinements/changes for the FIRE Architecture

Need to get better code system infrastructure in place; it will take work, but worth it in the end – it is unclear how use of Epic/HLI will affect us

Beginning an NLP pipeline for unstructured data, currently prototyping using the IBM ICA platform; possibly investigate use of the Big Data Appliance and/or Hadoop? Semantic technologies?

Prepare for Epic as a source; hopefully acquire GoldenGate eventually too

MD Anderson Future Steps

81

Abstraction over commonalities is key in a world driven increasingly by “Big Data”

High-end performance is crucial for BI data delivery, every bit counts

Always mind the “Mythical Man-Month”: the more knowledge and capability a single individual has, the better – streamline your teams for effective agility and delivery, understand the business, beware of “design by committee”

You will never get it right the first time, expect changes and have flexibility to adapt quickly

Virtualization, in-memory database objects, and real-time data are gaining momentum in the realm of BI and warehousing, learn to embrace them

Collins Axioms – Parting Maxims

82

Questions?

83

www.mdanderson.org
ncollins@mdanderson.org
