Layered Scalable Architecture @ SAP 2.0

Document Owner: Martin Gembalzyk
Owner's department: IT Predictive Insight & Analytics
Version: 1.1
Last Update: 09.05.2014
Governed by (Accountable): IT Predictive Insight & Analytics




Contents

1 LAYERED SCALABLE ARCHITECTURE (LSA) 2.0
1.1 Quick Reference
1.2 Introduction
1.3 Adjustments and Enhancements of LSA 1.0
1.4 Concept
1.5 Layers at SAP
1.6 Information and Data Content Areas
1.6.1 Data Content Area
1.6.1.1 Minimum required EDL data flow for transactional data in Data Content Area (Single System Support)
1.6.1.2 Twin Propagator Data Flow
1.6.1.3 Minimum required EDL data flow for master data in Data Content Area
1.6.1.4 Data Content Area Layer Descriptions
1.6.2 Information Content Area
1.6.2.1 Business Transformation and Integration Layer
1.6.2.2 Data Mart Layer
1.6.2.3 Operational Data Store
1.6.2.4 Exchange Zone
1.6.2.5 Virtual Reporting Layer
1.7 Transformations and Lookups
1.7.1 Transformations and Lookups in the Transactional Data Flow
1.7.1.1 Allowed Lookups
1.7.1.2 Remarks
1.7.2 Transformations and Lookups in the Master Data Flow
1.8 Allowed Source and Target for Transformations
1.9 Data flows between BWP, IWP and OPP
1.10 Real Time and Near Real Time Reporting
1.11 Master Data Handling
1.12 Planning

2 SCHEDULING AND PROCESS CHAINS


1 LAYERED SCALABLE ARCHITECTURE (LSA) 2.0

1.1 Quick Reference

Quick Reference LSA

Propagation Scenarios

1.2 Introduction

As LSA 2.0 is derived from the corresponding SAP BW Reference Architecture, we highly recommend getting familiar with it. Very good presentations can be found here: LSA on SAP BW HANA 2011 + LSA on SAP BW HANA 2012.

LSA 2.0 is also closely related to the existing Solution Architecture Reference Guide, Developer Guidelines, Authorizations and Content Strategy, which can all be accessed via the BI Guidelines page in the corporate portal. It is mandatory to get familiar with all of these guidelines in parallel.

The guidelines on Naming Conventions, ABAP Programming Guidelines, Data Replication and Information Lifecycle Management are especially relevant.

1.3 Adjustments and Enhancements of LSA 1.0

- Streamlined terminology (Application Data Layer, Enterprise DataWarehouse Layer, etc.)
- Mandatory Corporate Memory (CM) Layer for all data flows (replaces the decision tree of LSA 1.0)
- New Cross Business Transformation Layer (xBTL) in the Enterprise DataWarehouse Layer (EDL)
- Adjusted naming conventions (e.g. Quality and Harmonization Layer)
- Twin Propagator approach in the EDL Propagation Layer
- Multiple System Support (adaptation and preparation of the NewBI platform strategy and roadmap)
- Detailed matrix of the allowed data flows, lookup capabilities and DataStore Object (DSO) types

1.4 Concept

LSA describes the design of service-level oriented, scalable, best-practice SAP BW architectures founded on accepted Enterprise DataWarehouse principles, as introduced in Bill Inmon's Corporate Information Factory (CIF) and adopted by SAP BW (see the SAP BW Reference Architecture in the Introduction above; both pictures below are taken from these presentations).

This is a seven-layered architecture. Every layer of this pattern has been designed to serve a particular purpose.


Below is an example of an architecture used for a large BW implementation at enterprise level. This architecture helps boost the overall performance of the system and makes the implementation flexible enough to accommodate future enhancements.

1.5 Layers at SAP

The LSA is built during the NewBI transformation projects to ensure the right governance and implementation technology.


The NewBI systems/platform currently consists of BWP, HCA and HCP. BWP (SAP BW) is clustered into the Enterprise DataWarehouse Layer (EDL) and the Application Data Layer (ADL) with common Master Data Objects.

The extraction of data can be done with minimal effect on the Information Content Area (Application Data Layer and Solution Layer). The purpose of having an LSA is to create:

- A logical single point of entry, single point of the fact and single point of distribution
- Easier access to data
- Information assets that business users and IT can easily govern, making IT a strategic asset that drives strategy and execution
- Everybody querying information based on the same set of data
- Getting data out of the source systems
- Storing all data for later use and reuse
- Cleaning, harmonizing and integrating data to be used for creating information

The LSA at SAP is separated into different layers:

1.6 Information and Data Content Areas

The layers are separated into Information and Data Content Areas (ICA and DCA).

Data and Information Content Areas are a logical grouping; they encompass ownership and development authorization and are the main building blocks of the BW environment at SAP. Data Content Areas are linked to the DataSources and focus on data: the extraction, cleaning and harmonization of data. Information Content Areas are linked to the top side of the BW system and focus on the transformation of data into information.


The Data Content Area is physically implemented by the Enterprise DataWarehouse Layer (EDL), the Information Content Area by the Application Data Layer (ADL) and the Solution Layer.

Layers of the Information Content Area:

- Virtualization Layer (M)
- Data Mart Layer (P)
- Business Transformation and Integration Layer (O)
- Operational Data Store (H)
- Exchange Zone (X)

Layers of the Data Content Area:

- Cross Business Transformation Layer (C)
- Propagation Layer (D)
- Quality and Harmonization Layer (Q)
- Corporate Memory (T)
- Acquisition Layer (A)

Attention: The authorization Data and Information Owner Profiles are related to the Information Content Area. The Data Content Area has a specific profile. No reporting and no SID enablement for this layer.

The Data and Information Content Areas provide the functional part of the scalability by splitting the data and information into groups of data flows that can be processed in parallel.

1.6.1 Data Content Area

The Data Content Area is centered by data:


- Propagators are organized in Data Content Areas
- Getting data out of the source system
- Storing all data for later use and reuse
- Cleaning, harmonizing and integrating the data to be used for creating information
- Special InfoObjects (/EDL/ namespace) without master data flags (attributes, text and hierarchy) are used (InfoFields)
- One DCA is linked to one Propagator (one or more DSOs) and identifies one or more DataSources that are used to populate one Propagator (more if it cannot be avoided)
- If the EDL should also provide corporate business logic, a xBTL (Cross Business Transformation Layer) object can optionally be provided after the Propagation Layer, thus supplying Information Content Areas with data that has already been validated and carries corporate business logic
- The DCA uses the naming convention prefix "/EDL/"
- Only InfoObjects in the /EDL/ namespace are allowed in DataStore Objects (no exception!) - references in InfoObjects to non-/EDL/ InfoObjects are forbidden
- All DataStore Objects (DSOs) in the DCA should be created as Write-Optimized or Standard DSOs (if a Standard DSO is required, it must not be HANA optimized). See also the specific layer descriptions below for detailed guidance. As the DataStore Objects are not intended to be used for reporting, the flag "SID Generation" in the DSO settings has to be set to "Never Create SIDs"
- Partitioning: if more than 100 million records are expected in a DataStore Object, partitioning is mandatory. This can be implemented as a Semantically Partitioned Object (SPO)
- Technical fields: Acquisition Layer, Corporate Memory and Propagation Layer contain all fields of the DataSource plus technical fields that have to be enriched (like source system, request ID and load timestamp; see the Acquisition Layer description below for a detailed explanation)
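As a rough illustration of the rules above, the following sketch checks a DCA DataStore Object definition against the /EDL/ namespace rule, the "Never Create SIDs" rule and the 100-million-record partitioning threshold. The metadata structure and function name are hypothetical; the real checks are design-time modeling decisions in SAP BW, not code.

```python
# Illustrative-only validator for the DCA DSO rules listed above
# (the dict layout is a made-up stand-in for BW metadata).

def validate_dca_dso(dso):
    errors = []
    for iobj in dso["infoobjects"]:
        if not iobj.startswith("/EDL/"):
            errors.append(f"{iobj}: only /EDL/ InfoObjects are allowed in the DCA")
    if dso["sid_generation"] != "never":
        errors.append('SID Generation must be set to "Never Create SIDs"')
    if dso["expected_records"] > 100_000_000 and not dso["partitioned"]:
        errors.append("partitioning (e.g. SPO) is mandatory above 100 million records")
    return errors

dso = {
    "infoobjects": ["/EDL/CS01DATS", "0AMOUNT"],  # 0AMOUNT violates the rule
    "sid_generation": "never",
    "expected_records": 250_000_000,
    "partitioned": False,
}
issues = validate_dca_dso(dso)
assert len(issues) == 2  # non-/EDL/ InfoObject and missing partitioning
```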

The next table describes in detail which types of DataStore Objects are allowed in each layer:

Layer | Layer Implementation | DataStoreObject Implementation | Default DataStoreObject Type | Exceptional DataStoreObject Type
Acquisition Layer | Mandatory | Optional | Standard | No exception
Corporate Memory | Mandatory (Transactional Data: always / Master Data: see details in the Layer Description) | Mandatory | Write-Optimized (no semantic key!) | No exception
Quality and Harmonization Layer | Optional | Mandatory | Write-Optimized or Standard | No exception
Propagation Layer | Mandatory | Mandatory | Write-Optimized (no semantic key!) | No exception
xBTL | Optional | Mandatory | Standard | No exception


1.6.1.1 Minimum required EDL data flow for transactional data in Data Content Area (Single System Support)

Single System Support means that Data and Information Content Areas reside in the same system.

Here the minimum required data flow is described *without* usage of the Cross Business Transformation Layer:

Here the minimum required data flow is described *with* usage of the Cross Business Transformation Layer:


Here we can see that certain rules have to be applied:

- Persistency objects (DSOs) have to be shielded by Inbound and Outbound InfoSources
- Acquisition, Corporate Memory and Propagation Layer are mandatory (Corporate Memory with mandatory persistency)
- Quality/Harmonization and xBTL Layer are optional (if implemented, with mandatory persistency)
- Propagation Layer implementation as a Twin Propagator (Delta Propagator as Write-Optimized DSO with the latest requests, Full Propagator as Write-Optimized DSO filled with data only when requested by an application)

1.6.1.2 Twin Propagator Data Flow

The Delta Propagator is usually used to deliver the periodic deltas to the connected application(s). The Full Propagator is used for the initialization of a new application, or the re-initialization of an existing application, on explicit request from the Corporate Memory.
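The division of labor between the two propagators can be sketched as follows. This is an illustrative model only, not SAP code: the class and method names are invented, and in BW the "propagators" are Write-Optimized DSOs fed by DTPs.

```python
# Illustrative sketch of the Twin Propagator flow: the Delta Propagator
# holds only requests not yet delivered to all connected applications;
# the Full Propagator is filled from the Corporate Memory only on an
# explicit (re-)initialization request.

class TwinPropagator:
    def __init__(self):
        self.corporate_memory = []  # every request ever loaded ("life insurance")
        self.delta = []             # Delta Propagator: undelivered requests only
        self.full = []              # Full Propagator: empty unless a reload is requested

    def load_delta(self, request):
        """Periodic delta: store in Corporate Memory and stage in the Delta Propagator."""
        self.corporate_memory.append(request)
        self.delta.append(request)

    def consume_delta(self):
        """All connected applications have fetched the staged requests."""
        delivered, self.delta = self.delta, []
        return delivered

    def reload_full(self):
        """(Re-)initialization: fill the Full Propagator from the Corporate Memory."""
        self.full = list(self.corporate_memory)
        return self.full

p = TwinPropagator()
p.load_delta({"request": "REQ1"})
p.load_delta({"request": "REQ2"})
assert len(p.consume_delta()) == 2 and p.delta == []
p.load_delta({"request": "REQ3"})
# a new application asks for history: the Full Propagator gets all three requests
assert [r["request"] for r in p.reload_full()] == ["REQ1", "REQ2", "REQ3"]
```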


The next picture describes the data flow for both cases (periodic delta and historical data reload on request):

Minimum required EDL data flow in Data Content Area (Multiple System Support)

Multiple System Support means that Data and Information Content Areas are deployed in separate SAP BW instances. The Export DataSource and the shielding InfoSource in the target system can also be considered part of the Data Content Area (EDL).

Here the minimum required data flow is described *without* usage of the Cross Business Transformation Layer:

Here the minimum required data flow is described *with* usage of the Cross Business Transformation Layer:


Here we can see that it basically follows the same rules as in the Single System Support scenario.

1.6.1.3 Minimum required EDL data flow for master data in Data Content Area

The data flow for master data differs partially from the transactional data flow. The following picture describes the minimum required data flow:

Here we can see that certain rules have to be applied:


- Acquisition and Propagation Layer are mandatory
- Quality/Harmonization Layer is optional (if implemented, with mandatory persistency)
- Corporate Memory is optional (if implemented, with mandatory persistency and InfoSource shielding)
- Propagation Layer implementation as a Master Data Propagator (InfoSource shielding can be bypassed under given circumstances, see Propagation Layer description below)

1.6.1.4 Data Content Area Layer Descriptions

In the Data Content Area, the data model focuses on getting data from the DataSource to the Propagators (or xBTL). During this flow the data should be cleaned and harmonized.

In the DCAs the following layers exist:

- Acquisition Layer (Prefix 'A'): The inbound part of the Acquisition Layer corresponds to the PSA objects from the source system. The purpose of this layer is to map fields from the DataSource to InfoObjects in an Outbound InfoSource and to add technical information. This layer is mandatory.
- Corporate Memory (Prefix 'T'): Stores all requests from all DataSources – the life insurance. This layer is mandatory.
- Quality and Harmonization Layer (Prefix 'Q'): Alignment of data to common standards and corporate rules. This layer is optional.
- Propagation Layer (Prefix 'D'): Supplies digestible and unflavored data for creating information applications in the Information Content Area. This layer is mandatory.
- Cross Business Transformation Layer (Prefix 'C'): Supplies digestible and unflavored data with central corporate business logic for creating information applications in the Information Content Area. This layer is optional.

1.6.1.4.1 Acquisition Layer

The inbound part of the Acquisition Layer corresponds to the PSA objects from the source system. The purpose of this layer is to map fields from the DataSource to InfoObjects in an Outbound InfoSource and to add technical information.

- It serves as a fast inbound layer accepting data 1:1 – for temporary storage
- All fields of the DataSource must be mapped to a corresponding (naked) InfoObject in the Acquisition Layer Outbound InfoSource
- No transformation of data in the Acquisition Layer is allowed – only the routines needed to add the technical fields and the mapping between field names and InfoObjects. If the DataSource is of questionable quality, use fields of type CHAR in the DataSource and perform the quality check in the Quality and Harmonization Layer:

o If you are expecting questionable dates in your source data, the check of the dates should be done in the Quality and Harmonization Layer. Make sure that your InfoObject is not referencing 0DATE, as this will cause a dump.

o Upper/lower case – If your mapping in the Acquisition Layer can have both upper and lower case characters, flag the InfoObjects as "Lower case" – no SIDs are generated!

- The "no transformation" rule also means that you may not flag any keys in the definition of the Outbound InfoSource. This is the "No keys in the InfoSource" rule!
- The main rule is to have only the Outbound InfoSource placed in the Acquisition Layer.


Special Case: Alternatively, the Outbound InfoSource can be replaced by a Standard DSO shielded by an Outbound InfoSource. A DSO should only be used if the extractor delivers full data loads only. If a corresponding full load delivers more than 1 million records, the usage of a Standard DSO shielded by an Outbound InfoSource is mandatory (please implement a consistent ILM in your process chains, see Information Lifecycle Management Guidelines). In this case the Standard DSO is leveraged to calculate the deltas. Other use cases, as well as the utilization of a Write-Optimized DSO, have to be aligned with Architecture. The transformation into the DSO should still only be 1:1 with the addition of technical information.
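The delta calculation a Standard DSO performs for full loads can be sketched as follows. This is a simplified illustration of the principle, not SAP's actual activation mechanism: the new full snapshot is compared against the active data, and only the differences are propagated.

```python
# Illustrative sketch: deriving a delta from consecutive full loads.
# Unchanged rows produce no delta, new/changed rows an after-image,
# and rows that vanished from the full load a deletion ('D').

def calculate_delta(active, full_load):
    """active / full_load: dicts mapping a semantic key to a record."""
    delta = []
    for key, record in full_load.items():
        if active.get(key) != record:          # new or changed row
            delta.append((key, record, ""))    # after-image
    for key in active.keys() - full_load.keys():
        delta.append((key, active[key], "D"))  # row no longer delivered
    return delta

active = {"0001": {"amount": 100}, "0002": {"amount": 50}}
full   = {"0001": {"amount": 100}, "0003": {"amount": 75}}
delta = calculate_delta(active, full)
assert ("0003", {"amount": 75}, "") in delta   # new record
assert ("0002", {"amount": 50}, "D") in delta  # deleted record
assert len(delta) == 2                         # unchanged 0001 yields no delta
```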

As an Outbound InfoSource has to be placed in the data flow after the DataSource (or DSO), the created InfoSource has to add the following technical information in the outbound transformation of the DataSource (or DSO):

Original Source (technical name of DataSource and Source System + Source System ID) -> provides unique determination of the DataSource and Source System

o /EDL/CS01DATS – Origin: DataSource
o /EDL/CS02SSYS – Origin: Source System
o /EDL/CS03SSID – Origin: Source System ID

Original DTP Request timestamp, date and time (entering the BW system) -> provides technical uniqueness and explicit identification of source data

o /EDL/CS04LDAT – Origin: DTP Request Load Date
o /EDL/CS05LTIM – Origin: DTP Request Load Time
o /EDL/CS08TMSP – Origin: DTP Request Load Timestamp (short)

Original PSA Request (entering the BW system) -> provides technical uniqueness and explicit identification of source data

o /EDL/CS06LREQ – Origin: PSA/ODS Source Request (GUID)
o /EDL/CS07LRNO – Origin: PSA/ODS Source Request (SID)

Original DTP Request, Data Package and Record number (entering the BW system) -> provides technical uniqueness and explicit identification of source data

o /EDL/CS09DPID – Origin: DTP Request Data Package Number
o /EDL/CS10RECN – Origin: DTP Request Data Package Record Number
o /EDL/CS11DTPG – Origin: DTP Request (GUID)
o /EDL/CS12DTPS – Origin: DTP Request (SID)

- Adding this information HAS TO BE DONE in the outbound transformation of the DataSource (or DSO).
- Routing this information to the Acquisition, Q&H, Propagation and Corporate Memory Layer is MANDATORY.
- Routing this information to the xBTL and ADL IS NOT NEEDED.
- Main purpose:
  - Identification of requests that need to be reloaded from the Corporate Memory into the Propagator
  - Supporting any other kind of reloading activity from the EDL
- Adding the Data Package and Record Number is crucial if the upper data flow contains an SPO in the Corporate Memory Layer. By constructing a semantic key – MANDATORY in this case – from the request, data package and record number, undesired aggregations in transformations are avoided. It is recommended to use the PSA Request (the DTP Request is also possible; the advantage of the PSA Request is end-to-end identification).
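The enrichment described above can be sketched as follows. The /EDL/CS* field names come straight from the list above; everything else (the function signature, the sample DataSource name and GUID values) is a hypothetical stand-in, since the real implementation is an ABAP routine using the Data Acquisition Layer Routine Library.

```python
# Illustrative sketch of the technical-field enrichment performed in the
# outbound transformation (not the real ABAP routine).
from datetime import datetime, timezone

def enrich(record, datasource, source_system, source_system_id,
           psa_request_guid, dtp_request_guid, data_package, record_number):
    now = datetime.now(timezone.utc)
    record.update({
        # Original Source: unique determination of DataSource and Source System
        "/EDL/CS01DATS": datasource,
        "/EDL/CS02SSYS": source_system,
        "/EDL/CS03SSID": source_system_id,
        # DTP request load date/time/timestamp entering the BW system
        "/EDL/CS04LDAT": now.strftime("%Y%m%d"),
        "/EDL/CS05LTIM": now.strftime("%H%M%S"),
        "/EDL/CS08TMSP": now.strftime("%Y%m%d%H%M%S"),
        # PSA and DTP request identification (technical uniqueness)
        "/EDL/CS06LREQ": psa_request_guid,
        "/EDL/CS11DTPG": dtp_request_guid,
        "/EDL/CS09DPID": data_package,
        "/EDL/CS10RECN": record_number,
    })
    return record

row = enrich({"AMOUNT": 100}, "2LIS_11_VAHDR", "ERPCLNT100", "S01",
             "PSA_GUID_1", "DTP_GUID_1", 1, 42)
assert row["/EDL/CS01DATS"] == "2LIS_11_VAHDR"
assert row["/EDL/CS10RECN"] == 42
```

The combination of request GUID, data package number and record number is what forms the semantic key mentioned above for SPO scenarios.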

Special Case: "DataStore in Acquisition Layer to calculate deltas" – extractors only provide you with full data loads:

o In the transformation from the DataSource to the Acquisition Layer DataStore Object, place only the "Original Source" fields and derive the values with the help of the Data Acquisition Layer Routine Library methods. The other fields you need to calculate in the outbound transformation of the Acquisition Layer DataStore Object. "Original Source" fields need a 1:1 mapping.

o "PSA Request" fields: in this case they contain the "DataStoreObject Activation Request" GUID/SID.

o Only in this case are delta requests calculated in the Acquisition Layer DataStoreObject.

Please note: no (other) logic is allowed between the Acquisition Layer Outbound InfoSource and the Corporate Memory and back.

See the Tool Box for detailed instructions on obtaining the mandatory technical information (Data Acquisition Layer Routine Library). Please read the guideline carefully; do not just stick to existing examples in the system. Additional information on the special case "DataStore in Acquisition Layer to calculate deltas" in combination with HANA In-Memory Optimized DataStoreObjects can be found here: Change Log Compression.

1.6.1.4.2 Corporate Memory

The Corporate Memory requires the DataSource(s) to be mapped to a unique DSO in the Corporate Memory Layer, and all requests must be stored in this DSO (only Write-Optimized DSOs are allowed). No transformation of data is allowed when loading data into the Corporate Memory Layer from the Acquisition Layer Outbound InfoSource. To create the data flow into the Corporate Memory Layer, as well as the flow back into the Acquisition Layer Outbound InfoSource, a Corporate Memory Inbound InfoSource as well as a Corporate Memory Outbound InfoSource has to be used (shielding of the DSO).

Also for master data flows, usage of the Corporate Memory is mandatory if source data can be deleted (e.g. FlatFile DataSources).

1.6.1.4.3 Quality and Harmonization Layer

In this layer data is checked for quality and harmonized according to corporate standards. This optional layer has to be implemented using a Standard DSO (optional: shielding by Inbound and Outbound InfoSources). All deviations (e.g. usage of a Write-Optimized DataStore Object) have to be aligned with Architecture.

The source is the Data Acquisition Layer and the Corporate Memory Layer.

Flavours of the Quality and Harmonization Layer:

- Technical harmonization
- Format, length, etc.
- Simple format checks: text field, date, etc.
- Upper case
- Master data referential integrity
- Master data integration into one single model
- Compounding, concatenation, etc.
- Best record
- Common transformations, adding non-application-specific information, etc.
- Amounts in different currencies
- Quantities in different units
- Common master data derivation
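A typical simple format check in this layer – validating questionable CHAR dates that were deliberately passed through the Acquisition Layer untouched – can be sketched like this. The function and its fallback behavior are illustrative assumptions; the real check runs in an ABAP transformation routine.

```python
# Illustrative harmonization check: a questionable CHAR date from the
# source is validated here rather than in the Acquisition Layer, so
# invalid values never reach a 0DATE-referencing InfoObject.
from datetime import datetime

def harmonize_date(raw, initial="00000000"):
    """Return a valid YYYYMMDD date, or the initial value for invalid input."""
    raw = (raw or "").strip()
    try:
        datetime.strptime(raw, "%Y%m%d")
        return raw
    except ValueError:
        return initial  # in a real flow, also flag/route to error handling

assert harmonize_date("20140509") == "20140509"
assert harmonize_date("20141332") == "00000000"  # month 13 is invalid
assert harmonize_date("") == "00000000"
```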

1.6.1.4.3.1 Special Case: Multiple time-dependent sources into time-dependent DSOs

Mainly used in HR content.

NOTE: When multiple DataSources are to be connected to the same DSO, time-dependent staging is required if one or more of the DataSources is time-dependent.

This scenario was specifically developed for HR but can also be used in other cases where you have the following situation (the implementation requires usage of Standard DSOs):

- DSO #1: time-dependent data relating to a key
- DSO #2: time-dependent data relating to a key
- DSO #3: NOT time-dependent data relating to a key

and you would like to merge these into ONE time-dependent DSO.

A solution (as this is a general approach, you might be able to find a more appropriate solution in specific cases) is as follows:

- The key in the source DSO corresponds to that of the target DSO
- The InfoObjects 0DATETO and 0DATEFROM are used as the intervals – 0DATETO is part of the key
- All data fields of the source DSO will be used in the target DSO
- Only full loads are permitted from the source DSO to the target DSO
- All rows of a key member must be in the same data package (use semantic grouping in DTPs)
- Create an InfoSource that matches the result DSO 1:1 but without any key. Map all DSOs into this InfoSource
- You cannot map from one InfoObject to another. All mappings must go 1:1. Only fields present in the source DSOs are mapped
- If you have a non-time-dependent DSO, you must map the fields for DATETO and DATEFROM to the constant values '19000101' and '99991231'

In the transformation between the InfoSource and the target DSO you must place the following code:

CALL METHOD /edl/cl_core_time_merge=>do_s_time_merge
  EXPORTING
    ir_request        = p_r_request
  CHANGING
    ct_source_package = SOURCE_PACKAGE.

This code will look up the current content for all intervals of that specific key in the target DSO, merge the content of the package data with the existing data from the target DSO, and remove intervals based on 0DATETO (RECORDMODE = 'D') that are not recreated.

When loading, you must load one source DSO to the target DSO and activate the data. Then load the next DSO and activate the data; continue until no more data needs to be updated in the target DSO.
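The interval-merge semantics behind /edl/cl_core_time_merge=>do_s_time_merge can be sketched as follows. This is a simplified, language-agnostic illustration under the assumptions stated in the list above (0DATEFROM/0DATETO intervals, '19000101'/'99991231' as open bounds); it is not the real implementation, and it handles only one key, ignoring RECORDMODE handling.

```python
# Simplified sketch of the time merge: intervals from the package are
# overlaid on the intervals already in the target DSO, splitting at every
# validity boundary so each resulting slice carries the attributes valid in it.
from datetime import date

def to_ord(s): return date(int(s[:4]), int(s[4:6]), int(s[6:])).toordinal()
def to_str(o): return date.fromordinal(o).strftime("%Y%m%d")

def time_merge(existing, incoming):
    """existing/incoming: lists of (date_from, date_to, attrs) for ONE key."""
    # half-open [from, to+1) intervals; incoming listed last so it wins on overlap
    ivs = [(to_ord(f), to_ord(t) + 1, a) for f, t, a in existing + incoming]
    cuts = sorted({b for lo, hi, _ in ivs for b in (lo, hi)})
    merged = []
    for start, nxt in zip(cuts, cuts[1:]):
        attrs = {}
        for lo, hi, a in ivs:
            if lo <= start and nxt <= hi:  # interval covers the whole slice
                attrs.update(a)
        if attrs:
            merged.append((to_str(start), to_str(nxt - 1), attrs))
    return merged

old = [("19000101", "99991231", {"dept": "D1"})]
new = [("20140101", "20141231", {"grade": "G2"})]
result = time_merge(old, new)
assert result == [
    ("19000101", "20131231", {"dept": "D1"}),
    ("20140101", "20141231", {"dept": "D1", "grade": "G2"}),
    ("20150101", "99991231", {"dept": "D1"}),
]
```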

1.6.1.4.4 Propagation Layer

Supplies digestible, unflavored data as one possible source for Information Content Areas (the other possible source is the optional xBTL). In this layer, data from different DataSources with identical information is integrated into one propagator.


- Digestible

o Ready to consume

- Unflavored

o No application-specific transformations
o Data should give the possibility to compare and verify with the source system

- Integrated

o Common semantics
o Common values
o Clean
o All sets of data should be disjoint - no intersections between Data Content Areas should exist

- Harmonized data

o Smoothing data
o Technically unified values (e.g. compounding)

- Trimmed to fit DataSources and data persistencies, reducing data complexity for applications by

o Extending data by looking up information which applications frequently ask for
o Merging different but highly related DataSources and storing the data in a single propagator, if applications always or frequently request them together
o Collecting data from the same (or similar) DataSource but from different source systems into fewer propagators or a single source-system-independent propagator

When using the Propagation Layer, consider the following:

Transactional Data Flow:

- Consists of Write-Optimized DSOs shielded by Inbound and Outbound InfoSources, which gives a unified data transfer behavior
- The Twin Propagator approach is mandatory for transactional data. The Delta Propagator contains only requests which have not yet been updated to all connected applications; the Full Propagator is filled only upon request when an application needs a reload from the Corporate Memory.
- Data must be stored at the level of granularity given by the DataSource(s). No information originally delivered by the DataSource may be lost on its way to the Propagators (this implies the necessity of Write-Optimized DSOs without any semantic key fields).
- Data is integrated; Company Code "SAP AG" in propagator #1 and Company Code "SAP AG" in propagator #2 is in both cases identified as '0001'

Master Data Flow:

- Consists of Standard DSOs, which gives a unified data transfer behavior
- For master data entities that are to be considered a relevant business object (e.g. material master, employee master, profit center) or for high-volume master data (> 1 million records), shielding by Inbound and Outbound InfoSources is mandatory
- For master data entities that are to be considered text, organizational or control master data (e.g. company code, industry code) with a data volume between 1,000 and 1 million records, shielding by Inbound and Outbound InfoSources is optional
- For master data entities with fewer than 1,000 records, shielding by Inbound and Outbound InfoSources is not reasonable
- Data must be stored at the level of granularity given by the DataSource(s)
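The shielding rules above amount to a small decision table, restated here as an illustrative helper (the function name and parameters are invented; the thresholds and categories come straight from the guideline):

```python
# Decision helper for master data InfoSource shielding in the Propagation Layer.

def infosource_shielding(is_business_object, record_count):
    """Return whether Inbound/Outbound InfoSource shielding is required."""
    if is_business_object or record_count > 1_000_000:
        return "mandatory"      # relevant business object or high volume
    if record_count >= 1_000:
        return "optional"       # text/organizational/control master data
    return "not reasonable"     # very small entities

assert infosource_shielding(True, 500) == "mandatory"         # e.g. material master
assert infosource_shielding(False, 2_000_000) == "mandatory"  # high volume
assert infosource_shielding(False, 50_000) == "optional"      # e.g. company code
assert infosource_shielding(False, 200) == "not reasonable"
```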


1.6.1.4.5 Cross Business Transformation Layer

This layer can be applied if central business logic should be provided to areas of the Information Content. This is an optional layer that has to be realized by a Standard DSO shielded by Inbound and Outbound InfoSources. In the data flow it follows the Propagation Layer and provides the interface to the Business Transformation/Integration Layer of the consuming Information Content Areas. With LSA 1.0 it was not allowed to apply business logic in the EDL; with the xBTL it is possible. It is part of the content strategy to mainly deploy corporate standards, as this is a prerequisite for consistent reporting on corporate KPIs across all solutions.

The implementation of an xBTL for purely technical reasons is not allowed.

1.6.2 Information Content Area

The following layers exist in the Information Content Area:

- Business Transformation and Integration Layer (Prefix 'O'): Data from multiple Data Content Areas is combined to create information. No reporting is done directly here.
- Data Mart Layer (Prefix 'P'): Reporting-specific objects making the information from the Business Transformation/Integration Layer available for reporting. No reporting is done directly here.
- Operational Data Store (Prefix 'H'): Operational data store, also including Real Time and Near Real Time objects. No reporting is done directly here.
- Exchange Zone (Prefix 'X'): Reserved for data export to external systems (not to be used for data export to the operational and corporate SAP BW systems that are already part of the SAP BW platform).
- Virtual Reporting Layer (Prefix 'M'): Objects from the Data Mart Layer and Operational Data Store are combined and made available for reporting. It is allowed to combine information from multiple Information Content Areas of the ADL (Application Data Layer) in the Information Content Areas of the Virtual Reporting Layer (Solution Layer).

Transformation of data into information is done in an Information Content Area. You may access all Propagators (xBTL) of all Data Content Areas. No data flow between Information Content Areas is allowed.

When creating an Information Content Area you should reuse any existing propagators (or xBTL):

- If there is a need for more fields in the Propagator, the Data Content Area has to be altered
- If the Data Mart DataSource from the leading system is already used and no staging is done, you must do the staging, and changes must be made so that the Information Content Area can consume the Data Content Area
- If the data is not already extracted, you should create a new Data Content Area for this
- Always use the "Cube Qualifier" if the ADL object gets included into a MultiProvider/CompositeProvider


Partitioning:

Implementation as a Semantically Partitioned Object (SPO), or an own partitioning strategy, is mandatory for a data volume of more than 100 million records (for ILM purposes, query performance, etc.).

1.6.2.1 Business Transformation and Integration Layer

This is where data is turned into information. Data from multiple Data Content Areas is combined and transformed into information following the guidelines given by the business. Whether or not to use the Business Integration Layer (optional) is decided by the reporting needs and the complexity of the transformation of data from the Propagation Layer or xBTL. DSOs of the Business Integration Layer have to be created as Standard DSOs with shielding Inbound and Outbound InfoSources. As the DataStore Objects are not intended to be used for reporting, the flag "SID Generation" in the DSO settings has to be set to "Never Create SIDs". Between the Propagation Layer (or xBTL) and the Business Integration Layer a Business Transformation Layer is mandatory; it is virtual (an InfoSource). The Business Transformation Layer is always mandatory and only virtual (InfoSource), even if no Business Integration Layer is used. So the data supplier of the Business Integration Layer is always the Propagation Layer or xBTL (never the Corporate Memory Layer).

1.6.2.2 Data Mart Layer

The Data Mart (along with the objects in the Virtual Reporting Layer) is built with an eye on the reporting needs. All reporting requirements must be met in the modeling of the Data Marts. In modeling the Data Mart you must consider:
* KPIs to be reported
* Granularity of the information

Performance also has to be considered, as well as the pros and cons of using InfoCubes or DSOs (see this article here for in-memory HANA-optimized InfoCubes and DataStore Objects). As only Standard DataStore Objects are allowed and reporting usage is possible, consider setting the flag "Create SIDs" either to "During Reporting" or "During Activation" (when using the setting "During Activation", please consider the corresponding guidelines on DataStore Object Batch Settings). InfoSets are technically possible but not recommended; CompositeProvider technology should be used instead.

The Data Mart Layer can be connected in the data flow either to an InfoSource of the Business Transformation Layer or to a shielding Outbound InfoSource of the persistent Business Integration Layer.

InfoCubes are not allowed as a source in the data flow.

No query creation is allowed on this layer. Persistency objects in the Data Mart Layer can also be shielded by InfoSources.

Objects from this layer have to be included into a MultiProvider or CompositeProvider for reporting.

1.6.2.3 Operational Data Store

Contains Real Time, Near Real Time, and operational data. For operational data, Local Providers in a SAP BW Workspace and Standard/Direct Update DSOs are possible. For Standard DSOs, consider setting the flag "Create SIDs" either to "During Reporting" or "During Activation"; for Direct Update DSOs, use "During Reporting". For Real Time and Near Real Time, VirtualProviders and HybridProviders are possible. No query creation is allowed on this layer; InfoSource shielding is possible. Objects from this layer have to be included into a MultiProvider or CompositeProvider for reporting.
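The SID-generation rules for the persistent objects in these layers can be summarized in a small lookup table. The following is an illustrative sketch only: the layer keys and the `check_sid_setting` helper are hypothetical names chosen for this summary, not SAP BW APIs.

```python
# Hypothetical summary of the "SID Generation" rules stated in this
# guideline. Keys and helper are illustrative names, not SAP BW objects.
ALLOWED_SID_SETTINGS = {
    "business_integration":  {"Never Create SIDs"},
    "data_mart":             {"During Reporting", "During Activation"},
    "ods_standard_dso":      {"During Reporting", "During Activation"},
    "ods_direct_update_dso": {"During Reporting"},
}

def check_sid_setting(layer: str, setting: str) -> bool:
    """True if the given SID Generation setting is allowed for the layer."""
    return setting in ALLOWED_SID_SETTINGS.get(layer, set())
```

For example, `check_sid_setting("business_integration", "During Reporting")` returns `False`, matching the rule that Business Integration DSOs must never create SIDs.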

1.6.2.4 Exchange Zone

Contains data for export to external systems (systems that are not part of the NewBI system landscape). This layer may only include Open Hub Destinations; InfoSource shielding is not relevant here. No reporting is allowed on this layer, and no inclusion into a MultiProvider or CompositeProvider is allowed. Data supply is only allowed from the Business Transformation, Business Integration, or Data Mart Layer.

1.6.2.5 Virtual Reporting Layer

Reporting (query creation) is only allowed on MultiProviders or CompositeProviders. MultiProviders and CompositeProviders are the only objects that are allowed in the Virtual Reporting Layer. The Virtual Reporting Layer is also considered the Solution Layer: a MultiProvider or CompositeProvider with its dependent reporting elements can be considered a solution.

1.7 Transformations and Lookups

General rules for lookups:
o No lookup allowed to objects of the Acquisition and Quality & Harmonization Layer
o No lookup allowed between BIG Areas of the ADL
o No lookup allowed to Master Data (InfoObjects) of the ADL
o Lookups to the xBTL should be preferred over the Business Integration/Transformation Layer

Lookup Implementation Guideline:
o Lookup operations have to be wrapped into ABAP OO methods
o It is mandatory to document any lookup operation in the Technical Specification, COM Document, and all other mandatory documentation
o Within the method, own logic can be implemented or methods from the Toolbox can be re-used
o No "cross" loads between content areas in the EDL and ADL are allowed. Data loads can only happen "bottom-up" in the same content area of the EDL and ADL (the exception is, of course, a transformation from the Propagation Layer/xBTL in the EDL to the "O" Layer in the ADL).
o Do error handling (see the related guideline)
o Document and explain transformations that are not 1:1 or simple lookups
o No code snippets above 20 lines of code; use ABAP OO methods
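The general rules above can be expressed as a simple decision function. This is a conceptual sketch with hypothetical names (`lookup_allowed` and the layer strings), not part of the BW Toolbox; the actual implementation must still be an ABAP OO method as required above.

```python
# Conceptual encoding of the general lookup rules (names illustrative).
FORBIDDEN_TARGET_LAYERS = {"acquisition", "quality_harmonization"}

def lookup_allowed(target_layer: str,
                   target_in_adl: bool,
                   crosses_big_areas: bool = False,
                   target_is_infoobject: bool = False) -> bool:
    """Check a planned lookup target against the general rules."""
    if target_layer in FORBIDDEN_TARGET_LAYERS:
        return False  # no lookups into the Acquisition or Q&H Layer
    if target_in_adl and crosses_big_areas:
        return False  # no lookups between BIG Areas of the ADL
    if target_in_adl and target_is_infoobject:
        return False  # no lookups to Master Data (InfoObjects) of the ADL
    return True       # allowed; prefer the xBTL as a source where possible
```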

1.7.1 Transformations and Lookups in the Transactional Data Flow

1.7.1.1 Allowed Lookups

1. Lookup on Master Data (1)

Source: Master Data Propagator (no Twin Propagator)

within Transformations into the Business Integration/Transformation Layer
within Transformations into the xBTL
within Transformations into the Propagation Layer


2. Lookup on Transactional Data (1)

Source: Full Propagator (of the Twin Propagator)

within Transformations into the Business Integration/Transformation Layer
within Transformations into the xBTL
within Transformations into the Propagation Layer

Source: xBTL (2)

within Transformations into the Business Integration/Transformation Layer

1.7.1.2 Remarks

(1) = Lookups across EDL areas are allowed

(2) = If not implemented, the Propagation Layer can be used instead

The following picture displays an example data flow showing the possible derivations:

1.7.2 Transformations and Lookups in the Master Data Flow

In general, no lookups are allowed in the master data flow (materialized as an access to an external DSO in the transformation). Consideration of other master data has to be implemented within the scope of master data integration and harmonization within the Quality & Harmonization Layer. Exceptions have to be aligned with Architecture (reason: avoidance of loading dependencies that cannot be modelled within the data flow of the Sub Process Chain).


1.8 Allowed Source and Target for Transformations

The following matrix shows the allowed data flows in the LSA from a Transformation perspective. This is also reflected by the related authorization profiles for developing content (profile BI:EDL:TEAM plus Data Owner and Information Owner profiles). Data flows that already exist and were created following the LSA 1.0 guidelines can still be changed, but a full adaptation to LSA 2.0 should nevertheless be executed.

The allowed sources and targets for Transformations and DTPs can be derived from this matrix.

Transactional Data Matrix

Master Data Matrix


The XLS version can be found here: Allowed Data Flow 1.0
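The linked XLS remains the authoritative matrix. Purely as an illustration of how such a rule set can be checked mechanically, a fragment of the flows explicitly described in this guideline (Propagation/xBTL feeding the virtual Business Transformation Layer, which in turn feeds Business Integration and Data Mart) could be encoded as follows; the layer names and allowed-target sets are a hypothetical excerpt, not the full matrix.

```python
# Hypothetical excerpt -- the complete, authoritative matrix is the
# linked XLS. Only flows explicitly described in this guideline appear.
ALLOWED_TARGETS = {
    "propagation":             {"business_transformation", "adl_o_layer"},
    "xbtl":                    {"business_transformation", "adl_o_layer"},
    "business_transformation": {"business_integration", "data_mart"},
    "business_integration":    {"data_mart"},
}

def transformation_allowed(source: str, target: str) -> bool:
    """True if a Transformation from source to target layer is allowed."""
    return target in ALLOWED_TARGETS.get(source, set())
```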

1.9 Data flows between BWP, IWP and OPP

The overall BW architecture concept sees BWP as the central staging system for IWP and OPP:

1. IWP and OPP get their data only from BWP. A direct connection from other source systems (like ISP or ICP) to IWP and OPP is not allowed. All data loaded into IWP and OPP must be staged through BWP's EDL.

2. Exceptions can be made for real-time/virtual accesses to source system data, but those scenarios have to be discussed with the Architecture Team in advance. The general recommendation is to build real-time scenarios in BWP.

3. Data flows from IWP or OPP back to BWP are strictly forbidden. Applications that provide data for other applications (through the EDL) have to be built in BWP. Exceptions are one-time loads of historical data if an application is migrated from IWP/OPP to BWP.

Rules 1 and 2 apply only to new applications in IWP and OPP. Existing applications and content may continue to load from source systems other than BWP as long as no migration is planned.
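Rules 1-3 and the grandfathering note can be sketched as a single check (system and function names are illustrative, not part of any landscape configuration):

```python
# Sketch of the BWP/IWP/OPP connection rules above. The one-time
# historical-load exception to rule 3 is out of scope of this sketch.
def connection_allowed(source_system: str, target_system: str,
                       is_new_application: bool = True) -> bool:
    if source_system in {"IWP", "OPP"} and target_system == "BWP":
        return False  # rule 3: no data flows back to BWP
    if target_system in {"IWP", "OPP"}:
        # rules 1 and 2: new applications load from BWP only;
        # existing applications are grandfathered until migration.
        return source_system == "BWP" or not is_new_application
    return True
```

For example, `connection_allowed("ISP", "IWP")` is `False` (direct source-system connection to IWP forbidden), while the same call with `is_new_application=False` is `True` (existing content is grandfathered).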

Temporary exceptions to rules 1 and 2 have been approved by Jürgen Habermeier for the IWP Transformation Program (IWP sunset preparations). The following IWP DataSources may be connected to BWP; data loads have to follow BWP's architecture guidelines (usage of the EDL):

IWP DataSource | Description | Limited until | Solution Architect
8DPAPAEA | Headcount (Adjustments) | MD7.2014 | Nicolas Cottin
8DPAPAEN | Headcount (Closed Months Interim) | MD7.2014 | Nicolas Cottin
8DPAPAE1 | Headcount (Closed Months) | MD7.2014 | Nicolas Cottin
8DPAPAE2 | Headcount (Open Months) | MD7.2014 | Nicolas Cottin
8DPAPACA | Personnel Actions (Adjustments) | MD7.2014 | Nicolas Cottin
8DPAPACN | Personnel Actions (Closed Months Interim) | MD7.2014 | Nicolas Cottin
8DPAPAC1 | Personnel Actions (Closed Months) | MD7.2014 | Nicolas Cottin
8DPAPAC2 | Personnel Actions (Open Months) | MD7.2014 | Nicolas Cottin
8DPAPAVN | Positions and Vacancies (Closed Months Interim) | MD7.2014 | Nicolas Cottin
8DPAPAV1 | Positions and Vacancies (Closed Months) | MD7.2014 | Nicolas Cottin
8DPAPAV2 | Positions and Vacancies (Open Months) | MD7.2014 | Nicolas Cottin
8PPCA_P01 | PCA: Plan Transaction Data | MD8.2014 | Andreas Weisenberger
8PROLFCS01 | Rolling Forecast@SAP: CoS | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFFI01 | Rolling Forecast@SAP: Finance | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFHR01 | Rolling Forecast@SAP: HR | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFIV01 | Rolling Forecast@SAP: Investment | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFMK01 | Rolling Forecast@SAP: Markets | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFMA01 | Rolling Forecast@SAP: Material | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFSW01 | Rolling Forecast@SAP: Sales | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFSW02 | Rolling Forecast@SAP: Sales (extended) | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFCS01 | PROLFCS01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFFI01 | PROLFFI01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFHR01 | PROLFHR01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFIV01 | PROLFIV01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFMA01 | PROLFMA01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFMK01 | PROLFMK01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFSW01 | PROLFSW01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFSW02 | PROLFSW02 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_ROLFADM31_ATTR | ROLFADM31 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_ROLFADM31_TEXT | ROLFADM31 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8DYBPRA01 | YCRM_BP Attributes DL | MD10.2014 | Iulia Maurer, Chandramouli Reddy
8PMKISMA01 | Planning data Budget | 2015 | tbd
8PMKISKPIP | Planning data Forecast | 2015 | tbd
80CRM_MKTELMH | CRM Marketing Element | 2015 | tbd

Any further exceptions have to be aligned with Jürgen Habermeier, Bert Glaser, and the BI Platform Team.


1.10 Real Time and Near Real Time Reporting

Real Time (RT) and Near Real Time (NRT) reporting implementations strongly depend on the available DataSource(s) and the system environment. Therefore, no explicit guideline can be given here; each scenario has to be aligned with Architecture. The following matrix gives an overview of the available technologies and BW* support.

Technology | Supported by BW* Landscape | Implementation Examples in BWP
Real Time Data Acquisition (RDA) | Can be implemented in both cases (Single System Support or Multiple System Support) as open requests are bound to one system. | Under Development: PCA Realtime; Planned: CRM Opportunities (ADRM)
HybridProvider with Direct Access | Should be modeled in the Reporting System (Multiple System Support). Single System Support for Finance possible. | Under Development: CO-PA Realtime
VirtualProvider based on DTP | Should be modeled in the Reporting System (Multiple System Support). Single System Support for Finance possible. | Under Development: CO-PA Realtime
VirtualProvider based on Function Module | Should be modeled in the Reporting System (Multiple System Support). Single System Support for Finance possible. VirtualProvider with access to HANA SbS Systems is not recommended. | Outdated: CRM Opportunities (ADRM)
2nd Database Schema (available as of SAP BW on HANA 7.3 SP8) | Currently not supported, evaluation ongoing | -
Other | Alignment with Architecture absolutely mandatory. | -

1.11 Master Data Handling

1. There are special Information Content Areas for Corporate Master Data (InfoArea under root node 'MD'). You are not allowed to create your own local, solution-specific Master Data Objects in the ICA if a Master Data Object is already available in this area. If none exists, it has to be evaluated whether a Corporate Master Data Object should be implemented.

As all InfoObjects have cross usage, the master data flow cannot be put into the same InfoArea hierarchy as the transactional data flow. The 'MD' InfoArea for Master Data has to be used. Both the InfoObject as DataProvider and the corresponding data flow must be placed below this node AND not within the transactional InfoArea (BIG areas for the Application Data and Solution Layer) hierarchy.

2. The Master Data Propagators are used to load the InfoObjects in the special Information Content Areas for Corporate Master Data (InfoArea under root node 'MD'), see 1. above. The Master Data Propagator, like any other Content Area, is unique.


1.12 Planning

1. Planning applications are created in the SAP BW systems, and data is created in the Architected Data Mart Layer. The general rule for data flow also applies here, e.g. data created in the planning applications cannot flow down into the Business Transformation/Integration Layer. There is no separate Planning Layer. All planning solutions (DSOs and InfoCubes) have to be developed in the O and P Layers.

2. Planning data is corporate data and needs to follow the general flow of data; thus it must enter the Acquisition Layer and follow the rules for creating content (if not planned directly in SAP BW).

3. Corporate querying of planning data is done using the designed data flow in SAP BW. You may only use the Planning InfoCube (via the Virtual Layer) or an Aggregation Level for querying as part of the planning application. The virtualization layer can include data from several applications, including planning applications.

4. If there are providers in the Business Integration Layer (O) or Data Mart Layer (P) which hold data required for planning in the required granularity, this data can be used in planning models and be read with planning functionality.

5. If a planning solution requires data with a different granularity or additional business logic which is currently not available, the team should align with the responsible product/solution architect in order to check whether existing content can be enhanced or new content has to be created for this planning solution.

6. If a content requires planning figures from another content, these planning figures have to be provided through the EDL Layer as defined in this guideline. Therefore, the Open Hub has to be used to create a table which can then be used in the EDL Acquisition Layer.

2 SCHEDULING AND PROCESS CHAINS

Details can be found in the Data Replication Guideline.