
SAP Business Objects Financial Consolidation

Tuning Best Practices Guide

Version 1, October 2009


TUNING GUIDE

SAP BUSINESS OBJECTS FINANCIAL CONSOLIDATION

Typographic Conventions

Icons

The following icons are used in this guide: Caution, Example, Note, Recommendation, Syntax.

Type Styles

Example Text: Words or characters quoted from the screen. These include field names, screen titles, pushbutton labels, menu names, menu paths, and menu options. Also cross-references to other documentation.

Example text: Emphasized words or phrases in body text, graphic titles, and table titles.

EXAMPLE TEXT: Technical names of system objects. These include report names, program names, transaction codes, table names, and key concepts of a programming language when they are surrounded by body text, for example, SELECT and INCLUDE.

Example text: Output on the screen. This includes file and directory names and their paths, messages, names of variables and parameters, source text, and names of installation, upgrade, and database tools.

Example text: Exact user entry. These are words or characters that you enter in the system exactly as they appear in the documentation.

<Example text>: Variable user entry. Angle brackets indicate that you replace these words and characters with appropriate entries to make entries in the system.

EXAMPLE TEXT: Keys on the keyboard, for example, F2 or ENTER.


Contents

1 INTRODUCTION ................................................. 5

2 BFC TECHNICAL ARCHITECTURE ................................... 5

3 TUNING BFC INFRASTRUCTURE .................................... 8

Application Server ............................................. 8
  Monitoring Tools ............................................. 8
  OS Optimization .............................................. 12
  BFC Tracing .................................................. 24
  Optimizing BFC Transfer ...................................... 26

Web Server ..................................................... 27
  Monitoring Tools ............................................. 32
  OS Optimization .............................................. 39

Citrix/TSE Server .............................................. 39
  General Information .......................................... 39
  Bandwidth Required by ICA Protocol ........................... 40
  Desktop Versus Published Application Connection .............. 40
  Application Installation ..................................... 41

Database Server ................................................ 42
  Oracle Database .............................................. 46
  SQL Server Database .......................................... 68

4 APPLICATION TUNING ........................................... 74

Performance: A Few Reminders ................................... 74

Data Collection ................................................ 74
  Validating the Data Volume ................................... 74
  Checking the Standards Calculations .......................... 75
  Limiting External Calculations ............................... 76
  Avoiding Package Rules ....................................... 76

Data Processing ................................................ 77
  Checking the Consolidation Logs .............................. 77
  Using SQL Rules Sparingly .................................... 77

Data Retrieval ................................................. 78
  Selecting Data Properly ...................................... 78
  Designing Simple Web Reports ................................. 79
  Creating Index(es) on Data Tables ............................ 79

Application .................................................... 80
  Calibrating the Operating IT Platform ........................ 80


Tuning Best Practices Guide 5

Business Objects Financial Consolidation

©SAP AG 2009

1 Introduction

This Best Practice document describes the technical architecture of BusinessObjects Financial Consolidation and

provides several tips and tricks to optimize the separate system components. In addition, section 4 includes

miscellaneous recommendations to optimize the performance from an application point of view.

In addition to this document, the following Best Practice documents for BusinessObjects Financial Consolidation are

available:

- Troubleshooting Guide

- Volume Test Optimization

- System Monitoring & Administration

2 BFC Technical Architecture

SAP BusinessObjects Financial Consolidation is a consolidation and management reporting solution. The solution

provides full process control and data transparency, permitting simulation of unlimited scenarios that address all

performance management reporting requirements of an organization.

All calculations and consolidations occur in the database. The solution combines legal and management reporting

structures into one process, permitting consolidation and direct comparison of all possible views in one integrated data

model. The integrated data model also allows organizations to perform side-by-side, what-if simulations.

SAP BusinessObjects Financial Consolidation uses a multi-tier client/server architecture. The presentation layer

(Windows and Web interfaces), the functional layer, and the data layer have been developed as independent modules

for Windows Intel 32-bit in an environment comprised of Microsoft Visual C++, DCOM, Microsoft Visual C#, .NET,

ASP.NET and DHTML with a strong object-oriented design.

SAP BusinessObjects Financial Consolidation is a multi-threaded application. This means that it can use all of the

processors available on the server simultaneously and automatically (scaling up, that is, increasing the number of

processors on a single server). This applies to all of the components: database servers, application servers, HTTP

servers and Windows TS/Citrix servers.

SAP BusinessObjects Financial Consolidation also supports a multi-server architecture (scaling-out). You can therefore

increase the number of servers to adapt to the growing number of users with increased availability and deployment

flexibility. This applies to all of the components except for the database servers.


[Figure: Software components in the SAP BusinessObjects Financial Consolidation architecture]

You can install the components on different computers. In a standalone configuration, all of the components will be

installed on the same computer.

The first tier consists of the database server, which contains all of the data that SAP BusinessObjects Financial

Consolidation processes. All of the data in the application is stored here.

The second tier consists of the application server. Its main function is to ensure that the link between the client and

the database exists. This enables you to install the database client and OLE DB drivers on the application server and

not on each client computer. The application server also generates the HTML pages requested by the Web clients. It

represents the only source for connections to the database. This configuration uses a cache that speeds up processing

and limits the need for retrieving data directly from the database, which is slower to access. The size of the application

server cache will be adapted to the specific use of the application as specified in the Technical handbook report. All

background processing is run on the application server.
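As an illustration of why this cache matters, a minimal sketch of an object cache sitting between the clients and the database might look like the following. The fetch function and the object key are invented for the example; the product's actual caching is internal and not exposed as an API.

```python
# Illustrative sketch: frequently used objects are served from the application
# server's memory instead of re-querying the (slower) database each time.
# The fetch callable and object keys are assumptions for illustration only.

class ObjectCache:
    def __init__(self, fetch_from_db):
        self._fetch = fetch_from_db
        self._store = {}
        self.db_hits = 0            # counts actual round trips to the database

    def get(self, key):
        if key not in self._store:  # only the slow path touches the database
            self._store[key] = self._fetch(key)
            self.db_hits += 1
        return self._store[key]

cache = ObjectCache(lambda key: f"row for {key}")
cache.get("dimension:ENTITY")
cache.get("dimension:ENTITY")       # second call is served from the cache
assert cache.db_hits == 1
```

The design trade-off is the usual one: a larger cache reduces database round trips but consumes application server memory, which is why the guide notes that the cache size is adapted to the specific use of the application.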

The third tier consists of the SAP BusinessObjects Financial Consolidation client and can be:

- SAP BusinessObjects Financial Consolidation Windows client

- SAP BusinessObjects Financial Consolidation Web client (Internet Explorer)

- Excel (if the Excel Link is used)

Web clients and the Web Excel Link require an HTTP server to access the application server.

In SAP BusinessObjects Financial Consolidation there is no direct communication between application servers. All

application exchanges are done via database synchronization mechanisms.

When a client connects for the first time, the CtBroker component directs it to one application server instance using a

load-balancing algorithm. For the rest of the session, the client remains attached to that same application server.
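The exact CtBroker algorithm is internal to the product, but the behavior described above (an initial load-balanced assignment followed by session affinity) can be sketched as follows. The server names and the least-loaded selection policy are assumptions made for illustration only.

```python
# Illustrative sketch of broker-style load balancing with session affinity.
# The real CtBroker algorithm is not documented here; this only demonstrates
# the observable behavior: one initial assignment, then a fixed server.

class Broker:
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}  # active sessions per server
        self.sessions = {}                   # session id -> pinned server

    def connect(self, session_id):
        # A returning session stays on the server it was first assigned to.
        if session_id in self.sessions:
            return self.sessions[session_id]
        # A new session goes to the currently least-loaded application server.
        server = min(self.load, key=self.load.get)
        self.load[server] += 1
        self.sessions[session_id] = server
        return server

broker = Broker(["appsrv1", "appsrv2"])
first = broker.connect("user42")
# Every later call for the same session returns the same server.
assert broker.connect("user42") == first
```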


Except for the HTTP-based connections, all other connections are permanent client/server connections.

Flow type   Description                                                                     Protocol

CW-REQ      WEB client (IE/Excel via HTTP) request to the WEB server (IIS/ASP.NET engine)   HTTP
WC-RES      WEB server response to WEB client                                               HTTP
WA-REQ      WEB server (ASP.NET engine) to application server request                       DCOM
AW-RES      Application server to WEB server (ASP.NET engine) response                      DCOM
WB-REQ      WEB server to broker request (only for connection request)                      DCOM
BW-RES      Broker to WEB server response (only for connection request)                     DCOM
CC-REQ      CITRIX client (Finance.exe/Excel.exe via DCOM) to the CITRIX server request     ICA
CC-RES      CITRIX server to CITRIX client response                                         ICA
CA-REQ      CITRIX server (Finance.exe/Excel.exe) to application server request             DCOM
AC-RES      Application server to CITRIX server (Finance.exe/Excel.exe) response            DCOM
CB-REQ      CITRIX server to broker request (only for connection request)                   DCOM
BC-RES      Broker to CITRIX server response (only for connection request)                  DCOM
CTRL        Broker to application server check communication                                DCOM
AD-REQ      Application server (ctserver.exe) to database query (SQL)                       OLE DB/TCP-IP
DA-RES      Database server to application server response (ROWSET)                         OLE DB/TCP-IP


3 Tuning BFC Infrastructure

Application Server

The application server performs the following tasks:

• Manages user connections using the DCOM protocol

• Connects to the database using OLE DB to manage the connection pool

• Uses internal applicative caching for the most frequently used objects

• Runs the processing required for ensuring data integrity and for locking objects

• Generates HTML pages for the Web clients

• Runs batch processing

The main function of the application server is to ensure that the link between the SAP BusinessObjects Financial

Consolidation clients and the database exists. The cache bridges the database and client computers and improves

performance.

The application server runs the processing required for the consolidation and transmission of data, partially formats the

data retrieved, and runs report bundles.

The application server also runs the processing required by the web clients. The SAP BusinessObjects Financial

Consolidation web client enables computers that do not have the application installed to access the application through

Internet Explorer. Users can perform the same tasks as with the Windows client.

To be able to use the financial consolidation web application, you need to install an HTTP server. You must also install

activePDF Server, which is used for printing reports, on the application server. By configuring SAP BusinessObjects

Financial Consolidation with multiple application servers connected to the same database, you can manage a large

number of concurrent user connections easily and ensure that the system is fault tolerant.

Note: The application server is identified by the CtServer.exe process.

Monitoring Tools

Monitoring the servers consists of the following tasks:

- Verify that the processes ctbroker.exe, ctserver.exe, ctcontroller.exe, and apserver.exe are up and running.

- Monitor resource consumption (mainly CPU and memory).

- Configure log monitoring.
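As a sketch of the first of these checks, the helper below compares a snapshot of running process names against the required BFC processes. The process names come from this guide; how you obtain the snapshot (a Task Manager export, tasklist output, a monitoring agent) is an assumption left to your environment.

```python
# Minimal sketch of the BFC process-availability check. The required process
# names are taken from this guide; the snapshot source is up to you.

REQUIRED = {"ctbroker.exe", "ctserver.exe", "ctcontroller.exe", "apserver.exe"}

def missing_processes(running):
    """Return the required BFC processes absent from the `running` snapshot."""
    running_lower = {name.lower() for name in running}
    return sorted(REQUIRED - running_lower)

# Example: apserver.exe is not running, so it should be reported.
snapshot = ["ctbroker.exe", "CtServer.exe", "ctcontroller.exe", "explorer.exe"]
assert missing_processes(snapshot) == ["apserver.exe"]
```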

SAP BusinessObjects Financial Consolidation Application & CtBroker Servers

Name  Description

ctbroker.exe

This executable process is the data source manager. It calls CtServer.exe, CtController.exe, and the BusinessObjects Finance Web connector (via an HTTP query), and is itself called by Finance.exe and the BusinessObjects Finance Web connector. For each BusinessObjects Finance environment, it contains:


the list of application servers, the ODBC/OLE DB data source and database connection information, the URL used by the Web server (if any), and so on.

ctserver.exe  This is the main component of the BusinessObjects Finance application server; it manages the connections between the Windows BusinessObjects Finance clients and the database. CtServer.exe is a system process listed in the Windows Task Manager. It cannot be managed with the Net start/Net stop commands or from the Windows Control Panel.

CtController.exe  CtController.exe is a service. Its main purpose is to synchronize the CtServer instances.

Apserver.exe  This is the activePDF Server process. It runs as a service.

These processes must be monitored: if one of them disappears, application availability may be compromised. By default,

there is one CtServer process, one CtController process, and one Apserver.exe process per application server. There

can only be a single CtBroker.exe module in each BusinessObjects Financial Consolidation environment.

Certain processes sometimes use a lot of system resources (CPU, memory and disk access). As a result, if heavy

demands are placed on the servers (application servers or RDBMS), a general system overload occurs and not all of

the queries can be handled at the same time.

To evaluate the platform load and monitor resource consumption (mainly CPU and memory), we recommend using

Perfmon, the performance monitoring tool supplied with Windows. It lets you monitor the platform through a system of

counters that can subsequently be analyzed.

To run Perfmon on an English language operating system and to load the template provided by SAP Support, please

carry out the following steps:

- Click Start, then Run…, and enter perfmon in the window that opens.

- In the console root, go to Performance Logs and Alerts, and then Counter Logs.


- To the right of the window, right-click and select New Log Settings From. Next, select the file

log_SAP_us_vX.htm (where X depends on the file version).

- Choose OK twice.

- Adjust the settings so that the log meets your requirements. Choose OK.

- Right-click the new log, and select Start to start the monitoring tool's log.

Once the logs have been generated, you must transfer them to SAP Support for analysis. On the basis of these logs,

together with the Magnitude logs for the period during which you encountered problems, our Support team will analyze

the causes of these anomalies. You will receive the result of this analysis in the form of a chart.

Here is an example of a Perfmon template. It is only valid for English language operating systems.

<!-- ************************************ -->
<!-- Company     : SAP                    -->
<!-- Description : Perfmon Template for   -->
<!--               OS in english ONLY     -->
<!-- ************************************ -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
  <META NAME="GENERATOR" Content="Microsoft System Monitor">
</HEAD>
<BODY>
<OBJECT ID="DISystemMonitor1" WIDTH="100%" HEIGHT="100%"
        CLASSID="CLSID:C4D2D8E0-D1DD-11CE-940F-008029004347">
  <PARAM NAME="_Version" VALUE="196611">
  <PARAM NAME="LogName" VALUE="log_SAP_us_v3">
  <PARAM NAME="Comment" VALUE="">
  <PARAM NAME="LogType" VALUE="0">
  <PARAM NAME="CurrentState" VALUE="0">
  <PARAM NAME="RealTimeDataSource" VALUE="1">
  <PARAM NAME="LogFileMaxSize" VALUE="-1">
  <PARAM NAME="DataStoreAttributes" VALUE="33">
  <PARAM NAME="LogFileBaseName" VALUE="log_SAPus">
  <PARAM NAME="LogFileSerialNumber" VALUE="1">
  <PARAM NAME="LogFileFolder" VALUE="C:\PerfLogs">
  <PARAM NAME="Sql Log Base Name" VALUE="SQL:!log_SAP_us">
  <PARAM NAME="LogFileAutoFormat" VALUE="1">
  <PARAM NAME="LogFileType" VALUE="0">
  <PARAM NAME="StartMode" VALUE="0">
  <PARAM NAME="StopMode" VALUE="3">
  <PARAM NAME="StopAfterUnitType" VALUE="4">
  <PARAM NAME="StopAfterValue" VALUE="1">
  <PARAM NAME="RestartMode" VALUE="3">
  <PARAM NAME="LogFileName" VALUE="C:\PerfLogs\log_SAP_us_000001.csv">
  <PARAM NAME="EOFCommandFile" VALUE="">
  <!-- Basic counters -->
  <PARAM NAME="Counter00001.Path" VALUE="\Memory\Available MBytes">
  <PARAM NAME="Counter00002.Path" VALUE="\Memory\Page Faults/sec">
  <PARAM NAME="Counter00003.Path" VALUE="\Memory\Pages/sec">
  <PARAM NAME="Counter00004.Path" VALUE="\PhysicalDisk(_Total)\Avg. Disk Queue Length">
  <PARAM NAME="Counter00005.Path" VALUE="\PhysicalDisk(_Total)\Avg. Disk Read Queue Length">
  <PARAM NAME="Counter00006.Path" VALUE="\PhysicalDisk(_Total)\Avg. Disk sec/Transfer">
  <PARAM NAME="Counter00007.Path" VALUE="\PhysicalDisk(_Total)\Avg. Disk Write Queue Length">
  <PARAM NAME="Counter00008.Path" VALUE="\PhysicalDisk(_Total)\Current Disk Queue Length">
  <PARAM NAME="Counter00009.Path" VALUE="\PhysicalDisk(_Total)\Disk Read Bytes/sec">
  <PARAM NAME="Counter00010.Path" VALUE="\PhysicalDisk(_Total)\Disk Reads/sec">
  <PARAM NAME="Counter00011.Path" VALUE="\PhysicalDisk(_Total)\Disk Transfers/sec">
  <PARAM NAME="Counter00012.Path" VALUE="\PhysicalDisk(_Total)\Disk Write Bytes/sec">
  <PARAM NAME="Counter00013.Path" VALUE="\PhysicalDisk(_Total)\Disk Writes/sec">
  <PARAM NAME="Counter00014.Path" VALUE="\Processor(_Total)\% Processor Time">
  <PARAM NAME="Counter00015.Path" VALUE="\Processor(0)\% Processor Time">
  <PARAM NAME="Counter00016.Path" VALUE="\Processor(1)\% Processor Time">
  <PARAM NAME="Counter00017.Path" VALUE="\Processor(2)\% Processor Time">
  <PARAM NAME="Counter00018.Path" VALUE="\Processor(3)\% Processor Time">
  <!-- CtServer counters [3 instances] -->
  <PARAM NAME="Counter00018.Path" VALUE="\Process(ctserver#1)\% Processor Time">
  <PARAM NAME="Counter00019.Path" VALUE="\Process(ctserver#1)\Page Faults/sec">
  <PARAM NAME="Counter00020.Path" VALUE="\Process(ctserver#1)\Private Bytes">
  <PARAM NAME="Counter00021.Path" VALUE="\Process(ctserver#1)\Virtual Bytes">
  <PARAM NAME="Counter00022.Path" VALUE="\Process(ctserver#2)\% Processor Time">
  <PARAM NAME="Counter00023.Path" VALUE="\Process(ctserver#2)\Page Faults/sec">
  <PARAM NAME="Counter00024.Path" VALUE="\Process(ctserver#2)\Private Bytes">
  <PARAM NAME="Counter00025.Path" VALUE="\Process(ctserver#2)\Virtual Bytes">
  <PARAM NAME="Counter00026.Path" VALUE="\Process(ctserver)\% Processor Time">
  <PARAM NAME="Counter00027.Path" VALUE="\Process(ctserver)\Page Faults/sec">
  <PARAM NAME="Counter00028.Path" VALUE="\Process(ctserver)\Private Bytes">
  <PARAM NAME="Counter00029.Path" VALUE="\Process(ctserver)\Virtual Bytes">
  <!-- Oracle counters [2 instances] -->
  <PARAM NAME="Counter00066.Path" VALUE="\Process(oracle#1)\% Processor Time">
  <PARAM NAME="Counter00067.Path" VALUE="\Process(oracle#1)\Page Faults/sec">
  <PARAM NAME="Counter00068.Path" VALUE="\Process(oracle#1)\Private Bytes">
  <PARAM NAME="Counter00069.Path" VALUE="\Process(oracle#1)\Virtual Bytes">
  <PARAM NAME="Counter00070.Path" VALUE="\Process(oracle)\% Processor Time">
  <PARAM NAME="Counter00071.Path" VALUE="\Process(oracle)\Page Faults/sec">
  <PARAM NAME="Counter00072.Path" VALUE="\Process(oracle)\Private Bytes">
  <PARAM NAME="Counter00073.Path" VALUE="\Process(oracle)\Virtual Bytes">
</OBJECT>
</BODY>
</HTML>
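If you need to double-check which counters such a template actually defines, a small script can list the Counter*.Path entries before you load the template into Perfmon. This is an illustrative sketch, not an official parsing tool; the regular expression simply follows the PARAM layout shown above.

```python
import re

# Extract (counter number, counter path) pairs from a Perfmon .htm template.
# The PARAM layout matches the template shown in this guide; other templates
# may format their attributes differently.
PARAM_RE = re.compile(r'NAME="Counter(\d+)\.Path"\s+VALUE="([^"]+)"')

def counters(template_text):
    return PARAM_RE.findall(template_text)

sample = (
    '<PARAM NAME="Counter00001.Path" VALUE="\\Memory\\Available MBytes">'
    '<PARAM NAME="Counter00014.Path" VALUE="\\Processor(_Total)\\% Processor Time">'
)
found = counters(sample)
assert found[0] == ("00001", "\\Memory\\Available MBytes")
assert len(found) == 2
```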


When troubleshooting, you can also consult the following information:

- Windows event log

OS Optimization

If changes are to be made on the application servers, we recommend changing one parameter at a time so that you

can see what performance improvement each change offers. If system performance decreases after making a change,

you should reverse the change.

Processor Scheduling

Windows uses preemptive multitasking to prioritize process threads that the CPU has to attend to. Preemptive

multitasking is a methodology whereby the execution of a process is halted and another is started, at the discretion of

the operating system. This prevents a single thread from monopolizing the CPU.

Switching the CPU from executing one process to the next is known as context-switching.

The Windows operating system includes a setting that determines how long individual threads are allowed to run on the

CPU before a context-switch occurs and the next thread is serviced. This amount of time is referred to as a quantum.

This setting lets you choose how processor quanta are shared between foreground and background processes.

We recommend selecting Background services so that all programs receive equal amounts of processor time.

To change this:

1. Open the System Control Panel.

2. Select the Advanced tab.

3. Within the Performance frame, click Settings.

4. Select the Advanced tab.


Modifying the value using the control panel applet as described above modifies the following registry value to affect the

duration of each quantum:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\PriorityControl\Win32PrioritySeparation

This setting is the preferred setting for most servers.

The Win32PrioritySeparation registry values in Windows Server 2003 are:

- 0x00000026 (38) for best performance of Programs

- 0x00000018 (24) for best performance of Background services

These values are the same in the 32-bit (x86) and 64-bit (x64) editions of the Windows Server 2003

operating system.

We strongly recommend you use only the control panel for these settings so you always get valid, appropriate,

operating system revision-specific and optimal values in the registry.

Virtual Memory

Memory paging occurs when memory resources required by the processes running on the server exceed the physical

amount of memory installed. Windows, like most other operating systems, employs virtual memory techniques that

allow applications to address greater amounts of memory than what is physically available. This is achieved by setting

aside a portion of disk for paging. This area, known as the paging file, is used by the operating system to page portions

of physical memory contents to disk, freeing up physical memory to be used by applications that require it at a given

time. The combination of the paging file and the physical memory installed in the server is known as virtual memory.

Virtual memory is managed in Windows by the Virtual Memory Manager (VMM).

Physical memory can be accessed at rates orders of magnitude faster than disk. Every time a server has to move data

between physical memory and disk, it introduces a significant system delay.

While some degree of paging is normal on servers, excessive, consistent memory paging activity is referred to as

thrashing and can have a very debilitating effect on overall system performance. Thus, it is always desirable to

minimize paging activity. Ideally servers should be designed with sufficient physical memory to keep paging to an

absolute minimum.

The paging file, or pagefile, in Windows, is named PAGEFILE.SYS.

Virtual memory settings are configured through the System control panel. To configure the page file size:

1. Open the System Control Panel.

2. Select the Advanced tab.

3. Within the Performance frame, click Settings.

4. Select the Advanced tab.

5. Click Change.

Windows Server 2003 introduced new settings for virtual memory configuration, including letting the system

manage the size of the page file, or having no page file at all. If you let Windows manage the size, it will create a

pagefile of a size equal to physical memory + 1 MB. This is the minimum amount of space required to create a memory

dump in the event the server encounters a STOP event (blue screen).

A pagefile can be created for each individual volume on a server, up to a maximum of sixteen pagefiles and a

maximum 4 GB limit per pagefile. This allows for a maximum total pagefile size of 64 GB. The total of all pagefiles on

all volumes is managed and used by the operating system as one large pagefile.


When the pagefile is split into smaller pagefiles on separate volumes as described above, the virtual memory manager

optimizes the workload on each write by selecting the least busy disk based on internal algorithms. This ensures the

best possible performance for a multiple-volume pagefile.

While not best practice, it is possible to create multiple pagefiles on the same operating system volume. This is

achieved by placing the pagefiles in different folders on the same volume. This change is carried out through editing

the system registry rather than through the standard GUI interface. The process to achieve this is outlined in Microsoft

KB article 237740:

http://support.microsoft.com/?kbid=237740

Configuring the Pagefile for Maximum Performance Gain

Optimal pagefile performance will be achieved by isolating pagefiles to dedicated physical drives running on RAID-0

(striping) or RAID-1 (mirroring) arrays, or on single disks without RAID at all. Redundancy is not normally required for

pagefiles, though performance might be improved through the use of some RAID configurations.

With a dedicated disk or drive array, PAGEFILE.SYS is the only file on the entire volume, so it cannot be fragmented

by other files or directories residing on the same volume. As with most disk arrays, the more physical disks in the

array, the better the performance. When the pagefile is distributed between multiple volumes on multiple physical

drives, its size should be kept uniform across drives, ideally on drives of the same capacity and speed.

We strongly recommend against the use of RAID-5 arrays to host pagefiles, as pagefile activity is write intensive

and thus not suited to the characteristics of RAID-5.

Where pagefile optimization is critical, do not place the pagefile on the same physical drive as the operating system,

which happens to be the system default. If this must occur, ensure that the pagefile exists on the same volume

(typically C:) as the operating system. Putting it on another volume on the same physical drive will only increase disk

seek time and reduce system performance, since the disk heads will be continually moving between the volumes,

alternately accessing the page file, operating system files, and other applications and data. Remember too that the first partition on a physical disk (the one typically hosting the operating system) is closest to the outside edge of the disk, where disk speed is highest and performance is best.

Note that if you remove the paging file from the boot partition, Windows cannot create a crash dump file (MEMORY.DMP) in which to write debugging information in the event of a kernel-mode STOP error ("blue screen of death"). If you require a crash dump file, you have no option but to leave a pagefile of at least the size of physical memory + 1 MB on the boot partition.

We recommend setting the size of the pagefile manually. This normally produces better results than allowing the server

to size it automatically or having no pagefile at all.

Page 16: Sap business objects financial consolidation

Tuning Best Practices Guide 16

Business Objects Financial Consolidation

©SAP AG 2009

Best-practice tuning is to set the initial (minimum) and maximum size settings for the pagefile to the same value. This

ensures that no processing resources are lost to the dynamic resizing of the pagefile, which can be intensive, especially as this resizing typically occurs when the memory resources on the system are already becoming constrained.

Setting the same minimum and maximum pagefile size values also ensures that the paging area on a disk is one single, contiguous area, improving disk seek time.

Windows Server 2003 automatically recommends a total paging file size equal to 1.5 times the amount of installed

RAM. On servers with adequate disk space, the pagefile on all disks combined should be configured up to twice the

physical memory for optimal performance. The only drawback of having such a large pagefile is the amount of disk

space consumed on the volumes used to accommodate the pagefile(s). Servers of lesser workloads or those tight on

disk space should still try to use a pagefile total of at least equal to the amount of physical memory.
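As a rough sketch of these sizing rules (the function name and parameters below are illustrative, simply restating the guidance in this section; adjust for your own workload):

```python
def pagefile_size_mb(ram_mb: int, ample_disk: bool = True) -> int:
    """Total pagefile size across all disks, per the guidance above:
    up to 2x physical RAM when disk space allows, and at least
    1x physical RAM on servers that are tight on disk space."""
    return ram_mb * 2 if ample_disk else ram_mb

print(pagefile_size_mb(4096))                    # 8192 MB with ample disk space
print(pagefile_size_mb(4096, ample_disk=False))  # 4096 MB minimum
```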

Creating the whole pagefile in one step reduces the possibility of having a partitioned pagefile and therefore improves

system performance.

The best way to create a contiguous static pagefile in one step is to follow this procedure for each pagefile configured:

1. Remove the current page files from your server by clearing the Initial and Maximum size values in the Virtual

Memory settings window or by clicking No Paging File, then clicking Set.

2. Click OK, ignoring the warning message about the page file, and reboot the machine.

3. Defragment the disk you want to create the page file on. This step should give you enough contiguous space to avoid fragmentation of your new page file.

4. Create a new pagefile with the desired values.

5. Reboot the server.

An even better approach is to re-format the volume entirely and create the pagefile immediately before placing any data

on the disk. This ensures the file is created as one large contiguous file as close to the very outside edge of the disk as

possible, ensuring no fragmentation and best disk access performance. The work and time involved in moving data to

another volume temporarily to achieve this outcome often means, however, that this procedure is not always

achievable on a production server.

Measuring Pagefile Usage

A good metric for measuring pagefile usage is Paging file: %Usage Max in the Windows System Monitor. If this reveals

consistent use of the pagefile, consider increasing the amount of physical memory in the server by roughly the amount of pagefile in use. For example, if a pagefile is 2048 MB (2 GB) and your server is consistently showing 10% usage (about 205 MB), it would be prudent to add, say, an additional 256 MB of RAM.
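The arithmetic behind this example can be sketched as follows (the function name and the 128 MB rounding step are illustrative assumptions, not part of the guide):

```python
import math

def recommended_ram_add_mb(pagefile_mb: int, peak_usage_pct: float,
                           dimm_step_mb: int = 128) -> int:
    """Suggest extra RAM to cover sustained pagefile usage, rounded up
    to the nearest installable memory increment (assumed 128 MB here)."""
    used_mb = pagefile_mb * peak_usage_pct / 100.0
    return math.ceil(used_mb / dimm_step_mb) * dimm_step_mb

# The guide's example: a 2048 MB pagefile at a consistent 10% usage
# (~205 MB in use) suggests adding roughly 256 MB of RAM.
print(recommended_ram_add_mb(2048, 10))  # 256
```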

Disabling or Removing Unnecessary Services


When Windows is first installed, many services are enabled that might not be necessary for the application server.

While in Windows Server 2003 many more services are disabled by default than in previous editions of the server

operating system, there still remains, on many systems, an opportunity for improving performance further by examining

running services.

Inexperienced users might also inadvertently add additional services when installing or updating the operating system

that are not actually required for a given system. Each service requires system resources and, as a result, it is better to

disable unnecessary services to improve performance.

Disabling services should be done with care. Unless you are completely certain of the purpose of a given service it is

recommended to research it further before choosing to disable it. Disabling some services that the operating system

requires to be running can render a system inoperable and possibly unable to boot.

To view the services running in Windows, complete the following steps:

1. Right-click My Computer and select Manage.

2. Expand the Services and Applications icon.

3. Select the Services icon.

4. Click the Standard tab at the bottom of the right-pane. All the services installed on the system are displayed. The

status, running or not, is shown in the third column.

5. Click the Status column heading twice. This sorts all running (Started) services apart from those that are not running.

From this dialog, all services that are not required to be running on the server should be stopped and disabled. This will

prevent the service from automatically starting at system boot time. To stop and disable a service, do the following:

1. Right-click the service and click Properties.

2. Click Stop and set the Startup type to Disabled.


3. Click OK to return to the Services window.

If a particular service has been installed as part of an application or Windows component and is not actually required

on a given server, a better approach is to remove or uninstall this application or component altogether. This is typically

performed through the Add or Remove Programs applet in the Control Panel.

Some services might not be required at system boot time but might be required to start by other applications or

services at a later time. Such services should be set to have a startup type of Manual. Unless a service is explicitly set

to have a startup type of Disabled, it can start at any time and perhaps unnecessarily use system resources.

Optimizing Network Card Settings

Many network interface cards in servers today have settings that can be configured through the Windows interface.

Setting these optimally for your network environment and server configuration can significantly affect the performance

of network throughput. Of all the performance tuning features outlined in this chapter, it is the ones in this section that

have been noted to have the biggest improvement on system performance and throughput.

To access this range of settings, follow these steps:

1. Click Start → Settings → Network Connections.

2. Right-click Local Area Connection (or the name of your network connection).

3. Click Properties.

4. Click Configure.

5. Click the Advanced tab.



The exact configuration settings available differ from one network interface card to another.

The following settings are the ones that can have the most dramatic impact on performance:

Link Speed and Duplex

Experience suggests that the best practice for setting the speed and duplex values for each network interface

in the server is to configure them in one of two ways:

– Set to auto-negotiation if, and only if, the switch port is also set to auto negotiation. The server and switch

should then negotiate the fastest possible link speed and duplex settings.

– Set to the same link speed and same duplex settings as those of the switch. These settings will, of course,

normally yield the best performance if set to the highest settings that the switch will support.

We do not recommend the use of auto-negotiation for the server network interface combined with manually setting the

parameter on the network switch, or vice-versa. Using such a combination of settings at differing ends of the network

connection to the server has often been found to cause poor performance and instability in many production

environments and should definitely be avoided.

Note: You apply these settings to each physical network interface, including the individual cards within a team of interfaces configured for aggregation, load balancing, or fault tolerance. With some teaming software, you

might need to apply these settings to the team also. Note also that some network interface cards are largely self-tuning

and do not offer the option to configure parameters manually.

To summarize, use either auto-negotiation at both interfaces or hard-code the settings at both interfaces, but not a mix

of both of these.

For more information, see the following Cisco Web site:

http://www.cisco.com/warp/public/473/46.html#auto_neg_valid


Network interface cards often offer other advanced settings beyond those described here. Consult the documentation for the network interface for the meaning and impact of changing each setting.

The /3GB BOOT.INI parameter (32-bit x86)

By default, the 32-bit (x86) editions of Windows can address a total of 4 GB of virtual address space. This is a

constraint of the 32-bit (x86) architecture. Normally, 2 GB of this is reserved for the operating system kernel

requirements (privileged-mode) and the other 2 GB is reserved for application (user-mode) requirements. Under normal

circumstances, this creates a 2 GB per-process address limitation.

Windows provides a /3GB parameter to be added to the BOOT.INI file that reallocates 3 GB of memory to be available

for user-mode applications and reduces the amount of memory for the system kernel to 1 GB.

Given the radically increased memory capability of 64-bit (x64) operating systems, neither the /3GB switch nor the

/PAE switch should be used on the 64-bit (x64) editions of the Windows Server 2003 operating system.

To edit the BOOT.INI file to make this change, complete the following steps:

1. Open the System Control Panel.

2. Select the Advanced tab.

3. Within the Startup and Recovery frame, click Settings.

4. Click Edit. Notepad opens, and you can edit the current BOOT.INI file.

5. Edit the current ARC path to include the /3GB switch.

6. Restart the server for the change to take effect.

When the /3GB switch is added to the BOOT.INI file, increase the MaxServerVirtualMemory parameter in the BFC Administration Console (Advanced Configuration) to 2500.
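For illustration, the resulting [operating systems] entry in BOOT.INI might look like the following (the ARC path and description shown here are placeholders; keep your system's existing entry and simply append the /3GB switch):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
```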


For more information, see:

http://support.microsoft.com/kb/291988
http://support.microsoft.com/kb/851372
http://support.microsoft.com/kb/823440

TCP TIME-WAIT Delay

By default, TCP allocates ports for socket requests from the short-lived (ephemeral) user port range, between 1024 and 5000. When communications over a given socket have been closed, TCP waits for a given time before releasing the port. This is known as the TIME-WAIT delay. The default setting for Windows Server 2003 is two minutes, which is appropriate for most situations. However, busy systems that perform many connections in a short time might exhaust all available ports, reducing throughput.

Windows has two registry settings that can be used to control this time-wait delay:

• TCPTimedWaitDelay adjusts the amount of time that TCP waits before completely releasing a socket connection for re-use.

• MaxUserPort sets the number of ports that are available for connections, by setting the highest port value available for use by TCP.

Reducing TCPTimedWaitDelay and increasing MaxUserPort can increase throughput for your system.

Key: HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Value: TCPTimedWaitDelay

Data type: REG_DWORD

Range: 0x0 - 0x12C (0 - 300 seconds)

Default: 0x78 (120 seconds)


Recommendation: 0x1E (30 seconds)

Value exists by default: No, needs to be added.

Key: HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Value: MaxUserPort

Data type: REG_DWORD

Range: 0x1388 - 0xFFFE (5000 - 65534)

Default: 0x1388 (5000)

Recommendation: 0xFFFE (65534)

Value exists by default: No, needs to be added.

For more information, see Microsoft Windows Server 2003 TCP/IP Implementation Details, which is available from:

http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/networking/tcpip03.mspx
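A back-of-the-envelope calculation shows why these two settings matter together: the sustainable outbound connection rate is bounded by the number of ephemeral ports divided by how long each sits in TIME-WAIT. (The function below is an illustrative simplification, ignoring connection reuse and multiple remote endpoints.)

```python
def max_conn_rate(low_port: int, high_port: int, time_wait_s: int) -> float:
    """Rough sustainable outbound connection rate before the ephemeral
    port range is exhausted by sockets sitting in TIME-WAIT."""
    return (high_port - low_port + 1) / time_wait_s

# Windows Server 2003 defaults: ports 1024-5000, 120 s TIME-WAIT
default_rate = max_conn_rate(1024, 5000, 120)   # ~33 connections/s
# Tuned: MaxUserPort=65534, TCPTimedWaitDelay=30
tuned_rate = max_conn_rate(1024, 65534, 30)     # ~2150 connections/s
print(round(default_rate), round(tuned_rate))
```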

Use NTFS on All Volumes

Windows offers multiple file system types for formatting drives, including NTFS, FAT, and FAT32. NTFS should always

be the file system of choice for servers.

NTFS offers considerable performance benefits over the FAT and FAT32 file systems and should be used exclusively

on Windows servers. In addition, NTFS offers many security, scalability, stability and recoverability benefits over FAT.

Under previous versions of Windows, FAT and FAT32 were often implemented for smaller volumes (say <500 MB)

because they were often faster in such situations. With disk storage relatively inexpensive today, and operating systems and applications pushing drive capacities to a maximum, such small volumes are rarely warranted.

FAT32 scales better than FAT on larger volumes but is still not an appropriate file system for Windows servers.

FAT and FAT32 have often been implemented in the past as they were seen as more easily recoverable and

manageable with native DOS tools in the event of a problem with a volume.

Today, with the various NTFS recoverability tools available, both natively built into the operating system and as third-

party utilities, there should no longer be a valid argument for not using NTFS for file systems.

While it is an easy way to reduce space on volumes, NTFS file system compression is not appropriate for an

application server. Implementing compression places an unnecessary overhead on the CPU for all disk operations and

is best avoided. Think about options for adding additional disk, near-line storage, or archiving data before seriously

considering file system compression.

Monitor Drive Space Utilization


The less data a disk has on it, the faster it will operate. This is because on a well defragmented drive, data is written as

close to the outer edge of the disk as possible, as this is where the disk spins the fastest and yields the best

performance.

Disk seek time is normally considerably longer than read or write activities. As noted above, data is initially written to

the outside edge of a disk. As demand for disk storage increases and free space reduces, data is written closer to the

center of the disk. Disk seek time is increased in locating the data as the head moves away from the edge, and when

found, it takes longer to read, hindering disk I/O performance.

This means that monitoring disk space utilization is important not just for capacity reasons but also for performance. It is not always practical or realistic, however, to maintain disks with large amounts of free space.

Use Disk Defragmentation Tools Regularly

Over time, files become fragmented in non-contiguous clusters across disks, and system performance suffers as the

disk head jumps between tracks to seek and re-assemble them when they are required.

Disk defragmentation tools work to ensure all file fragments on a system are brought into contiguous areas on the disk,

improving disk I/O performance. Regularly running a disk defragmentation tool on a server is a relatively easy way to

yield impressive system performance improvements.

Most defragmentation tools work the fastest and achieve the best results when they have plenty of free disk space to

work with. This provides another good reason to monitor and manage disk space usage on production servers.

In addition, try to run defragmentation tools when the server is least busy and if possible, during scheduled system

downtime. Defragmentation of the maximum number of files and directories will be achieved if carried out while

applications and users that typically keep files open are not accessing the server.

Windows Server 2003 includes a basic disk defragmentation tool. While it offers good defragmentation features, it offers little in the way of automated scheduling: each defragmentation process must be initiated manually or through external scripts or scheduling tools.

A number of high-quality, third-party disk defragmentation tools exist for the Windows operating system, including

tailorable scheduling, reporting, and central management functionality. The cost and performance benefit of using such

tools is normally quickly realized when compared to the ongoing operational costs and degraded performance of

defragmenting disks manually, or not at all.

As a rule of thumb, work towards a goal of keeping disk free space between 20% and 25% of total disk space.


We recommend you defragment your drives weekly if possible.

Log off the Server Console

A simple but important step in maximizing your server's performance is to keep local users logged off the console. A

locally logged-on user unnecessarily consumes system resources, potentially impacting the performance of

applications and services running on the system.

Remove CPU-intensive Screen Savers

A server is not the place to run fancy 3D or OpenGL screen savers. Such screen-savers are known to be CPU-

intensive and use important system resources when they are running. It is best to avoid installing these altogether as

an option at server-build time, or to remove them if they have been installed. The basic "Windows Server 2003" or blank screen savers are the best choice.

Monitor System Performance Appropriately

Running the Windows System (Performance) Monitor directly on the server console of a server you are monitoring will

impact the performance of the server and will potentially distort the results you are examining.

Wherever possible, use System Monitor from a remote system to avoid placing this extra load on the server itself.

Similarly, you should not use remote control software of the server console for performance monitoring, nor should you use a thin-client session such as Terminal Services or Citrix to carry out the task. Note

however that monitoring a system remotely will affect the network and network interface counters which might impact

your measurements.

Bear in mind that the more counters monitored, the more overhead is required. Do not monitor unnecessary counters.

This approach should be taken regardless of whether you are carrying out "live" charted monitoring of a system or logging system performance over longer periods. In the same way, do not publish System Monitor as a Citrix application. Along the same lines, sample no more frequently than is necessary to obtain the information you are looking to gather. When logging over an extended period, collecting data every minute instead of the system default of every

second might still provide acceptable levels of information. Monitoring more frequently will generate additional system

overhead and create significantly larger log files than might be required.
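The difference in log volume is easy to quantify: log size grows roughly linearly with the number of samples collected, so the sampling interval dominates. A minimal illustration:

```python
def samples_per_day(interval_s: int) -> int:
    """Number of performance samples collected per day at a given interval."""
    return 24 * 60 * 60 // interval_s

# The 1 s default collects 60x more samples (and roughly 60x larger
# log files) than a 1-minute interval over the same period.
print(samples_per_day(1))   # 86400 samples/day
print(samples_per_day(60))  # 1440 samples/day
```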

BFC tracing

It is possible to enable an SAP BusinessObjects Financial Consolidation technical log to trace SAP BusinessObjects

Financial Consolidation events occurring on the Application Server. This is the only tool that gives accurate technical


information for the SAP BusinessObjects Financial Consolidation application. The settings are described in an XML file and define the event types and the content to log.

The XML log definition file is named CtServerLogConfig.xml. If you want to dedicate an XML log to a named data source, add the data source name as a prefix to the XML file name.

For example, you can create a P38CtServerLogConfig.xml which will log only technical information for CtServer

process linked to P38 data source.

The XML log definition file is read only when the CtServer process is started, so any modification to an XML file takes effect only when the next CtServer.exe instance is run.

The XML log file allows the definition of:

The level of information to collect

The type of information to collect

The output recording media

The XML file is defined by a series of combined tags:

One or more <logger> tags, which allow you to configure the type of information you want to record.

One or more <appender> tags, which allow you to configure the output recording media. Each type of appender can be configured to format the output (the <layout> tag).


Once your logger and appender are correctly configured you may link any logger to any appender.

So for example, you can set up a logger which will filter any database error message and redirect them to a dedicated

appender (a file for example).
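As an illustrative sketch only (the appender type, file path, and layout pattern below are hypothetical; consult the CtServerLogConfig.xml delivered with your installation for the exact schema), a file appender paired with a database-error logger might look like:

```xml
<!-- Hypothetical example: a file appender receiving database error messages -->
<appender name="myfile" type="FileAppender">
  <file value="C:\Logs\CtServerErrors.log" />
  <layout type="PatternLayout">
    <conversionPattern value="%date [%thread] %level %logger - %message%newline" />
  </layout>
</appender>

<logger name="system.database">
  <level value="error" />
  <appender-ref ref="myfile" />
</logger>
```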

There are four information levels:

Info: Will log information about the processing being run.

Warning: Will log warning messages concerning unexpected errors that do not compromise the integrity of the data

Error: Will log only error messages concerning unexpected errors. These errors allow the application to continue running, but they can nevertheless compromise the integrity of the data

Debug: Will log everything

Debug level is not recommended, as it generates a large amount of information that may impact overall performance.

Debug level may be requested by Business Objects support for troubleshooting investigation only.

Costly queries appear in the generated log files if the response time from the database is more than 15 s.

03-26-09 14:44:19 [2272] WARN system.database.command.dml - [Request 4104] - Costly query execution - SELECT a1.header_dim_values, a1.locking_mode, a1.locking_date, a1.session_certificate, a2.user_id FROM ct_md_lock a1 LEFT OUTER JOIN ct_active_session a2 ON (a1.session_certificate = a2.session_certificate) WHERE ((a1.datasource_id = -524279) AND ((a1.header_dim_values LIKE ?) OR a1.header_dim_values IN (?, '00000001'))) - 0 row(s) affected - duration 1881.81 s

It is possible to generate all the queries run by ctserver.exe with the logger system.database.command set to debug.

<logger name="system.database.command">
  <level value="debug" />
  <appender-ref ref="myfile" />
</logger>

We recommend checking the technical logs for costly queries. If ctserver.exe keeps waiting for a response from the database server, at some point it will freeze. If the server cannot update its certificate, it will stop:

03-03-09 02:08:44 [5660] WARN system.database.command.dml - [Request 42148] - Costly query execution - UPDATE ct_mutex SET ct_lock = ? WHERE ct_name = ? - 1 row(s) affected - duration 209.046 s
03-03-09 02:08:44 [4384] WARN system.database.command.dml - [Request 42161] - Costly query execution - DELETE FROM ct_active_session WHERE (ct_active_session.session_certificate = ?) - 1 row(s) affected - duration 86.766 s
03-03-09 02:09:14 [5660] ERROR application.controller - Server lease with certificate [{626D6E89-CEEA-448E-91E4-9FC18BDE80C1}] and lease time [125000] renew failed
03-03-09 02:09:14 [5660] FATAL application - Application is stopping because server lease expiration date update failed
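Checking logs for costly queries can be scripted. The sketch below parses the "Costly query execution" lines shown in the excerpts above; the regular expression is an assumption based on those excerpts, and the exact log format may vary between BFC versions.

```python
import re

# Matches "Costly query execution - <query> - ... duration <N> s" lines,
# per the log excerpts above (format assumed, may differ by version).
COSTLY = re.compile(
    r"Costly query execution - (?P<query>.+?) - .*duration (?P<secs>[\d.]+) s")

def costly_queries(lines, threshold_s=15.0):
    """Yield (duration_seconds, query_text) for queries at or above threshold."""
    for line in lines:
        m = COSTLY.search(line)
        if m and float(m.group("secs")) >= threshold_s:
            yield float(m.group("secs")), m.group("query")

sample = ['03-03-09 02:08:44 [5660] WARN system.database.command.dml - '
          '[Request 42148] - Costly query execution - UPDATE ct_mutex SET '
          'ct_lock = ? WHERE ct_name = ? - 1 row(s) affected - duration 209.046 s']
print(list(costly_queries(sample)))
# [(209.046, 'UPDATE ct_mutex SET ct_lock = ? WHERE ct_name = ?')]
```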

Optimizing BFC Transfer

In an SAP BusinessObjects Financial Consolidation architecture where there are local data entry sites and step

consolidation sites, you can exchange information by sending and receiving files.


These files can be sent manually (via CDs or disk) or automatically. Files can be sent automatically via mail systems

that support the MAPI or SMTP/POP3 protocols.

The number of tasks that can be run simultaneously on a given server can be specified in the Administration console (4

by default). The setting you enter is applied to all the servers. To find out more, see the Installation and Administration

Guide.

These requests are then:

• processed directly by the processing units as soon as one is freed up

• sent to the queue until a processing unit is freed up

We recommend regularly purging the task view in the BFC application. The greater the number of tasks, the longer it takes to access the task view.

When you delete a "receive task" in the BFC application, the corresponding .tmp file is also deleted. If the Incoming and Outbound directories are cleaned manually, we recommend stopping the application server first.

When receiving certain important objects (such as dimension builder elements, category scenarios...), locks are placed on tables in the database, which slows the platform. These locks are necessary to ensure data integrity.

We recommend that you receive these objects when no user is connected to the application (data source started in

exclusive mode), or during dedicated hours.

Web Server

To access SAP BusinessObjects Financial Consolidation via the web, you must add an HTTP server to the architecture.

While the application server can manage requests from web clients, it cannot manage the flow of data in HTTP format.

The HTTP server is independent from SAP BusinessObjects Financial Consolidation; it relies on Microsoft IIS

technology. The HTTP server is linked to the SAP BusinessObjects Financial Consolidation application server via the web

connector, an ASP.NET application. The system setup automatically installs the .NET framework. The web connector

monitors user sessions and converts HTTP requests to DCOM remote procedure calls for the application server. The

web connector also formats the HTML pages received from the application server before sending them to the clients.

SAP BusinessObjects Financial Consolidation uses the SSL and VPN security protocols to encrypt the data. The client

computer stores a cookie, consisting only of a number, to identify SAP BusinessObjects Financial Consolidation

sessions.

We recommend using a 100 Mbps local network for communication between the SAP BusinessObjects Financial Consolidation web server and the SAP BusinessObjects Financial Consolidation application server.

IIS 6.0 is not installed on Windows Server 2003 by default. When you first install IIS 6.0, it is locked down — which

means that only request handling for static Web pages is enabled, and only the World Wide Web Publishing Service


(WWW service) is installed. None of the features that sit on top of IIS are turned on, including ASP, ASP.NET, CGI

scripting, FrontPage® 2002 Server Extensions from Microsoft, and WebDAV publishing. If you do not enable these

features, IIS returns a 404 error. You can enable these features through the Web Services Extensions node in IIS

Manager.

Microsoft® ASP.NET is a unified Web development platform that provides the services necessary for developers to

build enterprise-class Web applications. While ASP.NET is largely syntax compatible with ASP, it also provides a new

programming model and infrastructure for more secure, scalable, and stable applications. ASP.NET is part of Microsoft

.NET Framework, a computing environment that simplifies application development in the highly distributed

environment of the Internet.

.NET Framework includes the common language runtime, which provides core services such as memory management,

thread management, code security, and the .NET Framework class library, a comprehensive and object-oriented

collection of types that developers can use to create applications.

The .NET Framework configuration is maintained in Machine.config file (located in the

%SystemRoot%\Microsoft.NET\Framework\%VersionNumber%\CONFIG\ folder). Settings in this file affect all of the

.NET applications on the server.

IIS installation

To install IIS, add components, or remove components using Control Panel:

1. From the Start menu, click Control Panel.

2. Double-click Add or Remove Programs.

3. Click Add/Remove Windows Components.

4. In the Components list box, click Application Server.

5. Click Details.

6. Click Internet Information Services Manager.

7. Click Details to view the list of IIS optional components. For a detailed description of IIS optional components, see "Optional

Components" in this topic.

8. Select all optional components you wish to install (IIS, COM+, ASP.NET, Application Server Console).

When you install IIS 6.0 on a computer that does not contain an earlier version of IIS, IIS 6.0 automatically installs the

following two services:

The WWW service, which hosts Internet and intranet content

The IIS Admin service, which manages the IIS metabase

The table below lists the IIS services and their service hosts.

Table 2.2 Basic Services Provided by IIS 6.0

Service Name: World Wide Web Publishing Service (WWW service)
Description: Delivers Web publishing services.
Service Short Name: W3SVC
Core Component: Iisw3adm.dll
Host: Svchost.exe

Service Name: IIS Admin Service
Description: Manages the metabase.
Service Short Name: IISADMIN
Core Component: Iisadmin.dll
Host: Inetinfo.exe


World Wide Web Publishing Service

The World Wide Web Publishing Service (WWW service) provides Web publishing for IIS, connecting client HTTP

requests to Web sites running on an IIS-based Web server.

The WWW service manages and configures the IIS core components that process HTTP requests. These core

components include the HTTP protocol stack (HTTP.sys) and the worker processes.

The WWW service includes these subcomponents: Active Server Pages (ASP), Internet Data Connector, Remote

Administration (HTML), Remote Desktop Web Connection, server-side includes (SSI), Web Distributed Authoring and

Versioning (WebDAV) publishing, and ASP.NET.

IIS Admin Service

IIS Admin service is a Windows Server 2003 service that manages the IIS metabase. The metabase stores IIS

configuration data in a plaintext XML file, which you can read and edit using common text editors. IIS Admin service

makes metabase data available to other applications, including the core components of IIS, applications built on IIS,

and applications that are independent of IIS, such as management or monitoring tools.

IIS 6.0 Core Components (IIS 6.0)

IIS 6.0 contains several core components that perform important functions in the new IIS architecture:

HTTP Protocol Stack (HTTP.sys). When IIS 6.0 runs in worker process isolation mode, HTTP.sys listens for

requests and queues those requests in the appropriate queue. Each request queue corresponds to one

application pool. An application pool corresponds to one request queue within HTTP.sys and one or more

worker processes.

Worker processes. A worker process is user-mode code whose role is to process requests, such as processing

requests to return a static page, invoking an ISAPI extension or filter, or running a Common Gateway Interface

(CGI) handler. In both application isolation modes, the worker process is controlled by the WWW service.

However, in worker process isolation mode, a worker process runs as an executable file named W3wp.exe,

and in IIS 5.0 isolation mode, a worker process is hosted by Inetinfo.exe. Worker processes use HTTP.sys to

receive requests and to send responses by using HTTP. Worker processes also run application code, such as

ASP.NET applications and XML Web services.

WWW Service Administration and Monitoring. This component is responsible for two roles in IIS: HTTP administration and

worker process management. This service reads metabase information and initializes the HTTP.sys

namespace routing table with one entry for each application. HTTP.sys then uses the routing table data to

determine which application pool responds to requests from what parts of the namespace. When HTTP.sys

receives a request, it signals the WWW service to start a worker process for the application pool. When you

add or delete an application pool, the WWW service processes the configuration changes, which includes

adding or deleting the application pool queue from HTTP.sys. The WWW service also listens for and processes

configuration changes that occur as a result of application pool recycling. The WWW service is responsible for

managing the worker processes, which includes starting the worker processes and maintaining information

about the running worker processes. This service also determines when to start a worker process, when to

recycle a worker process, and when to restart a worker process if it becomes blocked and is unable to process

any more requests.

Inetinfo.exe. When IIS 6.0 runs in worker process isolation mode, Inetinfo.exe is a user-mode component that

hosts the IIS metabase and that also hosts the non-Web services of IIS 6.0, including the FTP service, the

SMTP service, and the NNTP service. Inetinfo.exe depends on IIS Admin service to host the metabase. When

IIS 6.0 runs in IIS 5.0 isolation mode, Inetinfo.exe functions much as it did in IIS 5.0. In IIS 5.0 isolation mode,

however, Inetinfo.exe hosts the worker process, which runs ISAPI filters, Low-isolation ISAPI extensions, and

other Web applications.

IIS Metabase. The IIS metabase is a plaintext, XML data store that contains most IIS configuration information.

Although most of the IIS configuration settings are stored in the IIS metabase, a few settings are maintained in

the Windows registry. The metabase consists of the following elements:

o The MetaBase.xml file stores IIS configuration information that is specific to an installation of IIS.

o The MBSchema.xml file is a master configuration file that defines default attributes for all metabase

properties and enforces rules for constructing and placing metabase entries within the metabase.

o In-memory metabase. The in-memory metabase contains the most current metabase and metabase

schema configuration. The in-memory metabase accepts changes to the metabase configuration and

schema, storing them in RAM, and periodically writing changes to the on-disk metabase and metabase

schema files.

When IIS starts, the MetaBase.xml and MBSchema.xml files are read by the IIS storage layer and copied to the

in-memory metabase. While IIS is running, any changes that you make to the in-memory metabase are

periodically written to disk. IIS also saves the in-memory metabase to disk when you stop IIS. The IIS Admin

service makes the metabase available (by means of the Admin Base Object) to other applications, including

the core components of IIS, applications built on IIS, and applications that are independent of IIS, such as

management or monitoring tools.

IIS 6.0 runs a server in one of two distinct request processing models, called application isolation modes. In each

isolation mode, IIS functions differently, although both application isolation modes rely on HTTP.sys as the HTTP

listener.

Worker Process Isolation Mode

Worker process isolation mode takes advantage of the redesigned architecture for IIS 6.0. In this application isolation

mode, you can group Web applications into application pools, through which you can apply specific configuration

settings to groups of applications and to the worker processes servicing those applications. Any Web directory or virtual

directory can be assigned to an application pool.

By creating new application pools and assigning Websites and applications to them, you can make your server more efficient and reliable, and keep your other applications available even when the applications in a given application pool terminate.

We recommend you configure one application pool for each BusinessObjects Financial Consolidation application

that you want to deploy.

IIS 5.0 Isolation Mode

IIS 5.0 isolation mode is provided for applications that depend upon specific features and behaviors of IIS 5.0. Using this isolation mode for the BFC application is not recommended.

After you have your site and application up and running the way you want, you can save all or part of the configurations

for a backup copy, or for import and export to other sites or computers.

IIS automatically makes a backup copy of the metabase configuration and schema files each time the metabase changes. (The metabase, a repository for most Internet Information Services (IIS) configuration values, is a plaintext XML file that can be edited manually or programmatically.) Administrators can also create backup files on demand, or create backup copies of individual site or application configurations and then export and import them to and from other sites or computers.

Backup files contain only configuration data; they do not include your contents (.asp files, .htm files, .dll files, and so

on).

To save a metabase configuration

1. In IIS Manager, right-click the local computer, point to All Tasks, and then choose Backup/Restore Configuration.

2. Choose Create Backup.

3. In the Configuration backup name box, type a name for the backup file.

4. To restore your backup files to a different computer, provide a password that will be used as the encryption key by doing the

following:

Select the Encrypt backup using password check box, type a "strong password" in the Password box, and then type the same

password in the Confirm password box.

5. Choose OK, and then choose Close.

To restore a metabase configuration

1. In IIS Manager, right-click the local computer, point to All Tasks, and then choose Backup/Restore Configuration.

2. From the Backups list box, choose a previous backup version, and choose Restore.

3. When a confirmation message appears, choose Yes.

4. Choose OK, and then choose Close.

To save a site or application configuration

1. In IIS Manager, right-click the site or application you want to back up, point to All Tasks, and choose Save Configuration to a File.

2. In the File name box, type a file name.

3. In the Path box, type or browse to the location where you want to save the file.

4. Choose OK.

Improving ASP.NET Performance

Consider the following settings in the Machine.config file:

Set maxIoThreads (this setting controls the maximum number of I/O threads in the .NET thread pool) and

maxWorkerThreads (this setting controls the maximum number of worker threads in the thread pool) to 100.

If you move your application to a new computer, ensure that you recalculate and reconfigure the settings based on the

number of CPUs in the new computer.
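For reference, both attributes live on the <processModel> element in Machine.config. The fragment below is an illustrative sketch only, not a complete file; note that in .NET 1.1 these limits are commonly interpreted per CPU, and an autoConfig attribute, when enabled, can override manually set values.

```xml
<!-- Illustrative Machine.config fragment for the thread-pool settings above.
     maxWorkerThreads / maxIoThreads cap the .NET thread pool; recalculate
     when the CPU count of the host changes, as noted in the text. -->
<system.web>
  <processModel maxWorkerThreads="100" maxIoThreads="100" />
</system.web>
```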

Improving Web Services Performance

To perform large data transfers, start by checking that the maxRequestLength parameter in the <httpRuntime> element

of machine.config file is large enough. This parameter limits the maximum SOAP message size for a Web service.

Next, check your timeout settings. Set an appropriate timeout on the Web service proxy, and make sure that your

ASP.NET timeout is larger than your Web service timeout.

We recommend the following values: 3600 for the timeout parameter and 16384 for the maxRequestLength parameter.
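As a sketch of where these values go, assuming the timeout refers to ASP.NET's executionTimeout attribute (expressed in seconds) on the <httpRuntime> element: maxRequestLength is expressed in KB, so 16384 corresponds to 16 MB. The Web service proxy timeout, set in client code, should then be smaller than this ASP.NET timeout.

```xml
<!-- Illustrative machine.config fragment with the recommended values:
     executionTimeout = 3600 seconds, maxRequestLength = 16384 KB (16 MB),
     which bounds the maximum SOAP message size for a Web service. -->
<system.web>
  <httpRuntime executionTimeout="3600" maxRequestLength="16384" />
</system.web>
```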

Monitoring Tools

Monitoring the servers consists of the following tasks:

Verify that the processes InetInfo.exe and w3wp.exe are up and running.

Monitor CPU (% Processor Time, Processor Queue Length), memory utilization (Available MBytes, Pages/sec), and IIS counters.

If the value of % Processor Time is high, then queuing occurs, and in most scenarios the value of System\Processor Queue Length is also high. The next step is to identify which process is consuming processor time.

A low value of Available MBytes indicates that your system is low on physical memory, caused either by system memory limitations or by an application that is not releasing memory.
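The two rules of thumb above can be expressed as a small health check. The function below is an illustrative sketch only; the thresholds (80% CPU, a queue longer than 2 per processor, 256 MB of available memory) are assumptions, not product settings.

```python
def check_health(processor_time_pct, processor_queue_length,
                 available_mbytes, cpu_count=1):
    """Apply the monitoring rules of thumb to sampled counter values.

    Sustained % Processor Time above 80 with a Processor Queue Length
    above 2 per processor suggests CPU queuing; low Available MBytes
    suggests memory pressure. Thresholds are illustrative.
    """
    warnings = []
    if processor_time_pct > 80 and processor_queue_length > 2 * cpu_count:
        warnings.append("CPU queuing: identify which process consumes processor time")
    if available_mbytes < 256:
        warnings.append("Low physical memory: check for an application not releasing memory")
    return warnings
```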

Configure log monitoring (see administration guide).

When you install IIS 6.0, all of the IIS performance counters, with the exception of the SNMP counters, appear in

System Monitor and in Performance Logs and Alerts.

The Web Service counters help you determine how well the World Wide Web Publishing Service (WWW service)

processes requests. The WWW service is a user-mode service. These counters also reflect the processing that occurs

in the kernel-mode driver, HTTP.sys.

Configure these counters to monitor performance for individual Websites, or for all sites on a server (by using the

_Total instance). Note that some counters that were included with IIS 5.x are now obsolete and, therefore, return a zero

value.

The table below describes the counters for the Web Service performance object.

Bytes (Sent, Received, and Transferred) Counters

Counter Description

Total Bytes Sent The number of data bytes that have been sent by the WWW service since the service started. This counter is new in IIS 6.0.

Bytes Sent/sec The rate, in seconds, at which data bytes have been sent by the WWW service.

Total Bytes Received The total bytes of data that have been received by the WWW service since the service started. This counter is new in IIS 6.0.

Bytes Received/sec The rate, in seconds, at which data bytes have been received by the WWW service.

Total Bytes Transferred The total number of bytes of data that have been sent and received by the WWW service since the service started. This counter is new in IIS 6.0.

Bytes Total/sec The sum of Bytes Sent/sec and Bytes Received/sec.

Connections and Attempts Counters

Counter Description

Current Connections The number of active connections to the WWW service.

Maximum Connections The maximum number of simultaneous connections made to the WWW service since the service started.

Total Connection Attempts The number of connections to the WWW service that have been attempted since the service started.

Connection Attempts/sec The rate, in seconds, at which connections to the WWW service have been attempted since the service started.

Total Logon Attempts The number of attempts to log on to the WWW service that have occurred since the service started.

Logon Attempts/sec The rate, in seconds, at which attempts to log on to the WWW service have occurred.

Table Requests Counters

Counter Description

Total Options Requests The number of HTTP requests that have used the OPTIONS method since the WWW service started.

Options Requests/sec The rate, in seconds, at which HTTP requests that use the OPTIONS method have been made.

Total Get Requests The number of HTTP requests that have used the GET method since the WWW service started.

Get Requests/sec The rate, in seconds, at which HTTP requests that use the GET method have been made to the WWW service.

Total Post Requests The number of HTTP requests that have used the POST method since the WWW service started.

Post Requests/sec The rate, in seconds, at which requests that use the POST method have been made to the WWW service.

Total Head Requests The number of HTTP requests that have used the HEAD method since the WWW service started.

Head Requests/sec The rate, in seconds, at which HTTP requests that use the HEAD method have been made to the WWW service.

Total Put Requests The number of HTTP requests that have used the PUT method since the WWW service started.

Put Requests/sec The rate, in seconds, at which HTTP requests that use the PUT method have been made to the WWW service.

Total Delete Requests The number of HTTP requests that have used the DELETE method since the WWW service started.

Delete Requests/sec The rate, in seconds, at which HTTP requests that use the DELETE method have been made to the WWW service.

Total Trace Requests The number of HTTP requests that have used the TRACE method since the WWW service started.

Trace Requests/sec The rate, in seconds, at which HTTP requests that use the TRACE method have been made to the WWW service.

Total Move Requests The number of HTTP requests that have used the MOVE method since the WWW service started.

Move Requests/sec The rate, in seconds, at which HTTP requests that use the MOVE method have been made to the WWW service.

Total Copy Requests The number of HTTP requests that have used the COPY method since the WWW service started.

Copy Requests/sec The rate, in seconds, at which HTTP requests that use the COPY method have been made to the WWW service.

Total Search Requests The number of HTTP requests that have used the SEARCH method since the WWW service started.

Search Requests/sec The rate, in seconds, at which HTTP requests that use the SEARCH method have been made to the WWW service.

Total Lock Requests The number of HTTP requests that have used the LOCK method since the WWW service started.

Lock Requests/sec The rate, in seconds, at which HTTP requests that use the LOCK method have been made to the WWW service.

Total Unlock Requests The number of HTTP requests that have used the UNLOCK method since the WWW service started.

Unlock Requests/sec The rate, in seconds, at which HTTP requests that use the UNLOCK method have been made to the WWW service.

Total Method Requests The number of HTTP requests that have been made since the WWW service started.

Total Method Requests/sec The rate, in seconds, at which all HTTP requests have been received.

Table Errors (Not Found and Locked) Counters

Counter Description

Total Not Found Errors The number of requests that have been made since the service started that were not satisfied by the server because the requested document was not found. Usually reported as HTTP error 404.

Not Found Errors/sec The rate, in seconds, at which requests were not satisfied by the server because the requested document was not found.

Total Locked Errors The number of requests that have been made since the service started that could not be satisfied by the server because the

requested document was locked. Usually reported as HTTP error 423.

Locked Errors/sec The rate, in seconds, at which requests were not satisfied because the requested document was locked.

ASP.NET supports the following ASP.NET system performance counters, which aggregate information for all ASP.NET

applications on a Web server computer, or, alternatively, apply generally to a system of ASP.NET servers running the

same applications.

Use the ASP.NET counters in the following table to monitor ASP.NET system performance.

Table Application Restarts and Applications Running Counters

Counter Description

Application Restarts The number of times that an application has been restarted since the Web service started. Application restarts are

incremented with each Application_OnEnd event. An application restart can occur because changes were made to the

Web.config file or to assemblies stored in the application's \Bin directory, or because too many changes occurred in Web

Forms pages. Sudden increases in this counter can mean that your Web application is shutting down. If an unexpected

increase occurs, be sure to investigate it promptly. This value resets every time IIS is restarted.

Applications Running The number of applications that are running on the server computer.

Table Requests Counters

Counter Description

Requests Disconnected The number of requests that were disconnected because a communication failure occurred.

Requests Queued The number of requests in the queue waiting to be serviced. If this number increases as the number of client requests

increases, the Web server has reached the limit of concurrent requests that it can process. The default maximum for this

counter is 5,000 requests. You can change this setting in the computer's Machine.config file.

Requests Rejected The total number of requests that were not executed because insufficient server resources existed to process them. This

counter represents the number of requests that return a 503 HTTP status code, which indicates that the server is too busy.

Request Wait Time The number of milliseconds that the most recent request waited in the queue for processing.

Table State Server Sessions Counters

Counter Description

State Server Sessions Abandoned The number of user sessions that were explicitly abandoned. These are sessions that have been ended by specific user actions, such as closing the browser or navigating to another site.

State Server Sessions Active The number of active user sessions.

State Server Sessions Timed Out The number of user sessions that are now inactive. In this case, the user is inactive, not the server.

State Server Sessions Total The number of sessions created during the lifetime of the process. This counter represents the cumulative value of the State Server Sessions Active, State Server Sessions Abandoned, and State Server Sessions Timed Out counters.

Table Worker Process Counters

Counter Description

Worker Process Restarts The number of times that a worker process restarted on the server computer. A worker process can be restarted if it fails unexpectedly or when it is intentionally recycled. If worker process restarts increase unexpectedly, investigate immediately.

Worker Processes Running The number of worker processes that are running on the server computer.

Table Anonymous Requests Counters

Counter Description

Anonymous Requests The number of requests that use anonymous authentication.

Anonymous Requests/sec The average number of requests that have been made per second that use anonymous authentication.

Table Cache Total Counters

Counter Description

Cache Total Entries The total number of entries in the cache. This counter includes both internal use of the cache by the ASP.NET framework

and external use of the cache through exposed APIs.

Cache Total Hits The total number of responses served from the cache. This counter includes both internal use of the cache by the ASP.NET

framework and external use of the cache through exposed APIs.

Cache Total Misses The number of failed cache requests. This counter includes both internal use of the cache by ASP.NET and external use of

the cache through exposed APIs.

Cache Total Hit Ratio The ratio of cache hits to cache misses. This counter includes both internal use of the cache by ASP.NET and external use

of the cache through exposed APIs.

Cache Total Turnover Rate The number of additions to and removals from the cache per second. Use this counter to help determine how efficiently the cache is being used. If the turnover rate is high, the cache is not being used efficiently.
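As a sketch of how these counters combine, the hit ratio and turnover rate can be derived from raw samples. The function below is an illustration, not part of the product; it interprets the hit ratio as hits over total lookups, and the turnover rate as additions plus removals over the sampling interval.

```python
def cache_stats(hits, misses, additions, removals, interval_seconds):
    """Derive a cache hit ratio and turnover rate from raw counter
    samples taken over one sampling interval."""
    total = hits + misses
    hit_ratio = hits / total if total else 0.0            # hits over total lookups
    turnover = (additions + removals) / interval_seconds  # entries churned per second
    return hit_ratio, turnover
```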

Table Errors Counters

Counter Description

Errors During Preprocessing The number of errors that occurred during parsing. Excludes compilation and run-time errors.

Errors During Compilation The number of errors that occurred during dynamic compilation. Excludes parser and run-time errors.

Errors During Execution The total number of errors that occurred during the execution of an HTTP request. Excludes parser and compilation errors.

Errors Unhandled During Execution The total number of unhandled errors that occurred during the execution of HTTP requests.

An unhandled error is any uncaught run-time exception that escapes user code on the page and enters the ASP.NET internal error-handling logic. The following cases are exceptions and are not counted as unhandled errors:

• When custom errors are enabled, an error page is defined, or both.

• When the Page_Error event is defined in user code and either the error is cleared (by using the HttpServerUtility.ClearError method) or a redirect is performed.

Errors Unhandled During Execution/sec The number of unhandled exceptions that occurred per second during the execution of HTTP requests.

Errors Total The total number of errors that occurred during the execution of HTTP requests. Includes parser, compilation, or run-time

errors. This counter represents the sum of the Errors During Compilation, Errors During Preprocessing, and Errors During

Execution counters. A well-functioning Web server should not generate errors.

Errors Total/sec The average number of errors that occurred per second during the execution of HTTP requests. Includes any parser,

compilation, or run-time errors.

Table Request Bytes Counters

Counter Description

Request Bytes In Total The total size, in bytes, of all requests.

Request Bytes Out Total The total size, in bytes, of responses sent to a client. This does not include standard HTTP response headers.

Table Requests Counters

Counter Description

Requests Executing The number of requests that are currently executing.

Requests Failed The total number of failed requests. All status codes greater than or equal to 400 increment this counter.

Note: Requests that cause a 401 status code increment this counter and the Requests Not Authorized counter. Requests that cause a 404 or 414 status code increment this counter and the Requests Not Found counter. Requests that cause a 500 status code increment this counter and the Requests Timed Out counter.

Requests In Application Queue The number of requests in the application request queue.

Requests Not Found The number of requests that failed because resources were not found (status code 404, 414).

Requests Not Authorized The number of requests that failed because of lack of authorization (status code 401).

Requests Succeeded The number of requests that executed successfully (status code 200).

Requests Timed Out The number of requests that timed out (status code 500).

Requests Total The total number of requests that have been made since the service started.

Requests/sec The average number of requests that have been executed per second. This counter represents the current throughput of the

application.
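The note above about which counters a given status code increments can be made concrete. This sketch mirrors the mapping described in the table; the counter names are taken from the text, but the function itself is illustrative, not a product API.

```python
def counters_incremented(status_code):
    """Return the ASP.NET request counters that a given HTTP status
    code increments, following the mapping described in the table."""
    counters = ["Requests Total"]
    if status_code == 200:
        counters.append("Requests Succeeded")
    if status_code >= 400:
        counters.append("Requests Failed")        # all codes >= 400
    if status_code == 401:
        counters.append("Requests Not Authorized")
    if status_code in (404, 414):
        counters.append("Requests Not Found")
    if status_code == 500:
        counters.append("Requests Timed Out")
    return counters
```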

Table Sessions Counters

Counter Description

Sessions Active The number of sessions that are active.

Sessions Abandoned The number of sessions that have been explicitly abandoned.

Sessions Timed Out The number of sessions that timed out.

Sessions Total The total number of sessions.

Table Transactions Counters

Counter Description

Transactions Aborted The number of transactions that were aborted.

Transactions Committed The number of transactions that were committed. This counter increments after page execution if the transaction does not abort.

Transactions Pending The number of transactions that are in progress.

Transactions Total The total number of transactions that have occurred since the service was started.

Transactions/sec The average number of transactions that were started per second.

Table Miscellaneous Counters for ASP.NET Applications

Counter Description

Compilations Total The total number of times that the Web server process dynamically compiled requests for files with .aspx, .asmx, .ascx, or .ashx extensions (or a code-behind source file).

Note: This number initially climbs to a peak value as requests are made to all parts of an application. After compilation occurs, however, the resulting binary compilation is saved on disk, where it is reused until its source file changes. This means that, even when a process restarts, the counter can remain at zero (be inactive) until the application is modified or redeployed.

Debugging Requests The number of requests that occurred while debugging was enabled.

Pipeline Instance Count The number of active request pipeline instances for the specified ASP.NET application. Because only one execution thread can run within a pipeline instance, this number represents the maximum number of concurrent requests that are being processed for a specific application. In most circumstances, it is better for this number to be low when the server is busy, because this means that the CPU is well used.

For troubleshooting, you can consult the following information:

Windows events log

Internet Information Services (IIS) 6.0 generates events in Event Viewer so that you can verify the performance

of IIS. Event Viewer tracks the following: error events, warning events, and informational events. The logs in

Event Viewer provide an audited record of all services and processes in the Microsoft® Windows® Server 2003

operating system, such as logon and connection times. You can find the list of events that are generated by the

World Wide Web Publishing Service that handles internal administration of the W3SVC at

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/01cb5312-c8d0-48b5-955a-

47b61fccff04.mspx?mfr=true

At the address http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/01cb5312-c8d0-

48b5-955a-47b61fccff04.mspx?mfr=true you can find the list of the events that the WWW service worker

process generates, such as authentication and authorization, application problems, memory monitoring, and so

on.

Microsoft IIS log files

IIS log files can include information such as who has visited your site, what was viewed, and when the

information was last viewed. You can monitor attempts to access your sites, virtual folders, or files and

determine whether attempts were made to read or write to your files. IIS log file formats allow you to record

events independently for any site, virtual folder, or file.

The W3C Extended log file format is the default log file format for IIS. It is a customizable ASCII text-based

format. You can use IIS Manager to select which fields to include in the log file, which allows you to keep log

files as small as possible. Because HTTP.sys handles the W3C Extended log file format, this format records

HTTP.sys kernel-mode cache hits.

Table 10.1 lists and describes the available fields. Default fields are noted. We also recommend activating

cookie and time taken options.

Table 10.1 W3C Extended Log File Fields

Field Appears As Description Default Y/N

Date date The date on which the activity occurred. Y

Time time The time, in coordinated universal time (UTC), at which the activity occurred. Y

Client IP Address c-ip The IP address of the client that made the request. Y

User Name cs-username The name of the authenticated user who accessed your server. Anonymous users are indicated by a hyphen. Y

Service Name and Instance Number s-sitename The Internet service name and instance number that was running on the client. N

Server Name s-computername The name of the server on which the log file entry was generated. N

Server IP Address s-ip The IP address of the server on which the log file entry was generated. Y

Server Port s-port The server port number that is configured for the service. Y

Method cs-method The requested action, for example, a GET method. Y

URI Stem cs-uri-stem The target of the action, for example, Default.htm. Y

URI Query cs-uri-query The query, if any, that the client was trying to perform. A Universal Resource Identifier (URI) query is necessary only for dynamic pages. Y

HTTP Status sc-status The HTTP status code. Y

Win32 Status sc-win32-status The Windows status code. N

Bytes Sent sc-bytes The number of bytes that the server sent. N

Bytes Received cs-bytes The number of bytes that the server received. N

Time Taken time-taken The length of time that the action took, in milliseconds. N

Protocol Version cs-version The protocol version (HTTP or FTP) that the client used. N

Host cs-host The host header name, if any. N

User Agent cs(User-Agent) The browser type that the client used. Y

Cookie cs(Cookie) The content of the cookie sent or received, if any. N

Referrer cs(Referrer) The site that the user last visited. This site provided a link to the current site. N

Protocol Substatus sc-substatus The substatus error code. Y
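Because the W3C Extended format declares its selected columns in a #Fields directive, log entries can be parsed generically regardless of which fields you enable in IIS Manager. The snippet below is an illustrative sketch, not a product tool.

```python
def parse_w3c_log(lines):
    """Parse W3C Extended log lines into dicts keyed by field name.

    The #Fields directive lists the selected columns (e.g. date, time,
    c-ip, cs-method, cs-uri-stem, sc-status, time-taken); data lines
    are space-separated, with '-' marking empty values.
    """
    fields, entries = [], []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # column names follow the directive
        elif line.startswith("#") or not line:
            continue                    # skip other directives and blanks
        else:
            entries.append(dict(zip(fields, line.split())))
    return entries
```

With the Time Taken field activated, as recommended above, each parsed entry exposes per-request latency under the `time-taken` key.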

SAP BusinessObjects Financial Consolidation Web application log files (see the administration guide)

OS Optimization

See the Application Server part.

Citrix/TSE Server

General Information

Citrix versions:

- Citrix Winframe 1.0, 1.8

- Citrix Metaframe 1.0, 1.8

- Citrix Metaframe XP 1.0, FR1, FR2,FR3

- Metaframe Presentation Server 3.0

- Citrix Presentation server 4.0, 4.5 or (XenApp server)

Citrix specific characteristics:

- Add-on to Windows Terminal services (except Citrix Winframe product)

- ICA (Independent Computing Architecture) as native network communication protocol

- Supports multi-clients (Microsoft, Linux, PDA, DOS….)

- Supports multi-protocols (TCP/IP, SPX/IPX, Netbeui, Modem….)

- Desktop & published applications connection

- Load-balancing functionalities based on server load (Advanced & Enterprise Edition only).

Citrix uses one protocol, ICA (Independent Computing Architecture), and one service, IMA (Independent Management Architecture).

Bandwidth Required by ICA Protocol

Only display updates, keyboard input, and mouse movements are exchanged between the client and the server.

As a result, the ICA flow only requires 10-20 Kbps per session.
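Using the 10-20 Kbps per-session figure above, the WAN bandwidth needed for a Citrix user population can be estimated with a quick sketch. The 30% headroom factor is our assumption for bursts and printing traffic, not a Citrix or SAP recommendation.

```python
# Estimate the WAN link needed for N concurrent ICA sessions.
def ica_bandwidth_kbps(sessions, per_session_kbps=20, headroom=0.3):
    """Total bandwidth in Kbps, with a safety margin on top of the raw flow."""
    return sessions * per_session_kbps * (1 + headroom)

# 50 users at the 20 Kbps worst case -> 1300.0 Kbps
print(ica_bandwidth_kbps(50))
```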

Desktop Versus Published Application Connection

Whether a desktop or a published application connection is used depends on the type of client that has been implemented.

Thick clients: Program neighborhood

- Farm connection

- Server connection

Thin clients: Web Interface /Nfuse

- Citrix Web portal

- Only Published Application

Citrix thick client

View of the Citrix Program Neighborhood


Citrix thin client

View of the Citrix Web portal

Desktop connection

- Possible security issues

- To be carefully administered

- Better application compatibility

Citrix Published application / Virtualization connection

- More secure

- Transparency

- Not compatible with all applications.

Application Installation

Installation mode:

- Mandatory when installing applications

- INI file changes are recorded

- All initial registry entries created under HKCU\Software are duplicated under HKLM\Software\Microsoft\Windows NT\CurrentVersion\Terminal Server\Install\Software

- Command line: change user /install

Execute mode (default):

- Default mode when the server is started

- The registry entries recorded during installation mode and the user's registry entries are merged


- INI files are copied to the user's Windows home directory.

- Command line: change user /execute
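The execute-mode merge described above can be illustrated with a small sketch: the entries shadow-copied under the Terminal Server Install key during installation mode form the defaults, and the user's own HKCU entries take precedence. The key and value names are invented, and the precedence rule shown is a simplification of what Terminal Server actually does.

```python
# Conceptual model of execute mode: install-mode shadow entries merged with
# per-user registry entries, user values winning on conflict.
def merge_registry(memorized, user_entries):
    merged = dict(memorized)       # start from the install-mode shadow copy
    merged.update(user_entries)    # user-specific entries win on conflict
    return merged

memorized = {"InstallDir": r"C:\Program Files\BFC", "Language": "EN"}
user = {"Language": "FR"}          # this user overrode the language
print(merge_registry(memorized, user))
```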

Database Server

The database server contains all of the data in SAP BusinessObjects Financial Consolidation. It supports standard database engines: Microsoft SQL Server and Oracle Database Server.

For BFC, the application server connects to the database server using OLE DB and the RDBMS client. The load on the database server can be significant during consolidation processing.

SAP publishes support information for the Oracle and SQL Server databases in the BFC documentation for each version. The table below shows the support matrix as of the time of writing.

Database Release BFC Release

Oracle Database 10g Release 2 (10.2) Patch Set 2 BFC 7.0

Oracle Database 11g BFC 7.0

Microsoft SQL Server 2005 SP2 BFC 7.0, BFC 7.5

Microsoft SQL Server 2008 BFC 7.5

For a BFC database server, we recommend at least 2 processors, but large production environments with 4 processors or more are not uncommon. The number of required processors depends not only on the number of concurrent users, but more specifically on the number of concurrent server tasks for background consolidation processing, and on the clock rate and architecture of the processor: 32- or 64-bit, hyper-threading and/or dual-core. A rule of thumb is one processor for each background consolidation processing job. A consolidation can run anywhere between 1 minute and 1 hour, depending on the consolidation definition and input data, with an average of about 15 minutes for a 1,000,000-row consolidation table (around 100 MB). Consolidations can be executed concurrently, in which case multiple processors are recommended to maintain consistent response times. The number of concurrent server tasks can be limited technically and by procedures, privileges, and scheduling. At least one processor should be kept reserved for the other application server tasks, such as data entry processing and reporting, with multiple "reserved" processors recommended for larger user communities. Running a production BFC system can require significant processor power from the database server.
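The rule of thumb above can be sketched as a small sizing calculator: one processor per concurrent background consolidation job, plus processors reserved for data entry and reporting. The ratio of one reserved processor per 100 users is our assumption for illustration, not an SAP figure.

```python
import math

def db_processors(concurrent_consolidations, users, reserved_per_100_users=1):
    """Rough processor count for a BFC database server."""
    # Reserve capacity for data entry and reporting, scaled by user count.
    reserved = max(1, math.ceil(users / 100) * reserved_per_100_users)
    # One CPU per concurrent consolidation job, never below the 2-CPU minimum.
    return max(2, concurrent_consolidations + reserved)

# 3 parallel consolidations and 150 users -> 5
print(db_processors(concurrent_consolidations=3, users=150))
```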

For a BFC database server, we recommend at least 2 GB of RAM, but large production environments might require more. The BFC application metadata is accessed frequently and is cached both on the database server and on the application server. The size of the metadata depends on the configuration, but will typically be between 250 and 10,000 MB.

Additional memory is required for every BFC database connection, and the number of connections can grow from 50 to about 300 on a busy production system. The memory requirement per connection depends on whether the primary activity is data entry (low), batch job processing, or reporting (high). The large BFC user data tables for data entry and consolidation are only partially held in the cache, as most of the data is historical and only occasionally referenced. Additionally, full table scans on this data are commonly executed for reporting, with the consequence that these tables stay cached for only a short time. For this reason, there is no point in making the database cache as large as possible and aiming to hold a 10 GB BFC database entirely in RAM.

Configuring storage correctly is one of the most important steps towards satisfactory database performance. Unfortunately, it is a complex topic, as it involves several layers, from the application workload (mainly read or write, mainly sequential or random, small or large requests, inter-arrival rates and concurrency) and the operating system file system architecture, to hardware adapters, controllers, buses, arrays and disk drives. Few general rules exist, as almost every system architecture will have to allow for a different combination of hardware configurations and software workloads. There are very few points in common between a small BFC customer and a large BFC factory. Small customers typically perform data entry during the day, with some periodic end-of-month consolidation batch job processing and subsequent reporting, working on a dedicated database server with a few disks as direct attached storage (DAS). Large customers typically have hundreds of data entry users, non-stop parallel consolidation processing, and ad hoc reporting on a database hosting multiple BFC environments.

The optimal I/O configuration depends completely on the storage subsystem solution chosen. SAP does not

benchmark all possible storage hardware and software configurations with all the vendors involved. Below, you will find

the most important guidelines that will give you acceptable performance for the BFC database at an acceptable price.

Calculate the number of disks from the I/O throughput requirements, not from the capacity required to store the database. Storing a 30 GB BFC database on a single 73 GB disk will make the system completely I/O-bound and will not give adequate response times. As a starting value, use 10 disks per processor. 15K rpm disks are preferred over 10K rpm disks. Short-stroking the disks so that only the outer edge is accessed can be considered, but will not be technically possible in all environments.
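The "size by throughput, not capacity" rule can be sketched numerically. The per-disk IOPS figure (around 180 for a 15K rpm drive) and the RAID 1+0 write penalty of 2 are common planning values, not SAP-published numbers; substitute measured figures from your own storage.

```python
import math

def disks_needed(read_iops, write_iops, disk_iops=180, write_penalty=2):
    """Spindle count from the workload's I/O rates, not from capacity."""
    # RAID 1+0 turns every logical write into two physical writes.
    effective = read_iops + write_iops * write_penalty
    return math.ceil(effective / disk_iops)

# 1200 read IOPS + 600 write IOPS on 15K drives -> 14
print(disks_needed(read_iops=1200, write_iops=600))
```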

Only use RAID-5 if benchmarks on your production environment have shown that it does not cause performance problems. In all other cases, use RAID 1+0. Being mechanical devices, disks remain the most vulnerable part of a computer system. Because they might fail one day, they need to be protected by some form of RAID configuration, with hardware RAID strongly preferred to offload the CPUs. RAID 1+0 (0+1) implements a mirrored striped array. RAID-5, and RAID-5 alternatives such as EMC RAID-S, are better suited to data warehouses with mainly read activity and are best avoided for write-intensive applications like BFC.

Extra attention is required when the BFC data is stored in SAN environments, especially when the SAN is hosted by a

third party and storage is acquired by the amount of space needed. Physical disks are hidden behind many layers of abstraction in a SAN, and you have to make sure that enough spindles are available to support the I/O throughput

requirements. If continuous heavy consolidation processing is taking place, a database containing 60 GB of user data

can easily generate a daily redo that is similar in volume. Do not expect miracles from a storage cache. A storage

subsystem cache generally will improve performance for writes (immediately confirmed) and sequential reads (indexes)

but not for random reads (full table scans). Depending on the usage, BFC can be a very write-intensive application with

considerable amounts of random reads. This is something that a storage cache will find difficult to handle.

The BFC application does not provide tools to monitor database performance (although the technical logs on the application server point out the queries that take more than 15 s), but it does provide a mechanism for creating custom indexes when it has been proven that they will lead to better performance. This mechanism is a table in the BFC Oracle schema or SQL Server database called CT_DATASOURCE_INDEXES.


The structure of this table is as follows:

COLUMN TYPE LENGTH NULLABLE COMMENT

Datasource INT 4 YES [=ct_datasource.id]

Rank INT 4 YES Number of indexes on the datasource

Partition INT 4 YES NULL=All partitions

NOT NULL=one specific partition

Columns NVARCHAR 100 YES Columns to be indexed, comma-delimited

Is_clustered BIT 1 YES TRUE= clustered index

Is_unique BIT 1 YES TRUE= unique index

Is_ascending BIT 1 YES TRUE= Ascending sort

Is_custom BIT 1 YES TRUE= Customer-created index

Table_scope INT 4 YES Column to be used later

Base_name NVARCHAR 13 YES Name used to identify the index (optional)
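As an illustrative sketch (not an SAP-delivered tool), an INSERT statement for this table could be generated as follows. The dimension column names in the example are invented; Is_ascending is fixed at 1 and Table_scope at 7, as the column descriptions in this section require at the time of writing.

```python
# Render an INSERT for CT_DATASOURCE_INDEXES from the column layout above.
def index_row_sql(datasource, rank, columns, partition="NULL",
                  is_clustered=0, is_unique=0, is_custom=1, base_name="NULL"):
    # is_ascending must be 1 and table_scope must be 7 (mandated values).
    return ("INSERT INTO ct_datasource_indexes "
            "(datasource, rank, partition, columns, is_clustered, is_unique, "
            "is_ascending, is_custom, table_scope, base_name) VALUES "
            f"({datasource}, {rank}, {partition}, '{columns}', "
            f"{is_clustered}, {is_unique}, 1, {is_custom}, 7, {base_name})")

# A custom, non-clustered index on two hypothetical dimensions of the
# consolidation data table (datasource -524285 = COAMOUNT):
print(index_row_sql(-524285, 1, "entity_id,account_id"))
```

Validate the generated statement against your own schema before running it, and keep Is_custom at 1 as recommended below.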

Table columns

‘Datasource’ column

Stores the ID of the data source to which you want to assign the index. It contains the values available in the

ct_datasource table, as shown below:

ID NAME TITLE

-524287 PKAMOUNT Package data

-524286 PCAMOUNT Preconsolidation data

-524285 COAMOUNT Consolidation data

-524284 INVESTRATE Investment rates

-524283 SCPSTKRATE Stockholding rates

-524282 SCPRATE Scope rates

-524281 CONRATE Conversion rates

-524280 TAXRATE Tax rates

-524279 OPBALAMOUNT Opening balance amounts

-524278 CSCPRATE Top Conso Scope rates

-524277 CCSRATE Full Conso Scope rates


-524276 CSCPSTK Conso stockholding rates

-524275 CSCP2RATE Second Top Conso Scope rates

-524274 CCS2RATE Sec Full Conso Scope rates

-524273 CSCP2STCK Second Conso Stock rates

-524272 INTERCOAMNT Intercompany reconciliations

-524271 CTGY-INDIC Category Builder Indicators

‘Rank’ column

Number of indexes defined for a data source. You cannot have more than 17 indexes on one table.

‘Partition’ column

Takes on the value of the partition’s first dimension.

Example: Category

· null = all partitions

· not null = category_id

‘Columns’ column

List of dimension IDs (columns that you want to index), separated by a comma

‘Is_clustered’ column

Indicates whether the index is clustered (False: 0 – True: 1)

‘Is_unique’ column

Indicates whether the index is unique (False: 0 – True: 1)

‘Is_ascending’ column

Not developed yet: should indicate whether the data is to be sorted in ascending or descending order (False: 0 (descending sort) – True: 1 (ascending sort)). At the time of writing, the default value must be set to 1, which indicates an ascending sort.

‘Is_custom’ column

Indicates whether the index has been created by SAP or by a client

(False: 0 (by SAP) – True: 1 (by the client)).

We strongly recommend that clients set this value to ‘1’ to facilitate any interventions required from

SAP Support.

‘Table_scope’ column

Not developed yet: type of table that needs to be indexed according to the following scale of bitmap values:

1=Persistent tables

2= BFC work tables

4=Temporary DBMS tables

7=All types of table taken together

(At the time of writing, this value MUST be 7)

‘Base_name’ column

Base name that the client wants to add to the index name in order to identify it. This name cannot exceed 13

characters.

In Oracle, the index names must be unique in each database and must contain fewer than 18 characters.


The automatic generation algorithm uses the table name and, where appropriate, the partition number in base 36 and the rank. Therefore, it does not work for table names containing more than 13 characters for partitioned data sources, or more than 17 characters for the others.

Add base_name to supply the base name of the index for the tables that exceed these limits.
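The naming constraint can be sketched as follows. The exact concatenation order BFC uses is our assumption; the base-36 encoding of the partition number and Oracle's under-18-character limit come from the text above.

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base36(n):
    """Encode a non-negative integer in base 36, as the generator does."""
    out = ""
    while True:
        n, r = divmod(n, 36)
        out = DIGITS[r] + out
        if n == 0:
            return out

def index_name(table, rank, partition=None):
    """Assumed layout: table name + partition in base 36 + rank."""
    suffix = (to_base36(partition) if partition is not None else "") + str(rank)
    name = table + suffix
    if len(name) >= 18:   # Oracle index names must stay under 18 characters
        raise ValueError("supply a base_name: generated name would be too long")
    return name

print(to_base36(71))                             # 71 decimal -> '1Z'
print(index_name("COAMOUNT", 2, partition=71))   # 'COAMOUNT1Z2'
```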

Oracle Database

When installing the Oracle database engine, the required components are the Oracle server on the DBMS server and the Oracle client, with the Oracle Provider for OLE DB, on the application servers. All other components are optional.

Oracle Corporation recommends that you upgrade your client software to match the current server software and use

the latest patch releases.

SAP BusinessObjects Financial Consolidation requires the database to be configured in dedicated server mode and

does not support the shared server mode (formerly known as multi-threaded server mode). The character set

recommended for a SAP BusinessObjects Financial Consolidation database is AL32UTF8 for BFC 7.5 and

WE8MSWIN1252 for BFC 7.0. The recommended national character set is UTF8 for BFC 7.5 and AL16UTF16 for BFC 7.0.

Initialization Parameters

The appropriate values for the Oracle initialization parameters depend on the following:

• The resources available on the server

• Size of BFC consolidated data tables

For optimum performance, the value of the db_cache_size parameter should be at least three times the size of the

largest consolidated data table. You should also take the RAM available on the server into account when setting the

value of this parameter.

An Oracle connection uses 500 KB to 1 MB of RAM on the server. Because there can be as many open connections as

users connected to BFC, you should ensure that there is sufficient memory available on the server to manage these

connections.

For most of the Oracle parameters, you can use the default values for BFC. The parameters we recommend be

changed are listed below.

This example is valid for a server that has 1 GB of RAM.

DB_BLOCK_SIZE = 16384 (*)

SGA_TARGET= 600M (**)

LOG_BUFFER = 1M

SHARED_POOL_RESERVED_SIZE = 0

JAVA_POOL_SIZE = 0

LARGE_POOL_SIZE = 0


PROCESSES = 300 (**)

WORKAREA_SIZE_POLICY= AUTO

PGA_AGGREGATE_TARGET=200M (**)

OPEN_CURSORS = 1000

SESSION_CACHED_CURSORS = 100

CURSOR_SHARING = EXACT

OPTIMIZER_MODE=ALL_ROWS

RECYCLEBIN=OFF

(*) depending on the type of server e.g. Windows, Unix, etc.

(**) depending on the server characteristics e.g. RAM, number of users, etc.

If more memory is available on the server, you should increase the values of the SGA_TARGET and

PGA_AGGREGATE_TARGET parameters.

For optimum performance, the database cache value should be at least three times the size of the largest BFC

consolidated data table.
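The two rules above can be combined into a back-of-the-envelope estimate: a database cache of at least three times the largest consolidated data table, plus 0.5 to 1 MB per open Oracle connection. This is a planning sketch only; the actual SGA/PGA split must still be tuned per server.

```python
def min_oracle_memory_mb(largest_table_mb, connections, per_conn_mb=1.0):
    """Lower bound on Oracle memory: 3x largest table + per-connection RAM."""
    cache = 3 * largest_table_mb          # db_cache_size rule of thumb
    sessions = connections * per_conn_mb  # 0.5-1 MB per connection
    return cache + sessions

# A 100 MB consolidation table with 300 connections at the 1 MB worst case -> 600.0
print(min_oracle_memory_mb(100, 300))
```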

Options in the BFC Administration Console that Can Affect the Database Performance

The parameter AdvancedDbString in the BFC administration console has an impact on the database performance.

This parameter is made up of the following three options:

• Use temporary table: when this option is activated, the worktables containing a large amount of data (for example, those generated during consolidation or when saving packages) are managed in the database as global temporary tables. This option enables you to reduce the number of logs generated when the Oracle instance is in ARCHIVELOG mode, or when the SQL Server database is in "Full" recovery mode. Worktables that do not contain a large volume of data will always be managed as standard tables. If you activate this option, verify that the temporary tablespace of your Oracle instance is big enough, or that the tempdb database has enough space.

Note: If temporary tables are activated, some limitations might present themselves in your SQL rules. The main one is

that you cannot create indexes for your temporary tables once data has been added to them.

The temporary tables used by BusinessObjects Financial Consolidation are of the GLOBAL TEMPORARY type. To find out more about the limitations of temporary tables, see the documentation for your DBMS.

Starting with Oracle 9.2.0.8, when temporary tables are used, the conversion step of the consolidation process is slow in some cases. To optimize execution, you can deploy the registry keys ConsolidationBeforePrepareData and ConsolidationAfterPrepareData on the application servers to modify the database behavior during the conversion process. If the keys are missing or empty, the original behavior is not changed. There is no need to restart the data source if the value of these keys is modified.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\CARTESIS\Magnitude]

"ConsolidationBeforePrepareData"="alter session set optimizer_mode=rule"

"ConsolidationAfterPrepareData"="alter session set optimizer_mode=all_rows"


• Advanced data access: you should not activate this option with Oracle 9i. This option enables you to change the filter query strategy. When it is activated, a permanent table called ct_filter_result is created instead of the usual worktables. This option reduces the size of the redo log under Oracle and generally improves performance under both SQL Server and Oracle.

• Load direct path: this option activates the HINT APPEND clauses when the worktables are used, thereby reducing the size of the log. For this option to function correctly, you must activate the NOLOGGING option for the tablespace dedicated to worktables.

Here is an example of the impact these options have on the redo size generated and on the response time for a consolidation that generates 1,477,405 rows (208.75 MB).

The database version was 10.2.0.4, running on Windows Server 2003. The results may be different on a UNIX platform, so we recommend testing these parameters before activating them.

1. None of these options: 1,482.77 MB redo generated (execution time 11 min 42 s)

2. Temporary tables activated: 589.65 MB redo generated (execution time 12 min 02 s)

3. Load direct path: 585.38 MB redo generated (execution time 16 min 08 s)

4. Advanced data access: 1,586.24 MB redo generated (execution time 17 min 06 s)

5. Load direct path + advanced data access: 665.02 MB redo generated (execution time 18 min 40 s)

6. All three options activated: 565.03 MB redo generated (execution time 14 min 41 s)

7. Temporary tables + advanced data access: 580.38 MB redo generated (execution time 13 min 06 s)

8. Temporary tables + load direct path: 537.58 MB redo generated (execution time 13 min 26 s)
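A quick check of the measurements above: the redo reduction of each option set relative to the no-option baseline of 1,482.77 MB. The figures are copied from the list; the percentages are rounded.

```python
BASELINE_MB = 1482.77   # redo with none of the options activated

redo_mb = {
    "temporary tables": 589.65,
    "all three options": 565.03,
    "temp tables + load direct path": 537.58,
}

def reduction_pct(mb):
    """Percentage of redo saved versus the baseline, rounded to one decimal."""
    return round(100 * (1 - mb / BASELINE_MB), 1)

for name, mb in redo_mb.items():
    print(f"{name}: {reduction_pct(mb)}% less redo")
```

Note that the redo savings come at the cost of longer execution times in this test, so the trade-off should be benchmarked per environment.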

Monitoring Tools

You should implement a monitoring system to constantly monitor the following aspects of a database. This can be achieved by writing custom scripts, implementing Oracle Enterprise Manager, or buying a third-party monitoring product. If an alarm is triggered, the system should automatically notify the DBA (by e-mail, pager, and so on) so that appropriate action can be taken.

Infrastructure availability:

Is the database up and responding to requests?

Are the listeners up and responding to requests?

Things that can cause service outages:

Is the archive log destination filling up?

Objects getting close to their maximum extents

Tablespaces running low on free space / objects that would not be able to extend


User and process limits reached

Oracle provides the following tools/ utilities to assist with performance monitoring and tuning:

Alert.log

ADDM (Automated Database Diagnostics Monitor) introduced in Oracle 10g

TKProf

Statspack or AWR (depending on the Oracle license)

Oracle Enterprise Manager - Tuning Pack (depends on the Oracle license)

Starting with Oracle 10g, operating system statistics are available through Statspack, AWR, Oracle Enterprise Manager, and so on. It is still recommended to monitor and tune operating system CPU, I/O, and memory utilization separately, using the tools provided by the specific operating system.

Alert.log

The alert log file (also referred to as the ALERT.LOG) is a chronological log of messages and errors written out by an

Oracle Database. Typical messages found in this file are: database startup, shutdown, log switches, space errors,

ORA-00600 (internal) errors, ORA-01578 errors (block corruption), ORA-00060 errors (deadlocks), and so on.

The directory where it is written is determined by the background_dump_dest initialization parameter:

select value from v$parameter where name = 'background_dump_dest';

This file should constantly be monitored to detect unexpected messages and corruptions.

Oracle will automatically create a new alert log file whenever the old one is deleted.
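A minimal sketch of the alert-log monitoring described above: scan the alert log text for the error patterns called out (ORA-00600, ORA-01578, ORA-00060). A production monitor would tail the file continuously and raise a notification rather than return a list; the sample text is invented.

```python
# Error patterns the text singles out: internal errors, block corruption,
# and deadlocks.
ERROR_PATTERNS = ("ORA-00600", "ORA-01578", "ORA-00060")

def scan_alert_log(text):
    """Return the alert-log lines that match a monitored error pattern."""
    return [line for line in text.splitlines()
            if any(err in line for err in ERROR_PATTERNS)]

sample = "Thread 1 advanced to log sequence 42\nORA-00060: deadlock detected\n"
print(scan_alert_log(sample))   # ['ORA-00060: deadlock detected']
```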

ADDM

For Oracle systems, the statistical data needed for accurate diagnosis of a problem is saved in the Automatic Workload

Repository (AWR). The Automatic Database Diagnostic Monitor (ADDM) analyzes the AWR data on a regular basis,

then locates the root causes of performance problems, provides recommendations for correcting any problems, and

identifies non-problem areas of the system. Because AWR is a repository of historical performance data, ADDM can be

used to analyze performance issues after the event, often saving time and resources reproducing a problem.

An ADDM analysis is performed every time an AWR snapshot is taken and the results are saved in the database. You

can view the results of the analysis using Oracle Enterprise Manager or by viewing a report in a SQL*Plus session.

In most cases, ADDM output should be the first place that a DBA looks when notified of a performance problem. ADDM

provides the following benefits:

Automatic performance diagnostic report every hour by default

Problem diagnosis based on decades of tuning expertise

Time-based quantification of problem impacts and recommendation benefits

Identification of root cause, not symptoms

Recommendations for treating the root causes of problems


Identification of non-problem areas of the system

Minimal overhead to the system during the diagnostic process

The types of problems that ADDM considers include the following:

CPU bottlenecks - Is the system CPU bound by Oracle or some other application?

Undersized Memory Structures - Are the Oracle memory structures, such as the SGA, PGA, and buffer cache,

adequately sized?

I/O capacity issues - Is the I/O subsystem performing as expected?

High load SQL statements - Are there any SQL statements which are consuming excessive system resources?

High load PL/SQL execution and compilation, as well as high load Java usage

RAC specific issues - What are the global cache hot blocks and objects; are there any interconnect latency

issues?

Sub-optimal use of Oracle by the application - Are there problems with poor connection management,

excessive parsing, or application level lock contention?

Database configuration issues - Is there evidence of incorrect sizing of log files, archiving issues, excessive

checkpoints, or sub-optimal parameter settings?

Concurrency issues - Are there buffer busy problems?

Hot objects and top SQL for various problem areas

In addition to problem diagnostics, ADDM recommends possible solutions. When appropriate, ADDM recommends

multiple solutions for the DBA to choose from. ADDM considers a variety of changes to a system while generating its

recommendations. Recommendations include:

Hardware changes - Adding CPUs or changing the I/O subsystem configuration

Database configuration - Changing initialization parameter settings

Schema changes - Hash partitioning a table or index, or using automatic segment-space management (ASSM)

Application changes - Using the cache option for sequences or using bind variables

Using other advisors - Running the SQL Tuning Advisor on high load SQL or running the Segment Advisor on

hot objects

The primary interface for diagnostic monitoring is Oracle Enterprise Manager. If Oracle Enterprise Manager is

unavailable, you can run ADDM using the DBMS_ADDM package. In order to run the DBMS_ADDM APIs, the user

must be granted the ADVISOR privilege.

To diagnose database performance issues, ADDM analysis can be performed across any two AWR snapshots as long

as the following requirements are met:

Neither snapshot encountered an error during creation, and neither has yet been purged.

There were no shutdown and startup actions between the two snapshots.


While the simplest way to run an ADDM analysis over a specific time period is with the Oracle Enterprise Manager GUI,

ADDM can also be run manually using the $ORACLE_HOME/rdbms/admin/addmrpt.sql script and DBMS_ADVISOR

package APIs. The SQL script and APIs can be run by any user who has been granted the ADVISOR privilege.

Typically, you would view output and information from the automatic database diagnostic monitor through Oracle

Enterprise Manager or ADDM reports. However, you can display ADDM information through the DBA_ADVISOR views.

This group of views includes:

DBA_ADVISOR_TASKS: This view provides basic information about existing tasks, such as the task ID, task name, and creation time.

DBA_ADVISOR_LOG: This view contains the current task information, such as status, progress, error

messages, and execution times.

DBA_ADVISOR_RECOMMENDATIONS: This view displays the results of completed diagnostic tasks with

recommendations for the problems identified in each run. The recommendations should be looked at in the

order of the RANK column, as this relays the magnitude of the problem for the recommendation. The BENEFIT

column gives the benefit to the system you can expect after the recommendation is carried out.

DBA_ADVISOR_FINDINGS: This view displays all the findings and symptoms that the diagnostic monitor

encountered along with the specific recommendation.

Here is an example of an ADDM analysis (run manually using the $ORACLE_HOME/rdbms/admin/addmrpt.sql script):

DETAILED ADDM REPORT FOR TASK 'TASK_3559' WITH ID 3559
------------------------------------------------------
Analysis Period: 08-OCT-2009 from 23:03:25 to 23:17:12
Database ID/Instance: 3685384899/1
Database/Instance Names: SAP1/sap
Host Name: SP-BASESBENCH
Database Version: 10.2.0.4.0
Snapshot Range: from 2119 to 2120
Database Time: 767 seconds
Average Database Load: .9 active sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FINDING 1: 100% impact (767 seconds)
------------------------------------
The SGA was inadequately sized, causing additional I/O or hard parses.

RECOMMENDATION 1: DB Configuration, 100% benefit (767 seconds)
  ACTION: Increase the size of the SGA by setting the parameter "sga_target" to 1890 M.
  ADDITIONAL INFORMATION: The value of parameter "sga_target" was "1512 M" during the analysis period.

SYMPTOMS THAT LED TO THE FINDING:
  SYMPTOM: Wait class "User I/O" was consuming significant database time. (23% impact [176 seconds])
  SYMPTOM: Hard parsing of SQL statements was consuming significant database time. (17% impact [130 seconds])

FINDING 2: 21% impact (158 seconds)
-----------------------------------
SQL statements consuming significant database time were found.


RECOMMENDATION 1: SQL Tuning, 13% benefit (97 seconds)
  ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID "9vtm7gy4fr2ny".
  RELEVANT OBJECT: SQL statement with SQL_ID 9vtm7gy4fr2ny and PLAN_HASH 2367994939
    select con# from con$ where owner#=:1 and name=:2
  RATIONALE: SQL statement with SQL_ID "9vtm7gy4fr2ny" was executed 655 times and had an average elapsed time of 0.15 seconds.

RECOMMENDATION 2: SQL Tuning, 7.6% benefit (58 seconds)
  ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID "bwt0pmxhv7qk7".
  RELEVANT OBJECT: SQL statement with SQL_ID bwt0pmxhv7qk7 and PLAN_HASH 2625956280
    delete from con$ where owner#=:1 and name=:2
  RATIONALE: SQL statement with SQL_ID "bwt0pmxhv7qk7" was executed 367 times and had an average elapsed time of 0.16 seconds.

FINDING 3: 11% impact (82 seconds)
----------------------------------
Individual database segments responsible for significant user I/O wait were found.

RECOMMENDATION 1: Segment Tuning, 11% benefit (82 seconds)
  ACTION: Investigate application logic involving I/O on database object with id 212747.
  RELEVANT OBJECT: database object with id 212747

SYMPTOMS THAT LED TO THE FINDING:
  SYMPTOM: Wait class "User I/O" was consuming significant database time. (23% impact [176 seconds])

FINDING 4: 8.5% impact (65 seconds)
-----------------------------------
Cursors were getting invalidated due to DDL operations. This resulted in additional hard parses which were consuming significant database time.

RECOMMENDATION 1: Application Analysis, 8.5% benefit (65 seconds)
  ACTION: Investigate appropriateness of DDL operations.

SYMPTOMS THAT LED TO THE FINDING:
  SYMPTOM: Hard parsing of SQL statements was consuming significant database time. (17% impact [130 seconds])

FINDING 5: 8.4% impact (65 seconds)
-----------------------------------
SQL statements were not shared due to the usage of literals. This resulted in additional hard parses which were consuming significant database time.

RECOMMENDATION 1: Application Analysis, 8.4% benefit (65 seconds)
  ACTION: Investigate application logic for possible use of bind variables instead of literals.
  ACTION: Alternatively, you may set the parameter "cursor_sharing" to "force".
  RATIONALE: At least 5 SQL statements with PLAN_HASH_VALUE 389409320 were found to be using literals. Look in V$SQL for examples of such SQL statements.

SYMPTOMS THAT LED TO THE FINDING:
  SYMPTOM: Hard parsing of SQL statements was consuming significant database time. (17% impact [130 seconds])

FINDING 6: 7.8% impact (60 seconds)
-----------------------------------
Time spent on the CPU by the instance was responsible for a substantial part of database time.

RECOMMENDATION 1: Application Analysis, 7.8% benefit (60 seconds)
  ACTION: Parsing SQL statements were consuming significant CPU. Please refer to other findings in this task about parsing for further details.
  ADDITIONAL INFORMATION: The instance spent significant time on CPU. However, there were no predominant SQL statements responsible for the CPU load.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ADDITIONAL INFORMATION
----------------------
Wait class "Application" was not consuming significant database time.


Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database time.

The database's maintenance windows were active during 100% of the analysis period.

The analysis of I/O performance is based on the default assumption that the average read time for one database block is 10000 micro-seconds.

An explanation of the terminology used in this report is available when you run the report with the 'ALL' level of detail.
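When working with raw ADDM reports like the one above, it can help to pull out the findings and sort them by impact before reading the details. This is a hedged sketch based on the "FINDING n: x% impact" line format seen in the sample report; the parsing may need adjustment for other report layouts.

```python
import re

def addm_findings(report):
    """Extract (finding number, impact %) pairs from an ADDM report text."""
    return [(int(n), float(pct))
            for n, pct in re.findall(r"FINDING (\d+): ([\d.]+)% impact", report)]

sample = ("FINDING 1: 100% impact (767 seconds)\n"
          "FINDING 2: 21% impact (158 seconds)\n")
print(addm_findings(sample))   # [(1, 100.0), (2, 21.0)]
```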

TKProf

TKProf formats a raw SQL trace file into a more readable report for performance analysis. The DBA can then identify and resolve performance issues such as poorly performing SQL, missing indexes, and wait events.

Before tracing can be enabled, the environment must first be configured by performing the following steps:

· Enable Timed Statistics – This parameter enables the collection of certain vital statistics such as CPU execution

time, wait events, and elapsed times. The resulting trace output is more meaningful with these statistics. To show the

value of the parameter timed_statistics, run: show parameter timed_statistics or select value from v$parameter where name = 'timed_statistics';

The command to enable timed statistics is:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

· Check the User Dump Destination Directory – The trace files generated by Oracle can be numerous and large.

These files are placed by Oracle in user_dump_dest directory as specified in the init.ora. The user dump destination

can also be specified for a single session using the alter session command. Make sure that enough space exists on

the device to support the number of trace files that you expect to generate.

show parameter user_dump_dest or select value from v$parameter where name = 'user_dump_dest';

Once the directory name is obtained, the corresponding space command (OS dependent) will report the amount of

available space. Delete unwanted trace files before starting a new trace to free up the disk space.

The next step in the process is to enable tracing. By default, tracing is disabled due to the burden (5-10%) it places on

the database. Tracing can be defined at the session level:

ALTER SESSION SET SQL_TRACE = TRUE;

A DBA may enable tracing for another user's session by:

exec DBMS_SYSTEM.SET_EV(287, 43672, 10046, 12, '');

or

exec dbms_monitor.SESSION_TRACE_ENABLE(262 , 4927 , TRUE, TRUE);

where 262 is the sid (session ID) and 4927 is the serial#; both values can be obtained from the v$session view.

The same options that we use to enable tracing are used to disable it. These include:

ALTER SESSION SET SQL_TRACE = FALSE;

To disable tracing for another user's session use:

exec DBMS_SYSTEM.SET_EV(287, 43672, 10046, 0, '');

or

exec dbms_monitor.SESSION_TRACE_DISABLE(262 , 4927);

The tkprof command can be executed from the operating system prompt:

tkprof sand1d_ora_728.trc output.txt sys=no

Each tkprof output file contains a header, body, and summary section.

The header simply displays the trace file name, definitions, and sort options selected.

********************************************************************************

count = number of times OCI procedure was executed

cpu = cpu time in seconds executing

elapsed = elapsed time in seconds executing

disk = number of physical reads of buffers from disk

query = number of buffers gotten for consistent read

current = number of buffers gotten in current mode (usually for update)

rows = number of rows processed by the fetch or execute call

*** SESSION ID:(305.34440) 2009-01-07 02:43:48.413

TKPROF: Release 10.1.0.2.0 - Production on Mi Sep 30 14:11:14 2009

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Trace file: sand1d_ora_728.trc

Sort options: default

The body contains the performance metrics for SQL statements.

The summary section contains an aggregate of performance statistics for all SQL statements in the file.

********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 4 0.03 0.01 0 13 0 0

Execute 5 38.87 116.78 129831 47291 64105 1085775

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 9 38.90 116.80 129831 47304 64105 1085775

Misses in library cache during parse: 4
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------

db file scattered read 2231 1.10 12.61

log file switch completion 15 0.10 0.57

log buffer space 20 0.04 0.68

direct path write 3 0.00 0.00

log file sync 4 0.00 0.01

SQL*Net message to client 12 0.00 0.00

SQL*Net message from client 12 0.02 0.08

db file sequential read 19 0.00 0.02

control file sequential read 5 0.00 0.01

direct path write temp 363 0.00 0.10

direct path read temp 37091 0.37 34.25

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 293 0.01 0.01 0 0 0 0

Execute 373 0.25 0.76 0 727 569 175

Fetch 325 0.03 0.31 572 755 0 152

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 991 0.29 1.09 572 1482 569 327

Misses in library cache during parse: 2
Misses in library cache during execute: 2

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------

db file scattered read 34 0.00 0.12

SQL*Net message to client 6 0.00 0.00

SQL*Net message from client 6 0.00 0.00

db file sequential read 38 0.08 0.16

7 user SQL statements in session.
367 internal SQL statements in session.
374 SQL statements in session.
********************************************************************************
Trace file: sand1d_ora_728.trc
Trace file compatibility: 10.01.00
Sort options: default
2 sessions in tracefile.
7 user SQL statements in trace file.
367 internal SQL statements in trace file.
374 SQL statements in trace file.
62 unique SQL statements in trace file.
62427 lines in trace file.
104 elapsed seconds in trace file.

The output file displays a table of performance metrics after each unique SQL statement. Each row in the table

corresponds to each of the three steps required in SQL processing.

1. Parse – The translation of the SQL into an execution plan. This step includes syntax checks, permissions, and all

object dependencies.

2. Execute – The actual execution of the statement.

3. Fetch – The number of rows returned for a SELECT statement.

The table columns include the following:

· Count – The number of times a statement was parsed, executed, or fetched.

· CPU – The total CPU time in seconds for all parse, execute, or fetch calls.

· Elapsed – Total elapsed time in seconds for all parse, execute, or fetch calls.

· Disk – The number of physical disk reads from the datafiles for all parse, execute, or fetch calls.

· Query – The number of buffers retrieved for all parse, execute, or fetch calls.

· Current – The number of buffers retrieved in current mode (INSERT, UPDATE, or DELETE statements).
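Because every statement's table uses the same fixed column order (count, cpu, elapsed, disk, query, current, rows), the output lends itself to scripted analysis. The following is a minimal sketch, not part of tkprof itself, that pulls those rows out of a report; the parsing logic assumes the layout shown in the samples above.

```python
# Minimal sketch: parse the call-level statistics table that tkprof prints
# after each SQL statement. Column layout assumed from the sample output:
# call, count, cpu, elapsed, disk, query, current, rows.

def parse_tkprof_table(text):
    """Return {call_name: {metric: value}} for the Parse/Execute/Fetch/total rows."""
    metrics = ("count", "cpu", "elapsed", "disk", "query", "current", "rows")
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        # A data row has exactly 8 whitespace-separated fields and a known call name.
        if len(parts) == 8 and parts[0] in ("Parse", "Execute", "Fetch", "total"):
            stats[parts[0]] = dict(zip(metrics, map(float, parts[1:])))
    return stats

# Sample taken from the non-recursive totals shown earlier in this section.
sample = """
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        4      0.03       0.01          0         13          0          0
Execute      5     38.87     116.78     129831      47291      64105    1085775
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        9     38.90     116.80     129831      47304      64105    1085775
"""

stats = parse_tkprof_table(sample)
print(stats["Execute"]["elapsed"])   # 116.78 seconds spent in execute calls
```

A script like this makes it easy to aggregate or rank statements across many trace files instead of reading each report by hand.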

DELETE FROM TL253IS2EZBLQ

WHERE

ACCNT IN (SELECT REFVALUE_ID FROM CT_HIERARCHY_CONTENT WHERE HIERARCHY_ID =

:B1 AND ID IN (SELECT PARENT_ID FROM CT_HIERARCHY_CONTENT WHERE

HIERARCHY_ID = :B1 ))

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.00 0 0 0 0

Execute 1 0.39 0.42 5 10746 0 0

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.39 0.42 5 10746 0 0

Misses in library cache during parse: 1

Misses in library cache during execute: 1

Optimizer mode: ALL_ROWS

Parsing user id: 59 (recursive depth: 1)

Rows Row Source Operation

------- ---------------------------------------------------

0 DELETE TL253IS2EZBLQ (cr=10577 pr=2 pw=0 time=370804 us)

0 HASH JOIN RIGHT SEMI (cr=10577 pr=2 pw=0 time=370798 us)

60 VIEW VW_NSO_1 (cr=8 pr=2 pw=0 time=11867 us)

60 HASH JOIN (cr=8 pr=2 pw=0 time=11806 us)

61 TABLE ACCESS BY INDEX ROWID CT_HIERARCHY_CONTENT (cr=4 pr=2 pw=0 time=11433 us)

61 INDEX RANGE SCAN CT_HIERA_CNTN2_IDX (cr=3 pr=2 pw=0 time=11366 us)(object id 1157369)

61 TABLE ACCESS BY INDEX ROWID CT_HIERARCHY_CONTENT (cr=4 pr=0 pw=0 time=196 us)

61 INDEX RANGE SCAN CT_HIERA_CNTN2_IDX (cr=3 pr=0 pw=0 time=131 us)(object id 1157369)

641679 TABLE ACCESS FULL TL253IS2EZBLQ (cr=10569 pr=0 pw=0 time=48 us)

Elapsed times include waiting on following events:

Event waited on Times Max. Wait Total Waited

---------------------------------------- Waited ---------- ------------

db file sequential read 2 0.00 0.01

********************************************************************************

Tkprof provides many useful command-line options that give the DBA additional functionality.

· print – Lists only the first n SQL statements in the output file. If nothing is specified, all statements will be listed.

Use this option when the list needs to be limited to the "Top n" statements. This is useful when combined with a sort option to list the top n statements by CPU, disk reads, parses, and so on.

· aggregate – When "Yes", tkprof will combine the statistics from multiple user executions of the same SQL statement. When "No", the statistics will be listed each time the statement is executed.

· insert – Creates a file that will load the statistics into a table in the database for further processing. Choose this

option if you want to perform any advanced analysis of the tkprof output.

· sys – Enables or disables the inclusion of SQL statements executed by the SYS user, including recursive SQL

statements. The default is to enable.

· table – Used in the Explain Plan command (if specified) for Oracle to load data temporarily into an Oracle table.

The user must specify the schema and table name for the plan table. If the table exists, all rows will be deleted; otherwise tkprof will create the table and use it.

· record - creates an SQL script with the specified filename that contains all non-recursive SQL statements from the

trace file.

· explain – Executes an Explain Plan for each statement in the trace file and displays the output. Explain Plan is less

useful when used in conjunction with tkprof than it is when used alone. Explain Plan provides the predicted optimizer

execution path without actually executing the statement. tkprof shows you the actual execution path and statistics after

the statement is executed. In addition, running Explain Plan against SQL statements that were captured and saved is

always problematic given dependencies and changes in the database environment.

· sort – Sorts the SQL statements in the trace file by the criteria deemed most important by the DBA. This option

allows the DBA to view the SQL statements that consume the most resources at the top of the file, rather than

searching the entire file contents for the poor performers. The following are the data elements available for sorting:

· prscnt – The number of times the SQL was parsed.

· prscpu – The CPU time spent parsing.

· prsela – The elapsed time spent parsing the SQL.

· prsdsk – The number of physical reads required for the parse.

· prsqry – The number of consistent block reads required for the parse.

· prsmis – The number of library cache misses during the parse.

· prscu - The number of current block reads required for the parse.

· execnt – The number of times the SQL statement was executed.

· execpu – The CPU time spent executing the SQL.

· exeela – The elapsed time spent executing the SQL.

· exedsk – The number of physical reads during execution.

· exeqry – The number of consistent block reads during execution.

· execu – The number of current block reads during execution.

· exerow – The number of rows processed during execution.

· exemis – The number of library cache misses during execution.

· fchcnt – The number of fetches performed.

· fchcpu – The CPU time spent fetching rows.

· fchela – The elapsed time spent fetching rows.

· fchdsk – The number of physical disk reads during the fetch.

· fchqry – The number of consistent block reads during the fetch.

· fchcu – The number of current block reads during the fetch.

· fchrow – The number of rows fetched by the query.
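Putting the options together, a typical invocation sorts by execution CPU and elapsed time and keeps only the top statements. The helper below is purely illustrative: the option names (sort, print, sys) are real tkprof arguments, but the helper function and file names are hypothetical.

```python
# Hypothetical helper that assembles a tkprof command line from the options
# discussed above. shlex.join renders the argument list as a shell string.
import shlex

def tkprof_cmd(trace_file, out_file, sort_keys=("execpu", "exeela"),
               top_n=10, include_sys=False):
    args = ["tkprof", trace_file, out_file,
            "sort=" + ",".join(sort_keys),   # rank statements by these keys
            f"print={top_n}",                # keep only the top n statements
            "sys=" + ("yes" if include_sys else "no")]
    return shlex.join(args)

print(tkprof_cmd("sand1d_ora_728.trc", "top10_by_cpu.txt"))
# tkprof sand1d_ora_728.trc top10_by_cpu.txt sort=execpu,exeela print=10 sys=no
```

Combining sort with print in this way surfaces the heaviest statements at the top of the report instead of forcing a scan of the whole file.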

Statspack

The STATSPACK utility has been available since Oracle 8.1.6 to monitor the performance of your database.

STATSPACK originally replaced the UTLBSTAT/UTLESTAT scripts available with earlier versions of Oracle.

In this section, you will see how to install STATSPACK, and how to run and interpret the reports that STATSPACK generates.

STATSPACK must be installed in every database to be monitored. Prior to installing STATSPACK, you should create a

tablespace to hold the STATSPACK data. During the installation process, you will be prompted for the name of the

tablespace to hold the STATSPACK database objects. You should also designate a temporary tablespace that will be

large enough to support the large inserts and deletes STATSPACK may perform.

The installation script, named spcreate.sql, is found in the /rdbms/admin subdirectory under the Oracle software home

directory. The spcreate.sql script creates a user named PERFSTAT and creates a number of objects under that

schema.

Remember to set timed_statistics to true for your instance. Setting this parameter provides timing data which is

invaluable for performance tuning.

To properly collect database statistics, the initialization parameter STATISTICS_LEVEL should be set to TYPICAL (the

default) or ALL.

The SQL script "spauto.sql" can be used to run STATSPACK every hour on the hour. See the script in

$ORACLE_HOME/rdbms/admin/spauto.sql for more information (note that JOB_QUEUE_PROCESSES must be set >

0).

STATSPACK has two types of collection options, level and threshold. The level parameter controls the type of data

collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the

stats$sql_summary table.

SELECT * FROM stats$level_description ORDER BY snap_level;

Level 0 This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and Latch information.

Level 5 This level includes capturing high resource usage SQL Statements, along with all data captured by lower levels.

Level 6 This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL Statements, along with all data captured by lower levels.

Level 7 This level captures segment level statistics, including logical and physical reads, row lock, itl and buffer busy waits, along with all data captured by lower levels.

Level 10 This level includes capturing Child Latch statistics, along with all data captured by lower levels.

You can change the default level of a snapshot with the statspack.snap function. Passing i_modify_parameter => 'true' makes the new level permanent for all future snapshots.

SQL> exec statspack.snap(i_snap_level => 6, i_modify_parameter => 'true');

To generate a Statspack report, run the script spreport.sql in $ORACLE_HOME/rdbms/admin.

Since every system is different, this is only a general list of things you should regularly check in your STATSPACK

output:

Top 5 wait events (timed events)

Load profile

Instance efficiency hit ratios

Wait events

Latch waits

Top SQL

Instance activity

File I/O and segment statistics

Memory allocation

Buffer waits

Statspack Report Header

STATSPACK report for

Database DB Id Instance Inst Num Startup Time Release RAC

~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---

2184676364 MA 1 07-Feb-09 02:23 10.2.0.4.0 NO

Host Name: wweasora0002_s Num CPUs: 16 Phys Memory (MB): 31,744

~~~~

Snapshot Snap Id Snap Time Sessions Curs/Sess Comment

~~~~~~~~ ---------- ------------------ -------- --------- -------------------

Begin Snap: 16 09-Feb-09 16:45:01 86 3.5

End Snap: 17 09-Feb-09 17:00:00 95 3.0

Elapsed: 14.98 (mins)

Cache Sizes Begin End

~~~~~~~~~~~ ---------- ----------

Buffer Cache: 16,000M Std Block Size: 16K

Shared Pool Size: 1,008M Log Buffer: 13,882K

Note that this section may appear slightly different depending on your version of Oracle. For example, the Curs/Sess

column, which shows the number of open cursors per session, is new with Oracle9i (an 8i Statspack report would not

show this data).

Here, the item we are most interested in is the elapsed time. We want that to be large enough to be meaningful, but

small enough to be relevant (15 to 30 minutes is OK).

Statspack Load Profile

Load Profile Per Second Per Transaction

~~~~~~~~~~~~ --------------- ---------------

Redo size: 2,018,417.30 47,082.44

Logical reads: 151,220.96 3,527.44

Block changes: 7,622.27 177.80

Physical reads: 5,388.17 125.69

Physical writes: 697.88 16.28

User calls: 1,458.69 34.03

Parses: 613.87 14.32

Hard parses: 36.04 0.84

Sorts: 61.88 1.44

Logons: 0.20 0.00

Executes: 680.37 15.87

Transactions: 42.87

% Blocks changed per Read: 5.04 Recursive Call %: 77.67

Rollback per transaction %: 73.26 Rows per Sort: 340.71

Here, we are interested in a variety of things, but if we are looking at a "health check", three items are important:

The Hard parses (normally we want very few of them but the BFC application does not use bind variables)

Executes (how many statements we are executing per second / transaction)

Transactions (how many transactions per second we process).

This gives an overall view of the load on the server.
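As a quick sketch of that health check, the three figures can be reduced to a couple of derived numbers (values taken from the Load Profile above):

```python
# Quick arithmetic on the Load Profile figures (per-second values).
# hard_parse_pct is the share of all parses that were hard parses -- for an
# application that does not use bind variables (like BFC) this stays elevated.

parses_per_sec = 613.87
hard_parses_per_sec = 36.04
executes_per_sec = 680.37
transactions_per_sec = 42.87

hard_parse_pct = 100.0 * hard_parses_per_sec / parses_per_sec
executes_per_tx = executes_per_sec / transactions_per_sec

print(round(hard_parse_pct, 1))   # 5.9 -- about 6% of all parses are hard
print(round(executes_per_tx, 1))  # 15.9 executes per transaction
```

The per-transaction executes figure matches the 15.87 printed in the Per Transaction column above, which is a useful sanity check when reading a report.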

Statspack Instance Efficiency Percentage

Next, we move onto the Instance Efficiency Percentages section, which includes perhaps the only ratios we look at in

any detail:

Instance Efficiency Percentages

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Buffer Nowait %: 99.74 Redo NoWait %: 100.00

Buffer Hit %: 96.43 In-memory Sort %: 100.00

Library Hit %: 92.80 Soft Parse %: 94.13

Execute to Parse %: 9.77 Latch Hit %: 99.46

Parse CPU to Parse Elapsd %: 32.57 % Non-Parse CPU: 99.20

Shared Pool Statistics Begin End

------ ------

Memory Usage %: 93.16 92.90

% SQL with executions>1: 59.64 77.23

% Memory for SQL w/exec>1: 66.78 82.00

Interpreting the ratios in this section can be slightly more complex than it may seem at first glance. While high values

for the ratios are generally good, indicating high efficiency, such values can be misleading. Low values are not always

bad.

The three in bold are the most important: Library Hit, Soft Parse % and Execute to Parse. All of these have to do with

how well the shared pool is being utilized.

Here, in this report, we are quite pleased with the Library Hit and the Soft Parse % values. If the Library Hit ratio were low, it could be indicative of a shared pool that is too small or, just as likely, of an application that does not use bind variables (which is the case for the BFC application).
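A useful cross-check is that both parse ratios can be re-derived from the Load Profile figures; the short computation below reproduces the values printed in this report:

```python
# The two parse ratios in the report follow directly from the Load Profile
# (per-second values):
#   Soft Parse %       = 100 * (1 - hard parses / total parses)
#   Execute to Parse % = 100 * (1 - parses / executes)

parses = 613.87
hard_parses = 36.04
executes = 680.37

soft_parse_pct = 100.0 * (1.0 - hard_parses / parses)
execute_to_parse_pct = 100.0 * (1.0 - parses / executes)

print(round(soft_parse_pct, 2))        # 94.13, as reported
print(round(execute_to_parse_pct, 2))  # 9.77, as reported
```

The low Execute to Parse % shows that nearly every execution is preceded by its own parse call, which is exactly what you expect from an application that builds literal SQL instead of reusing bound statements.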

Statspack Top 5 Timed Events

Moving on, we get to the Top 5 Timed Events section (in Oracle9i Release 2 and later) or Top 5 Wait Events (in

Oracle9i Release 1 and earlier).

Top 5 Timed Events Avg %Total

~~~~~~~~~~~~~~~~~~ wait Call

Event Waits Time (s) (ms) Time

----------------------------------------- ------------ ----------- ------ ------

CPU time 5,274 64.9

read by other session 337,911 1,649 5 20.3

db file scattered read 100,078 585 6 7.2

log file parallel write 61,900 133 2 1.6

db file parallel write 49,786 122 2 1.5

-------------------------------------------------------------

This section is among the most important and relevant sections in the Statspack report. Here is where you find out

what events (typically wait events) are consuming the most time. In Oracle9i Release 2, this section is renamed and

includes a new event: CPU time.

CPU time is not really a wait event (hence, the new name), but rather the sum of the CPU used by this session, or the

amount of CPU time used during the snapshot window. In a heavily loaded system, if the CPU time event is the biggest

event, that could point to some CPU-intensive processing (for example, forcing the use of an index when a full scan

should have been used), which could be the cause of the bottleneck.

Db file scattered read - This wait generally happens during a full scan of a table. You can use the Statspack report to help identify the query in question and fix it.

Read by other session - This event occurs when a session requests a buffer that is currently being read into the buffer cache by another session. Prior to release 10.1, waits for this event were grouped with the other reasons for waiting for buffers under the 'buffer busy wait' event.

Log file parallel write - Writing redo records to the redo log files from the log buffer.

Db file parallel write - This event occurs in the DBWR. It indicates that the DBWR is performing a parallel write to files and blocks. When the last I/O has gone to disk, the wait ends.
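When reading this table, note that the Avg wait column is simply the total waited time divided by the number of waits. A quick sketch using the "read by other session" row from the sample above:

```python
# Re-derive the "Avg wait (ms)" column of Top 5 Timed Events for the
# "read by other session" row (values taken from the sample report above).
waits = 337_911          # number of waits in the snapshot window
total_waited_s = 1_649   # total time waited, in seconds

avg_wait_ms = 1000.0 * total_waited_s / waits
print(round(avg_wait_ms))   # 5 ms, matching the report
```

A single-digit average wait like this points to many short buffer-cache waits rather than a few pathological ones, which changes where you look for the fix.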

You can find a description of all waits event at: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3159

CPU and Memory Statistics

Starting with Oracle 10g, Statspack also collects operating system statistics. As a rule of thumb, the load average should be less than the number of processors (or cores) in your machine.

Host CPU (CPUs: 16)

~~~~~~~~ Load Average

Begin End User System Idle WIO WCPU

------- ------- ------- ------- ------- ------- --------

0.02 0.03 52.73 5.09 42.18 2.81 57.79

Instance CPU

~~~~~~~~~~~~

% of total CPU for Instance: 38.13

% of busy CPU for Instance: 65.94

%DB time waiting for CPU - Resource Mgr:

Memory Statistics Begin End

~~~~~~~~~~~~~~~~~ ------------ ------------

Host Mem (MB): 31,744.0 31,744.0

SGA use (MB): 17,088.0 17,088.0

PGA use (MB): 348.0 378.7

% Host Mem used for SGA+PGA: 54.9 55.0

-------------------------------------------------------------
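The last line of the memory section is plain arithmetic on the figures above it, which makes it easy to verify:

```python
# Cross-check of the "% Host Mem used for SGA+PGA" line: it is simply
# (SGA + PGA) / host memory, all in MB, using the snapshot values above.
host_mb = 31744.0
sga_mb = 17088.0
pga_begin_mb, pga_end_mb = 348.0, 378.7

pct_begin = 100.0 * (sga_mb + pga_begin_mb) / host_mb
pct_end = 100.0 * (sga_mb + pga_end_mb) / host_mb
print(round(pct_begin, 1), round(pct_end, 1))   # 54.9 55.0, as reported
```

Around half of host memory left outside SGA+PGA is a comfortable margin; a figure approaching 100% would signal a risk of paging at the operating system level.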

Time Model System Stats DB/Inst: MA/MA Snaps: 16-17

-> Ordered by % of DB time desc, Statistic name

Statistic Time (s) % of DB time

----------------------------------- -------------------- ------------

sql execute elapsed time 10,332.3 98.5

DB CPU 5,453.4 52.0

parse time elapsed 133.0 1.3

hard parse elapsed time 118.7 1.1

failed parse elapsed time 24.9 .2

PL/SQL execution elapsed time 2.5 .0

hard parse (sharing criteria) elaps 1.4 .0

repeated bind elapsed time 1.4 .0

connection management call elapsed 0.3 .0

DB time 10,493.8

background elapsed time 351.5

background cpu time 36.8

-------------------------------------------------------------

SQL ordered by CPU

Here you will find the most CPU-time-consuming SQL statements.

SQL ordered by CPU DB/Inst: MA/MA Snaps: 16-17

-> Resources reported for PL/SQL code includes the resources used by all SQL

statements called by the code.

-> Total DB CPU (s): 5,453

-> Captured SQL accounts for 79.6% of Total DB CPU

-> SQL reported below exceeded 1.0% of Total DB CPU

CPU CPU per Elapsd Old

Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value

---------- ------------ ---------- ------ ---------- --------------- ----------

233.65 25 9.35 4.3 479.87 4,515,230 3680353070

Module: ctserver.exe

SELECT a1.accnt, a1.nature, a1.period, a1.flow, SUM(a1.consamoun

t) FROM ct_co7676 a1 WHERE (a1.accnt IN (1929, 1906, 2024, 2281,

2049, 2048, 2046, 2437, 2324, 2810, 2339, 3475, 2342, 2078, 226

5, 1953, 2234, 2266, 2008, 2023, 2041, 2025, 1973, 1918, 2028, 1

907, 2270, 2029, 2842, 2520, 2521) AND a1.nature IN (1, 2, 3, 4,

214.33 20 10.72 3.9 484.20 3,875,779 3129365819

Module: ctserver.exe

SELECT a1.accnt, a1.nature, a1.period, a1.flow, SUM(a1.consamoun

t) FROM ct_co7676 a1 WHERE (a1.accnt IN (1906, 1907, 1918, 1929,

1953, 1973, 2008, 2023, 2024, 2025, 2028, 2029, 2041, 2046, 204

8, 2049, 2078, 2234, 2265, 2266, 2270, 2281, 2324, 2339, 2342, 2

437, 2520, 2521, 2810, 2842, 3475) AND a1.nature IN (1, 2, 3, 4,

196.32 20 9.82 3.6 441.48 3,737,720 3360925328

Module: ctserver.exe

SELECT a1.accnt, a1.nature, a1.period, a1.flow, SUM(a1.consamoun

t) FROM ct_co7676 a1 WHERE (a1.accnt IN (1906, 1907, 1918, 1929,

1953, 1973, 2008, 2023, 2024, 2025, 2028, 2029, 2041, 2046, 204

8, 2049, 2078, 2234, 2265, 2266, 2270, 2281, 2324, 2339, 2342, 2

437, 2520, 2521, 2810, 2842, 3475) AND a1.nature IN (1, 2, 3, 4,

When you examine the instance report, you often find high-load SQL statements that you want to examine more

closely. The SQL report, $ORACLE_HOME/rdbms/admin/sprepsql.sql, displays statistics, the complete SQL text, and (if a level 6 snapshot has been taken) information on any SQL plan(s) associated with that statement.

Example of sprepsql output:

STATSPACK SQL report for Old Hash Value: 3424744297 Module: ctserver.exe

DB Name DB Id Instance Inst Num Release RAC Host

------------ ----------- ------------ -------- ----------- --- ----------------

MA 2184676364 MA 1 10.2.0.4.0 NO wweasora0002_s

Start Id Start Time End Id End Time Duration(mins)

--------- ------------------- --------- ------------------- --------------

106 11-Feb-09 10:45:00 108 11-Feb-09 11:15:01 30.02

SQL Statistics

~~~~~~~~~~~~~~

-> CPU and Elapsed Time are in seconds (s) for Statement Total and in

milliseconds (ms) for Per Execute

% Snap

Statement Total Per Execute Total

--------------- --------------- ------

Buffer Gets: 3,625,147 181,257.4 2.47

Disk Reads: 57,245 2,862.3 2.87

Rows processed: 4,195 209.8

CPU Time(s/ms): 211 10,564.0

Elapsed Time(s/ms): 479 23,945.3

Sorts: 0 .0

Parse Calls: 20 1.0

Invalidations: 0

Version count: 1

Sharable Mem(K): 63

Executions: 20

SQL Text

~~~~~~~~

SELECT a1.accnt, a1.nature, a1.period, a1.flow, SUM(a1.consamount) FROM ct_co7676 a1 WHERE (a1.accnt IN

(1906, 1907, 1918, 1929, 1953, 1973, 2008, 2023, 2024, 2025, 2028, 2029, 2041, 2046, 2048, 2049, 2078, 2234,

2265, 2266, 2270, 2281, 2324, 2339, 2342, 2437, 2520, 2521, 2810, 2842, 3475) AND a1.nature IN (1, 2, 3, 4,

5, 6, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 35, 36, 37,

38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 92, 96, 97, 98, 99, 100, 101, 13, 197, 104, 105, 107,

112, 113, 114, 115, 103, 102, 127, 128, 129, 130, 131, 132, 134) AND a1.period IN (28147712, 28409856,

28622848) AND a1.flow IN (24, 63, 112) AND a1.entity IN (SELECT a2.id FROM ct_entity a2) AND (a1.ct_0000_tcr

= 0) AND (a1.ct_0000_ep = 0) AND (a1.ct_0000_sa = 0) AND (a1.ct_0000_pg = 0) AND (a1.ct_0000_vehicles = 0)

AND (a1.partner = 0) AND (a1.ctshare = 0) AND (a1.ct_0000_ota = 0) AND (a1.ct_0000_dest = 0) AND (

a1.ct_0000_zone = 0) AND (a1.ct_0000_coj = 0) AND (a1.ct_0000_lv = 0) AND (a1.ct_0000_st = 0) AND

(a1.ct_0000_ohr = 0) AND (a1.ct_0000_pl = 0)) GROUP BY a1.accnt, a1.nature, a1.period, a1.flow

Known Optimizer Plan(s) for this Old Hash Value

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Shows all known Optimizer Plans for this database instance, and the Snap Id's

they were first found in the shared pool. A Plan Hash Value will appear

multiple times if the cost has changed

-> ordered by Snap Id

First First Last Plan

Snap Id Snap Time Active Time Hash Value Cost

--------- --------------- --------------- ------------ ----------

107 11-Feb-09 11:00 11-Feb-09 11:23 876260613 49503

Plans in shared pool between Begin and End Snap Ids

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Shows the Execution Plans found in the shared pool between the begin and end

snapshots specified. The values for Rows, Bytes and Cost shown below are those

which existed at the time the first-ever snapshot captured this plan - these

values often change over time, and so may not be indicative of current values

-> Rows indicates Cardinality, PHV is Plan Hash Value

-> ordered by Plan Hash Value

--------------------------------------------------------------------------------

| Operation | PHV/Object Name | Rows | Bytes| Cost |

--------------------------------------------------------------------------------

|SELECT STATEMENT |----- 876260613 -----| | | 49503 |

|HASH GROUP BY | | 1 | 87 | 49503 |

| NESTED LOOPS | | 1 | 87 | 49502 |

| TABLE ACCESS FULL |CT_CO7676 | 1 | 74 | 49502 |

| INDEX UNIQUE SCAN |TPM08AG3K13IT_IDX | 1 | 13 | 0 |

--------------------------------------------------------------------------------

End of Report

Tablespace and datafile IO Stats

Tablespace IO Stats DB/Inst: MA/MA Snaps: 16-17

->ordered by IOs (Reads + Writes) desc

Tablespace

------------------------------

Av Av Av Av Buffer Av Buf

Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)

-------------- ------- ------ ------- ------------ -------- ---------- ------

CT_DATA

106,363 118 5.2 41.4 16,363 18 336,692 4.9

CT_TEMP

26,968 30 1.7 15.5 68,217 76 0 0.0

CT_TEMP_IDX

15,598 17 0.4 1.0 14,026 16 1 0.0

UNDOTBS1

3 0 16.7 1.0 13,795 15 10,184 0.2

CT_AUT_DATA

675 1 4.8 1.0 5,577 6 150 0.2

CT_AUT_DATA_IDX

1,122 1 5.1 1.0 3,176 4 0 0.0

TEMP

1,643 2 14.8 4.0 512 1 0 0.0

CT_SYSTEM

98 0 6.9 1.0 347 0 2 0.0

SYSTEM

219 0 2.6 1.0 139 0 3,286 0.2

TOOLS

13 0 8.5 1.0 121 0 0 0.0

CT_SYSTEM_IDX

5 0 2.0 1.0 96 0 3 0.0

CT_DATA_IDX

27 0 7.8 1.0 27 0 0 0.0

SYSAUX

3 0 3.3 1.0 6 0 0 0.0

CT_AUT_SYSTEM_IDX

3 0 16.7 1.0 3 0 0 0.0

CT_AUT_SYSTEM

3 0 0.0 1.0 3 0 0 0.0

USERS

3 0 0.0 1.0 3 0 0 0.0

-------------------------------------------------------------

File IO Stats DB/Inst: MA/MA Snaps: 16-17
->Mx Rd Bkt: Max bucket time for single block read
->ordered by Tablespace, File

Tablespace Filename
------------------------ ----------------------------------------------------
                    Av  Mx                                  Av
              Av    Rd  Rd      Av                  Buffer  BufWt
        Reads Reads/s (ms) Bkt Blks/Rd Writes Writes/s Waits (ms)
-------------- ------- ----- --- ------- ------------ -------- ---------- ------
CT_AUT_DATA /databasem/datafile/MA/CT_AUT_DATA01.dbf
        10 0 8.0 8 1.0 832 1 10 0.0
/databasem/datafile/MA/CT_AUT_DATA02.dbf
        665 1 4.7 32 1.0 4,745 5 140 0.2
CT_AUT_DATA_IDX /databasem/datafile/MA/CT_AUT_DATA_IDX01.dbf
        1,122 1 5.1 64 1.0 3,176 4 0
CT_AUT_SYSTEM /databasem/datafile/MA/CT_AUT_SYSTEM01.dbf
        3 0 0.0 1.0 3 0 0
CT_AUT_SYSTEM_IDX /databasem/datafile/MA/CT_AUT_SYSTEM_IDX01.dbf
        3 0 16.7 1.0 3 0 0
CT_DATA /databasem/datafile/MA/CT_DATA01.dbf
        10,951 12 5.8 64 40.4 1,562 2 35,111 5.4
/databasem/datafile/MA/CT_DATA02.dbf
        10,487 12 4.5 16 42.0 1,741 2 32,728 4.4
/databasem/datafile/MA/CT_DATA03.dbf
        10,809 12 4.3 ### 40.9 1,685 2 33,894 4.2
/databasem/datafile/MA/CT_DATA04.dbf
        10,638 12 4.4 32 41.4 1,455 2 33,287 4.2
/databasem/datafile/MA/CT_DATA05.dbf
        10,610 12 4.5 16 41.7 1,645 2 33,136 4.3
/databasem/datafile/MA/CT_DATA06.dbf
        10,629 12 5.9 ### 41.4 1,590 2 33,634 5.7
/databasem/datafile/MA/CT_DATA07.dbf
        10,275 11 6.3 ### 42.9 1,937 2 33,011 6.1
/databasem/datafile/MA/CT_DATA08.dbf
        10,637 12 5.9 64 41.2 1,533 2 34,829 5.4
/databasem/datafile/MA/CT_DATA09.dbf
        10,442 12 5.8 16 42.0 1,684 2 32,949 5.5
/databasem/datafile/MA/CT_DATA10.dbf
        10,885 12 4.4 ### 40.2 1,531 2 34,113 4.1
CT_DATA_IDX /databasem/datafile/MA/CT_DATA_IDX01.dbf
        3 0 16.7 1.0 3 0 0
/databasem/datafile/MA/CT_DATA_IDX02.dbf
        3 0 0.0 1.0 3 0 0
/databasem/datafile/MA/CT_DATA_IDX03.dbf
        3 0 16.7 1.0 3 0 0
/databasem/datafile/MA/CT_DATA_IDX04.dbf
        3 0 0.0 1.0 3 0 0
/databasem/datafile/MA/CT_DATA_IDX05.dbf
        3 0 16.7 1.0 3 0 0
/databasem/datafile/MA/CT_DATA_IDX06.dbf
        3 0 0.0 1.0 3 0 0
/databasem/datafile/MA/CT_DATA_IDX07.dbf
        3 0 3.3 1.0 3 0 0
/databasem/datafile/MA/CT_DATA_IDX09.dbf
        3 0 16.7 1.0 3 0 0
CT_DATA_IDX /databasem/datafile/MA/CT_DATA_IDX10.dbf
        3 0 0.0 1.0 3 0 0
CT_SYSTEM /databasem/datafile/MA/CT_SYSTEM01.dbf
        98 0 6.9 64 1.0 347 0 2 0.0
CT_SYSTEM_IDX /databasem/datafile/MA/CT_SYSTEM_IDX01.dbf
        5 0 2.0 16 1.0 96 0 3 0.0


CT_TEMP /databasem/datafile/MA/CT_TEMP01.dbf
        6,638 7 1.8 ### 15.3 17,545 20 0
/databasem/datafile/MA/CT_TEMP02.dbf
        8,843 10 1.2 16 12.5 18,452 21 0
/databasem/datafile/MA/CT_TEMP03.dbf
        4,660 5 2.5 16 22.1 14,887 17 0
/databasem/datafile/MA/CT_TEMP04.dbf
        6,827 8 1.7 64 15.2 17,333 19 0
CT_TEMP_IDX /databasem/datafile/MA/CT_TEMP_IDX01.dbf
        3,449 4 0.3 32 1.0 3,200 4 0
/databasem/datafile/MA/CT_TEMP_IDX02.dbf
        4,709 5 0.4 ### 1.0 3,930 4 0
/databasem/datafile/MA/CT_TEMP_IDX03.dbf
        2,452 3 0.5 64 1.0 2,528 3 0
/databasem/datafile/MA/CT_TEMP_IDX04.dbf
        4,988 6 0.4 ### 1.0 4,368 5 1 0.0
SYSAUX /databasem/datafile/MA/sysaux01.dbf
        3 0 3.3 1.0 6 0 0
SYSTEM /databasem/datafile/MA/system01.dbf
        219 0 2.6 ### 1.0 139 0 3,286 0.2
TEMP /databasem/datafile/MA/temp01.dbf
        1,643 2 14.8 4.0 512 1 0
TOOLS /databasem/datafile/MA/TOOLS01.dbf
        13 0 8.5 16 1.0 121 0 0
UNDOTBS1 /databasem/datafile/MA/undotbs01.dbf
        3 0 16.7 1.0 13,795 15 10,184 0.2
USERS /databasem/datafile/MA/users01.dbf
        3 0 0.0 1.0 3 0 0
-------------------------------------------------------------

Note that Oracle considers average read times of greater than 20 ms unacceptable. If a datafile consistently has average read times of 20 ms or greater, then:

- The queries against the contents of the owning tablespace should be examined and tuned so that less data is retrieved.

- If the tablespace contains indexes, another option is to compress the indexes so that they require less space and, hence, less IO.

- The contents of that datafile should be redistributed across several disks/logical volumes to more easily accommodate the load.

- If the disk layout seems optimal, check the disk controller layout. It may be that the datafiles need to be distributed across more disk sets.

Enqueue Activity

Enqueue activity DB/Inst: MA/MA Snaps: 16-17
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)


------------ ------------ ----------- ----------- ------------ --------------
RO-Multiple Object Reuse (fast object reuse)
        61,371 61,371 0 6,794 89 13.10
CI-Cross-Instance Call Invocation
        6,342 6,342 0 156 16 105.49
TX-Transaction (allocate ITL entry)
        34 34 0 28 1 26.68
CF-Controlfile Transaction
        10,991 10,991 0 23 0 3.39
TX-Transaction (index contention)
        91 91 0 3 0 5.67
TX-Transaction (row lock contention)
        14 14 0 14 0 .36
FB-Format Block
        5,392 5,393 0 1 0 1.00
TX-Transaction
        63,807 63,807 0 1 0 .00
-------------------------------------------------------------

An enqueue is a locking mechanism. The action to take depends on the lock type that is causing the most problems. The most common lock waits are generally for:

TX - Transaction lock: generally due to application concurrency mechanisms or table setup issues.

ST - Space management enqueue: usually caused by too much space management activity, for example, create table as select on large tables on busy instances, small extent sizes, lots of sorting, and so on.

RO - Multiple Object Reuse (fast object reuse): acquired while dropping or truncating objects. A drop or truncate posts DBWR to flush the buffers associated with those objects; while DBWR performs this operation, the foreground process waits on the RO enqueue. RO can usually be seen together with CI (Cross Instance invalidation) enqueues.

If the TX enqueues concern ITL slots, you can change the INITRANS parameter of the tables and indexes by using the wddlhook table (you will find more information about the wddlhook table in the Administration Guide).

To purge unnecessary data from the PERFSTAT schema, use the sppurge.sql script located under $ORACLE_HOME/rdbms/admin. It deletes the snapshots that fall between the begin and end snapshot IDs you specify.

To uninstall STATSPACK, connect to the database as SYSDBA and run spdrop.sql, also located under $ORACLE_HOME/rdbms/admin.
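A purge or removal run might look like the following SQL*Plus sketch (the "?" shorthand expands to $ORACLE_HOME; the snapshot IDs entered at the prompts are illustrative):

```sql
-- Purge a range of STATSPACK snapshots (run as the PERFSTAT user)
CONNECT perfstat
@?/rdbms/admin/sppurge.sql
-- When prompted, supply the low and high snapshot IDs to delete.

-- Remove STATSPACK entirely (run as SYSDBA)
CONNECT / AS SYSDBA
@?/rdbms/admin/spdrop.sql
```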

AWR

The AWR Report leverages the Automatic Workload Repository statistics, can be executed within Enterprise Manager Grid Control if desired, and will probably replace STATSPACK for good in the future. You must license the Oracle Diagnostics Pack to access the AWR dictionary views required by the AWR Report (STATSPACK remains a free utility).

AWR relies on a background process, the MMON process. By default, MMON wakes up every hour and collects statistics into the repository snapshots. This interval is configurable by the DBA. AWR snapshots provide a persistent view of database statistics. They are created in the SYS schema and stored in the SYSAUX tablespace. In-memory statistics are gathered once a second on active sessions; they are not written to the database and are aged out of memory as new statistics are gathered.

AWR collects database statistics every 60 minutes by default; this data is maintained for a week and then purged. The statistics collected by AWR are stored in the database. The AWR Report accesses the AWR to report statistical performance information, much as STATSPACK always did. Since the AWR schema was originally based on the STATSPACK schema, you will find much of what is in STATSPACK in the AWR Report. The AWR data is stored separately from the STATSPACK data, so running both is largely superfluous.

The Oracle database uses AWR for problem detection and analysis as well as for self-tuning. A number of different

statistics are collected by the AWR, including wait events, time model statistics, active session history statistics, various

system- and session-level statistics, object usage statistics, and information on the most resource-intensive SQL

statements. To properly collect database statistics, the initialization parameter STATISTICS_LEVEL should be set to

TYPICAL (the default) or ALL.

Creating Snapshots

The DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT procedure creates a manual snapshot, in addition to the snapshots generated automatically.
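For example, to take a manual snapshot immediately before a consolidation run (from SQL*Plus, as a suitably privileged user):

```sql
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
```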

Dropping Snapshots

You can drop snapshots by using the DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE procedure.
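A sketch of removing an old range of snapshots (the snapshot IDs 10 and 22 are illustrative):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(
    low_snap_id  => 10,
    high_snap_id => 22);
END;
/
```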

Changing Snapshot Settings

You can change the interval of the snapshot generation and how long the snapshots are retained using the

DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS procedure. This determines the snapshot

capture and purging policy.

The RETENTION parameter specifies the new retention period in minutes. The specified value must be in the range of 1,440 minutes (1 day) to 52,560,000 minutes (100 years). If you specify zero, the maximum value of 100 years is used; if you specify NULL, the retention is not changed. The MMON process is responsible for purging the AWR data.

The INTERVAL parameter specifies the new snapshot interval in minutes. The specified value must be between 10

minutes and 5,256,000 (1 year). If you specify zero, the maximum value of 1 year will be used; if you specify NULL, the

interval will not be changed.

You can view the current settings from the DBA_HIST_WR_CONTROL dictionary view.
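For instance, to keep 14 days of history with a snapshot every 30 minutes, and then to check the resulting policy (the values are illustrative):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 20160,  -- 14 days, expressed in minutes
    interval  => 30);    -- snapshot interval in minutes
END;
/

SELECT snap_interval, retention FROM dba_hist_wr_control;
```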


Viewing AWR Reports

You can access the AWR using the Enterprise Manager (EM) Database Control, from its Administration tab.

You can view AWR reports using the awrrpt.sql and awrrpti.sql scripts located in the $ORACLE_HOME/rdbms/admin directory. The awrrpt.sql script displays statistics for a range of snapshot IDs; the report can be saved as a text or HTML file. The awrrpti.sql script is similar to awrrpt.sql; the only difference is that you can specify the database ID and instance ID as parameters.
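Generating a report interactively looks like the following sketch (run from SQL*Plus as a user with the appropriate privileges):

```sql
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
-- You are prompted for the report type (text or html), the number of
-- days of snapshots to list, the begin and end snapshot IDs, and the
-- report file name.
```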

The output of an AWR report is similar to that of a STATSPACK report.

SQL Server database

When you install Microsoft SQL Server 2005, you can use the default settings.

General Settings

SQL Server Database – SAP BusinessObjects Financial Consolidation and User Management CMC Repository databases

Database options

In the Recovery model group box, select the following option:

Full if you want to activate the log.

Simple if you do not want to activate the log.


If you activate the log, you must back up your database logs regularly. If you do not, the database log will eventually fill up completely.

In the Settings group box, we recommend that you select the following options:

Auto create statistics

Auto update statistics

The others are optional.

You should not select the Auto shrink and Auto close options on a server, because they decrease its performance.

Only one login is required to connect to the database with SQL Server 2005. This login must be the dbo of the database; it can be different from the sa login. You cannot connect using a login that merely has the same rights as the dbo.

The database collation should be one that uses a 125x code page, for example, the Latin1_General collation with the 1252 code page. You should select a case-insensitive (CI) and accent-sensitive (AS) sort order (example: Latin1_General_CI_AS).

NUMA mode

If the SQL Server 2005 engine is installed on a server with a NUMA architecture, you should deactivate the NUMA mode in the server BIOS.

We recommend that you test these settings to make sure that they actually improve performance.

Modifying the Degree of Parallelism

The following settings can improve performance, especially when SAP BusinessObjects Financial Consolidation is used with a large number of concurrent users (more than 50).

1. Edit the properties of the SQL Server 2005 engine.

2. Select the Advanced tab.

3. Set the Max degree of Parallelism setting to 1 processor.

Be careful: this parameter does not improve performance in all cases.

SQL Server Advanced Properties – SAP BusinessObjects Financial Consolidation
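The Max degree of parallelism setting can also be applied with T-SQL rather than through the GUI; a sketch (server-wide, requires sysadmin rights):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```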


Server options

The Parallelism setting does not affect the number of processors that SQL Server uses in a multiple-processor

environment. The Parallelism setting only governs the number of processors on which any particular Transact-SQL

statement can run at the same time. If the Parallelism setting is set to use one processor, the SQL Server query

optimizer will not create execution plans that permit any particular Transact-SQL statement to run on multiple

processors at the same time.

Activating the READ_COMMITTED_SNAPSHOT Option

The following setting improves performance, especially when SAP BusinessObjects Financial Consolidation is used with a large number of concurrent users.

This option is mandatory.

1. Log on to the SQL Server 2005 server.

2. Select the SAP BusinessObjects Financial Consolidation databases.

3. Run the following query:

USE master
go
ALTER DATABASE MyBase SET READ_COMMITTED_SNAPSHOT ON
go

There must be no activity on the database while this query runs. If a connection remains open on the database, the query will be blocked.
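To avoid being blocked by open connections, the database can be switched to single-user mode for the duration of the change; a sketch (the database name MyBase is a placeholder):

```sql
USE master
go
ALTER DATABASE MyBase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
go
ALTER DATABASE MyBase SET READ_COMMITTED_SNAPSHOT ON
go
ALTER DATABASE MyBase SET MULTI_USER
go
```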

Recommendations for Improving Performance on SQL Server

To increase the performance of the SQL Server engine, we recommend that you separate the data files from the log files and store them on different disks.

Each database consists of at least two files: a primary data file (by default, with the .mdf extension) and a log file (by default, with the .ldf extension). There may also be secondary data files (by default, with the .ndf extension). A database can have only one primary data file, zero or more secondary data files, and one or more log files.

Files containing data should ideally be stored on one secure disk volume (for example, RAID 5), while log files should

be stored on another (for example, RAID 1). These two volumes can be managed by two RAID controllers with cache

memory or by one multi-channel RAID controller with cache memory. If the latter option is selected, then each volume

will be managed by a separate channel.

The speed of the hard disks will directly affect the processing speed of the database.

We also recommend that you allocate the most RAM possible to SQL Server.

When less than 15% of the space in the database is available, SQL Server's performance decreases. We therefore recommend that you monitor the database to ensure that at least 20% of its space is always available.

Recommendations for Setting the File Size of the SAP BusinessObjects Financial Consolidation Database

Set a reasonable size for your database: before creating the database, estimate how large it will be.

Set a reasonable size for your transaction log: the general rule of thumb is to set the transaction log size to 20-25 percent of the database size. Proportionally, the smaller your database is, the greater the size of the transaction log should be, and vice versa. For example, if the estimated database size is 10 MB, you should set the size of the transaction log to 4-5 MB; but if the estimated database size is over 500 MB, then 50 MB can be enough for the transaction log.

Set a reasonable size for the autogrow increment: automatic growth causes some performance degradation; therefore you should set a reasonable size for the autogrow increment so that automatic growth does not occur too often. It is advisable to set the autogrow increment to double the size of your biggest consolidation table; otherwise, set the initial size of the database and the size of the autogrow increment together.
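These sizing recommendations can be applied when the database is created; a sketch with illustrative names, paths, and sizes (MyBase and the file paths are placeholders):

```sql
CREATE DATABASE MyBase
ON PRIMARY
  (NAME = MyBase_dat, FILENAME = 'D:\Data\MyBase.mdf',
   SIZE = 500MB, FILEGROWTH = 100MB)
LOG ON
  (NAME = MyBase_log, FILENAME = 'E:\Logs\MyBase.ldf',
   SIZE = 50MB, FILEGROWTH = 50MB);
```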


Backing up Databases using the Simple Recovery Model

If the Simple recovery model is used for a SQL Server database, bcp or insert into type commands cannot be used while the database is being backed up.

Since SAP BusinessObjects Financial Consolidation uses commands such as insert into during consolidation

operations, running a consolidation operation at the same time as a backup is performed may cause the consolidation

to fail.

We recommend you switch the database to the Bulk-Logged recovery model before performing the backup. You can

then switch the database back to the Simple recovery model.

You can switch from one model to another using the following SQL queries:

To switch the database to "bulk logged" recovery model:

ALTER DATABASE <MyBase> SET RECOVERY BULK_LOGGED

To switch the database to the Simple recovery model:

ALTER DATABASE <MyBase> SET RECOVERY SIMPLE

These commands can be included in backup scripts so that the consolidation processing can run correctly during

backup operations.
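Put together, such a backup script might look like the following sketch (the database name and backup path are placeholders):

```sql
ALTER DATABASE MyBase SET RECOVERY BULK_LOGGED;
BACKUP DATABASE MyBase TO DISK = 'E:\Backup\MyBase.bak' WITH INIT;
ALTER DATABASE MyBase SET RECOVERY SIMPLE;
```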

Advantages of a 64-bit Computing Server

64-bit computing has many advantages over the 32-bit architecture. The following is a list of 64-bit advantages:

Large memory addressing: The 64-bit architecture offers a larger directly-addressable memory space. SQL Server

2005 (64-bit) is not bound by the 4 GB memory limit of 32-bit systems. Therefore, more memory is available for

performing complex queries and supporting essential database operations. This greater processing capacity reduces

the penalties of I/O latency by utilizing more memory than traditional 32-bit systems.

Enhanced parallelism: The 64-bit architecture provides advanced parallelism and threading. Improvements in parallel

processing and bus architectures enable 64-bit platforms to support larger numbers of processors (up to 64) while

providing close to linear scalability with each additional processor. With a larger number of processors, SQL Server can

support more processes, applications, and users in a single system.

Memory Addressing with 64-bit versus AWE

Inherently, a 32-bit system can manage a maximum of 4 GB of memory. This limits the addressable memory space for

Windows 2000 and Windows 2003 systems to 4 GB. With 2 GB reserved for the operating system by default, only 2

GB of memory remains for SQL Server. To allow a 32-bit system to address memory beyond the 4 GB limit, a set of

memory management extensions to the Microsoft Win32 API called Address Windowing Extensions (AWE) is used.

Using AWE, applications can acquire physical memory as non-paged memory, and then dynamically map views of the

non-paged memory to the 32-bit address space. By using AWE, SQL Server Enterprise Edition can address up to 32


GB of physical memory on Windows Server 2003 Enterprise Edition and up to 64 GB of memory on Windows Server

2003 Datacenter Edition.

Although AWE provides a way to use more memory, it imposes overhead and adds initialization time leading to weaker

performance compared to 64-bit systems. Also, the additional memory addressability with AWE is available only to the

SQL Server's data buffers. It is not available to other memory consuming database operations such as caching query

plans, sorting, indexing, joins, or for storing user connection information. It is also not available on other engines such

as Analysis Services. In contrast, SQL Server 2005 (64-bit) makes memory available to all database processes and

operations. Using the 64-bit version on either IA64 or x64 hardware, a SQL Server instance can address up to 1

terabyte of memory; the current maximum amount of physical memory supported by Windows Server 2003 SP1. This

memory is available to all components of SQL Server, and to all operations within the database engine.


4 Application Tuning

Performance: A Few Reminders

Performance is based on four fundamental dimensions:

- Business Requirements

- Data Quality

- Application Usability

- Response Times

Optimizing performance consists of aligning these four fundamental dimensions with the customer's expectations. In other words, the goal is to find the right balance between all of these dimensions.

This analysis is more efficient when it is carried out jointly by the client and the project team, based directly on the client's specific situation, by managing priorities and trade-offs.

Data Collection

Validating the Data Volume

Recommendation:

You should check the data volume in some packages for each category scenario.



Things to avoid:

- Operating a budget with about a hundred packages and 20,000,000 rows.

- Realizing by chance that 90% of one reporting ID's amounts are 0 because of an interface issue or a calculation formula issue, and are therefore useless.

Why you should check the data volume:

- To justify the response times of all actions in packages,

- To validate the configuration, in particular calculation formulas and / or package rules,

- To identify any data entry or data import issue,

- To reduce the consolidation volume and the response times.

How to check the data volume:

- By identifying a representative sample of packages for the reporting ID,

- By counting the number of rows via SQL queries or a package export file,

- By checking that all the package data is useful and relevant.
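A row count per package can be obtained with a simple aggregate query; a sketch with hypothetical table and column names (the actual names depend on your data model and installation):

```sql
-- ct_package_data, reporting_id and entity are hypothetical names
SELECT entity, COUNT(*) AS row_count
FROM ct_package_data
WHERE reporting_id = 'ACTUAL_2009_12'
GROUP BY entity
ORDER BY row_count DESC;
```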

Checking the Standard Calculations

Recommendation:

You should check that the calculation formulas generate only useful data.

Things to avoid:

- Generating 0s to expand rows and/or columns automatically in input documents.

- Populating indicators with a constant value and configuring an error control that checks that these indicators' values have changed during package data entry.

Why you should check the standard calculations:

- To limit the package data volume,

- To identify how the category scenario configuration can be quickly optimized.

How to check the standard calculations:

- By avoiding constant-type formulas that generate 0s, 1s, and so on,


- By justifying when calculations apply to analysis dimensions,

- By limiting the account hierarchy,

- By identifying one representative package and assessing the impact of calculations on the total package volume by exporting/importing the package.

Limiting External Calculations

Recommendation:

You should avoid calculating too much data from another package.

Things to avoid:

- Performing a bulk calculation from another entire category.

- Calculating the YTD flow from the sum of each periodic month available in each respective previous package.

- Populating the Actual package with Budget data only for comparison purposes in the package schedules.

Why you should limit external calculations:

- Because this type of calculation adversely impacts response times,

- Because this link between packages may make the entry process more cumbersome.

How to limit external calculations:

- By giving priority to data retrieval,

- By applying these calculations to the bare minimum needed,

- By considering using the opening data feature combined with standard calculation formulas,

- Possibly by considering populating the destination package via data import from a source package.

Avoiding Package Rules

Recommendation:

You should use package rules to perform adjustments only.

Things to avoid:

Configuring package rules that calculate total accounts or Cash Flow Statement items.

Why you should avoid package rules:


- Because they could make the data collection process more cumbersome,

- Because they would use the IT platform resources massively as the system load increases,

- To limit the package volume, notably when rules are used as a substitute for calculations.

How to avoid package rules:

By giving priority to adjustments in the package schedules.

Data Processing

Checking the Consolidation Logs

Recommendation:

You should check the consolidation logs on a regular basis.

Things to avoid:

- Letting the data volume increase very rapidly because of rules that still need to be optimized.

- Leaving abnormally long unit processing times for some rules.

Why you should check the consolidation logs:

- To establish reference measures for response times,

- To validate the configuration, in particular consolidation rules that generate a lot of volume, and SQL rules,

- To control the data volume at the beginning and at the end of the consolidation.

How to check the consolidation logs:

- By analyzing how the duration is distributed between rules,

- By identifying the rules that take the longest and that select/generate the most data,

- By limiting the generated data (usefulness, aggregation, and so on).

Using SQL Rules Sparingly


Recommendation:

You should be very cautious about SQL rules.

Things to avoid:

- Deleting hundreds of thousands of rows during the consolidation process.

- Ignoring that SQL rules can take several minutes in real life while they take only a few seconds during tests.

Why you should use SQL rules sparingly:

- To avoid making the implementation and maintenance more complex,

- Because bespoke developments can adversely impact system performance if they are not well integrated,

- Because they increase the risk of application malfunction.

How to use SQL rules sparingly:

- By adapting and optimizing the SQL syntax according to the target RDBMS (Oracle, SQL Server)

- By not generating data which is not compliant with the consolidation engine

- By trying not to delete data.

Data Retrieval

Selecting Data Properly

Recommendation:

You should initialize dimensions only as actually needed.

Things to avoid:

- Selecting all the Unit dimension values with the aggregation grouping mode.

- Combining the Period dimension with one single period category in the initialization blocks.

Why you should select data properly:

- To make the configuration more understandable

- To avoid increasing the number of accessed consolidation tables when the retrieval engine analyzes the report

configuration.


How to select data properly:

- By avoiding aggregating all values on mandatory dimensions,

- By configuring a reasonable number of columns in comparative consolidation reports,

- By controlling documents created via "Save As…".

Designing Simple Web Reports

Recommendation:

You should not create extra-long expandable Web reports.

Things to avoid:

- Running Web reports that generate a 5 MB HTML page.

Why you should design simple Web reports:

- To limit the impact of the document's HTML page size on the report opening time,

- To limit the network download,

- To make the HTML code easier and quicker to read for the Web browser on the users' desktops.

How to design simple Web reports:

- By restricting the report size to a few hundred cells,

- By implementing a genuine browsing structure with links between reports,

- By using expandable rows and/or columns sparingly in reports,

- By avoiding complex style books.

Creating index(es) on Data Tables

Recommendation:

You should optimize the report execution time by creating index(es) on data tables.

Why you should create index(es) on data tables:

- To avoid a data table full scan by the database engine to fetch the queried data


- To spare the database server any useless workload.

How to create index(es) on data tables:

- By auditing the SQL queries run by the heaviest reports,

- By checking that the default index on the Account dimension is appropriate for the most critical operating reports,

- By using the product's built-in features to create new specific indexes,

- By reviewing the report configuration and creating report links associated with relevant indexes.

Application

Calibrating the Operating IT Platform

Recommendation:

You should check that the IT platform works smoothly on a regular basis.

Things to avoid:

- Accepting consolidations that take too long, or mediocre durations (1 hour to generate 2 million rows is Carat-vintage performance (Carat being a former version of the product) on last century's hardware),

- Accepting reports that open in more than 1 minute,

- Not trying to save time.

Why you should calibrate the operating IT platform:

- Because performance results from the combination of IT platform and configuration,

- To make sure that the IT platform's performance meets the IT specifications,

- To make sure that the IT platform is ready to host the delivered configuration (the platform was first tuned before the configuration work started).

How to calibrate the operating IT platform:

- By laboratory testing for reference measures,

- By measuring disk performance,

- By checking the RDBMS engine settings.


© Copyright 2010 SAP AG. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors. Microsoft®, WINDOWS®, NT®, EXCEL®, Word®, PowerPoint® and SQL Server® are registered trademarks of Microsoft Corporation. IBM®, DB2®, OS/2®, DB2/6000®, Parallel Sysplex®, MVS/ESA®, RS/6000®, AIX®, S/390®, AS/400®, OS/390®, and OS/400® are registered trademarks of IBM Corporation. ORACLE® is a registered trademark of ORACLE Corporation. INFORMIX®-OnLine for SAP and Informix® Dynamic Server™ are registered trademarks of Informix Software Incorporated. UNIX®, X/Open®, OSF/1®, and Motif® are registered trademarks of the Open Group. HTML, DHTML, XML, XHTML are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology. JAVA® is a registered trademark of Sun Microsystems, Inc. JAVASCRIPT® is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape. SAP, SAP Logo, R/2, RIVA, R/3, ABAP, SAP ArchiveLink, SAP Business Workflow, WebFlow, SAP EarlyWatch, BAPI, SAPPHIRE, Management Cockpit, mySAP.com Logo and mySAP.com are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other products mentioned are trademarks or registered trademarks of their respective companies.

Disclaimer: SAP AG assumes no responsibility for errors or omissions in these materials. These materials are provided "as is" without a warranty of any kind, either express or implied, including but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. SAP shall not be liable for damages of any kind including without limitation direct, special, indirect, or consequential damages that may result from the use of these materials. SAP does not warrant the accuracy or completeness of the information, text, graphics, links or other items contained within these materials. SAP has no control over the information that you may access through the use of hot links contained in these materials and does not endorse your use of third party Web pages nor provide any warranty whatsoever relating to third party Web pages.