
Benchmarking Financial Processes with Data Envelopment Analysis

(Dipl.-Kfm. Stefan Blumenberg)

1 Preface

During the last years most industry and service organizations have optimized their supply chains due to increasing competition and the resulting cost pressure. One of the most important ways to achieve this was to outsource non-core competencies to other, highly specialized providers. By doing this, the outsourcing companies reduced their level of self-produced goods and services and were able to concentrate on their core competencies. Thus, these industry and service organizations could face the growing competition.

Regardless of this evolution, German financial service providers stuck to the in-house development of products, information systems and other processes. Now, after difficult years, they realize the need for an industrialization of their value chains to adapt to changing market situations. On the one hand, they are reconsidering the amount of in-house production of goods and services in order to concentrate on their core competencies. On the other hand, banks can also generate new revenue streams by appearing as insourcers on the market, providing services in which they believe to be “best practice” compared to their competitors.

1.1 The Problem

A supply chain can be described as the interaction among different organizational units within a legal body like a company, and the interactions between legally separated organizational units such as different companies [Ballou, R. H. (2004), p. 5].

A supply chain can be roughly differentiated into two flows [Ballou, R. H. (2004), p. 6]:

1. Products or services

2. Information

The physical part of the supply chain is determined by the transformation of goods and services and their flow through different companies or organizational units within a company. This part has been optimized very well during the last years; the automotive industry in particular has built up considerable knowledge in this area [Mentzer, J. T. et al. (2001)]. The flow of information, and especially the financial processes incorporated in it, has in contrast rarely been analysed and optimized [Pfaff, D.; Skiera, B.; Weitzel, T. (2003), p. 2].

The E-Finance Lab and the Institute of Information Systems, Frankfurt University, conducted a study of the financial processes within the supply chain (the so-called “financial chain”) of the top 1,000 companies in Germany, excluding financial service institutions and insurance companies. Unlike at financial service providers, the financial chain of the participating companies has only a supporting character relative to their core competencies.

Financial service institutions and insurance companies were excluded because the financial chain study only focuses on companies where the financial chain represents a supporting function.

We identified a “generic financial chain” that consists of seven sub-processes. This financial chain forms the basis of the study.

Figure 1 shows the generic financial chain divided into two main parts: financial trade enablement with the subprocesses qualification, financing, pricing and insurance, and financial trade settlement with the three subprocesses invoice, reclamation and payment. Financial trade settlement covers the financial after-sales actions within the financial chain. Financial trade enablement and financial trade settlement envelop the fulfillment, where goods and services are interchanged. Information about these subprocesses is collected and analysed to improve performance. For more information about the financial chain see Pfaff, D.; Skiera, B.; Weiss, J. (2004).


[Figure: Financial Trade Enablement (Qualification, Financing, Pricing, Insurance), Fulfillment, and Financial Trade Settlement (Invoice, Reclamation, Payment), with an analysis layer spanning all subprocesses.]

Figure 1: Generic model of the Financial Chain

As this part of the supply chain has not been the focus of research in recent years, the cooperation and information exchange between the different subprocesses can be improved.

For instance, it is very important for the invoice process to receive the pricing information from the department that negotiates with the customer.

85% - 99% of the companies stated that these seven subprocesses exist within their organization [Skiera, B. et al. (2003)]. Two out of three companies stated that they are dissatisfied with their financial processes, and half of them have also identified possibilities for improvement.

They were asked whether they could imagine integrating subprocesses into a shared service organization that executes the specific subprocess company-wide, or outsourcing a complete subprocess.

51.5% of the companies are confident that several subprocesses can be integrated in a shared services organization or even outsourced to reduce costs [Skiera, B. et al. (2003)]. The subprocess invoice seems to be one of the most interesting processes concerning the integration in a shared service organization: nearly 70% of the companies stated that they already integrated it, are planning to integrate it or at least believe this subprocess to be integrable [Skiera, B. et al. (2003)]. As the construction of a shared service organization requires the processes to be well-structured and documented, shared service organizations can be seen as a pre-stage to outsourcing.

Outsourcing is only an interesting option if there is considerable room for improvement. The financial chain study shows that the companies by far believe the subprocess invoice to have the most important room for improvement. But an outsourcing decision requires these companies to take a closer look at this process and to compare their invoice process to those of other organizations.

2 Benchmarking invoice efficiency

2.1 A brief overview of benchmarking research in literature

In the literature the comparison of different processes is referred to as “benchmarking”. The overall aim of benchmarking is the performance enhancement of an organization [Ulrich, P. (1998), p. 15-16]. Concerning the invoice subprocess, organizations try to compare their invoice process performance to that of other companies. Identified inefficiencies can be an indicator to improve processes, to consider a consolidation into a shared services organization or ultimately to outsource the whole process.

In general the process of benchmarking can be divided into five steps: [Homburg, C. (2000), p. 583; Töpfer, A. (1997)]

1. Choosing an object of investigation.

2. Choosing benchmarking partners: the process of benchmarking is often differentiated into internal and external benchmarking. Internal benchmarking investigates objects of investigation within a company, whereas external benchmarking focuses on legally independent companies [Legner, C. (1999), p. 7].

3. Analysis: measuring the performance differences via the identification of operating figures.

4. Identification of improvement possibilities.

5. Execution of the identified improvement possibilities: besides the execution, the results have to be monitored and their improvement potential has to be identified via a profitability analysis.


The comparison of different units is not an invention of recent research [Church, A. H. (1909), p. 188]. In the German literature it is known as “Betriebsvergleich”, which corresponds to benchmarking or inter-company comparison [Wöhe, G. (1984), p. 1254].

2.2 Choosing Methodology

As a result of the financial chain study we received input and output data characterizing the invoice process:

1. Input data:

a. IT costs per month for the invoice infrastructure and maintenance (this could be an IT infrastructure or a paper-based infrastructure)

b. Number of people working in the invoice process who perform the operations

2. Output data:

a. Number of invoices produced per month

As a result, a single efficiency measure using this data is required to compare these companies against each other. Due to the fact that the output invoices are not traded on a market and there is therefore no common market price for this output, no single or weighted efficiency relation measure like [number of cars produced multiplied by the car market price / number of people producing cars multiplied by the wage] can be estimated [Scheel, H. (2000), p. 60].

Concerning these input and output data, the application of the “Data Envelopment Analysis” seems to be suitable. Data Envelopment Analysis (DEA) uses input and output factors to identify a single relative efficiency measure to compare a company to the best performing organizations [Schmid, F. A. (1994), p. 228]. This method does not require monetary inputs and outputs. Furthermore, it uses combinations of efficient units to compare inefficient units against these linear combinations (or virtual units) and identifies their relative efficiency scores [Charnes, A. et al. (1994), p. 6].

2.3 Data Envelopment Analysis

2.3.1 Basic principles of the Data Envelopment Analysis

2.3.1.1 Graphical Example

In the DEA context, different units that are compared against each other (that could be companies, branches or even processes) are called “Decision Making Units” (DMU) because they can identify and vary their inputs and outputs [Bala, K.; Cook, W. D. (2003), p. 440].

Data Envelopment Analysis is a non-parametric approach that compares a population of observations against the “best practice” observation [Parsons L. J. (1992), p. 186]. Unlike parametric approaches like regression analysis, it optimizes on each individual observation and does not require a single function that best fits all observations [Charnes A. et al. (1994), p. 5; Scheel, H. (2000), p. 62].

Figure 2 describes this implication. The population of DMUs is characterized by utilizing a certain amount of one input (plotted on the abscissa) to produce a certain amount of one output (plotted on the ordinate). If DMU 13 and DMU 4 are compared, it is clear that DMU 4 produces more output at the same amount of input than DMU 13. So DMU 4 dominates DMU 13. DMUs 2, 4, 7, 8 and 11 are also efficient because combinations of them dominate the other DMUs. These efficient DMUs build a piecewise linear frontier derived from DEA that is described by the solid line in fig. 2. Every DMU beneath this efficient frontier is inefficient.

The usage of combinations of efficient DMUs is the heart of the DEA: these combinations are called virtual producers. For example, DMU 10 is inefficient because it lies beneath the efficient frontier. Its virtual producer, denoted VP 10, lies on the piecewise linear frontier between DMU No. 8 and No. 4. This virtual producer is a combination of approximately 50% of DMU 8 and 50% of DMU 4 and produces more output than DMU 10 with the same amount of input X.


[Figure: one input on the abscissa, one output on the ordinate; DMUs 1 to 13 are plotted, the efficient DMUs form the frontier, and the virtual producer VP 10 lies on the frontier above DMU 10 at input level X.]

Figure 2: Basic DEA model

The DEA efficiency measure θ is scaled between 0 and 1: efficient DMUs (i.e. in fig. 2 DMUs 2, 4, 7, 8 and 11) have an efficiency measure of θ = 1, inefficient ones beneath the efficient frontier have θ < 1. Subsequently, the efficiency measure of DMU 10 in this simple graphical example can be calculated from the distances X–DMU 10 and DMU 10–VP 10:

\[
\theta_{\mathrm{DMU}\,10} = \frac{\overline{X\,\mathrm{DMU}\,10}}{\overline{X\,\mathrm{DMU}\,10} + \overline{\mathrm{DMU}\,10\,\mathrm{VP}\,10}}
\]

Equ. 1: Calculation of DEA efficiency in a simple graphical one input/ one output example

In the current example the distances have the following values: X–DMU 10 = 6,2 and DMU 10–VP 10 = 2,8. Thus DMU 10 has an efficiency measure of θ = 6,2 / (6,2 + 2,8) ≈ 0,689 = 68,9 % compared to the virtual producer VP 10.

It must be pointed out that this comparison is only a relative one. All DMUs are compared to the best performing DMUs, so their efficiency measure is calculated in relation to all other DMUs. This implies that a comparison within a weak market sector could result in DEA-efficient companies that have a negative operating profit: compared to their competitors they perform much better, but they are still not profitable.

2.3.1.2 Numerical Example

After this restricted graphical example, a numerical one will demonstrate DEA possibilities. We assume four bank branches which produce two different outputs, check account transactions and customer services, with one input (number of employees):

Bank branch   Input (Employees)   Output 1 (Check account transactions)   Output 2 (Customer services)
1             10                  500                                      10
2             10                  100                                      40
3             10                  200                                      15
4             10                  100                                      20


Table 1: Input and output values of the numerical example

Bank branch 4 is dominated by branch 2 because it uses the same amount of input and produces the same amount of output 1, but less of output 2.

Branch 3 is also dominated by combinations of branches 1 and 2 because

• branch 1 (2) can be scaled down by a factor of s_x = 34,21% (s_y = 28,94%). This combination would produce the same output as branch 3 (200; 15) but with less input (6,315 units).

• a linear combination of branch 1 and 2 (i.e. the virtual producer of branch 3) can be identified that lies on the efficient frontier. By taking f_x = 54,17% of branch 1 and f_y = 45,83% of branch 2, we get the virtual producer of branch 3 that handles 316,66 check account transactions and 23,75 customer services with the same input as branch 3.

Therefore bank branch 3 is DEA-inefficient and, graphically speaking, lies beneath the efficient frontier. For further information about the calculations above see appendix No. 1.
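This dominance argument can also be verified numerically. The following sketch is not part of the original study and uses the envelopment (dual) form of the CCR model rather than the multiplier form derived in 2.3.3; it solves the input-oriented linear program for branch 3 with SciPy. The resulting radial efficiency of roughly 0,63 corresponds to the 6,315 input units of its virtual producer derived in appendix No. 1.

# Sketch (illustrative, not from the original study): input-oriented CCR
# envelopment LP for branch 3 of the numerical example, solved with SciPy.
import numpy as np
from scipy.optimize import linprog

x = np.array([10.0, 10.0, 10.0, 10.0])            # input: employees per branch
Y = np.array([[500.0, 10.0],                      # branch 1: transactions, services
              [100.0, 40.0],                      # branch 2
              [200.0, 15.0],                      # branch 3
              [100.0, 20.0]])                     # branch 4
k = 2                                             # index of branch 3

# decision variables: [theta, lambda_1, ..., lambda_4]; minimise theta
c = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
A_ub = np.vstack([
    np.concatenate([[-x[k]], x]),                 # sum(lambda_j * x_j) <= theta * x_k
    np.concatenate([[0.0], -Y[:, 0]]),            # sum(lambda_j * y_1j) >= y_1k
    np.concatenate([[0.0], -Y[:, 1]]),            # sum(lambda_j * y_2j) >= y_2k
])
b_ub = np.array([0.0, -Y[k, 0], -Y[k, 1]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5, method="highs")
print(round(res.x[0], 4))                         # approx. 0.6316, i.e. 6.315 of 10 input units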

2.3.1.3 Benefits and limitations of the Data Envelopment Analysis

As already described in 2.3.1, DEA offers some benefits over other approaches but also has some limitations that have to be kept in mind when using DEA.

Advantages [Charnes A. et al. (1994), p. 8; Schmid, F. A. (1994), p. 228]:

• DEA is able to handle multiple inputs and outputs

• DEA does not require a functional form that relates inputs and outputs

• DEA optimizes on each individual observation and compares them against the “best practice” observations

• DEA can handle inputs and outputs without knowing a price or knowing the weights

• DEA produces a single measure for every DMU that can be easily compared with other DMUs

Limitations [Homburg, C. (2000), p. 583; Charnes A. et al. (1994), p. 5; Anderson, T. (1996)]:

• DEA only calculates relative efficiency measures

• As a nonparametric technique, statistical hypothesis tests are quite difficult

2.3.2 Different DEA models

Data envelopment analysis is not just one single method. It is a collection of different methods which serve different needs depending on scale effects, the measurement of the distance to the envelopment surface or the functional form of the envelopment analysis [Charnes A. et al. (1994), p. 23]. Usually four basic DEA models are differentiated in the literature:

1. CCR model (Charnes, Cooper and Rhodes, 1978):

The CCR model bases the evaluation on a production technology that has constant returns to scale (CRS) and a radial distance measure to the efficient frontier [Cantner, U.; Hanusch, H. (1998), p. 231], i.e. the inefficiency of a DMU compared to the corresponding efficient units lying on the efficient frontier is measured via a radius vector [Schefczyk, M. (1996), p. 171], as described by the grey line in appendix 1. Furthermore, the efficient frontier is piecewise linear [Charnes A. et al. (1994), p. 45].

The CCR model is differentiated into an input and an output orientation, depending on whether inputs are reduced with constant outputs or outputs are enhanced with constant inputs [Charnes A. et al. (1994), p. 31].

As a result, the CCR model reveals an objective description of overall efficiency and identifies the sources of inefficiencies [Charnes A. et al. (1994), p. 23], i.e. - for the example in 2.3.1.2 - how much input has to be decreased (or output increased) for an inefficient unit to become efficient.

The above described two examples in 2.3.1.1 and 2.3.1.2 rely on the CCR model.

2. BCC model (Banker, Charnes and Cooper, 1984):


In contrast to the CCR model, the BCC model offers a differentiation between technical efficiency and scale efficiency [Golany, B.; Roll, Y. (1989), p. 249] because it evaluates solutions for nonincreasing returns to scale (NIRS), nondecreasing returns to scale (NDRS) or variable returns to scale (VRS) [Bowlin, W. F. (1998), p. 9]. In combination with the CCR model, CCR-inefficient DMUs do not have to be technically inefficient [Cantner, U.; Hanusch, H. (1998), p. 234]. Potentially, a part of this inefficiency can be mitigated by increasing or decreasing the production volume, resulting in a removal of scale inefficiencies. Like the CCR model, the BCC model also implies a radial distance measure and a piecewise linear frontier [Charnes A. et al. (1994), p. 45].

Like the CCR model, the BCC model is also differentiated into an input and an output orientation [Schefczyk, M. (1996), p. 171].

3. Multiplicative model (Charnes, Cooper, Seiford and Stutz, 1983):

In contrast to the CCR and BCC models, the multiplicative model offers a different characteristic of the efficient frontier by providing a piecewise log-linear envelopment surface or a piecewise Cobb-Douglas interpretation of the production process [Charnes A. et al. (1994), p. 24]. The returns to scale assumption depends on the envelopment surface: a log-linear surface assumes constant returns to scale and a Cobb-Douglas surface assumes variable returns to scale.

4. Additive model (Charnes, Cooper, Golany, Seiford and Stutz, 1985):

Unlike the CCR or BCC model, the additive model is unoriented, i.e. it does not differentiate between input and output orientation, which means that a reduction of inputs with a simultaneous enhancement of outputs is possible [Schefczyk, M. (1996), p. 171]. The additive model assumes constant returns to scale and a piecewise linear efficient frontier.

For a detailed discussion of the differences between the four models see “Data Envelopment Analysis: Theory, Methodology, and Application” [Charnes, A. et al. (1994)].

These four DEA models, commonly used and described in the literature, build a profound basis for an efficiency analysis with different returns to scale, different envelopment surfaces and different ways to project inefficient units onto the efficient frontier [Charnes A. et al. (1994), p. 49]. In recent years many extensions of these four models have been developed that allow further fine-tuning of the basic models. Three of these numerous extensions are briefly presented:

a) The basic DEA models always assume that inputs and outputs can be altered by the DMUs. In realistic situations there are often exogenous variables that cannot be altered. For example, the distribution of competitors may influence efficiency scores without being alterable by the DMUs [Banker, R. D.; Morey, R. C. (1986a), p. 513]. These variables are called nondiscretionary and will also be used for benchmarking the invoice efficiency later.

b) The comparison of quite different units was also addressed by [Banker, R. D.; Morey, R. C. (1986b), p. 1614]. The introduction of categorical variables extends the application focus of the DEA. For example, bank branches that are difficult to compare because of different demographics can be categorized, or even binary-coded variables can be computed.

c) The models described so far only analyse single-period situations. This restriction was relaxed through window analysis, which allows the comparison of the changing efficiency of a unit over time [Charnes A. et al. (1994), p. 57] by treating the same DMU in each period as if it were a different DMU.

For more information about extensions to the basic DEA models see “Data Envelopment Analysis: Theory, Methodology, and Application” [Charnes, A. et al. (1994)].

2.3.3 CCR model algorithm

After this basic information about DEA and its different models, the algorithm of the CCR model will be explained. The CCR model will be the basis for the efficiency measurement in the comparison of the German top 1,000 companies. The reasons why the CCR model has been chosen in this case are described in the next section.

The basis for the efficiency measurement is usually the productivity θ [Holling, H.; Lammers, F.; Pritchard R. D. (1999); Wöhe, G. (1996), p. 49], which can be described as:


\[
\theta = \frac{\text{Output}}{\text{Input}}
\]

Equ. 2: Efficiency measure in the one input/ one output case

This case assumes a single input and a single output. If organizations are compared this way, a bigger (smaller) θ means a better (worse) performance because less (more) input is used for a constant amount of output. The meaning of θ only becomes available through a comparison with other organizations; for this reason the efficiency is always a relative measure [Scheel, H. (2000), p. 3].
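For instance (an illustrative example, not taken from the study): an organization producing 100 invoices with 4 employees has θ = 100/4 = 25, while one producing 100 invoices with 5 employees has θ = 100/5 = 20. The first organization performs better, but the value 25 in isolation carries no meaning.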

Typically, organizations have multiple inputs x_i (i = 1, ..., m) and outputs y_j (j = 1, ..., s) that have to be weighted (v_i, u_j) and can then be summated as described in equation 3. These weights have to be defined by the researcher before a calculation. The numerator describes the sum of the weighted outputs and the denominator the sum of the weighted inputs.

\[
\theta = \frac{\sum_{j=1}^{s} u_j\, y_j}{\sum_{i=1}^{m} v_i\, x_i}
\]

Equ. 3: Efficiency measure in the multiple input/ output case

The comparison of different organizations using the measure described in equation 3 is quite inaccurate because it is not clear whether the resulting efficiency measures depend on the real performance or on the (randomly) chosen weights. DEA, in contrast, chooses the optimal individual weights for every DMU, derived from the data [Charnes A. et al. (1994), p. 5; Scheel, H. (2000), p. 62].

Therefore every DMU_k maximizes its efficiency θ by choosing the individual weights which best suit its situation (equation 4).

\[
\theta_k = \frac{\sum_{j=1}^{s} u_j\, y_{jk}}{\sum_{i=1}^{m} v_i\, x_{ik}}
\]

Equ. 4: Objective function of the DEA algorithm

As a side condition it is important that every other DMU_l has θ ≤ 1 with the chosen weights of DMU_k, because the DEA efficiency measure should be scaled between 0 and 1. This coherency is described by equation 5.

\[
\frac{\sum_{j=1}^{s} u_j\, y_{jl}}{\sum_{i=1}^{m} v_i\, x_{il}} \le 1 \qquad (l = 1, \dots, n)
\]


Equ. 5: Side condition

These equations provide the basis for the formulation of the CCR model as described in equation 6. To the above described objective function and side condition we add the non-negativity constraint that the weights v_i, u_j must be greater than or equal to 0.

\[
\max_{u,v}\ \theta_k = \frac{\sum_{j=1}^{s} u_j\, y_{jk}}{\sum_{i=1}^{m} v_i\, x_{ik}}
\quad \text{s.t.} \quad
\frac{\sum_{j=1}^{s} u_j\, y_{jl}}{\sum_{i=1}^{m} v_i\, x_{il}} \le 1 \quad (l = 1, \dots, n), \qquad
u_j \ge 0, \quad v_i \ge 0
\]

Equ. 6: Basic formulation for the CCR model

This formulation has one disadvantage: it cannot be solved by linear programming because it is not linear. The side condition can easily be linearized by multiplying both sides by the denominator, resulting in equation 7.

\[
\sum_{j=1}^{s} u_j\, y_{jl} \le \sum_{i=1}^{m} v_i\, x_{il} \qquad (l = 1, \dots, n)
\]

Equ. 7: Linearized side condition

The objective function is linearized via the normalization of its denominator. The denominator can be normalized and added to the side conditions. Equation 8 describes this coherency.

\[
\sum_{i=1}^{m} v_i\, x_{ik} = 1
\]

Equ. 8: Normalized denominator of the objective function

The relative structure of the input weights v_i in the denominator is independent of the value on the right-hand side of the equation: a “2” on the right side produces the same result concerning the weights v_i as a “4”. Therefore the denominator of the objective function is set equal to “1” and added to the side conditions, resulting in a linear objective function.
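Formally, this is the scale invariance underlying the Charnes-Cooper transformation (a compact restatement added here for clarity, not an equation from the original text):

\[
\frac{\sum_{j=1}^{s} u_j\, y_{jk}}{\sum_{i=1}^{m} v_i\, x_{ik}}
= \frac{\sum_{j=1}^{s} (\alpha u_j)\, y_{jk}}{\sum_{i=1}^{m} (\alpha v_i)\, x_{ik}}
\qquad \text{for every } \alpha > 0,
\]

so the weights can always be rescaled such that \(\sum_{i=1}^{m} v_i x_{ik} = 1\) without changing the efficiency ratio.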

After the linearization of both the objective function and the side conditions, the complete linear program of the CCR model is described in equation 9.


\[
(\mathrm{LP})\qquad
\max_{u,v}\ \theta_k = \sum_{j=1}^{s} u_j\, y_{jk}
\quad \text{s.t.} \quad
\sum_{i=1}^{m} v_i\, x_{ik} = 1, \qquad
\sum_{j=1}^{s} u_j\, y_{jl} \le \sum_{i=1}^{m} v_i\, x_{il} \quad (l = 1, \dots, n), \qquad
u_j \ge 0, \quad v_i \ge 0
\]

Equ. 9: Linear CCR-algorithm
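As an illustration (not part of the original study), the linear program in equation 9 can be stated and solved with a few lines of Python. The sketch below assumes the inputs and outputs are given as NumPy arrays with one row per DMU; in the study itself the programs were solved with LINGO 7.0 and cross-checked with EMS (see 2.4.3).

# Sketch of the linear CCR multiplier model (equation 9), solved with SciPy.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.
    X: (n, m) array of inputs, Y: (n, s) array of outputs, one row per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    # decision vector: [u_1, ..., u_s, v_1, ..., v_m], all >= 0
    c = np.concatenate([-Y[k], np.zeros(m)])             # maximise u'y_k (linprog minimises)
    A_ub = np.hstack([Y, -X])                            # u'y_l - v'x_l <= 0 for every DMU l
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]  # normalisation v'x_k = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return -res.fun                                      # efficiency score theta_k

# Example: the bank branches of section 2.3.1.2 (one input, two outputs).
X = np.array([[10.0], [10.0], [10.0], [10.0]])
Y = np.array([[500.0, 10.0], [100.0, 40.0], [200.0, 15.0], [100.0, 20.0]])
print([round(ccr_efficiency(X, Y, k), 4) for k in range(4)])  # branches 1 and 2 score 1.0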

2.4 Empirical data usage to benchmark invoice efficiency

2.4.1 Demography

The response rate of the financial chain study was 10,3 % of the 1,000 surveyed companies, of which 31 companies returned a sufficient answer to benchmark their invoice process. Sufficient means that these companies completely answered the three questions concerning the described inputs and outputs that are required for a Data Envelopment Analysis.

It is quite obvious that two of these 31 companies misunderstood the three input and output questions within the questionnaire because their answers are far away from all other results. For example, one company states that its annual IT costs for the production of 1,000 invoices are 11,905,000 Euro, compared to an average of 12,658 Euro (calculated without the two outliers). These two outliers are not considered in the subsequent computations.

As a result, figure 3 shows the differentiation of the 29 benchmarked companies into different industry sectors. This differentiation is based on the NACE codes that are used by the European Union to designate the various statistical classifications of economic activities. 41% of these companies are part of the manufacturing sector, followed by 24% in the real estate, renting and business activities sector, 14% in the wholesale and retail trade, repair of motor vehicles, motorcycles and personal and household goods sector, another 14% in the transport, storage and communication sector and finally 7% in the electricity, gas and water supply sector.

[Pie chart: Manufacturing 41%; Real estate, renting and business activities 24%; Wholesale and retail trade, repair of motor vehicles, motorcycles and personal and household goods 14%; Transport, storage and communication 14%; Electricity, gas and water supply 7%.]

Figure 3: Industry sectors of the benchmarked companies


2.4.2 Classification of the research data into the DEA framework

Concerning the five steps required to benchmark a unit described in 2.1, DEA covers just one step [Homburg, C. (2000), p. 583]: analysis (step 3). DEA mainly results in the single efficiency measure that allows the simple comparison of different DMUs and therefore has its core competency in the identification of operating figures.

Besides this single efficiency measure, DEA addresses a part of step No. 4 of the benchmarking process with slacks. Slacks, which are a result of a DEA, are the amounts by which a DMU has to reduce its inputs (increase its outputs) to reach the efficient frontier and become efficient. But these figures are not as strong as required by the five-step model. The identification of improvement possibilities can be fulfilled by a detailed examination of the DMUs that build the virtual producer or by a case study.

In this case the desired object of investigation (step 1) is the invoice process. This subprocess of the financial chain is focused on because of its importance concerning a consolidation within an organization (i.e. a shared service organization) or even an outsourcing.

The benchmarking partners are the 1,000 biggest German companies, excluding financial service providers.

The questioned companies have an amount of invoices per year that ranges from 1,000 to 60,000,000. Companies surely have economies of scale within the first invoices they issue and probably diseconomies of scale at a certain, very high amount of invoices that have to be processed. But within this range constant returns to scale can be assumed. Especially the high minimal value of 1,000 invoices per year supports this thesis. Consequently, the CCR model with the assumption of constant returns to scale has been chosen to compute the efficiency scores.

2.4.3 Calculation and interpretation of the efficiency measures via CCR algorithm

The efficiency measures have been calculated using the CCR model algorithm described in 2.3.3 and additionally cross-checked with the software Efficiency Measurement System (EMS) [Scheel, H. (2000), p. 152], which is based on the interior point solver BPMPD [Mészáros, C. (1997)].

Using the CCR algorithm (2.3.3), for every single DMU a linear program with one objective function and 30 side conditions was designed (for an example see the linear program for DMU 1 described in appendix No. 2). These 29 linear programs were solved via the linear optimization model solver LINGO 7.0. The EMS software reveals the same results, aside from small deviations at the fourth decimal place which can be explained by the usage of different LP solvers (LINGO 7.0 vs. BPMPD).

The EMS software was used in its input-oriented setting because the aim is to minimize the inputs without a reduction of the output. Concerning the CCR algorithm, constant returns to scale have been chosen.

The results are described in table 2. Three DMUs (DMU 2, DMU 6 and DMU 8) have an efficiency score of 100% and therefore build the efficient frontier. Besides the efficient DMUs it is very astonishing that 21 DMUs have an efficiency score of less than 1% (DMU 9-29). This result requires an examination of possible correlations with other factors or even the involvement of further factors in the DEA.

DMU   Efficiency score with CRS
1     8,805755%
2     100,000000%
3     22,109850%
4     2,610365%
5     81,475790%
6     100,000000%
7     71,452730%
8     100,000000%
9     0,101547%
10    0,037465%
11    0,001020%
12    0,878752%
13    0,031023%
14    0,000243%
15    0,001664%
16    0,003355%
17    0,000475%
18    0,003287%
19    0,000173%
20    0,004171%
21    0,000003%
22    0,015625%
23    0,000261%
24    0,000018%
25    0,000092%
26    0,000342%
27    0,001298%
28    0,000463%
29    0,000006%

Table 2: Efficiency measures (CCR-model) of the 29 benchmarked companies with constant returns to scale

First of all, a comparison of the industry sectors of the efficient units to those of the inefficient units does not reveal a conclusion. Two of the efficient units are from the manufacturing sector while the third efficient unit is part of the wholesale and retail trade sector. Figure 4 shows the industry sectors of the inefficient DMUs, which are quite similar to the industry sector differentiation of all benchmarked companies described in figure 3.

[Pie chart of the inefficient companies by industry sector, with shares of 45%, 27%, 14%, 9% and 5% distributed across the sectors Manufacturing; Electricity, gas and water supply; Wholesale and retail trade, repair of motor vehicles, motorcycles and personal and household goods; Transport, storage and communication; and Real estate, renting and business activities.]

Figure 4: Industry sectors of the inefficient companies

A correlation analysis of the received efficiency scores with the amount of electronically processed and shipped invoices and with the average invoice amount does not reveal a significant correlation. But the correlation between the efficiency scores and the amount of invoices per year reveals a coherency. DMU 5, 6 and 7 have the highest amount of invoices per year (20,544,000, 50,400,000 and 60,000,000 invoices per year). It seems that companies with a high yearly amount of invoices tend to be efficient (Pearson's correlation coefficient of 0,638 with a significance of 0,01). This makes it necessary to reconsider the assumption of constant returns to scale (CRS) and to recalculate the efficiency scores with nondecreasing returns to scale (NDRS).

Table 3 compares the results of the calculation with constant returns to scale (middle column; this column is identical to table 2 and has been calculated via the CCR algorithm) and with nondecreasing returns to scale, illustrated in the right column (calculated via the BCC algorithm). The BCC case has been calculated using the Efficiency Measurement System in the input-oriented case with nondecreasing returns to scale. Note that the number of efficient units increased, while the units that were already efficient under constant returns to scale (DMU 2, 6 and 8) stay efficient in the nondecreasing returns to scale case. Units that are efficient within a constant returns to scale analysis always stay efficient in the nonincreasing and nondecreasing returns to scale cases [Scheel, H. (2000), p. 45] because their scale size is already at an optimum, so that an increase or decrease cannot improve the productivity. This case is called “most productive scale size” by [Banker, R. D. (1984), p. 35].

DMU   Efficiency score with CRS   Efficiency score with NDRS
1     8,805755%     100,000000%
2     100,000000%   100,000000%
3     22,109850%    54,446168%
4     2,610365%     22,744732%
5     81,475790%    100,000000%
6     100,000000%   100,000000%
7     71,452730%    71,452725%
8     100,000000%   100,000000%
9     0,101547%     1,735854%
10    0,037465%     2,079490%
11    0,001020%     0,711000%
12    0,878752%     8,038596%
13    0,031023%     1,761948%
14    0,000243%     0,176644%
15    0,001664%     0,814917%
16    0,003355%     0,944345%
17    0,000475%     0,615500%
18    0,003287%     0,362059%
19    0,000173%     0,202670%
20    0,004171%     0,152398%
21    0,000003%     0,023555%
22    0,015625%     0,625005%
23    0,000261%     0,076197%
24    0,000018%     0,119458%
25    0,000092%     0,121617%
26    0,000342%     0,126871%
27    0,001298%     0,063181%
28    0,000463%     0,047638%
29    0,000006%     0,038899%

Table 3: Comparison of efficiency scores calculated with constant returns to scale and nondecreasing returns to scale.

Besides this involvement of economies of scale, another factor already examined in the financial chain study should be considered. An invoice cannot easily be compared to another invoice because of its complexity and length. Up to now we assumed that the output “number of invoices” can easily be compared, but some companies may bill for a box of paper while others bill for complex technical equipment like nuclear power plants. The issuing of an invoice concerning a nuclear power plant will surely take more time and IT infrastructure than the invoice concerning a box of paper. Therefore the amount of invoice line items could be an adequate measure to specify an invoice's complexity.

2.4.4 Calculation and interpretation of the efficiency measures via BCC algorithm

During the design of the financial chain questionnaire and the subsequent pretest it became very obvious that none of the pretested companies was able to specify their average amount of invoice line items, which could specify the invoice's complexity. As an alternative, the minimal and maximal invoice amounts were asked for, which should provide an indication of the amount of invoice line items. With these minimal and maximal invoice amounts an average invoice value was calculated that serves as an indicator for the complexity of an invoice. This factor should also be included in a recalculation of the efficiency scores.


Contrary to the other inputs and outputs, the complexity of invoices cannot be controlled by the DMUs because only customers can influence this value by their orders. Therefore the average invoice complexity must be treated as a nondiscretionary variable (as described in 2.3.2) that cannot be altered. Besides the 2 inputs and 1 output described in 2.2, the complexity is integrated in the DEA as a nondiscretionary output that expresses the invoice's complexity. The DEA results calculated with 4 variables are illustrated in the right column of table 4. As a comparison, the formerly computed efficiency scores are also presented in the second and third columns from the left. Please notice that for two of the 29 companies (DMU 6 and 18) no efficiency score with the invoice complexity variable could be calculated because these two companies did not answer the questions concerning the minimal and maximal invoice amount.

DMU   Efficiency scores with CRS   Efficiency scores with NDRS   Efficiency scores with NDRS and invoice complexity measure
1     8,805755%     100,000000%   100,000000%
2     100,000000%   100,000000%   100,000000%
3     22,109850%    54,446168%    50,703809%
4     2,610365%     22,744732%    22,981477%
5     81,475790%    100,000000%   100,000000%
6     100,000000%   100,000000%   -
7     71,452730%    71,452725%    100,000000%
8     100,000000%   100,000000%   100,000000%
9     0,101547%     1,735854%     2,858289%
10    0,037465%     2,079490%     2,072580%
11    0,001020%     0,711000%     0,839164%
12    0,878752%     8,038596%     8,562904%
13    0,031023%     1,761948%     3,297872%
14    0,000243%     0,176644%     0,169008%
15    0,001664%     0,814917%     0,832828%
16    0,003355%     0,944345%     0,989283%
17    0,000475%     0,615500%     0,629248%
18    0,003287%     0,362059%     -
19    0,000173%     0,202670%     0,202814%
20    0,004171%     0,152398%     0,151461%
21    0,000003%     0,023555%     0,022534%
22    0,015625%     0,625005%     31,187884%
23    0,000261%     0,076197%     0,075730%
24    0,000018%     0,119458%     0,462471%
25    0,000092%     0,121617%     0,379369%
26    0,000342%     0,126871%     0,136154%
27    0,001298%     0,063181%     0,063967%
28    0,000463%     0,047638%     0,047865%
29    0,000006%     0,038899%     0,039452%

Table 4: Comparison of efficiency scores calculated with constant returns to scale, nondecreasing returns to scale, and nondecreasing returns to scale with the invoice complexity measure

Now, with 27 benchmarked DMUs, 5 of them are efficient. Compared to the benchmarking under nondecreasing returns to scale illustrated in the third column from the left, all units that were efficient under nondecreasing returns to scale stay efficient in the complexity variable case. Unfortunately, DMU 6, which was deemed to be efficient when benchmarked with nondecreasing returns to scale, did not answer the questionnaire properly enough to compare its efficiency score. The efficiency score of DMU 7 was only 71% in the nondecreasing returns to scale case; the inclusion of the invoice's complexity boosts its efficiency score to 100%. The other DMUs do not significantly change their efficiency scores. Still, 55% of all companies have an efficiency score smaller than 1%.


3 Conclusion

Data Envelopment Analysis is a suitable method for benchmarking financial processes that can be characterized by inputs and outputs. Data Envelopment Analysis recognizes different returns to scale and can thus identify technical and scale efficiency with its diverse model variants.

The benchmarked invoice process of the top 1,000 companies in Germany can be described by the two inputs, IT costs and number of people working within the invoice process, and the outputs, number of processed invoices and a complexity measure that further specifies the invoices, since the complexity of invoices differs between the analyzed companies.

By calculating three different DEA models, the correct returns to scale and input/output combinations were identified. First, a calculation with constant returns to scale but without the complexity measure shows no proper results: only three DMUs are deemed (technically) efficient, but 21 have an efficiency score beneath 1%. Thus, the assumption of constant returns to scale within the invoice process is not accurate. Under the consideration of nondecreasing returns to scale, two more DMUs reach an efficiency measure of one.

The extension of the outputs with the invoice complexity measure and a new calculation with nondecreasing returns to scale show a sound result: 5 companies are efficient.


Appendix No. 1

The described example of the four different branches can be diagrammed in the following coordinate system. Branches are marked with black dots, virtual producers with grey ones:

[Figure: check account transactions on the abscissa (0 to 600), customer services on the ordinate (0 to 45); the branches B1 to B4 are marked with black dots and the virtual producer VP3 with a grey dot on the efficient frontier between B1 and B2.]

For a computation of the factors by which branches 1 and 2 have to be scaled down to produce as much output as branch 3, and furthermore for the computation of the linear combination of branches 1 and 2 that becomes the virtual producer VP3, two functions are necessary:

1. the efficient frontier between branch 1 and 2 (the solid black line in the figure above)

2. the radius vector through the zero point connecting the zero point and the virtual producer VP3 (the solid grey line in the figure above).

To receive these equations we take the linear equation y = m·x + b and insert the available values of the branches:

ad 1.)

10 = m·500 + b
40 = m·100 + b

As a result we receive b = 47,5 and m = -0,075.

Efficient frontier: y = -0,075·x + 47,5 (I)

ad 2.)

15 = m·200 + b

As the radius vector has an axis intercept of zero, b = 0.

As a result we receive m = 0,075.

Radius vector through the zero point: y = 0,075·x (II)

Now equations (I) and (II) are equated and we receive the coordinates of VP3:

Output 1 (Check account transactions) = 316,67

Output 2 (Customer services) = 23,75


With the coordinates of VP3, the fractions f_x (f_y) of branch 1 (branch 2) that generate VP3 can be calculated:

500·f_x + 100·f_y = 316,67
10·f_x + 40·f_y = 23,75

f_x = 0,5417
f_y = 0,4583

With the coordinates of VP3, the factors s_x (s_y) by which branches 1 and 2 have to be scaled down are computed as follows:

500·s_x + 100·s_y = 200
10·s_x + 40·s_y = 15

s_x = 0,3421
s_y = 0,2894
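These two 2×2 systems can be verified in a few lines (an illustrative sketch added here, not part of the original appendix):

# Sketch: verifying the appendix-1 computations with NumPy.
import numpy as np

# (I) efficient frontier through branch 1 (500, 10) and branch 2 (100, 40): y = m*x + b
m, b = np.linalg.solve(np.array([[500.0, 1.0], [100.0, 1.0]]), np.array([10.0, 40.0]))
print(m, b)                                  # -0.075 and 47.5

# (II) radius vector through the origin and branch 3 (200, 15)
slope = 15.0 / 200.0
x_vp = b / (slope - m)                       # intersection of (I) and (II) = coordinates of VP3
y_vp = slope * x_vp
print(x_vp, y_vp)                            # approx. 316.67 and 23.75

A = np.array([[500.0, 100.0], [10.0, 40.0]])
print(np.linalg.solve(A, np.array([x_vp, y_vp])))      # fractions f_x, f_y: approx. 0.5417, 0.4583
print(np.linalg.solve(A, np.array([200.0, 15.0])))     # scaling factors s_x, s_y: approx. 0.3421, 0.2895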


Appendix No. 2

DMU 1:

! Linear CCR program (equation 9) for DMU 1 in LINGO syntax;
! u is the weight of the single output, v and w are the weights of the two inputs;
! LINGO treats all variables as non-negative by default;

Max = 216000*u;

216000*u <= 1.39*v + 0.1852*w;
4200000*u <= 2.38*v + 0.0005*w;
2160000*u <= 6.94*v + 0.0009*w;
480000*u <= 10.42*v + 0.0104*w;
20544000*u <= 58.41*v + 0.0001*w;
50400000*u <= 59.52*v + 0.0002*w;
60000000*u <= 83.33*v + 0.0033*w;
72000*u <= 138.89*v + 0.0000*w;
360000*u <= 277.78*v + 0.0278*w;
132120*u <= 302.76*v + 0.0227*w;
6000*u <= 333.33*v + 0.3333*w;
4320000*u <= 578.70*v + 0.0023*w;
480000*u <= 1770.83*v + 0.0167*w;
7200*u <= 2083.33*v + 0.2778*w;
54000*u <= 3703.70*v + 0.0370*w;
120000*u <= 4166.67*v + 0.0250*w;
20400*u <= 4901.96*v + 0.0490*w;
240000*u <= 8333.33*v + 0.0833*w;
16800*u <= 10714.29*v + 0.1786*w;
480000*u <= 12500.00*v + 0.2500*w;
960*u <= 15625.00*v + 2.0833*w;
1800*u <= 22222.22*v + 0.0000*w;
60000*u <= 25000.00*v + 0.5000*w;
3960*u <= 25252.53*v + 0.2525*w;
24360*u <= 30788.18*v + 0.2053*w;
110400*u <= 38043.48*v + 0.1359*w;
480000*u <= 41666.67*v + 0.5208*w;
204000*u <= 49019.61*v + 0.7353*w;
3600*u <= 69444.44*v + 0.8333*w;

! normalisation of DMU 1's weighted inputs;
1.39*v + 0.1852*w = 1;


Bibliography

Anderson, T. “A Data Envelopment Analysis (DEA) Home Page, Portland State University”, http://www.emp.pdx.edu/dea/homedea.html, [2004/02/02].

Bala, K., and Cook, W. D. “Performance measurement with classification information: an enhanced additive DEA model” Omega - The International Journal of Management Science (31:6), 2003, pp. 439-450.

Ballou, R. H. Business Logistics/Supply Chain Management, 5th ed., Prentice Hall, New Jersey, 2004.

Banker, R. D., and Morey, R. C. “Efficiency Analysis for Exogenously Fixed Inputs and Outputs” Operations Research (34:4), 1986a, pp. 513-521.

Banker, R. D., and Morey, R. C. “The Use of Categorical Variables in Data Envelopment Analysis” Management Science (32:12), 1986b, pp. 1613-1627.

Bowlin, W. F. Measuring Performance: An Introduction to Data Envelopment Analysis (DEA), http://llanes.panam.edu/edul8305/papers/introtodea.pdf, 1998, [2004/01/04].

Cantner, U., and Hanusch, H. „Effizienzanalyse mit Hilfe der Data-Envelopment-Analysis“ Wirtschaftswissenschaftliches Studium (27:5), 1998, pp. 228-237.

Charnes, A., Cooper, W. W., Lewin, A. Y., and Seiford, L. M. Data Envelopment Analysis: Theory, Methodology, and Application, Kluwer Academic Publishers, Boston, 1994.

Castanias, R. P., and Helfat, C. E. “The managerial rents model: Theory and empirical analysis” Journal of Management (27:6), 2001, pp. 661-678.

Church, A. H. “Organisation by production factors” The engineering magazine (38), 1909, pp. 184-194.

Cooper, W. W., Seiford, L. M., and Tone, K. Data Envelopment Analysis – A Comprehensive Text with Models, Applications, References and DEA-Solver Software, Kluwer Academic Publishers, Boston, 2003.

Golany, B., and Roll, Y. “An Application Procedure for DEA” Omega International Journal of Management Science (17:3), 1989, pp. 237-250.

Holling, H., Lammers, F., and Pritchard R. D. Effektivität durch Partizipatives Produktivitätsmanagement - Überblick, neue theoretische Entwicklungen und europäische Fallbeispiele, http://www.hogrefe.de/aktuell/3-8017-0842-X_lese.html, 1999, [2004/03/01].

Homburg, C. „Benchmarking durch Data Envelopment Analysis“ Wirtschaftswissenschaftliches Studium (29), 2000, pp. 583-587.

Legner, C. Benchmarking informationssystemgestützter Geschäftsprozesse, PhD thesis, St. Gallen, http://verdi.unisg.ch/org/iwi/iwi_pub.nsf/wwwPublAuthorEng/87C790B3BDD2D05CC1256BD700347F72/$file/Diss990724_druck.pdf, 1999, [2004/01/26].

Mentzer, J. T., DeWitt, W., Keebler, J. S., Min, S., Nix, N. W., Smith, C. D., and Zacharia, Z. G. “Defining Supply Chain Management” Journal of Business Logistics (22:2), 2001, pp. 1-26.

Mészáros, C. “BPMPD Home Page”, http://www.sztaki.hu/~meszaros/bpmpd/, 1997, [2003/12/28].

Parsons L. J. “Productivity Versus Relative Efficiency In Marketing: Past And Future?” in Research Traditions In Marketing (Recent Economic Thought), Lilien G., Laurent G., and Pras B. (eds.), Kluwer Academic Publishers, Amsterdam, 1992, pp. 169-196.

Pfaff, D., Skiera, B., and Weiss, J. Financial Supply Chain Management, Galileo Press, Bonn, 2004.

Pfaff, D., Skiera, B., and Weitzel, T. „Financial-Chain-Management – Empirische Fundierung und Ansätze zur Verbesserung“ forthcoming in WIRTSCHAFTSINFORMATIK, 2004.

Scheel, H. Effizienzmaße der Data Envelopment Analysis, PhD thesis, Deutscher Universitäts-Verlag, Wiesbaden, 2000.

Schefczyk, M. „Data Envelopment Analysis“ Die Betriebswirtschaft (56:2), 1996, pp.167-183.

Schmid, F. A. „Messung technischer Effizienz durch DEA - Ein Kommentar zu Sheldon“ ifo Studie: Zeitschrift für empirische Wirtschaftsforschung (40), 1994, pp. 227-236.

Skiera, B., König, W., Gensler, S., Weitzel, T., Beimborn, D., Blumenberg, S., Franke, J., and Pfaff, D. Financial-Chain-Management – Prozessanalyse, Effizienzpotenziale und Outsourcing – Eine empirische Untersuchung mit den 1.000 größten deutschen Unternehmen, Research report, Frankfurt, http://www.efinancelab.com/pubs/pubs2003/fcm.pdf, 2003, [2003/10/02].

Steinmann, L. Konsistenzprobleme der Data Envelopment Analysis in der empirischen Forschung, PhD thesis, http://www.soi.unizh.ch/research/publications/steinmann.pdf, Zürich, 2002.

Töpfer, A. „Benchmarking“ Wirtschaftswissenschaftliches Studium (26), 1997, pp. 202-205.

Ulrich, P. Organisationales Lernen durch Benchmarking, Deutscher Universitäts-Verlag, Wiesbaden, 1998.

Wöhe, G. Einführung in die Allgemeine Betriebswirtschaftslehre, 15th ed., Vahlen, München, 1984.

Wöhe, G. Einführung in die Allgemeine Betriebswirtschaftslehre, 19th ed., Vahlen, München, 1996.