

European Journal of Operational Research 173 (2006) 1000–1011

www.elsevier.com/locate/ejor

Operational Use evaluation of IT investments: An investigation into potential benefits

Hussein Al-Yaseen a,1, Tillal Eldabi b,*, David Y. Lees b, Ray J. Paul b,c

a Department of Information Technology, Al-Ahliyya National University, 247, Amman 19328, Amman, Jordan
b School of Information Systems, Computing, and Mathematics, Brunel University, Uxbridge, Middlesex, UB8 3PH, United Kingdom

c Department of Information Systems, London School of Economics, Houghton Street, London WC2A 2AE, United Kingdom

Available online 16 August 2005
doi:10.1016/j.ejor.2005.07.001

* Corresponding author. E-mail address: [email protected] (T. Eldabi).
1 Previously at: Brunel University.

Abstract

The process of evaluation of IT projects often seems to cease just as quantifiable results start to become available—in Operational Use (OU). This paper investigates OU IT evaluation and contrasts it with the evaluation undertaken during the specification, construction, and testing of IT projects, which we choose to call Prior Operational Use (POU) evaluation to distinguish it from OU. Analysis of 123 usable responses from the FTSE 500 companies shows that many companies appear not to undertake OU evaluation. However, where OU evaluation was conducted, it appears to be of clear value to the organisations. Benefits claimed include the ability to assess deviations from original plans, and to provide a basis for validating the original methods used (in their POU evaluations).
© 2005 Elsevier B.V. All rights reserved.

Keywords: Prior Operational Use evaluation; Operational Use evaluation; IT investment appraisal

1. Introduction

Expenditure on information technology (IT) in the United Kingdom—and other countries for that matter—is continuously increasing as companies rely more and more on IT. Consequently, the issue of IT evaluation is increasingly a concern for all decision makers. Currently, a large percentage of organisational new capital investment is spent on IT, directly or indirectly. Managers would like to be sure that investment in IT is economically justifiable (Farbey et al., 1993). Justifying expenditure on IT is a long-standing problem, and managers for the past few decades have expressed concerns about the value they are getting from IT investments; moreover, they have been searching for ways to evaluate and justify the use of IT. 'Many conduct cost/benefit evaluation on projects, but most of them have an element of fiction'. The saddest part is that it is not just the benefits that are fictional, but the costs are as well (Farbey et al., 1993). Such a continuous increase in investment, coupled with a continuous need for justification, presents a challenge to the information systems community.

Many authors agree that evaluation of investment is a key issue for such IT projects and their management: Kumar (1990), Dabrowska and Cornford (2001), and Irani et al. (2002). Investment justification and evaluation of effectiveness is traditionally—within fields other than IT—a complex process. However, analysts usually manage to get an answer which they can feel confident is a valid representation of the real value. But in IT, confidence in measures has never reached a level similar to traditional products. Many organisations report that they are uncertain about how to measure the impact and the outcomes of their IT investments. This is mainly attributable to the fact that IT returns on investment are mostly intangible, which makes them difficult to measure using traditional accounting practice.

IT evaluation has been widely explored in order to resolve the above issues and in search of reliable measurement drivers. Most of the theoretical literature on IT evaluation (such as Bradford and Florin, 2003; Gunasekaran et al., 2001; Lin and Pervan, 2003; Liu et al., 2003; Remenyi et al., 2000; Irani and Love, 2002) tends to depart from the traditional accounting-based evaluation methods by appreciating the intangible aspects of IT benefits as well as the tangible ones. Authors are more inclined to view evaluation as part of the planning activity only or, in some cases, as part of the development process. There are also a number of empirical studies—such as those reviewed by Ballantine et al. (1996)—which examined ex-ante evaluation, yet only a few (for example Kumar, 1990; and to some extent Beynon-Davies et al., 2004) have explored ex-post evaluation.

Generally speaking, most empirical and theoretical articles (with very few exceptions) tend to classify IT evaluation as a planning activity, or take a temporal view along the development life-cycle only to stop short of the operational phase. Although a number of the above authors have touched upon this phase, evaluation activities there are still not represented as integral parts of the evaluation process. The extent to which organisations adopt rigorous evaluation at the operational phase is unknown.

In this paper, we aim to empirically explore the evaluation process by extending the temporal view—with more concentration on the operational phase—in order to understand issues related to IT evaluation after project completion. We start in the following section by defining IT evaluation for the purpose of this research. We then use this as the theoretical basis for the collection of data from major companies in the UK regarding their approaches and processes for IT project evaluation, as well as their rationale for and application of any OU evaluation that they conducted. The section after that redefines the research problem and the key research questions in relation to the two forms of evaluation. The next sections discuss the research methodology, data collection, results, and synthesis, respectively. In the final section, we present lessons learned from this research.

2. The purposes and forms of evaluation

Evaluation has been defined as the process of assessing the worth of something (Beynon-Davies et al., 2000). Another definition is the process of establishing—by quantitative or qualitative means—the worth of IT to the organisation (Willcocks, 1992). We take the stance that evaluation is a process that takes place at different points in time, or continuously, explicitly searching (quantitatively or qualitatively) for the impact of IT projects (Eldabi et al., 2003). The value of this latter definition is that it explicitly recognises the different stages in the full lifecycle of an Information System in which evaluation is performed, and provides the opportunity to discriminate between two decidedly different views of the evaluation process, each serving different aims.

The first view of evaluation is as a means to gain direction in the IS project. Here, 'predictive' evaluation is performed to forecast the impact of the project. Using financial and other quantitative estimates, the evaluation process provides support and justification for the investment through the forecasting of projected baseline indicators such as Payback, Net Present Value (NPV) or Internal Rate of Return (IRR) (Farbey et al., 1993; Liu et al., 2003; Yeo and Qiu, 2003). It is known variously as 'ex-ante' evaluation (Remenyi et al., 2000), 'formative' evaluation (Brown and Kiernan, 2001), or, as we shall refer to it, 'Prior Operational Use' (POU) evaluation. This form of evaluation guides the project, and may lead to changes in the way the system is structured and carried out. It does not, however, give any feedback beyond the design, implementation, and delivery of the project outcomes.
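As a reminder of what these baseline indicators compute (the formulas below are the standard textbook definitions, not reproduced from this paper), for projected cash flows CF_t over an appraisal horizon T and discount rate r:

\mathrm{NPV}(r) = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^{t}}, \qquad
\mathrm{IRR} = r^{*} \ \text{such that}\ \mathrm{NPV}(r^{*}) = 0, \qquad
\mathrm{Payback} = \min\Big\{ \tau : \sum_{t=0}^{\tau} CF_t \ge 0 \Big\}.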

In contrast, evaluation can also be considered in terms of the effectiveness of the IT system in situ—what a system actually accomplishes in relation to its stated goals (Al-Yaseen et al., 2004; Eldabi et al., 2003). This form of evaluation draws on real rather than projected data, and can be used to justify adoption (Love and Irani, 2001; Irani, 2002); estimate the direct cost and the tangible benefits of the system (Liu et al., 2003); ensure that the system meets requirements (Irani, 2002); measure the system's effectiveness and efficiency (Poon and Wagner, 2001); or measure the quality of programs and estimate indirect and other costs (Love and Irani, 2001; Eldabi et al., 2003). This type of evaluation should be performed during the operational phase of the project. We shall refer to this type as 'Operational Use' (OU) evaluation. Fig. 1 shows these forms of evaluation with respect to the life cycle from a system's inception to the end of its useful life.

Fig. 1. IS/IT evaluation types in the systems' life cycle (Prior Operational Use evaluation during the development stages; Operational Use evaluation once the system is in operational use).

3. The problem and the research opportunity

Most of the literature (such as Beynon-Davies et al., 2000; Farbey et al., 1999; Jones and Hughes, 2000; Walsham, 1999; Remenyi et al., 2000) attempts to improve the process of evaluation by means of either (a) consolidating and enumerating more factors to consider in the evaluation, or (b) adding more theoretical rigour to the techniques used (Irani, 2002; Irani and Love, 2002). As mentioned above, most studies concentrate on what we term the POU phase, with a strong emphasis on the early stages of development. In contrast, we find that OU evaluation has only rarely been studied. The most recent and comprehensive empirical study in this category was conducted 15 years ago in Canada by Kumar (1990).

The main problem is that there is no body of knowledge in the area to help improve the techniques used in evaluation at this stage, which encourages decision makers to refrain from employing it altogether. For this reason we decided to research practitioners' perceptions of the evaluation process and the evaluation practices adopted within large organisations. We attempt to obtain insights into OU evaluation in order to identify the real extent to which OU evaluation is practised and what lessons could be learned to improve knowledge about it. To do that, we believe, the following questions need to be answered by the practitioners most involved with the evaluation processes; these questions form the platform for our research activity:

• What is the balance between POU and OU IT evaluations?

• What are the main reasons for adopting each of the evaluation types?

• What criteria are currently being used for evaluating IT investment in each type of evaluation?



• When is Operational Use evaluation performed?

• What are the main reasons for adopting a comparison between the outcomes of the evaluation types?

4. Research approach

The research theme is based on a comparison between POU and OU as a means for identifying current practices of OU evaluation and understanding its application within an organisational context. As suggested by the argument of Tashakkori and Teddlie (2003), we opted for quantitative research through questionnaires as an appropriate instrument base for starting the research. No doubt other research approaches would be beneficial, and we anticipate other researchers might follow up on this. The following section describes the processes of questionnaire design, deployment, and analysis used, and summarises the participant characteristics. Fig. 2 presents the sequential structure of the research phases and the activities within each.

Fig. 2. Research phases.

4.1. Research phases

Phase one reviews both types of evaluation (POU and OU). The main issues identified in the literature were then used to develop a questionnaire that focuses on how organisations carry out evaluation of their IT systems. The questionnaire is split into six sections centred on gathering information on:

1. organisational background;
2. information technology infrastructure;
3. business issues of IT investment;
4. Prior Operational Use evaluation in different stages of the system life cycle (feasibility, design, implementation, and testing and completion);
5. Operational Use evaluation; as well as
6. other information related to both types of evaluation.

Before the formal survey was sent to the companies, two pilot iterations were conducted. The first iteration involved four doctoral students. Based on their feedback, certain items in the questionnaire were modified, along with minor layout changes made to improve clarity and readability. The second iteration involved four professionals—two academics, one IT manager in a business organisation, and one business analyst in another organisation. There were only cosmetic changes at this iteration, giving us the confidence to issue the questionnaire.

In phase two, the questionnaire developed in phase one was sent to the top 500 organisations in the UK (the FTSE 500). The questionnaires were mailed to IT managers or top executives. As shown in Table 1, returns covered a variety of organisations from financial services, information technology, manufacturing, transport, central government, consultancy, retail/wholesaling, and publishing. Of the 500 questionnaires posted, 152 responses were received; 18 were returned unanswered and 11 were returned but incomplete. The latter two categories of responses were ignored, making the final number of usable responses 123 and giving a response rate of 24.6%. This rate was considered to be above expectation, given that the generally accepted average response rate for non-incentive-based questionnaires is around 20%.

Table 1
Organisations in the sample

Organisation              Percentage
Financial services        19
Manufacturing             15
Information technology    14
Retail/wholesaling         9
Computer manufacturing     7
Central government         6
Consultancy                6
Transport                  5
Publishing                 3
Others                    16

In phase three, we analysed the data from the questionnaire responses using a combination of parametric statistical methods, Descriptive Analysis and Factor Analysis (Pett et al., 2003). Organisations were asked to select from a list the closest choice of reason for adopting each of Prior Operational Use and Operational Use evaluation. A summary of the key responses to the questionnaire—the reasons for adopting Prior Operational Use evaluation (codename: POUeR) and Operational Use evaluation (codename: OUeR)—is tabulated in Appendix A, along with the OU evaluation criteria (codename: OUeC). Each of these variables was measured on a five-point Likert scale (1 = not important and 5 = very important).

Fig. 3. Eigenvalue of the reasons for adopting Prior Operational Use evaluation.

For technically interested readers, we report that a factor analysis technique was employed in order to identify possible categories. Factor analysis was performed in three steps (following Berthold and Hand, 2003):

(1) A matrix of correlation coefficients for all possible pairings of the variables was generated.

(2) Factors were then extracted from the correlation matrix using principal factors analysis.

(3) The factors were rotated to maximise the relationships between the variables and some of the factors and to minimise association with others, using Varimax rotation with Kaiser normalisation, which maintained independence among the mathematical factors. The eigenvalues determined which factors remained in the analysis. Following Kaiser's criterion, factors with an eigenvalue of less than 1 were excluded. A scree plot provides a graphic image of the eigenvalue for each component extracted (see Figs. 3 and 4, and the illustrative sketch below).
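The paper does not state which software was used for these steps. As an illustration only, the following minimal Python sketch (using numpy, with a randomly generated stand-in for the 123 x 24 response matrix) shows the eigenvalue side of the procedure: the correlation matrix, Kaiser's criterion, and the explained-variance figure of the kind quoted with the scree plots. The principal-factors extraction and Varimax rotation themselves would normally be done with a dedicated statistics package.

import numpy as np

# Stand-in for the survey data: 123 respondents rating 24 Likert items (1-5).
# The real responses are not published; this is random data for illustration.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(123, 24)).astype(float)

# Step 1: correlation matrix for all possible pairings of the variables.
corr = np.corrcoef(responses, rowvar=False)

# Step 2 (simplified): eigen-decomposition of the correlation matrix;
# each eigenvalue measures the variance a candidate factor would explain.
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Step 3: Kaiser's criterion - keep factors with eigenvalue > 1 and report
# the share of total variance they explain (the quantity quoted in Figs. 3-5).
retained = eigenvalues[eigenvalues > 1.0]
explained = retained.sum() / eigenvalues.sum()
print(f"{retained.size} factors retained, explaining {explained:.2%} of the variance")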

5. Respondents' characteristics

The average monthly IT budget for the organisations in the sample was £2,513,000, with the median at £1,645,000. 25% of the participating organisations have a monthly IT budget exceeding £2,660,000, and 10% of the participating organisations have a monthly IT budget of £5,903,000. On average, the participating organisations had been using IT for approximately 16–20 years, and most had a history of more than 20 years of using IT. 85% of the participating organisations had a central integrated IT infrastructure department, while in the remaining 15% each department had its own IT infrastructure. 8.1% of the participating organisations had adopted IT as a response to problem(s); 26% had adopted IT searching for ways of improving effectiveness and standing in the marketplace; and 65.9% had adopted IT systems for both reasons.

Fig. 4. Eigenvalue of the reasons for adopting Operational Use evaluation (three factors explaining 92.31% of all the variance).

Table 2
Reasons for adopting Prior Operational Use evaluation—Factor analysis

Reason     Factor                                  Loading
POUeR1     System completion and justification     0.967
POUeR2     System completion and justification     0.982
POUeR3     System completion and justification     0.991
POUeR4     System completion and justification     0.986
POUeR5     System completion and justification     0.950
POUeR6     System completion and justification     0.942
POUeR7     System completion and justification     0.972
POUeR8     System completion and justification     0.970
POUeR9     System completion and justification     0.955
POUeR10    System completion and justification     0.966
POUeR11    System costs                            0.898
POUeR12    System costs                            0.919
POUeR13    System costs                            0.884
POUeR14    System costs                            0.842
POUeR15    System costs                            0.899
POUeR16    System costs                            0.880
POUeR17    System costs                            0.902
POUeR18    System costs                            0.932
POUeR19    System costs                            0.926
POUeR20    System costs                            0.936
POUeR21    System benefits                         0.775
POUeR22    System benefits                         0.861
POUeR23    System benefits                         0.828
POUeR24    Other reasons                           0.792

Note: Only loadings greater than 0.50 are shown.

6. Data analysis and preliminary findings

This section presents aggregated results from direct answers to the research questions mentioned above. The basic issues considered here are: reasons for adopting either type of evaluation, criteria for evaluation, reasons for comparisons between the two types, and reasons for any gaps found between their outcomes.

6.1. Reasons for adopting Prior Operational Use evaluation

The results are presented in Table 2. Using a factor analysis cut-off level of 0.5, four factors were considered the main reasons for adopting Prior Operational Use evaluation (explaining 91.47% of the variance—see Fig. 3), which we describe as 'system completion and justification', 'system costs', 'system benefits', and 'other reasons'.
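To make the 0.5 cut-off concrete: after rotation, each variable is assigned to the factor on which its loading exceeds 0.5. A minimal sketch of that assignment step is given below; the loading matrix shown is hypothetical and is not the paper's data.

import numpy as np

# Hypothetical rotated loading matrix: rows = variables, columns = factors.
loadings = np.array([
    [0.97, 0.12, 0.08, 0.05],   # e.g. a 'system completion and justification' item
    [0.10, 0.88, 0.15, 0.07],   # e.g. a 'system costs' item
    [0.05, 0.11, 0.79, 0.02],   # e.g. a 'system benefits' item
    [0.04, 0.09, 0.12, 0.81],   # e.g. an 'other reasons' item
])
factors = ["system completion and justification", "system costs",
           "system benefits", "other reasons"]
CUTOFF = 0.5

# Report, for each variable, the factor(s) on which it loads above the cut-off.
for i, row in enumerate(loadings, start=1):
    salient = [factors[j] for j in np.flatnonzero(np.abs(row) > CUTOFF)]
    print(f"variable {i}: {', '.join(salient) if salient else 'no salient loading'}")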

The first factor, 'system completion and justification', is highly correlated with ten variables; the second factor, 'system costs', is highly correlated with ten variables; the third factor, 'system benefits', is highly correlated with three variables; whilst the fourth factor, 'other reasons', is highly correlated with one variable, barriers of adopting the system, which was also found to be the least evaluated reason in practice, as shown in Table 2. A glossary of variables is given in Appendix A.

6.2. Reasons for adopting Operational Use evaluation

The most important reasons for adopting Operational Use evaluation were identified from a five-point Likert scale ranging from 1 (not important) to 5 (very important). The results are presented in Table 3. Employing a factor analysis cut-off level of 0.5, three factors were considered the main reasons for adopting Operational Use evaluation (explaining 92.31% of the variance—see Fig. 4), which we call 'system costs', 'system benefits', and 'other reasons'.

Table 3
Reasons for adopting Operational Use evaluation—Factor analysis

Variable   Factor            Loading
OUeR1      Other reasons     0.951
OUeR2      Other reasons     0.951
OUeR3      Other reasons     0.941
OUeR4      Other reasons     0.912
OUeR5      System benefits   0.977
OUeR6      System benefits   0.978
OUeR7      System benefits   0.982
OUeR8      System costs      0.946
OUeR9      System costs      0.919
OUeR10     System costs      0.922

Note: Only loadings greater than 0.50 are shown.

6.3. Operational Use evaluation criteria

The results are presented in Table 4. A factor analysis cut-off level of 0.5 was employed; the Operational Use evaluation criteria resulted in four factors explaining 87.03% of the variance (Fig. 5), which we termed 'system completion', 'system information', 'system impact', and 'other criteria'.

Fig. 5. Eigenvalue of Operational Use evaluation criteria (four factors explaining 87.03% of all the variance).

Table 4
Operational Use evaluation criteria—Factor analysis

Criterion  Factor              Loading
OUeC1      System completion   0.973
OUeC2      System completion   0.869
OUeC3      System completion   0.894
OUeC4      System completion   0.865
OUeC5      System completion   0.776
OUeC6      System completion   0.973
OUeC7      System completion   0.973
OUeC8      System information  0.784
OUeC9      System information  0.974
OUeC10     System information  0.979
OUeC11     System information  0.874
OUeC12     System information  0.842
OUeC13     System impact       0.959
OUeC14     System impact       0.874
OUeC15     System impact       0.928
OUeC16     System impact       0.849
OUeC17     Other criteria      0.933

Note: Only loadings greater than 0.50 are shown.

The first factor, 'system completion', is highly correlated with seven criteria; the second factor, 'system information', is highly correlated with five criteria; the third factor, 'system impact', is highly correlated with four criteria; whilst 'other criteria' is correlated with one criterion—net operating costs—which was also found to be the least evaluated criterion in practice. Table 4 shows the construct loadings for the Operational Use evaluation criteria.

6.4. Reasons for adopting a comparison between Prior Operational Use and Operational Use evaluation

Most of the organisations (77.7%) that carried out a formal OU evaluation compared its outcomes with those of their POU evaluation, and found that there was an important 'gap', or inconsistency, between the evaluations. This gap comprised three major dimensions—gaps in estimating the systems' economic lifespan, costs, and benefits.

The main reasons for adopting a comparison were again identified using a five-point Likert scale ranging from 1 (not important) to 5 (very important). The two most important reasons were to check that the planned benefits were achieved and to compare planned with actual costs. The two least important reasons for the comparison were to record lessons for the future and to improve the evaluation process for future systems.

6.5. Reasons for the gap between POU and OU evaluation

The main reasons for the gap between the two sets of evaluation outcomes were measured on a five-point Likert scale ranging from 1 (not important) to 5 (very important), as shown in Table 5.

Table 5
Reasons for the gap between POU and OU evaluation

Reason                                              Mean   Standard deviation
Lack of an appropriate evaluation method            4.67   0.48
Lack of agreement on evaluation criteria            4.58   0.50
Groups who are involved in the evaluation process   4.49   0.55
Intangible benefits of the system                   4.42   0.62
Availability of qualified evaluator                 4.36   0.65
Changes to user requirements                        4.11   0.86
Changes to system requirements                      4.09   0.73
Maintenance costs of the system                     3.91   0.73
Operational costs of the system                     3.76   0.68
Indirect costs of the system                        3.53   0.69
Changes to the market's requirements                3.44   0.69
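The figures in Table 5 are ordinary item means and standard deviations of the five-point ratings. A minimal sketch of that calculation (with made-up ratings rather than the survey responses) is:

import numpy as np

# Made-up Likert ratings (1-5): rows = respondents, columns = gap reasons.
ratings = np.array([
    [5, 4, 5],
    [4, 5, 4],
    [5, 5, 4],
    [5, 4, 5],
], dtype=float)
reasons = ["Lack of an appropriate evaluation method",
           "Lack of agreement on evaluation criteria",
           "Intangible benefits of the system"]

# Item mean and sample standard deviation, as reported in Table 5.
for name, mean, sd in zip(reasons, ratings.mean(axis=0), ratings.std(axis=0, ddof=1)):
    print(f"{name}: mean {mean:.2f}, s.d. {sd:.2f}")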

7. Synthesis

All of the responding organisations carry out formal POU evaluation, but only about a third (36.5%) currently perform a formal OU evaluation of IT use. This means that about two-thirds (63.5%) of the organisations do not gather any evidence to establish how successful their IT projects were, and therefore cannot use information from OU evaluation to improve their evaluation techniques.



The most popular reasons for adopting POU evaluation were related to formal aspects of signing off the project (based around traditional measures such as meeting requirements, and achieving agreed metrics for effectiveness, usage, efficiency, security, performance, etc.) and to system costs. The two other factors—systems' benefits and adoption barriers—were found to be less important. On the other hand, amongst the 45 organisations that perform OU evaluation, the most frequent reason for adopting it was to do with the systems' benefits (both tangible and intangible). Most of these sampled organisations attach greater importance to the measurement of benefits than to the measurement of costs. The most frequently cited criterion for OU evaluation was system information (accuracy of information, timeliness and currency of information, adequacy of information, and quality of programs). The most important claimed use and benefit of adopting OU evaluations was system cost (operational cost, training cost, maintenance cost, upgrade cost, reduction in other staff costs, reduction in salaries, and other expenses saved).

Results suggest that most decision makers do not place much importance on OU evaluation of their IT systems. Most managers tend to think of it only as a formality rather than a proper evaluation process. It can be postulated that such a perception plays an important role in hindering the adoption of OU evaluation. Results also provide evidence that OU evaluation is useful if it is perceived as more than just a formality. For example, amongst the 45 organisations that adopt OU evaluation, those companies that perform it seriously tend to gain considerable benefits, including the validation of their original POU evaluation estimates. More importantly, OU evaluation helps those organisations to better appreciate and capture the intangible benefits associated with IT. Evidently, if IT evaluation is starting to capture the benefit side more than the cost side, then OU evaluation—given the above results—should play an important role in gauging such benefits.

To summarise the findings, it is clear that practitioners are not appreciating the full benefits of OU evaluation and need to be made aware of them. Such lack of appreciation is evidently behind the apparent scarcity of implementations of OU evaluation, which negatively feeds back into perceptions, and so forth.

8. Conclusions

The main aim of this research was to capture a picture of Operational Use (OU) evaluation in contrast with Prior Operational Use (POU) evaluation as practised within UK organisations, in order to understand the obstacles hindering the full implementation of OU evaluation and its potential benefits. In a survey of the FTSE 500 companies we found that around two-thirds of the 123 respondent organisations gave less importance to the OU evaluation of IT than to POU evaluation. Of those organisations that did use OU evaluation, some thought of it as a completion formality for signing off the project. Further findings from the research survey suggest that, within a structured approach, OU evaluation could be beneficial to organisations when acquiring new systems. This matches the expectation that whatever is learned from current evaluation ought to be useful in evaluating new systems.

We have considered the survey result that companies appear to perform OU evaluation as a formality rather than to reflect on (and improve) the appreciation of benefits. We postulate that the reason for this is that whilst the potential benefit of engaging with a process of OU evaluation exists, the organisational structure within which it must operate does not generally cater for it. A clear contrast between POU and OU is evident when considering modern project management approaches such as PRINCE2, which usually incorporate frequent cycles of POU evaluation (OGC, 2002) as a fundamental component of the method. The fixed time horizon inherent in project-based work can be the precursor to a considerable organisational omission in full project evaluation. This omission occurs because no interest group is charged with assessing the value of the IT project over its entire lifecycle (from inception to decommissioning), which would therefore include OU. In other words, project completion is taken to mean exactly that—so evaluation ceases when the system becomes operational, because the self-contained and budgeted project has then ended. After completion there is nothing else to do.

A further finding of this study is that when organisations carry out both types of evaluation (OU and POU), the deviation from original estimates becomes a focal point for further analysis. Our study shows that the reasons for adopting the OU–POU comparison were to enable the auditing of the planned benefits and to learn lessons appropriate for future projects (see Table 5). Our results regarding obstacles to OU evaluation are supported by the study by Owens and Beynon-Davies (1999) on mission-critical systems. Currently, only organisations that perform serious OU evaluation understand its benefits, and there are not many of these, so very little analysis exists on planned versus actual costs (or benefits).

Without OU evaluation, the costs of future projects seem likely to be less accurately estimated. Our research results are entirely consistent with this observation. At the moment, the cost of lost opportunities can be conjectured to be on the increase; without OU evaluation, how can we know whether this is true, or much else about what is going on? Our study confirms that dissemination of the importance of OU evaluation amongst both the academic and practitioner communities could play an important role in achieving more IT effectiveness and fewer disappointments. We hope the reader agrees, in which case this paper has made such a contribution.

Appendix A. Variables (reasons) codenames used for analysis

Reasons for adopting Prior Operational Use evaluation

Code       Description
POUeR1     System meets requirements
POUeR2     System effectiveness
POUeR3     System usage
POUeR4     System efficiency
POUeR5     Justify adoption
POUeR6     System security
POUeR7     System performance
POUeR8     Quality and completeness of system documentation
POUeR9     Hardware performance
POUeR10    Quality of programs
POUeR11    Operational costs
POUeR12    Training costs
POUeR13    Maintenance costs
POUeR14    Upgrade costs
POUeR15    Reduction in clerical salaries
POUeR16    Reduction in other staff costs
POUeR17    Other expenses saved
POUeR18    Direct costs
POUeR19    Indirect costs
POUeR20    Other costs
POUeR21    Tangible benefits
POUeR22    Intangible benefits
POUeR23    Other benefits
POUeR24    Barriers of adopting the system

Reasons for adopting Operational Use evaluation

Code       Description
OUeR1      Estimating of system life
OUeR2      Justify system adoption
OUeR3      Risks
OUeR4      Barriers
OUeR5      Tangible benefits
OUeR6      Intangible benefits
OUeR7      Other benefits
OUeR8      Direct costs
OUeR9      Indirect costs
OUeR10     Other costs

Operational Use evaluation criteria

Code       Description
OUeC1      Internal controls
OUeC2      Project schedule compliance
OUeC3      System security and disaster protection
OUeC4      Hardware performance
OUeC5      System performance versus specifications
OUeC6      System usage
OUeC7      Quality and completeness of system documentation
OUeC8      Accuracy of information
OUeC9      Timeliness and currency of information
OUeC10     Adequacy of information
OUeC11     Appropriateness of information
OUeC12     Quality of programs
OUeC13     User satisfaction and attitude towards systems
OUeC14     User friendliness of system–user interface
OUeC15     System's impacts on users and their jobs
OUeC16     System's fit with the impact upon organization
OUeC17     Net operating costs (savings of system)


References

Al-Yaseen, H., Eldabi, T., Paul, R.J., 2004. A quantitative assessment of operational use evaluation of information technology: Benefits and barriers. In: Proceedings of the Tenth Americas Conference on Information Systems, New York, August 2004, pp. 688–692.

Ballantine, J.A., Galliers, R.D., Stray, S.J., 1996. Information systems/technology evaluation practices: Evidence from UK organizations. Journal of Information Technology 11, 129–141.

Berthold, M., Hand, D.J., 2003. Intelligent Data Analysis, second ed. Springer-Verlag, Berlin.

Beynon-Davies, P., Owens, I., Lloyd-Williams, M., 2000. IS Failure, Evaluation and Organisational Learning. UKAIS, Cardiff, pp. 444–452.

Beynon-Davies, P., Owens, I., Williams, M.D., 2004. Information systems evaluation and the information systems development process. Enterprise Information Management 17, 276–282.

Bradford, M., Florin, J., 2003. Examining the role of innovation diffusion factors on the implementation success of enterprise resources planning systems. International Journal of Accounting Information Systems (4), 205–225.

Brown, J., Kiernan, N., 2001. Assessing the subsequent effect of a formative evaluation on a program. Journal of Evaluation and Program Planning 24, 129–143.

Dabrowska, E.K., Cornford, T., 2001. Evaluation and telehealth—An interpretative study. In: Proceedings of the Thirty-Fourth Annual Hawaii International Conference on System Sciences (HICSS-34), January 2001, Maui, Hawaii. Computer Society Press of the IEEE, Piscataway, NJ.

Eldabi, T., Paul, R.J., Sbeih, H., 2003. Operational use evaluation/post implementation evaluation of IT. In: UKAIS 2003, Warwick.

Farbey, B., Land, F., Targett, D., 1993. How to Assess Your IT Investment: A Study of Methods and Practice. Butterworth-Heinemann Ltd., London.

Farbey, B., Land, F., Targett, D., 1999. Moving IS evaluation forward: Learning themes and research issues. Journal of Strategic Information Systems 8, 189–207.

Gunasekaran, A., Love, P.E.D., Rahimi, F., Miele, R., 2001. A model for investment justification in information technology projects. International Journal of Information Management 21, 349–364.

Irani, Z., 2002. Information systems evaluation: Navigating through the problem domain. International Journal of Information and Management 40, 11–24.

Irani, Z., Love, P.E.D., 2002. Developing a frame of reference for ex-ante IT/IS investment evaluation. European Journal of Information Systems 11, 74–82.

Irani, Z., Sharif, A., Love, P.E.D., Kahraman, C., 2002. Applying concepts of fuzzy cognitive mapping to model: The IT/IS investment evaluation process. International Journal of Production Economics (75), 199–211.

Jones, S., Hughes, J., 2000. Understanding IS evaluation as a complex social process. In: Chung, H.M. (Ed.), Proceedings of the 2000 Americas Conference on Information Systems (AMCIS), 10–13 August, Long Beach, CA. Association for Information Systems, Atlanta, pp. 1123–1127.

Kumar, K., 1990. Post implementation evaluation of computer information systems: Current practices. Communications of the Association for Computing Machinery 33 (2), 203–212.

Lin, C., Pervan, G., 2003. The practice of IS/IT benefits management in large Australian organisations. International Journal of Information and Management (41), 13–24.

Liu, Y., Yu, F., Su, S.Y.W., Lam, H., 2003. A cost-benefit evaluation server for decision support in e-business. Journal of Decision Support Systems (36), 81–97.

Love, P.E.D., Irani, Z., 2001. Evaluation of IT costs in construction. Journal of Automation in Construction 10, 649–658.

OGC, 2002. Managing Successful Projects with PRINCE2. Office of Government Commerce, London.

Owens, I., Beynon-Davies, P., 1999. The post implementation evaluation of mission-critical information systems and organisational learning. In: Proceedings of the Seventh European Conference on Information Systems, Copenhagen, Copenhagen Business School, pp. 806–813.

Pett, M.A., Lackey, N.R., Sullivan, J.J., 2003. Making Sense of Factor Analysis: The Use of Factor Analysis for Instrument Development in Health Care Research. Sage Publications, London.

Poon, P., Wagner, C., 2001. Critical success factors revisited: Success and failure cases of information systems for senior executives. Journal of Decision Support Systems 30, 393–418.

Remenyi, D., Money, A., Sherwood-Smith, M., Irani, Z., 2000. The Effective Measurement and Management of IT Costs and Benefits. Butterworth-Heinemann Ltd., London.

Tashakkori, A., Teddlie, C., 2003. The past and the future of mixed methods research: From methodological triangulation to mixed methods designs. In: Tashakkori, A., Teddlie, C. (Eds.), Handbook of Mixed Methods in Social and Behavioral Research. Sage, Thousand Oaks, CA.

Walsham, G., 1999. Interpretive evaluation design for information systems. In: Willcocks, L., Lester, S. (Eds.), Beyond the IT Productivity Paradox. Wiley, Chichester, pp. 363–380.

Willcocks, L., 1992. Evaluating information technology investments, research findings and reappraisal. Journal of Information Systems 2, 243–268.

Yeo, K.T., Qiu, F., 2003. The value of management flexibility—A real option approach to investment evaluation. International Journal of Project Management 21, 243–250.