UPPSALA UNIVERSITY
DEPARTMENT OF INFORMATICS AND MEDIA

BACHELOR THESIS

Robotic process automation - An evaluative model for comparing RPA-tools

Authors: Lucas Bornegrim and Gustav Holmquist
Supervisor: Prof. Dr. Andreas Hamfelt

14 June 2020



Sammanfattning

This research studies the three market-leading RPA-tools, Automation Anywhere, Blue Prism and UiPath, in order to fill the lack of literature on methods for evaluating and comparing RPA-tools. Design science research was carried out by designing and creating artefacts in the form of process implementations and an evaluative model. A typical process representing a common area of use was implemented using each of the three RPA-tools in order to create an evaluative model. Official documentation, together with the three implementations, was studied. Evaluative questions specific to RPA-tool evaluation were created based on a quality model for product quality found in the ISO/IEC 25010 standard. Characteristics that depend on organisational context were not included in the evaluation, in order to create an evaluative model that does not depend on any specific business environment. The results of the research provide knowledge of (1) how RPA-tools can be implemented and (2) the differences that exist between the three market-leading RPA-tools. The research also contributes a method for investigating and evaluating the RPA-tools. When creating the evaluative model, some of the criteria in the ISO/IEC 25010 quality model were concluded to be of low relevance and are therefore not included in the resulting model. By analysing and evaluating the created evaluative model, using a theoretical concept of digital resources and their evaluation, the validity of the evaluative model was reinforced. From an evaluative perspective, this research emphasises the need to adapt and change existing evaluative methods in order to successfully evaluate the most relevant characteristics of RPA-tools.

Abstract

This research studies the three market-leading RPA-tools, Automation Anywhere, Blue Prism and UiPath, in order to fill the lack of literature regarding methods for evaluating and comparing RPA-tools. Design science research was performed by designing and creating artefacts in the form of process implementations and an evaluative model. A typical process representing a common area of use was implemented using each of the three RPA-tools, in order to create an evaluative model. Official documentation, along with the three implementations, was studied. Evaluative questions specific to RPA-tool evaluation were created based on a quality model for product quality found in the ISO/IEC 25010 standard. Characteristics dependent on organisational context were not included in the evaluation, in order to create an evaluative model which is not dependent on any specific business environment. The results of the research provide knowledge of (1) how RPA-tools can be implemented and (2) the differences that exist between the three market-leading RPA-tools. The research also contributes in the form of a method for investigating and evaluating the RPA-tools. When creating the evaluative model, some of the criteria found in the ISO/IEC 25010 quality model were concluded to be of low relevance and are therefore not included in the resulting model. By analysing and evaluating the created evaluative model, using a theoretical concept of digital resources and their evaluation, the validity of the evaluative model was reinforced. From an evaluative perspective, this research emphasises the need to appropriate and change existing evaluative methods in order to successfully evaluate the most relevant characteristics of RPA-tools.


Contents

List of Figures
List of Tables

1 Introduction
  1.1 Background
  1.2 Literature review
  1.3 Problem Description/Area
  1.4 Purpose
  1.5 Research question
  1.6 Delimitations
2 Research Approach and Methodology
  2.1 Design Science Research
  2.2 Research approach
  2.3 Research method
    2.3.1 Data collection method
    2.3.2 Analysis method
  2.4 A critical examination of the method
3 Theory
  3.1 ISO/IEC 25000 series
  3.2 ISO/IEC 25010
  3.3 Digital resources
  3.4 Evaluating digital resources
4 Process implementation
  4.1 The typical process
  4.2 Instantiation artefacts
    4.2.1 UiPath
    4.2.2 Blue Prism
    4.2.3 Automation Anywhere
5 Evaluative model
  5.1 Creating the evaluative model
    5.1.1 Compatibility
    5.1.2 Usability
    5.1.3 Reliability
    5.1.4 Security
    5.1.5 Maintainability
    5.1.6 Portability
  5.2 Evaluative questions summarised
6 Evaluation
  6.1 Evaluation of the RPA-instantiations
    6.1.1 UiPath
    6.1.2 Blue Prism
    6.1.3 Automation Anywhere
    6.1.4 Comparative tables of the evaluation
  6.2 Evaluation of the evaluative model
    6.2.1 Analysis and evaluation of RPA-tools as digital resources
    6.2.2 Design science research evaluation
7 Discussion
  7.1 Summary of what was learned
  7.2 Limitations
  7.3 Areas requiring further work
8 Conclusion


List of Figures

1 ISO/IEC 25010: Quality models (Estdale and Georgiadou 2018)
2 The typical process being implemented
3 UiPath: Main process
4 UiPath: Send email to each address
5 UiPath: Read Responses and Mark off based on response
6 UiPath: Check who did not respond: first for each-loop
7 UiPath: Check who did not respond: second for each-loop
8 UiPath: Send reminders
9 Blue Prism: Extract and Store Excel sheet
10 Blue Prism: Send Email action
11 Blue Prism: Read and Store Emails action
12 Blue Prism: Loop to Remove certain rows
13 Blue Prism: Properties of the Decision Node
14 Blue Prism: Remove Columns from Collection
15 Blue Prism: Open Workbook and Write from Collection
16 Blue Prism: Action: Write Collection to Workbook
17 Blue Prism: Remove rows with common e-mail address
18 Blue Prism: Compare E-mail address column of two Collections
19 Automation Anywhere: Send e-mails and read e-mails
20 Automation Anywhere: Setup for 'Email Connect'
21 Automation Anywhere: Read emails and mark off
22 Automation Anywhere: Check whom to remind: outer loop
23 Automation Anywhere: Check whom to remind: inner loop

List of Tables

1 Nine dimensions of digital resources (Goldkuhl and Röstlinger 2019)
2 Quality ideals for digital resources divided into nine dimensions (Goldkuhl and Röstlinger 2019)
3 Evaluation of RPA-tools compatibility
4 Evaluation of RPA-tools usability, part 1
5 Evaluation of RPA-tools usability, part 2
6 Evaluation of RPA-tools reliability
7 Evaluation of RPA-tools security
8 Evaluation of RPA-tools maintainability
9 Evaluation of RPA-tools portability


1 Introduction

In this section, a background of Robotic Process Automation (RPA) as a subject is presented. Following the background is a literature review presenting studies of RPA-tool implementations. The researched problem area and the research purpose, including the research contributions and research questions, are then described, as is the lack of a theoretical framework specific to RPA-tools. Finally, the delimitations of the research are presented.

1.1 Background

Automation is a solution in which a machine performs a task. When a work task or workflow is automated, a machine or technology performs something previously performed by a person. A crucial question is what should be automated and what should still be done by a person. Repetitive, time-consuming work processes that require neither intuition nor creativity can be performed by a machine that mimics the input a person would make, and with greater efficiency. Automation also eliminates human errors, which are otherwise a risk when humans perform processes. One way to implement automation is Robotic Process Automation, or RPA. (Aguirre and Rodriguez 2017)

RPA-tools are a way to perform if-, then- and else-statements on structured data; they mainly achieve this by interacting with applications in the same way a person would, i.e. through the user interface. An Application Programming Interface (API) can also be used together with RPA; in practice, it is usually a combination of the two. RPA-tools work by (1) mapping the processes to be implemented and how they are carried out, and then (2) performing those processes independently. The purpose of RPA-tools is to automate simple and repetitive processes. (Aalst, Bichler, and Heinzl 2018; Radke, Dang, and Tan 2020) It is important to note that RPA is not a physical robot but a form of computer-based program (Aguirre and Rodriguez 2017).
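As a loose illustration (not code from the thesis, and invented field names), the if-, then- and else-logic that an RPA bot encodes over a structured record might look like the following sketch; a real RPA-tool would obtain the record by reading the screen or calling an API rather than receiving a dictionary:

```python
def route_invoice(invoice: dict) -> str:
    """Apply a predefined if/then/else rule to one structured record,
    the way an RPA bot applies its mapped decision logic."""
    if invoice["amount"] > 10_000:
        return "escalate to human"        # outside the bot's mandate
    elif invoice["vat_id"]:
        return "auto-approve"             # all required data present
    else:
        return "request missing VAT id"   # predefined fallback action
```

The point of the sketch is only the shape of the logic: the tool maps the decision once, then executes it independently on every record it encounters.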

To exemplify what RPA-tools can achieve, a previous case study at Telefónica O2 describes an RPA-solution used to streamline business operations. The company started with 20 robots and later expanded to 75. The implementation was carried out by three RPA-specialised employees, who used RPA to automate fifteen main processes representing 35 per cent of all back-office operations. Among the automated processes were SIM swaps, credit checks, order processing, customer assignment, unlatching, porting, ID generation, customer conflict resolution and customer data updates. (Willcocks, Lacity, and Craig 2015a)

Examples of other uses for RPA are to

• monitor specific events (received e-mail or documents stored in a folder on your computer);

• read and extract data from files (electronic spreadsheets, PDFs or e-mails);

• perform checks on data according to certain specifications (VAT, price, et cetera);

• securely log into one or more programs;

• create documents in the organisation’s system;

• make decisions based on predefined conditions (e.g. if e-mail attachments are not in an allowed format (according to the organisation), then the robot can send an e-mail back to the sender with a request for new files in the correct format);

• send confirmations (e-mails, messages, logs, et cetera). (Willcocks, Lacity, and Craig 2015b; Moffitt, Rozario, and Vasarhelyi 2018; Radke, Dang, and Tan 2020)
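Several of the uses listed above can be combined in one small routine: watching a folder for documents, checking each against an allowed-format condition, and recording an acceptance or a request to resend. The following is a hypothetical sketch only; the allowed formats and messages are invented, not taken from the thesis:

```python
from pathlib import Path

ALLOWED_FORMATS = {".pdf", ".xlsx"}   # assumed organisational policy

def triage(folder: Path) -> list[str]:
    """Check each document in a folder against the allowed-format
    condition and return one confirmation or rejection per file."""
    log = []
    for doc in sorted(folder.iterdir()):
        if doc.suffix.lower() in ALLOWED_FORMATS:
            log.append(f"accepted {doc.name}")
        else:
            log.append(f"rejected {doc.name}: please resend in an allowed format")
    return log
```

In an RPA-tool the same behaviour would be assembled from built-in activities (folder monitoring, condition nodes, e-mail sending) rather than written by hand.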

A significant advantage of RPA is that a tool can be quickly configured to perform users' tasks without incurring high costs. In general, RPA does not require technical know-how and can be configured by people without programming knowledge. However, according to Willcocks, Lacity, and Craig (2015a), it is advantageous for the IT department to be involved in the design and implementation, in order to stay consistent with IT governance, security, IT architecture and IT infrastructure. RPA-tools are implemented on top of existing systems, so there is no need to develop a whole new platform when implementing RPA-solutions. (Anagnoste 2017)

RPA-tools can lower costs for companies by automating repetitive processes. RPA is a cheaper way to link legacy systems with one another than changing or developing a new information infrastructure in which the interaction is seamless, and a faster way to achieve a return on investment than traditional improvement of the information system. (Aalst, Bichler, and Heinzl 2018; Willcocks, Lacity, and Craig 2015b)

1.2 Literature review

RPA is a new subject in the scientific domain; some of the first research on RPA appeared in 2015. Much of the previous research consists of case studies of organisations that have implemented RPA-tools. The primary focus areas of these studies are (1) what has been automated and (2) what advantages or disadvantages the organisation faced after the implementation. As previously mentioned, Willcocks, Lacity, and Craig (2015a) conducted a case study at Telefónica O2 in which they studied the implementation of RPA-robots. The study focuses on evaluating whether a process should or should not be automated and provides five guidelines for implementers.

Another example is a case study at a business process outsourcing (BPO) company in Bogotá, Colombia, conducted by Aguirre and Rodriguez (2017). The case study describes a task that was automated using an RPA-tool and compares the RPA-tool with human workers. The comparison was conducted by dividing the workers into two groups: the first group used RPA-solutions, and the second group consisted solely of human workers. The results showed that the RPA-tools boosted productivity by 21 per cent but were only 2 per cent faster than humans. The modest speed increase could be because some workers were highly skilled and faster than the RPA-tools, whereas the RPA-tools could perform several tasks at once.

A case study by Fernandez and Aman (2018) researched the organisational and individual impact of implementing an RPA-solution in Global Accounting Services at one of the largest global business services firms. The case study focuses on the human experience of implementing RPA-solutions and highlights the importance of collaboration between RPA and human operators. Implementing the RPA-solution boosted work quality and accuracy, saving time for the accountants, which they could then spend on more challenging tasks. The article describes that implementing RPA-solutions did not necessarily lead to increasing unemployment; instead, RPA-solutions managed to create new job roles and change existing ones. The study further describes the need for accountants to have IT knowledge as technology advances and changes every year, a need highlighted by the implementation of RPA.

In a study by Moffitt, Rozario, and Vasarhelyi (2018), RPA is applied as a concept to the auditing business. The authors claim that there is little to no research in the area of RPA and that leading firms tend to focus on the field of Artificial Intelligence (AI) rather than RPA. They make a case for implementing RPA-solutions in the auditing field based on RPA having been successfully implemented in other fields of business. Repetitive processes within auditing that could be automated, as well as requirements for making automation possible, are presented. The conclusions reached are that RPA-solutions could benefit accounting businesses mostly by reducing time spent on highly repetitive processes and by performing auditing tasks error-free. The authors suggest RPA-related research areas and future research issues for further investigation of RPA-tools, posing the following questions for future research: (1) 'which of the RPA-tools are most promising?' and (2) 'how should RPA-tools be evaluated?' (ibid.).

1.3 Problem Description/Area

Automation, and particularly RPA, is currently at the forefront of the IT industry, and many organisations are looking for modern RPA-solutions to manage their problems and streamline their operations. IT consulting companies are increasingly focused on solving their customers' problems with the help of RPA-tools. (Willcocks, Lacity, and Craig 2015b)

None of the studies in the literature review evaluated the RPA-tools used for the process implementation. Previous studies focus on comparing the effectiveness and efficiency of business processes before and after implementing RPA-solutions, without evaluating the tools used to implement the processes. The focus of previous studies lies in the impact of RPA in an organisational context; this points to a gap in knowledge and research on the subject of evaluating the RPA-tools themselves.

For such a highly topical subject, research should exist on the implementation of the RPA-tools themselves. There is a lack of scientifically based comparative studies for choosing which RPA-tool suits the intended automation. The literature review shows an evident lack of literature on the implementation and comparison of RPA-tools, which makes further research attractive and desirable beyond a small group of researchers.

As mentioned previously, Moffitt, Rozario, and Vasarhelyi (2018) suggest RPA-related research areas and future research issues for further investigation of RPA-tools. Within these research areas, the authors address specific questions about which RPA-tools are the most promising and how RPA-tools should be evaluated.

What initiated this project is a request from an IT consulting company. The company has not previously used and does not currently use RPA-solutions, but has an interest in exploring the area further to help its customers. The company's interest in the subject also reaffirms the lack of information and knowledge regarding the evaluation and comparison of RPA-tools in the business sector.


While previous research has been conducted on the organisational impact of implementing RPA-solutions, there is a clear gap in research on the tools used for the implementation. This lack of research could point either to the choice of RPA-tool being of low priority due to the tools' potential similarity, or to the need for such research. A theoretical framework, based on scientific research, for the specific evaluation of RPA-tools has not yet been created, so the basis for creating such a framework must be methods for evaluating other software or systems. This research considers the ISO/IEC (International Organisation for Standardisation/International Electrotechnical Commission) 25000 series of standards to be the most applicable when evaluating software without a clear evaluative framework. The ISO/IEC 25000 series creates a framework for software and system evaluation focusing on quality in use and product quality (ISO Central Secretary 2014).

1.4 Purpose

By studying the market-leading tools within RPA, the purpose is to fill the lack of literature by providing an understanding of what differentiates the RPA-tools from each other and how acquirers can evaluate RPA-tools. To achieve this, the research creates an evaluative model, working as a proof of concept for comparing RPA-tools. The evaluative model is based on the quality models of the ISO/IEC 25010 standard, which is studied through the implementation of a typical process representing a common application area. Evaluative questions are created based on the ISO/IEC 25010 standard while researching two areas: (1) the official documentation of the RPA-tools and (2) the implementation of each RPA-tool.

In order to analyse and validate the evaluative model, Goldkuhl and Röstlinger's (2019) conceptual framework of digital resources is used, by applying and comparing the evaluation of digital resources with the evaluative model. Because RPA-tools can be seen as digital resources, the evaluative model must be shown to cover the different facets of digital resource evaluation.

The results of the research will provide knowledge of (1) how RPA-tools can be implemented and (2) the differences that exist between the three market-leading RPA-tools. The research contributes in the form of a method for evaluating the RPA-tools. The knowledge contributions of this study are (1) prescriptive information on implementation with the three market-leading RPA-tools, and (2) an evaluative model for evaluating and comparing the most relevant characteristics of different RPA-tools.


1.5 Research question

• How can RPA-tools be evaluated and compared, by appropriating traditional characteristics for evaluating software and systems?

In order to answer the research question, the following questions are researched when implementing the RPA-tools:

– How can the market-leading RPA-tools be implemented in a typical situation that represents a common area of use?

– Which of the market-leading RPA-tools is preferred in a typical situation that represents a common area of use?

1.6 Delimitations

When examining and comparing RPA-tools, the research will not consider any tools other than UiPath, Automation Anywhere and Blue Prism, because these are the market-leading RPA-tools (Anagnoste 2017) and the most relevant to the client. The research will not examine how RPA-implementations affect an organisation or its individuals, but only how the three RPA-tools suit a typical situation that describes a common area of use. General conclusions about the tools, based on the characteristics of the ISO/IEC 25010 standard, are drawn from the researched implementations. Attributes which depend on a specific organisational context, or which evaluate a specific process, have not been selected for evaluation.

When implementing the process, this research uses solutions that require the least amount of programming experience; as such, the analysis does not go in depth regarding the extended possibilities of writing code within the RPA-tools.

2 Research Approach and Methodology

In this section, the research approach is presented, describing the design science research strategy and how design science research is performed in this study. The research method is then presented, followed by a critical examination of the method used.

2.1 Design Science Research

The research strategy is design science research, in which a method for implementing the same process with each tool is designed and used to evaluate the different RPA-tools. Based on the ISO/IEC 25010 standard, the criteria for evaluating the tools are developed inductively during the implementation of the IT artefacts and by studying the official documentation of the RPA-tools.

The purpose of design science research in information systems is to create a purposeful IT artefact to address crucial organisational problems. The artefact must describe the implementation and the application so that the implementer can use it in the correct domain. (Hevner et al. 2004)


Design science research is differentiated from regular design and creation by its focus on uncertain areas. By performing novel and risk-taking design, researchers can claim that they are performing actual research rather than the regular design and creation practised in industry. (Oates 2006)

Hevner et al. (2004) describe design science research as a problem-solving process and have derived seven guidelines for design science in information systems research.

1. Design as an artefact: in design science research, artefacts are rarely full-grown information systems used by organisations in practice. Potential implementers should interpret the artefacts as a proof of concept of the ideas, practices, technical capabilities and products that the information system should become. The proof of concept provides an understanding of how to effectively and efficiently analyse, design, implement and use information systems.

2. Problem relevance: information systems research aims to acquire knowledge and understanding of how the development and implementation of technology-based solutions can solve essential business problems. A design science research project provides relevance by constructing innovative artefacts which provide an understanding of how a specific problem can be solved.

3. Design evaluation: the evaluation of a designed artefact is based on validity, utility, quality and efficacy. The evaluation is based on the requirements of the business environment for which the artefact is designed, and on how the artefact can be integrated within that environment's technical infrastructure. Functionality, completeness, consistency, accuracy, performance, reliability, usability and fit with the organisation are attributes that implementers can use to evaluate IT artefacts.

4. Research contributions: for design science research to be useful, it must contribute in at least one of three ways:

• The design artefact itself can be the contribution by being the solution to a problem. The artefact contributes by extending the knowledge base or applying existing knowledge in new ways.

• Extending and improving existing foundations in the design science knowledge base, for example by adding to an already existing model.

• Through methodology, by developing and using evaluation methods (e.g. experimental, analytical, observational, testing and descriptive) and new evaluation metrics. Measures and evaluation metrics are a crucial part of design science research.

The designed artefacts need to accurately represent the environments (e.g. business environment or technological environment) for which they are designed in order to allow evaluation of the contribution. The artefacts must have the ability to be implemented, pointing to the importance of instantiations. The instantiations prove the artefacts' ability to be implemented within the intended environment.

5. Research rigour: the designed artefact must be constructed and evaluated using rigorous methods. The level of rigour is measured by the effective use of the knowledge base. A successful design science research project includes selecting appropriate development techniques, constructing a theory or artefact and selecting appropriate ways of evaluating the artefact.


6. Design as a search process: the design process can be described as a search for a solution to a problem. In design science research, it is common to simplify a problem by representing it only with suitable means, goals and rules. The researcher can also simplify the problem by dividing it into smaller sub-problems. Sometimes the simplifications are unrealistic, but they provide a starting point.

7. Communication of research: the research needs to be communicated with the targeted audience in mind. A technology-oriented audience requires a presentation detailing how the researcher constructs the artefact and how implementers can use it within the intended organisational context. A management-oriented audience requires a presentation providing the detail needed to determine whether the artefact is worth implementing within an organisational context. (Hevner et al. 2004)

The developed artefact must be evaluated in order to perform design science research correctly. Evaluating design science research is not only a question of whether the developed artefact works, but also of how and why it works (Pries-Heje, Baskerville, and Venable 2008). The evaluation criteria depend on the purpose of the artefact. The artefact can be evaluated as a proof of concept instead of in a real-life context. By creating a proof of concept, the design solution can be shown to have specific properties that behave in a particular way under certain conditions, and conclusions can be drawn without evaluating the artefact in a real-life context. (Oates 2006)

2.2 Research approach

This research develops two types of artefacts. (1) The first type is instantiation: three instantiations, which automate the same process, are designed, one using each RPA-tool. The process has been developed in collaboration with the client to find a typical process from which general conclusions can be drawn. (2) The second type is an evaluative model for evaluating the RPA-tools according to criteria corresponding to the questions created by researching the RPA-tools based on the ISO/IEC 25010 standard. The evaluative model is derived from the implementation of the RPA-tools. The research implements the following process:

1. A spreadsheet with e-mail addresses is retrieved.

2. E-mails with an attached spreadsheet are sent to those responsible.

3. Responses are awaited, and respondents are marked off based on their answers.

4. An e-mail reminder is sent to those who did not respond.

5. A list of those who need manual application is compiled.

Most businesses make use of electronic spreadsheets, through software such as Microsoft Excel and Google Sheets. E-mail is a form of communication used by most businesses and users. RPA-solutions are typically not used for very complex processes; a simple process implementing spreadsheet and e-mail functionality is therefore concluded to be a typical process.

The purpose of the evaluative model is to be used by acquirers as a method for evaluating RPA-tools; the main research contribution of this research is thereby a methodology. The evaluative model is a proof of concept, the utility of which is proven by applying the model when evaluating the three researched RPA-tools. While the use of RPA-tools is increasing, so is the need to evaluate and select the appropriate tool for the task; as mentioned in section 1.2,

there is a lack of such research. The tools Automation Anywhere, Blue Prism and UiPath have been selected as they are the three market-leading RPA-tools (Anagnoste 2017).

The attributes of relevance for this research are those that can be applied within any organisational context. Accuracy and fit with the organisation are not evaluated in this research because these attributes depend on the organisational context, which is not the primary focus of this research. Performance is not evaluated because measuring performance in the development environment is not representative of the performance within every operational environment, according to Estdale and Georgiadou (2018). Consistency is not measured because only one implementation, performed by one implementer, within each tool is studied; thereby, no measurement of consistency between multiple implementations can be made. Including attributes which depend on an organisational context would limit the application areas of the evaluative model. When using the evaluative model, the acquirer can evaluate and compare RPA-tools in a way that allows the acquirer to decide which tool is appropriate for the intended business environment.

The three instantiation artefacts provide validity and utility through their ability to be evaluated based on the characteristics of the ISO/IEC 25010 standard, and thereby work as a basis for developing the evaluative model. The instantiation artefacts provide quality and efficacy by implementing a simple process that is easy to draw conclusions from, while still being relevant in representing a common area of use. The evaluative model's validity is proven through the application of the model when evaluating the three RPA-tools. The evaluative model's validity is further proven by connecting the model to a theoretical concept of digital resources and their evaluation. The evaluative model provides utility by filling the gap in the ability to evaluate and compare RPA-tools. The model provides quality and efficacy through being based on official documentation and data from the implementation of the instantiation artefacts.

Rigorous methods are used through:

• Basing the evaluative model on the established quality model in the ISO/IEC 25010 standard.

• Appropriating the quality model to fit RPA-tool evaluation, with the purpose of fitting any organisational context.

• Performing the appropriation of the quality model based on knowledge gained from the official documentation of each RPA-tool and through process implementation within each RPA-tool.

• Validating the evaluative model by applying Goldkuhl and Röstlinger's (2019) concept of digital resources to the model's evaluation of RPA-tools.

Research rigour is thereby achieved through effective use of the knowledge gained from the ISO/IEC 25010 standard, the RPA-tool implementations and the RPA-tool documentation, and through model validation via the application of the theoretical concept of digital resources.

The problem of evaluating RPA-tools is simplified in this research by implementing only a simple, typical process representing a common area of use. Rigorous methods are used by basing the evaluation on the ISO/IEC 25010 standard and applying the standard when selecting relevant characteristics for evaluation of RPA-tools. The relevance of the characteristics is determined by studying the implemented artefacts and by studying official documentation of the RPA-tools.

This research describes the instantiations in detail in order to explain how the evaluative model is derived. The instantiations are also described to show how the model can be applied when choosing the appropriate tool in an organisational context. The instantiation artefacts are communicated through a technology-oriented presentation because it provides details on how to configure the implementation. The evaluative model is management-oriented because it provides details on how to perform the comparison when deciding which RPA-tool to implement within an organisational context.

2.3 Research method

In this section, the methods for data collection and data analysis are presented. The data collection is performed through observations and document analysis. The data analysis is qualitative, matching the qualitative nature of the collected data.

2.3.1 Data collection method

The research project collects data through observations and document analysis. Observations of the tools are made during the implementation of the process, regarding the characteristics based on the ISO/IEC 25010 standard. Document analysis is performed by studying the official documentation of the RPA-tools. The implementations are performed by the observer, which means that the observations are of a participatory form. When conducting participant observations, the observer must pay close attention to and document relevant findings. Participating observers should also reflect on how their participation affects the observed process. (Bowen et al. 2009; S. L. Schensul, J. J. Schensul, and LeCompte 1999)

The choice of method for data collection is justified by the need for detailed and comparable data on the entire process of the three implementations. The focus areas of the observations are based on the quality model for product quality in the ISO/IEC 25010 standard. The entire process of the three implementations is documented before the data analysis.

Two data collection methods are used in this research to study RPA-tools. Using multiple methods provides a way to confirm the data from multiple sources. Both document analysis and observations generate data that can be compared and confirmed by the other. Document analysis is used to collect data which is not provided by the observations of the implementations and to further confirm the results of the observations. (Carter et al. 2014)

2.3.2 Analysis method

The data analysis has an inductive approach, as evaluative questions are created based on applying the ISO/IEC 25010 standard to the evaluation of RPA-tools. The evaluative model is created through open/goal-free evaluation (Goldkuhl and Röstlinger 2019) of the RPA-tools, as the criteria are established by studying the tools and their official documentation. The data analysis method is qualitative and is used to analyse qualitative data collected from studying the official documentation of each RPA-tool and from observations made during the implementation of each RPA-tool. By applying the standard to the evaluation of RPA-tools, some characteristics may be found less relevant than others, compared to evaluating other software or systems. The characteristics of the ISO/IEC 25010 quality model for product quality are appropriated or removed based on their relevance for evaluating RPA-tools outside of specific organisational contexts.

Differences between the RPA-tools are illustrated in tables, which are divided into categories. The categories and the evaluative questions of each category are decided based on applying the ISO/IEC 25010 standard to RPA-tools during the creation of the evaluative model. The tables contain all tools and evaluative questions and describe how each tool corresponds to each evaluative question.

After creating the evaluative model, the validity of the evaluative criteria is reaffirmed by analysing the model based on Goldkuhl and Röstlinger's (2019) concept of digital resources. Seeing RPA-tools as digital resources, this analysis aims to prove the evaluative model valid on a theoretical basis.

2.4 A critical examination of the method

Even though the implemented process represents a common area of use, basing the evaluative model on the implementation of only one process is a limitation that cannot be overlooked.

This research focuses on evaluating characteristics outside of an organisational context, which means that the evaluative model will not be fully applicable when evaluating RPA-tools for every specific organisational context.

3 Theory

In this section, the ISO/IEC 25000 series of standards and the ISO/IEC 25010 standard for evaluating software and systems are presented. This section describes the quality model for product quality, found in the ISO/IEC 25010 standard, in detail. Goldkuhl and Röstlinger's (2019) concept of digital resources is presented, followed by a description of digital resource evaluation.

3.1 ISO/IEC 25000 series

The ISO/IEC 25000 series of international standards is entitled 'Systems and software engineering – Systems and software Quality Requirements and Evaluation' and is used to evaluate system or software quality based on two models, quality in use and product quality, defined in ISO/IEC 25010. Quality in use is applied to evaluate the interaction with software in use within a specific context by specific users to achieve specific goals. Product quality is applied to evaluate static properties of software and dynamic properties of the computer system, independent of context. (ISO Central Secretary 2014)

The goal when creating the ISO/IEC 25000 series of standards was to assist the development and acquisition of systems and software products with the specification and evaluation of quality requirements. Two main processes are covered to reach these goals: (1) software quality requirements specification and (2) software quality evaluation, which is supported by a system and software quality measurement process. (ibid.)

3.2 ISO/IEC 25010

The quality models found in ISO/IEC 25010 are composed of various characteristics; five characteristics define quality in use, and eight characteristics define product quality. The quality models are presented in Figure 1. (ISO Central Secretary 2011)

Figure 1: ISO/IEC 25010: Quality models (Estdale and Georgiadou 2018)

Product quality is of most interest to potential buyers or acquirers who want to get technically involved with the software or system. Product quality is divided into eight characteristics.

• Functional suitability, unlike the following seven characteristics, deals with suitability within an organisational context without specifying which specific context. The characteristic is divided into the sub-characteristics functional completeness, functional correctness and functional appropriateness. These sub-characteristics are viewed by potential acquirers or buyers to assess whether the product fits their particular organisational needs.

• Performance efficiency measures the technical performance of the product, including time behaviour, resource utilisation and capacity. The relevance of such measures might, however, be questioned, as measures performed during development might not be taken in the same context (platform or environment) as the one potential acquirers will use. (Estdale and Georgiadou 2018)

• Compatibility describes how software or systems affect and are affected by other software or systems. Compatibility is divided into two sub-categories:

– Co-existence measures the degree to which a product can perform its required functions while sharing an environment and resources with other products, without a detrimental impact on any other product.

– Interoperability is measured by the degree of support for information exchange and the use of the exchanged information with other products or applications.

• Usability evaluates the human interaction with the product, and is divided into six sub-categories:

– Appropriate recognisability measures the degree to which users can recognise if the product is suited for their needs.

– Learnability is the capability of a software product to enable users to learn how to use the product.

– Operability describes how easily a software product or system is operated.

– User error protection describes what happens when a user error occurs and how the errors are handled within the software product.

– User interface aesthetics measures the degree to which a user interface is pleasing and satisfying to interact with.

– Accessibility measures the degree to which a product or system can be used by users with the broadest range of characteristics and capabilities.

• Reliability measures the degree to which a system or product performs specified functions under specified conditions. Reliability is divided into four sub-categories:

– Maturity measures the extent to which the product can be trusted to perform and work as it is supposed to.

– Availability measures how available the software product is, for example whether the service is reachable at all times.

– Fault tolerance measures the degree to which a system or product operates as intended despite errors in hardware or software.

– Recoverability specifies how the product handles data recovery and re-establishing of the system state in case of an interruption or failure.

• Security measures the degree to which a product or system protects data so that users, other products or other systems have the appropriate degree of data access according to authorisation levels. The security characteristic is divided into five sub-categories:

– Confidentiality measures how the product ensures that data is only accessible to those authorised.

– Integrity measures the extent to which a product or system prevents unauthorised access to or modification of data.

– Non-repudiation measures the degree to which actions and events can be proven to have occurred.

– Accountability measures the degree to which actions of a unique user, product or system can be traced back to that same unique user, product or system.

– Authenticity measures the degree to which the identity of a subject or resource can be proved to be the one claimed.

• Maintainability measures how well a product or system can be modified in order to improve it, correct it or adapt it to changes. Maintainability is divided into five sub-categories:

– Modularity is the degree to which a system is composed of discrete components such that a change to one component has minimal impact on other components.

– Reusability is measured by the possibility of reusing an asset in more than one system or of reusing an asset in building other assets.

– Analysability measures the ability to assess the impact of intended changes in a product or system, or the ability to diagnose a product for errors or flaws.

– Modifiability is the degree to which a product or system can be effectively and efficiently modified without degrading existing product quality.

– Testability is the degree of effectiveness and efficiency with which test criteria can be established for a system, product or component, and with which tests can be performed to determine whether those criteria are met.

• Portability measures how effectively and efficiently a system, product or component can be transferred from one hardware, software or other operational or usage environment to another. Portability is divided into three sub-categories:

– Adaptability measures how well a product or system can be adapted for different or evolving hardware, software or other operational or usage environments.

– Installability measures how well a product or system can be successfully installed and uninstalled within a specific environment.

– Replaceability measures how well a product can replace another specific product for the same purpose in the same operational environment.

(ISO Central Secretary 2011)

3.3 Digital resources

Goldkuhl and Röstlinger (2019) use the term digital resource to describe all the different aspects of modern information technology. A digital resource is an integration of software resources and information resources, based on the necessary hardware resources. For a business, a digital resource works as an asset and aids in reaching business goals. A digital resource is the result of planned investments and, as an asset, it is economically generative. (ibid.)

Digital resources are used for communication and other handling and exchange of information. Digital resources are realised through technological resources, software and hardware, and can have relations to other digital resources through digital information exchange or other digitalised sharing. (ibid.)

'Digital resource' is a flexible term and can be used to describe many different aspects of modern information technology. Goldkuhl and Röstlinger (ibid.) structure digital resources into nine dimensions to cover the different facets of digital resources. The dimensions are illustrated in Table 1.

13

Page 19: Robotic process automation - An evaluative model for

Relational: Actors in the digitised activity or business; organisers, users (information providers, information consumers)
Semantic: Digitised communication/information; business terms and concepts, information resources
Functional: Digital functionality; digitised business activities, digital services
Interactive: Digital meetings for users; interaction via an interface between digital resource and user
Normative: Goals and values operating the digitised activity or business
Regulative: Regulations operating the digitised activity or business
Economic: Investments/costs, benefits and assets in the digitised activity or business
Architectural: Digital landscapes; relations between digital resources, e.g. exchange, linking, sharing
Technical: Technologies (for software and hardware) and technical mechanisms used for the digital resource (e.g. transfer, storage, security)

Table 1: Nine dimensions of digital resources (Goldkuhl and Röstlinger 2019)

• The relational dimension describes the stakeholders which can be related to the digital resource, such as users interacting with the resource. Users can provide a digital resource with information or use the information provided by a digital resource.

• The semantic dimension describes the parlance and phraseology of the information related to the digital resource. Digitised information resources need to have a perceivable meaning for users of the resource. Semantics, in this case, refers to the concepts and terminology used by the digital resource.

• The functional dimension refers to the business processes and business activities which make use of and interact with the digital resource. Digital resources are frequently based on existing business processes and are introduced to assist in the execution of these processes. Regarding the functional dimension, digital resources perform communicative and other types of information handling functions.

• The interactive dimension refers to the way concepts and terms are organised and displayed, in useful ways, to the users of the digital resource. Users interact with the digital resource through user interfaces; thereby, the interactive dimension deals with how the users can interact with the digital resource through a user interface.

• The normative dimension refers to the goals and values which guide the digital resource. The digitising of businesses needs to be based on the goals and values of the business in question. The digital resources are normatively managed and characterised in order to follow these goals and values.

• The regulative dimension refers to the regulations which guide the digital resource. Digital resources of businesses need to comply with regulations, such as standards and business agreements. The creation of IT-systems, and thereby digital resources, has to comply with rules and laws. The regulative dimension overlaps with the normative dimension, in that rules can guide and form a basis for the goals and values of businesses.

• The economic dimension: digital resources make up economic assets in businesses,

which means that they contribute to the economic efficiency of the business incorporating the resource. Developing, acquiring and managing digital resources lead to costs, and digital resources lead to benefits for the business.

• The architectural dimension: digital resources are most often part of a system, interacting with other digital resources. Digital resources are parts of digital landscapes, which refers to being part of a network of multiple digital resources. The digital landscape refers to the digital resources within the network, and to the relations that exist between these digital resources.

• The technical dimension refers to the technologies utilised by digital resources. In order to manage, store, present and transfer information, software and hardware technologies need to be utilised. (Goldkuhl and Röstlinger 2019)

3.4 Evaluating digital resources

In order to perform a successful evaluation of a digital resource, the resource needs to be described in detail. After describing the digital resource, evaluative conclusions can be drawn. The description of the digital resource can be used as a part of the documented evaluation. There are multiple different strategies for evaluating digital resources. Three common evaluation strategies are open/goal-free evaluation, goal-based evaluation and theory-based evaluation. (ibid.)

Open/goal-free evaluation is performed without using pre-set evaluative criteria. Positive and negative aspects of the digital resource are studied, including potential problems and strengths. These aspects can be identified by (1) studying the actual implementation of the digital resource, (2) communication with relevant business actors or (3) studying documents which describe the digital resource. (ibid.)

Goal-based evaluation uses business goals as pre-set criteria for evaluation. These business goals need to be identified before the evaluation of the digital resource. After setting up the criteria, the digital resource is described and compared with the criteria. (ibid.)

Theory-based evaluation means that the digital resource is evaluated based on a theoretical framework. An example of such a theoretical framework is the concept of digital resources, according to Goldkuhl and Röstlinger (ibid.). In the case of digital resources, the evaluation can be performed by evaluating the resource's qualities based on the nine dimensions of digital resources. The connection of these qualities to each dimension is illustrated in Table 2.

Relational: Organiser clarity/responsibility; target clarity (user properties), clarity in information source, availability/security for users
Semantic: Information quality, business-language compliance and clarity
Functional: Functional repertoire and service quality, business activity contributions, digital process integration
Interactive: Usability, availability
Normative: Normative compliance and clarity
Regulative: Regulative compliance and clarity
Economic: Cost/benefit effectiveness
Architectural: Interoperability
Technical: Robustness, technical efficiency and security

Table 2: Quality ideals for digital resources divided into nine dimensions (Goldkuhl and Röstlinger 2019)

4 Process implementation

This section begins by describing the typical process. Following the process description, the implementations of the process within the RPA-tools are described.

4.1 The typical process

The process that was implemented begins by reading a spreadsheet file containing e-mail addresses. An e-mail with an attached spreadsheet file is then sent to each e-mail address. After waiting for responses, unread e-mails are read and stored in two spreadsheet files based on response; the first file stores all responses, and the second file stores responses which need manual application. In this case, the manual application list represents instances where the respondent needs additional support which cannot be supplied by the RPA-solution. After storing the responses, a reminder is sent via e-mail to those who have not yet responded. After waiting for additional responses, unread e-mails are read and stored again. The process was first implemented using UiPath, followed by Blue Prism, and lastly Automation Anywhere. The typical process being implemented is illustrated in Figure 2.
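Independent of any particular RPA-tool, the control flow of the typical process can be sketched as follows. This is an illustrative Python outline, not code from any of the three tools; the `send`, `read_unread` and `wait` callables are hypothetical stand-ins for the tool actions described in this section.

```python
# Sketch of the typical process; the three callables stand in for the
# RPA-tool actions (sending mail, reading unread mail, time delays).
def run_typical_process(addresses, send, read_unread, wait):
    """Send the sheet, collect replies twice, and mark off responses."""
    for address in addresses:                 # send spreadsheet to everyone
        send(address, reminder=False)
    wait()
    responses = list(read_unread())           # first round of replies
    responded = {sender for sender, _ in responses}
    for address in addresses:                 # remind those still missing
        if address not in responded:
            send(address, reminder=True)
    wait()
    responses += read_unread()                # second round of replies
    # mark off: a reply containing 'Yes' means no manual application
    manual = [(s, b) for s, b in responses if "Yes" not in b]
    return responses, manual
```

In the actual implementations, each of these steps is a built-in activity or action node rather than hand-written code; the sketch only fixes the order of operations.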

Figure 2: The typical process being implemented

4.2 Instantiation artefacts

This section describes the implementation of the typical process within each RPA-tool. The section is divided into three parts, each of which describes the implementation within one of the RPA-tools.

4.2.1 UiPath

UiPath supports both local and cloud-based implementation. For this research, the process was implemented and run locally. No additional set-up other than installing the software itself was needed before implementing the process.

The implementation process began by creating the main process, in which all parts of the process were to be implemented. The main process, containing all subprocesses, is illustrated in Figure 3.

Figure 3: UiPath: Main process

A built-in function for reading a comma-separated values file (CSV-file) was used to collect and save all e-mail addresses in order to allow the automation of sending the e-mails to multiple recipients. The e-mail addresses were extracted from a CSV-file and saved as a DataTable-variable. Within the main sequence, a for each-loop was used to loop through the DataTable-variable and send an e-mail, containing a CSV-file, to each e-mail address. The e-mail was sent by using the built-in function for sending e-mails through the Simple Mail Transfer Protocol (SMTP). Other options for sending e-mails are Exchange, IBM Notes, Outlook and POP3. The sending of e-mails is illustrated in Figure 4.

Figure 4: UiPath: Send email to each address
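For comparison, composing an e-mail with a CSV attachment can be sketched with Python's standard email library. This is an illustrative analogue of the 'Send SMTP Mail Message' activity, not UiPath code; the sender address, subject and body text are invented.

```python
from email.message import EmailMessage
from pathlib import Path

def build_mail(sender, recipient, csv_path):
    """Compose an e-mail with a CSV file attached, analogous to the
    SMTP send step of the implementation."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Please fill in the attached spreadsheet"  # invented
    msg.set_content("Reply 'Yes' if no manual application is needed.")
    msg.add_attachment(Path(csv_path).read_bytes(),
                       maintype="text", subtype="csv",
                       filename=Path(csv_path).name)
    return msg
```

Actual dispatch would use something like `smtplib.SMTP(host).send_message(msg)`; host and credentials are environment-specific and therefore omitted.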

In order to read e-mails, the built-in function 'Get Internet Message Access Protocol (IMAP) Mail Messages' was used. Settings were selected for reading only unread messages and marking messages as read. The e-mails are saved in a universal variable that can later be used by other processes within the main process. A sequence was implemented to encapsulate the process of checking the contents of the e-mail responses. Within the sequence, a for each-loop was used to loop through all the received e-mails. An if-statement was used to check if the content of the e-mail contained the word 'Yes', which is used to mark off that no manual application is needed. The e-mail contents were retrieved from the universal variable containing all e-mail responses, and the contents were saved by adding them as a DataRow in a DataTable-variable. The e-mails with the response 'Yes' were saved in a CSV-file containing the senders' e-mail addresses. An else-statement was used to save all other e-mails, which need manual application. The e-mails with a response other than 'Yes' were saved in a CSV-file containing the senders' e-mail addresses and the message contents. The sequence results in two CSV-files: (1) the first containing the e-mail addresses of the responders who do not need manual application, and (2) the second containing the e-mail addresses and e-mail contents of the responders who need manual application. The process of reading responses and marking off based on response is illustrated in Figure 5.

Figure 5: UiPath: Read Responses and Mark off based on response
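The if/else mark-off logic of this sequence can be sketched in plain Python. The sketch assumes responses arrive as (sender, body) pairs; the file names are illustrative, not part of the UiPath implementation.

```python
import csv

def mark_off_responses(responses, confirmed_path, manual_path):
    """Split (sender, body) replies into two CSV-files: 'Yes' replies
    (address only) and replies needing manual application (address and
    message contents), mirroring the if/else of the UiPath sequence."""
    with open(confirmed_path, "w", newline="") as confirmed, \
         open(manual_path, "w", newline="") as manual:
        confirmed_writer = csv.writer(confirmed)
        manual_writer = csv.writer(manual)
        for sender, body in responses:
            if "Yes" in body:                    # no manual application
                confirmed_writer.writerow([sender])
            else:                                # needs manual application
                manual_writer.writerow([sender, body])
```

As in the UiPath sequence, any reply not containing 'Yes' ends up in the manual-application file together with its message contents.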

In order to send reminders, a DataTable-variable containing the e-mail addresses of those who had not responded was created. The e-mails were stored in the DataTable by looping through each e-mail in the CSV-file containing all e-mail addresses and all received e-mails (Figure 6). Within the for each-loop, a second for each-loop was used to check if the e-mail address existed within the DataTable of responses (Figure 7). A boolean-variable, by default set to 'False', was created to indicate whether or not the e-mail was found. If the e-mail was found in the DataTable of responses, the boolean was set to 'True'. When the inner for each-loop had concluded, a new DataTable-variable was created to store those who had not responded. If the boolean-variable was still set to 'False', the e-mail address was stored in the new DataTable. After looping through all e-mails, the DataTable containing all e-mails that need reminders was saved in a CSV-file.

Figure 6: UiPath: Check who did not respond: first for each-loop

Figure 7: UiPath: Check who did not respond: second for each-loop
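The nested-loop logic of Figures 6 and 7 can be sketched as follows; this is an illustrative Python equivalent of the two for each-loops and the boolean flag, not UiPath code.

```python
def find_non_responders(all_addresses, responded_addresses):
    """Mirror of the two nested for each-loops: for each address, a
    boolean flag starts as False and flips to True if a matching
    response is found; addresses whose flag stays False need reminders."""
    reminders = []
    for address in all_addresses:
        found = False                        # default 'False', as in the tool
        for responder in responded_addresses:
            if responder == address:
                found = True
        if not found:
            reminders.append(address)
    return reminders
```

A set difference (`[a for a in all_addresses if a not in set(responded_addresses)]`) computes the same result in one step; the nested-loop form above simply matches the activity structure available in the tool.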

A new sequence was created to handle the sending of reminders (Figure 8). The CSV-file containing the e-mail addresses of those who had not responded was stored in a DataTable-variable. A for each-loop was used to loop through all e-mail addresses and send e-mail reminders through the built-in 'Send SMTP Mail Message' function.

Figure 8: UiPath: Send reminders

Finally, time delays were added between the sending and reading of e-mails. The implemented time delays can be seen in Figure 3.

4.2.2 Blue Prism

When testing Blue Prism, the software was installed locally. The software is accompanied by an instance of an SQL database, which Blue Prism requires in order to store data; a local SQL database is thus needed to access the RPA-tool itself. The user interface is based on placing nodes and arrows for all sub-processes and the connections between them. Even though the interface with its built-in functions is easy to understand, further functions, installed through packages called Visual Business Objects (VBO), are required for functionality such as interacting with Excel sheets and sending or reading e-mails. These packages are easy to install and are found within the locally installed folder structure.

The implementation process began by creating a new 'Object' and editing the object through the Object Studio interface. Initially, there are 'Start'- and 'End'-nodes to which the process was connected. First, a VBO for handling Excel sheets was installed, which included the function for reading and storing the contents of an Excel sheet. The process for reading and storing an

Excel sheet is implemented by creating an action node which creates an instance with a 'handler'. The handler is then used to implement the action 'Open Workbook', which reads and stores the Excel sheet in a data variable. An action called 'Get Worksheet' is then implemented; this action extracts the data from the stored Excel sheet and saves it in a 'Collection'. Collections can be seen as generic lists, and are the variables used to store any data within Blue Prism. The data extracted from the Excel sheet is a collection of all e-mail addresses. The process of reading and storing the data from an Excel sheet can be seen in Figure 9. To the left are the actions performed in the process; to the right are the data items which will hold the data when the process is run.

Figure 9: Blue Prism: Extract and Store Excel sheet

After storing all e-mail addresses, a loop was implemented. When implementing a loop in Blue Prism, a collection to be looped through is selected. For sending e-mails, a VBO for handling e-mails through Outlook was installed. An action for sending e-mails was implemented within the loop. The action extracts the e-mail address from the current row of the collection being looped through and sends an e-mail to each address. The structure used in the ’Send Email’ action is illustrated in Figure 10.
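Blue Prism performs the sending through an Outlook VBO action; as a language-neutral illustration of the loop’s logic, the same ’one e-mail per collection row’ pattern can be sketched with Python’s standard email library. The addresses, sender, subject and SMTP details are all assumptions for illustration, and the actual send is left commented out:

```python
from email.message import EmailMessage

# Hypothetical sketch of the 'loop over the collection, send one e-mail per row'
# logic. Blue Prism does this via an Outlook VBO, not via SMTP as sketched here.
collection = [{"EmailAddress": "anna@example.com"},
              {"EmailAddress": "bjorn@example.com"}]

def build_message(to_address):
    msg = EmailMessage()
    msg["To"] = to_address                       # extracted from the current row
    msg["From"] = "survey@example.com"           # assumed sender address
    msg["Subject"] = "Survey"
    msg.set_content("Please reply 'Yes' if the information is correct.")
    return msg

messages = [build_message(row["EmailAddress"]) for row in collection]

# Actual delivery would use an SMTP server (details assumed), e.g.:
# import smtplib
# with smtplib.SMTP("smtp.example.com", 587) as server:
#     for msg in messages:
#         server.send_message(msg)

print(len(messages), messages[0]["To"])          # -> 2 anna@example.com
```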


Figure 10: Blue Prism: Send Email action

When the e-mails are sent, an action for reading e-mails was implemented. The action uses the inbox of a locally logged-in Outlook account and stores the e-mails in a collection. The action can be set to read e-mails that are read, unread or both. The collection of e-mails is split into columns containing, among others, ’SenderEmailAddress’ and ’Body’. The properties of the action can be seen in Figure 11.


Figure 11: Blue Prism: Read and Store Emails action

After creating a collection of all received e-mails, the collection was copied to another collection. The new collection was looped through in order to create a collection of the e-mail addresses which need manual application. Within the loop, a decision node was implemented with a condition checking the contents of the e-mail body. When the body contained the string ’Yes’, the e-mail was removed from the collection; this resulted in a collection containing only the responders who will need manual application. The creation of a new collection and the loop removing specific rows are illustrated in Figure 12. The properties of the decision node with the condition are illustrated in Figure 13.
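The decision-node filtering described above amounts to dropping every row whose body contains ’Yes’. A hypothetical Python sketch of the same logic (column names and sample bodies are assumed for illustration):

```python
# Hypothetical sketch of the decision-node loop: remove rows whose Body
# contains 'Yes', keeping only responders who need manual application.
received = [
    {"SenderEmailAddress": "anna@example.com",    "Body": "Yes"},
    {"SenderEmailAddress": "bjorn@example.com",   "Body": "No, please call me"},
    {"SenderEmailAddress": "cecilia@example.com", "Body": "Yes, all correct"},
]

needs_manual = [row for row in received if "Yes" not in row["Body"]]

print([row["SenderEmailAddress"] for row in needs_manual])
# -> ['bjorn@example.com']
```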

Figure 12: Blue Prism: Loop to Remove certain rows


Figure 13: Blue Prism: Properties of the Decision Node

The collection of responders who need manual application is then stored in a separate Excel sheet. Before writing the contents of the collection to an Excel sheet, actions were implemented to remove unnecessary columns. Blue Prism does not contain any built-in functions for writing only specific columns of a collection; therefore, actions to remove columns were implemented. Separate actions for removing each unnecessary column were implemented, as seen in Figure 14. The columns left in the collection were the e-mail address, the sender name and the body of the e-mail.


Figure 14: Blue Prism: Remove Columns from Collection

Actions for creating a new instance, opening a new Workbook (representing a new Excel sheet within Blue Prism) and writing the contents of the collection to the Workbook were implemented through the VBO for handling Excel sheets. An overview of the actions can be seen in Figure 15, and the properties for writing the collection to the Workbook can be seen in Figure 16.


Figure 15: Blue Prism: Open Workbook and Write from Collection

Figure 16: Blue Prism: Action: Write Collection to Workbook


After compiling the Excel sheet of responders who need manual application, actions for sending the reminders were implemented. First, a loop going through the collection of all e-mail addresses was created; within the loop, a second loop was created, going through all received e-mails. In the inner loop, a decision node was implemented, comparing the e-mail addresses of the two collections. When the e-mail address of the outer loop matched an e-mail address of the inner loop, the row was removed from the collection of all e-mail addresses; this resulted in a collection of only the e-mail addresses which had not responded. The two loops are illustrated in Figure 17. The properties of the decision node are illustrated in Figure 18.
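The nested loops above effectively compute a set difference: every address that appears among the received e-mails is removed from the copy of the full address collection. A hypothetical Python sketch of the removal variant described for Blue Prism (sample addresses are assumed):

```python
# Hypothetical sketch of the two nested loops: remove every address that
# matches a received e-mail, leaving only the addresses that did not respond.
all_addresses = ["anna@example.com", "bjorn@example.com", "cecilia@example.com"]
received_from = ["anna@example.com", "cecilia@example.com"]

non_responders = list(all_addresses)      # work on a copy, as in the thesis
for sender in received_from:              # inner comparison, flattened here
    if sender in non_responders:
        non_responders.remove(sender)     # the 'remove row on match' step

print(non_responders)                     # -> ['bjorn@example.com']
```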

Figure 17: Blue Prism: Remove rows with common e-mail address


Figure 18: Blue Prism: Compare E-mail address column of two Collections

Finally, loops for sending the reminders were implemented. The collection containing the e-mail addresses which had not yet responded was looped through, and within the loop, an action for sending e-mails was implemented; this was implemented in the same way as the initial sending of the e-mails (Figure 9 and Figure 10). When the reminders had been sent, additional actions for reading and storing the received e-mails were implemented, in the same way as previously (Figure 11). Lastly, time delays were implemented between the sending and reading of e-mails.

4.2.3 Automation Anywhere

Automation Anywhere is a cloud-based platform, and the process was implemented through a web interface. Before implementing the process, certain files had to be downloaded in order to perform specific tasks locally.

The implementation process began by using a built-in function for opening a CSV-file through an ’Action’. The data from the CSV-file was iterated through by a loop. Within the loop, an action for sending e-mails via Outlook was implemented; e-mails were sent to each e-mail address in the CSV-file. After the e-mails are sent, an ’Action’ for starting an ’E-mail session’ was implemented, obtaining e-mails via IMAP. When specifying the e-mail account, built-in functionality for securely storing the login credentials was used. The process of sending and reading the e-mails is illustrated in Figure 19. The properties of the E-mail connection action are illustrated in Figure 20.


Figure 19: Automation Anywhere: Send e-mails and read e-mails


Figure 20: Automation Anywhere: Setup for ’Email Connect’

After implementing the sending and reading of e-mails, a loop was implemented going through all e-mails from the e-mail session, specified to read only unread messages. The e-mails are stored as a dictionary of strings. The built-in ’dictionary GET’-function was used to assign the e-mail messages to a generic variable called ’emailBody’, through the key ’emailMessage’. A second dictionary was created and assigned a generic variable with the standard value ’Yes’. Then, in an if-block, ’emailBody’ is compared with the generic variable containing ’Yes’, as the software does not explicitly support functionality to compare strings. If the body of the e-mail equals ’Yes’, it is stored in a CSV-file used for storing all responses. If the e-mail body does not equal ’Yes’, the e-mail is stored in a separate CSV-file containing responses which need manual application, as well as in the CSV-file with all responses. The e-mails are stored using the built-in ’Log to file’-function, which is a generic function for adding data to files. The ’Log to file’-function is used because there is no built-in functionality for appending data to a CSV-file. The function has some drawbacks; because the function is generic, the contents need to be manually formatted to conform to the format of the CSV-file. The loop is illustrated in Figure 21.
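The drawback of the generic ’Log to file’-function is that the implementer must format each CSV line by hand. A hypothetical Python sketch of that classification-and-append logic (keys, sample message and quoting scheme are assumptions; in-memory buffers stand in for the two CSV-files):

```python
import io

# Hypothetical sketch of Automation Anywhere's generic 'Log to file' append:
# the caller must format the CSV line manually, unlike a real CSV writer.
def log_to_file(buffer, text):
    buffer.write(text + "\n")          # generic append; no CSV awareness

all_responses = io.StringIO()          # stands in for the 'all responses' CSV
manual_needed = io.StringIO()          # stands in for the 'manual application' CSV

email = {"emailFrom": "bjorn@example.com", "emailMessage": "No, wrong address"}
body = email["emailMessage"]           # the 'dictionary GET' step

# Manual formatting: quoting and escaping are the implementer's responsibility.
line = '{},"{}"'.format(email["emailFrom"], body)
log_to_file(all_responses, line)       # every response is logged here
if "Yes" not in body:                  # the if-block comparison against 'Yes'
    log_to_file(manual_needed, line)   # non-'Yes' responses also logged here

print(manual_needed.getvalue().strip())
```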


Figure 21: Automation Anywhere: Read emails and mark off

The CSV-file of the received e-mails is then opened to loop through all e-mail addresses which have responded. The loop goes through all rows of the CSV-file and assigns the e-mail address to a ’Record’-variable. Within the loop, a second loop is implemented which goes through all rows of the CSV-file containing all e-mail addresses and stores them in a ’Record’-variable. In the inner loop, an if-statement is implemented which compares the e-mail address of the received e-mail with all e-mail addresses. If the e-mail address is found in both records, a boolean variable called ’IfFound’ is created and set to the value ’True’, and a ’Break’ is implemented to exit the loop. In the outer loop, if the ’IfFound’-variable is set to ’False’ (meaning that the e-mail address has not responded), the e-mail address is stored in a CSV-file using the built-in ’Log to file’-function. Finally, the ’IfFound’-variable is set to ’False’ before iterating through the loop again. The two for each-loops for storing all e-mail addresses which have not responded are illustrated in Figure 22 and Figure 23.
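Automation Anywhere’s variant uses a flag-and-break pattern rather than removing rows. A hypothetical Python sketch of the ’IfFound’ control flow described above (sample addresses are assumed, and an in-memory list stands in for the ’Log to file’ step):

```python
# Hypothetical sketch of the 'IfFound' flag pattern from the two Record loops:
# set the flag on a match, break out of the inner loop, and log the address
# only if it was never found among the responses.
all_addresses = ["anna@example.com", "bjorn@example.com", "cecilia@example.com"]
responded = ["cecilia@example.com"]

to_remind = []
for record in all_addresses:           # outer loop over all addresses
    if_found = False                   # reset before each outer iteration
    for response in responded:         # inner loop over received responses
        if record == response:
            if_found = True
            break                      # the 'Break' exiting the inner loop
    if not if_found:                   # the address never responded
        to_remind.append(record)       # stands in for 'Log to file'

print(to_remind)                       # -> ['anna@example.com', 'bjorn@example.com']
```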


Figure 22: Automation Anywhere: Check whom to remind: outer loop


Figure 23: Automation Anywhere: Check whom to remind: inner loop


After storing the e-mail addresses which have not responded, the CSV-file is opened and looped through to send reminders via Outlook. The contents of the e-mail inbox are then obtained in the same way as initially, via IMAP. The e-mail addresses of the responses are once again compared with the contents of the CSV-file containing all e-mail addresses in order to store new responses. Lastly, time delays were added between the sending and reading of e-mails.

5 Evaluative model

In this section, the creation of the evaluative model is described, based on the ISO/IEC 25010 standard, the official documentation of the RPA-tools and the implementations within each RPA-tool. The creation of the evaluative model is connected to the concept of digital resources. Finally, the evaluative questions created for the evaluative model are summarised.

5.1 Creating the evaluative model

In order to reach an evaluative model, the process implementations within the three tools, along with official documentation, were analysed to connect the properties of RPA-tools with the ISO/IEC 25010 quality model for product quality. Based on the experience of implementing the process within the RPA-tools, as well as on connecting the implementations and documentation with the ISO/IEC 25010 quality model for product quality, the evaluative questions making up the evaluative model were created.

The first aspect of the ISO/IEC 25010 quality model for product quality is functional suitability. Functional suitability is not deemed relevant in this research because it deals with suitability within an organisational context.

The second aspect of the ISO/IEC 25010 quality model for product quality is performance efficiency. As mentioned previously in section 3.2, performance efficiency measurements are rarely fully applicable according to Estdale and Georgiadou (2018). Because performance efficiency measurements are platform- or environment-dependent, they do not fit within a general evaluative model; therefore, performance efficiency is left out of the evaluation in this research.

5.1.1 Compatibility

RPA-solutions act on top of existing systems, in the same way as a user would. Because RPA-solutions run in the background and act as a user, co-existence with software and systems will inherently work in most cases. The most relevant aspect when looking at co-existence with RPA-tools is thereby whether other sources can also manipulate the files or software which are manipulated by the implemented processes, which could lead to inaccurate data. In order to make sure that the files or software are accurate for the process, the RPA-solution can be run in a contained environment, such as a computer or web server set up for the sole purpose of running the automated processes. Measuring the co-existence characteristic of RPA-tools is dependent on how the process is implemented and in which environment the process runs. Because co-existence is dependent on context, and because RPA-solutions can be run within a contained environment to solve co-existence issues, it is not of high priority when evaluating RPA-tools.


Interoperability is measured by the possibilities of interacting with other software or systems. As the purpose of RPA-solutions is to interact with files and software like a user would, functionality for interacting with other software is of high importance. The possibilities for interaction are measured by which integrations exist for the RPA-tool. It is also relevant whether additional software integrations can be created. Therefore, two questions are posed: (1) ’which integrations are supported by the RPA-tool?’ and (2) ’can additional software integrations be created?’.

5.1.2 Usability

As appropriate recognisability does not technically evaluate the software itself, and because RPA-tools always have the shared purpose of automating repetitive processes, the only question of relevance is whether it is clear for which type of user the RPA-tool is designed. If the RPA-tool is meant for developers, it should be clear that the tool supports the writing of code. If the RPA-tool is meant for any user, regardless of programming experience, it should be clear that the RPA-tool supports implementation without writing code. The question asked in this research regarding appropriate recognisability is, therefore: ’are the programming requirements and programming capabilities of the tool clearly presented?’

The learnability capabilities of RPA-tools are projected in guiding documentation (e.g. ’how-to’-guides), official discussion forums, and official documentation on the usage and functionality of the RPA-tool. The obvious question regarding learnability in RPA-tools is whether or not such documentation exists for each product, but because this is a ’yes or no’-question, the answer does not provide sufficient depth from an analytical perspective. The more interesting question is to which extent such documentation is provided.

Regarding operability in the case of the implemented RPA-tools, the processes are run and monitored through a central program, either locally or in a web interface. The questions regarding the operability of RPA-tools are: (1) ’are all processes handled through a central program on a server, or do you have to keep track of all processes, including monitoring?’, and (2) ’are processes run locally or on a web server?’.

User error protection is considered not to need specific questions directed at RPA-tools; instead, the questions asked are those used for evaluating user error protection in any software. RPA-tools are designed for drag-and-drop functionality, and some are designed for non-developers, which makes user error protection a high priority. As an example of handling user errors, the program should not allow the user to use a PNG file as input to a spreadsheet function, which could cause the RPA-robot to crash. The question posed is ’how are user errors handled?’.

Measuring user interface aesthetics is a highly subjective matter, which may be of interest to some users. However, for implementers who consider user interface aesthetics important, there are other, already established frameworks for measuring it. For this research, user interface aesthetics is of low priority because measuring the user interface aesthetics of RPA-tools does not differ from measuring those of other software.

For RPA-tools, questions regarding accessibility are similar to the questions regarding appropriate recognisability, but focus on the actual implementation instead of whether or not the required programming experience is communicated clearly. The questions posed are (1) whether or not programming experience is needed, and (2) to which extent programming experience is needed (i.e. does the implementer have to know a specific programming language, or does the implementer only need to have basic knowledge of how programming works).


5.1.3 Reliability

As RPA-tools are used to perform processes which a user would otherwise perform, the maturity of RPA-tools is measured through the ability to implement such processes. All RPA-tools can inherently automate processes, but the posed question of interest is whether there are any built-in features which do not yet fully work or could be improved.

Measuring availability depends on whether the process is run locally or through a cloud service. If the process is run locally, availability is of no concern as long as the local computer or server is kept running. If the process is run through a cloud service, the cloud service must be available at all times. Generally, the availability of cloud services is of no concern, as significant downtime of current service providers is unlikely; therefore, it is not covered in the evaluation of the RPA-tools.

The following question is asked to measure the fault tolerance of RPA-tools: ’if an error occurs, how does the RPA-tool handle it?’. Measuring the recoverability of the RPA-tools is covered by the same evaluative question as measuring fault tolerance. There is a distinct difference between recoverability and fault tolerance; however, in this case, they can be covered by the same question as long as the answer covers data recovery.

5.1.4 Security

Because RPA-tools operate like a user, measuring security is mainly a question of how the RPA-tool handles user credentials and other sensitive personal information; therefore, the only sub-category of relevance is integrity. The other sub-categories deal with being able to trace and hold users or other software accountable for actions taken; this is of low relevance for RPA-tools because the only entities which interact with the product are the implementers of the RPA-solution. If the process is run through a cloud service, the question of storing credentials and personal information in the cloud is of interest. The question for security is ’are there safety concerns regarding login credentials and the storing of personal information? For example, are passwords saved directly in the process script, or can they be stored securely?’

5.1.5 Maintainability

RPA-tools are a single module acting on top of existing systems. RPA-tools interact with the system like a user would, or through integrations; modularity with RPA-tools is reached through integrations with the software making up other parts of the same system. The conclusion is that RPA-tools should not be evaluated based on modularity; instead, modularity should be measured for the system which integrates the RPA-solution. Evaluating modularity for RPA-tools is concluded to be non-applicable.

For RPA-tools, reusability is inherently significant through the ability to reuse the RPA-tool in order to automate different processes. Reusability is also applicable because the automated processes, or parts of them, can be used in building other processes. While reusability is highly applicable to RPA-tools, measuring differing degrees of reusability is difficult because all RPA-tools can inherently create multiple processes and transfer parts of implemented processes to other processes. Reusing the RPA-tool and created assets is a core function of RPA-tools; reusability can, therefore, be considered to be inherently of the highest degree in any RPA-tool. Measuring reusability is, therefore, concluded to be irrelevant when evaluating RPA-tools.


For RPA-tools, the questions of analysability do not significantly differ from the analysability of other software or systems; the focus with RPA-tools is on the ability to analyse changes and errors in the processes. As implementation within RPA-tools is designed like a flowchart containing each event within the implemented process, any changes within the process are apparent within the flowchart itself; therefore, the ability to assess the impact caused by intended changes is concluded to be of a high degree in any RPA-tool using the flowchart representation of processes. The evaluative question asked is therefore based solely on error cause presentation: ’when errors occur, how are the error causes displayed and explained to the user?’.

The RPA-tools themselves cannot be modified; for RPA-tools, the relevant aspect is, therefore, the modifiability of the products created using the RPA-tools, i.e. the automated processes. Because all RPA-tools are designed for modifying processes, modifiability can inherently be considered to be of the highest degree in any RPA-tool; measuring modifiability is therefore concluded to be irrelevant in RPA-tool evaluation.

For RPA-tools, testing is performed in the same way as with a programming language. In RPA-tools, testing can be performed by running and debugging processes to see whether the actions within the process perform correctly. When evaluating the testability of RPA-tools, the questions of interest are (1) ’how much information is presented within the RPA-tool while debugging a process?’, and (2) ’which information is presented within the RPA-tool while debugging a process?’.

5.1.6 Portability

RPA-tools inherently adapt well to change, because the tools interact within a system as a user would. Processes which interact with other software simply need to be modified in order to interact with different or changed software. Any process can potentially be adapted to a different operational environment by changing the properties of the process to match the new environment. The most relevant point when adapting an RPA-solution to different or evolving software or operational environments is which integrations are supported, as adapting the process might require support for other integrations. Conclusively, no specific evaluative questions are asked regarding the adaptability of RPA-tools, as the questions regarding interoperability already cover integrations.

As RPA-tools are not dependent on other software or systems, other than through the processes in the same way as a user, all tools can easily be installed or uninstalled separately from the systems in which they are used. Installability is, therefore, concluded to be irrelevant when evaluating RPA-tools.

For RPA-tools, replaceability would entail the replacing of other RPA-tools, human users, or other software solutions such as Business Process Management (BPM) systems. Replacing a previous solution with an RPA-solution will always work if the RPA-tool can automate the process; therefore, replaceability is concluded to be insignificant when evaluating RPA-tools, apart from looking at the capabilities of the RPA-tool. For example, if the RPA-tool requires programming experience, the tool will, by default, have a high degree of replaceability as long as the implementer has the required programming experience. Measuring required programming experience is already covered by the questions regarding accessibility.


When acquiring new software, an important factor is the cost of the product. For the researched RPA-tools, pricing information is not readily available; the pricing of each RPA-tool is obtained on a case-by-case basis for each buyer. Because cost is concluded to be a relevant factor when evaluating the RPA-tools, it is included in the evaluative model; however, it is not evaluated in this research. The evaluative question is ’what does the RPA-tool cost to acquire?’.

5.2 Evaluative questions summarised

Conclusively, the following evaluative questions comprise the evaluative model for evaluating RPA-tools:

• Compatibility:

– Which integrations are supported by the RPA-tool?

– Can additional software integrations be created?

• Usability:

– Are the programming requirements and programming capabilities of the tool clearly presented?

– To which extent is guiding documentation provided (e.g. how-to-guides)?

– Are there official discussion forums, and if so, to which extent do they provide helpful discussion?

– To which extent is official documentation on usage and functionality provided?

– Are all processes handled through a central program on a server, or do you have to keep track of all processes, including monitoring?

– Are processes run locally or on a web server?

– How are user errors handled?

– What damage can be caused if such a user error is allowed?

– Is programming experience needed?

– If programming experience is needed, to which extent? (i.e. does the implementer have to know a specific programming language, or does the implementer only need to have basic knowledge of how programming works?)

• Reliability:

– Are there any built-in features which do not yet fully work or could be improved?

– If an error occurs, how does the RPA-tool handle it? (specify the handling of data recovery)

• Security:

– Are there safety concerns regarding login credentials and the storing of personal information? For example, are passwords saved directly in the process script, or can they be stored securely?


• Maintainability:

– When errors occur, how are the error causes displayed and explained to the user?

– How much information is presented within the RPA-tool while debugging a pro-cess?

– Which information is presented within the RPA-tool while debugging a process?

• Portability:

– Which integrations are supported by the RPA-tool?

– Is programming experience needed, and if so, to which extent, and which capabilities are there for coding?

• Cost:

– What does the RPA-tool cost to acquire?

6 Evaluation

In this section, the artefacts are evaluated. First, each of the three RPA-tools is evaluated based on the evaluative model created in section 5, and the results are summarised in tables. Second, the evaluative model is analysed and evaluated based on the concept of digital resources. Finally, the evaluative model is evaluated based on the criteria for evaluating design science research artefacts.

6.1 Evaluation of the RPA-instantiations

Using the evaluative model derived and explained in section 5, the three RPA-tools are evaluated based on data generated during the three implementations and on information in the official documentation.

6.1.1 UiPath

In this section, UiPath is evaluated based on the evaluative model created in section 5, which consists of compatibility, usability, reliability, security, maintainability, portability and cost.

Compatibility
Compatibility focuses on the software integrations of the RPA-tool. UiPath has official documentation on its supported integrations, which are as follows: BPM, Digitization, Google Suite, Image Analysis, IoT, Language integration, Machine Learning, Microsoft Office 365, Microsoft Teams, Slack, Security, Salesforce, Messaging, Amazon Textract, Microsoft Azure Form Recognizer, ServiceNow and Microsoft Dynamics 365. (UiPath n.d.)

UiPath also supports building integrations through the UiPath platform by building ’custom activities’. Custom activities are coded in Visual Studio, in the programming language C#. UiPath provides a coding base (boilerplate) for building custom activities. (ibid.)


Usability
UiPath supports code-free development, with the possibility of writing code. Code can be implemented through the ’Invoke code’-activity, which invokes Visual Basic .NET or C# .NET code, passing a list of arguments. The activity also returns a list of arguments in the form of .NET code. UiPath supports an interface for completely code-free development while still having the capability for coding; therefore, UiPath is concluded to be meant for any type of user with a basic understanding of programming, with additional capabilities for developers. The programming requirements and additional capabilities for developers are clearly documented in the UiPath documentation. (UiPath n.d.)

UiPath has an extensive official discussion forum. The discussion forums are extensively used and can be observed or interacted with to find large quantities of valuable guiding information. The official UiPath documentation has an extensive library explaining the functionality of the software and how it is used; for example, the page of the for each-activity explains all properties of the activity and which inputs are needed. There is an official web portal for learning the tool, containing online courses with extensive guides on how implementation within the tool is performed. (ibid.)

The processes implemented with UiPath can be run either locally or through a program on a web server. While the processes can be run in the background, they still need to be monitored. UiPath supports monitoring through the ’Orchestrator’, where all robots and processes are kept track of, with metrics of how the robots are performing. The metrics clearly illustrate whether the robots are performing the processes successfully or whether errors occur. (ibid.)

User errors are handled, as in a programming environment, by throwing exceptions to disallow users from using incorrect or faulty inputs. As exceptions are thrown, no significant damage can be caused by user errors. (ibid.)

Reliability
The maturity of UiPath is concluded to be of a high degree, as all RPA-tool functionality is present without limitations. The high maturity of UiPath also stems from it being highly proven and tested automation software.

UiPath can be set to continue the workflow when errors occur, meaning that no data generated or collected during the process runtime is lost. As in a programming environment, the handling of exceptions can be specified through the use of try-catch blocks. (ibid.)

Security
Credentials and passwords can be stored in UiPath using the ’Orchestrator’. UiPath uses SecureString, which deletes the string after it is no longer needed. SecureString security is limited because intruders can still retrieve the string during runtime; SecureString is not encrypted or secured in any other way. However, integrations can be used by the RPA-developer to store passwords and other credentials (e.g. the CyberArk integration). (ibid.)

Maintainability
UiPath supports intuitive debugging functionality. When exceptions are caught, a message describing the exception is presented to the user. The error messages clearly describe what caused the exception to be caught. The error messages are presented with a description of what has occurred, including: (1) what has thrown the exception, (2) the source of the exception and (3) an exception type. (ibid.)


Portability
As mentioned in section 5.1.6, portability is evaluated based on integrations, which are covered under compatibility, and on required programming experience, which is covered under usability.

Cost
The pricing information on UiPath is not readily available. Product cost is obtained on a case-by-case basis for each acquirer.

6.1.2 Blue Prism

In this section, Blue Prism is evaluated based on the evaluative model created in section 5, which consists of compatibility, usability, reliability, security, maintainability, portability and cost.

Compatibility
Blue Prism is heavily focused on the capability of creating integrations with any software. Integrations are built by creating VBOs, which can contain functions coded in the .NET programming languages Visual Basic and C#, or workflows of built-in logical stages within the Blue Prism client. When Blue Prism is installed, a large number of certified VBOs are included; some of these integrations are for Excel, Word and Outlook, with support for SQL Server, OLEDB and Active Directory. Blue Prism also, as of recently, contains integrations for communicating with Web APIs, including RESTful web services. There is no official documentation for every currently certified integration. (Blue Prism n.d.)

Usability
Blue Prism partly supports code-free development, but some programming experience is recommended for any implementation. Code can be implemented through ’code stages’, which are entirely customised actions containing programming code. While some processes can be implemented entirely code-free, the possibilities for entirely code-free implementation are limited. The documentation focuses on the extended possibilities of Blue Prism for developers with experience of writing code in .NET programming languages. Blue Prism is concluded to be suited for developers or users with knowledge of .NET programming languages such as C#. (ibid.)

The official discussion forum for Blue Prism is called the Digital Exchange. It is not extensively used; questions about the tool can be asked there, but the low level of activity means that the potential for relevant answers is limited. Blue Prism provides an extensive library of guiding online courses through a web portal named Blue Prism University. The web portal also contains an official discussion forum for learning the tool, but this part of the forum is not extensively used either. No easily accessible official documentation is to be found on the Blue Prism web platform. During the implementation of the instantiation artefact, third-party discussion forums proved to be more useful than official resources when looking up guiding information. (ibid.)

Processes implemented with Blue Prism can be run locally or on a web server. Robots and processes are tracked through an interface called 'Analytics', where metrics such as active robots, database contents and caught exceptions (error metrics) are displayed. Robots can be scheduled to perform processes in queues or at specific timestamps. The robots can run processes in the background but still need to be monitored through the Analytics interface. (Blue Prism n.d.)

As in a programming environment, user errors are handled by throwing exceptions that disallow incorrect or faulty inputs. Because exceptions are thrown, no significant damage can be caused by user errors. (ibid.)

Reliability
As with UiPath, the maturity of Blue Prism is concluded to be of a high degree, as all RPA-tool functionality is present without limitations. Blue Prism's high maturity also stems from being well proven and tested automation software.

As with UiPath, Blue Prism can be set to continue the workflow when errors occur, which means that no data generated or collected during the process runtime is lost. As in a programming environment, exception handling can be specified through try-catch blocks; in Blue Prism's case, through nodes named 'Recover' and 'Resume'. (ibid.)
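
The Recover/Resume pattern maps directly onto ordinary try-catch semantics. The following sketch is illustrative Python, not Blue Prism code; the function and item names are invented, and it only mirrors the behaviour described above: an exception in one step is recovered from, logged, and the workflow resumes so that data collected so far is preserved.

```python
# Illustrative sketch of the Recover/Resume pattern described above.
# This is NOT Blue Prism code; it only mirrors the try-catch semantics.

def process(item):
    # Hypothetical workflow stage: fails on non-numeric input.
    return int(item) * 2

def run_workflow(items):
    """Process each item; on error, 'recover' and 'resume' so that
    data collected so far is never lost."""
    collected = []
    errors = []
    for item in items:
        try:
            collected.append(process(item))      # normal workflow stage
        except Exception as exc:                 # 'Recover': catch the error
            errors.append((item, str(exc)))      # record the cause
            continue                             # 'Resume': carry on
    return collected, errors

collected, errors = run_workflow(["1", "2", "x", "4"])
# collected == [2, 4, 8]; the failing item "x" is recorded in errors
```

The point of the design is that a single faulty item neither aborts the run nor discards the results already gathered, which is the data-recovery behaviour attributed to all three tools in this section.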

Security
In Blue Prism, credential storage is handled through a built-in system whose purpose is to securely store credentials for logging in to the target applications used by the processes. The credentials are encrypted in the Blue Prism database, which is installed with Blue Prism itself, and permissions can be set for which users, processes, resources and roles get access to them. A built-in Business Object, which can be used within processes, provides actions for using the encrypted credentials. (ibid.)
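
Conceptually, this is a named store whose entries can only be read by holders of a permitted role. The sketch below is illustrative Python, not Blue Prism's implementation; all names are invented, and encryption at rest (handled by the Blue Prism database) is deliberately stubbed out to keep the permission logic visible.

```python
# Conceptual sketch of permission-gated credential storage as described
# above. NOT Blue Prism's actual implementation; names are invented, and
# real encryption at rest is omitted for clarity.

class CredentialStore:
    def __init__(self):
        self._store = {}   # name -> (secret, allowed_roles)

    def put(self, name, secret, allowed_roles):
        # In the real system the secret would be encrypted at rest.
        self._store[name] = (secret, set(allowed_roles))

    def get(self, name, role):
        secret, allowed = self._store[name]
        if role not in allowed:
            raise PermissionError(f"role '{role}' may not read '{name}'")
        return secret

store = CredentialStore()
store.put("erp-login", "s3cret", allowed_roles={"process-runner"})
store.get("erp-login", role="process-runner")   # permitted role: returns the secret
```

A request from a role outside `allowed_roles` raises `PermissionError`, which corresponds to the per-user, per-process and per-role access control the thesis describes.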

Maintainability
Like UiPath, Blue Prism supports intuitive debugging functionality. The user is alerted with a description of the cause of each exception. Error messages are presented in a pop-up window showing: (1) the type of exception, (2) a message describing what has happened and (3) the source of the exception. (ibid.)

Portability
As mentioned in section 5.1.6, portability is evaluated based on integrations, which are covered under compatibility, and required programming experience, which is covered under usability.

Cost
Pricing information for Blue Prism is not readily available; product cost is quoted on a case-by-case basis for each acquirer.

6.1.3 Automation Anywhere

In this section, Automation Anywhere is evaluated based on the evaluative model created in section 5, which consists of compatibility, usability, reliability, security, maintainability and portability.

Compatibility
Automation Anywhere supports integration through the built-in 'App Integration' command. The integrations supported through the command are: browsers, such as Microsoft Internet Explorer and Mozilla Firefox, DOS Command Prompt, Java Applet, Java Application, Telnet Unix Shell and Windows Application. The documentation claims that there are other built-in integrations, but these are not specified. There are also third-party integrations that let other software products interact with Automation Anywhere; the documented ones are RPA Bots for Excel, RPA Bots for G Suite, RPA Bots for Salesforce and Human-Bot Collaboration. No information is to be found regarding the possibility of creating custom integrations; it is therefore assumed that such capabilities do not exist. (Automation Anywhere n.d.)

Usability
The Automation Anywhere documentation clearly states that writing code is not supported, apart from running .dll files written in other programming environments. Automation Anywhere is usable for any user but may be found limiting from a developer standpoint. As a developer might prefer the extended code-writing capabilities of other RPA-tools, Automation Anywhere is concluded to be meant for users and businesses without programming experience, or even basic knowledge of programming. The limited programming possibilities are documented in the official Automation Anywhere documentation. (ibid.)

Automation Anywhere has an official discussion forum, but it is not used extensively, which means that limited information is to be found there. As a side note, most discussion on the forum is in Japanese and is automatically translated to English. Automation Anywhere has extensive official documentation on how the software works and is used; for example, the page on the 'Loop' command explains all properties of the command and which data inputs are needed. Automation Anywhere also has an official web portal for learning, called Automation Anywhere University, with an extensive library of online courses and training material. (ibid.)

Automation Anywhere bots can be run locally or on a web server but are always controlled through the 'Control Room', a web platform. The robots run in the background but still need to be monitored. Robots and processes, as well as the machine and network running them, are tracked through the monitoring software Nagios. When errors occur, the monitoring software sends e-mail or text message alerts to the administrator, specifying the error. (ibid.)

As in a programming environment, user errors are handled by throwing exceptions that disallow incorrect or faulty inputs. Because exceptions are thrown, no significant damage can be caused by user errors. (ibid.)

Reliability
The maturity of Automation Anywhere is concluded to be fairly high, but with some limitations; for example, the e-mail implementations are not fully functional due to bugs, as the software is, according to forum posts, still in a development stage. No official documentation could confirm this. Because Automation Anywhere is a more recently released RPA-tool than the other market leaders, it is concluded to be less proven and tested and therefore of a lower maturity degree. (Automation Anywhere n.d.)

As with UiPath and Blue Prism, Automation Anywhere can be set to continue the process workflow when errors occur, which means that no data generated or collected during the process runtime is lost. As in a programming environment, exception handling can be specified through try-catch blocks. (ibid.)

Security
Automation Anywhere is heavily focused on providing a secure service, with built-in functions for securely storing credentials and other data, and heavily advertises its 'best-in-class' security. Using the built-in security measures for storage is intuitive for any user. For example, when storing login credentials for an e-mail server as strings, the user is alerted and advised to store the credentials securely. The credentials are stored in a 'Locker', for which permissions can be set to control which users have access. Similar secure storage can be used for any type of data. (ibid.)

Maintainability
While error handling works well within Automation Anywhere, error messages for debugging purposes are limited in the contents of their descriptions. The software can be set to alert the user when exceptions are caught, but the actual messages do not clearly describe what caused the error (drawn from experience during the implementation of the instantiation artefact). However, the software can be set to take snapshots of the screen at the moment an error occurs, which can then be used to understand the cause. Automation Anywhere's error messaging is concluded to be limited, but the screen snapshot functionality provides at least some ability to analyse what might have caused an error. (ibid.)
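
The snapshot-on-error idea can be sketched as a simple wrapper around a process step. This is illustrative Python, not Automation Anywhere functionality; `capture_screen` is a hypothetical stand-in for a real screenshot call, and the step name is invented.

```python
import functools

def snapshot_on_error(capture_screen):
    """Decorator: save a screen snapshot whenever the wrapped step fails,
    so the state at the moment of the error can be inspected later."""
    def decorator(step):
        @functools.wraps(step)
        def wrapper(*args, **kwargs):
            try:
                return step(*args, **kwargs)
            except Exception:
                capture_screen()     # record the screen state for debugging
                raise                # re-raise so the failure is still visible
        return wrapper
    return decorator

snapshots = []

@snapshot_on_error(lambda: snapshots.append("screen.png"))
def fill_form(value):
    # Hypothetical process step that fails on empty input.
    if not value:
        raise ValueError("empty form field")
    return f"submitted {value}"

fill_form("invoice-42")   # succeeds, so no snapshot is taken
```

The snapshot compensates for a terse error message: even if the exception text says little, the captured screen shows what the target application looked like when the step failed.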

Portability
As mentioned in section 5.1.6, portability is evaluated based on integrations, which are covered under compatibility, and required programming experience, which is covered under usability.

Cost
Pricing information for Automation Anywhere is not readily available; product cost is quoted on a case-by-case basis for each acquirer.

6.1.4 Comparative tables of the evaluation

This section summarises the results of the RPA-tool evaluation, presented in tables for comparing the different RPA-tools.

Evaluation of RPA-tools compatibility

Which integrations are supported by the RPA-tool?
• UiPath: BPM, Digitization, Google Suite, Image Analysis, IoT, Language integration, Machine Learning, Microsoft Office 365, Microsoft Teams, Slack, Security, Salesforce, Messaging, Amazon Textract, Microsoft Azure Form Recognizer, ServiceNow, Microsoft Dynamics 365.
• Blue Prism: Through VBOs: Excel, Word, Outlook, SQL Server, OLEDB and Active Directory, among others. Also contains integrations for communicating with web APIs, including RESTful web services.
• Automation Anywhere: Browsers, such as Microsoft Internet Explorer and Mozilla Firefox, DOS Command Prompt, Java Applet, Java Application, Telnet Unix Shell, Windows Application, and others not specified.

Can additional software integrations be created?
• UiPath: Yes.
• Blue Prism: Yes.
• Automation Anywhere: No.

Table 3: Evaluation of RPA-tools compatibility

Evaluation of RPA-tools usability, part 1

Are the programming requirements and programming capabilities of the tool clearly presented?
• UiPath: Clearly meant for any user with some programming knowledge, with extra capabilities for developers.
• Blue Prism: Clearly suited for .NET developers.
• Automation Anywhere: Clearly suited for any user without programming experience; developer capabilities are limited.

To which extent is guiding documentation provided (e.g. how-to guides)?
• UiPath: Official web portal for learning the tool, containing online courses with guides.
• Blue Prism: Large official library of guiding online courses.
• Automation Anywhere: Official web portal for learning, containing courses and certifications.

Are there official discussion forums, and if so, to which extent do they provide helpful discussion?
• UiPath: Extensively used discussion forums.
• Blue Prism: Official forums exist but are not actively used.
• Automation Anywhere: Official discussion forums exist, but with a low level of activity.

To which extent is official documentation on usage and functionality provided?
• UiPath: Extensive official documentation.
• Blue Prism: No easily accessible official documentation to be found.
• Automation Anywhere: Extensive official documentation.

Are all processes handled through a central program on a server, or do you have to keep track of all processes, including monitoring, yourself?
• UiPath: Can be run locally or on a web server. Monitoring is performed through the 'Orchestrator', containing metrics of how the robots are performing.
• Blue Prism: Can be run locally or on a web server. Monitoring is performed through an interface called 'Analytics', where metrics are displayed.
• Automation Anywhere: Can be run locally or on a web server, but always controlled in a web environment. Monitored through enterprise-grade monitoring software called Nagios.

Table 4: Evaluation of RPA-tools usability, part 1

Evaluation of RPA-tools usability, part 2

Are processes run locally or on a web server?
• UiPath: Can be run locally or on a web server.
• Blue Prism: Can be run locally or on a web server.
• Automation Anywhere: Can be run locally or on a web server.

How are user errors handled?
• UiPath: By throwing exceptions to disallow users from using incorrect or faulty inputs.
• Blue Prism: By throwing exceptions to disallow users from using incorrect or faulty inputs.
• Automation Anywhere: By throwing exceptions to disallow users from using incorrect or faulty inputs.

What damage can be caused if such a user error is allowed?
• UiPath: No significant damage, due to catching exceptions.
• Blue Prism: No significant damage, due to catching exceptions.
• Automation Anywhere: No significant damage, due to catching exceptions.

Is programming experience needed?
• UiPath: Some basic knowledge.
• Blue Prism: Yes.
• Automation Anywhere: No.

If programming experience is needed, to which extent and which capabilities are there for coding?
• UiPath: Code can be implemented using Visual Basic .NET or C# .NET. Supports an interface for completely code-free development while still having the capability for coding.
• Blue Prism: While some processes can be implemented completely code-free, the possibilities for doing so are limited. Extended possibilities for developers with experience in .NET programming languages.
• Automation Anywhere: Developer capabilities are not supported, apart from running .dll files written in other programming environments.

Table 5: Evaluation of RPA-tools usability, part 2

Evaluation of RPA-tools reliability

Are there any built-in features which do not yet fully work or could be improved?
• UiPath: Not in the case of this implementation.
• Blue Prism: Not in the case of this implementation.
• Automation Anywhere: Some limitations; for example, the e-mail implementations are not fully functional.

If an error occurs, how is it handled by the RPA-tool? (Specify the handling of data recovery.)
• UiPath: Can be set to continue the workflow when errors occur, which means that no data generated or collected during the process runtime is lost. Exception handling can be specified through try-catch blocks.
• Blue Prism: Can be set to continue the workflow when errors occur, which means that no data generated or collected during the process runtime is lost. Exception handling can be specified through try-catch blocks, in Blue Prism's case through nodes named 'Recover' and 'Resume'.
• Automation Anywhere: Can be set to continue the workflow when errors occur, which means that no data generated or collected during the process runtime is lost. Exception handling can be specified through try-catch blocks.

Table 6: Evaluation of RPA-tools reliability

Evaluation of RPA-tools security

Are there safety concerns regarding login credentials and storing of personal information? For example, are passwords saved directly in the process script, or can they be stored securely?
• UiPath: Credentials and passwords can be stored using the 'Orchestrator'. However, UiPath uses SecureString, which deletes the string after it is no longer needed. SecureString security is limited because intruders can still retrieve the string during runtime; SecureString is not encrypted or secured in any other way. However, integrations (e.g. the CyberArk integration) can be used by the RPA-developer to store passwords and other credentials.
• Blue Prism: Credential storage is performed through a built-in system. The credentials are encrypted in the Blue Prism database, and permissions can be set for access to them. Actions for using the encrypted credentials can be implemented in processes.
• Automation Anywhere: Built-in functions for securely storing credentials and other data. Credentials and other data can be securely stored in a 'Locker', for which permissions can be set.

Table 7: Evaluation of RPA-tools security

Evaluation of RPA-tools maintainability

When errors occur, how clearly are the error causes displayed and explained to the user?
• UiPath: When an error occurs, a message clearly describing the exception and its cause is presented to the user.
• Blue Prism: When an error occurs, a message clearly describing the exception and its cause is presented to the user.
• Automation Anywhere: Error messages carry limited information; the causes are frequently not presented clearly. However, screen snapshot functionality can capture the screen at the time of the error.

How much information is presented within the RPA-tool while debugging a process?
• UiPath: Supports intuitive debugging functionality, with clear messaging when exceptions are caught.
• Blue Prism: Supports intuitive debugging functionality, with clear messaging when exceptions are caught.
• Automation Anywhere: While error handling works well, error messaging for debugging purposes is limited in the contents of the error descriptions.

Which information is presented within the RPA-tool while debugging a process?
• UiPath: Error messages are presented with a description of what has occurred, what has thrown the exception, the source of the exception and an exception type.
• Blue Prism: Error messages are presented in a pop-up window showing the type of exception, a message of what has happened and the source of the exception.
• Automation Anywhere: Error messages contain almost no descriptive information, but the screen snapshot functionality can present a screenshot from the time of the error.

Table 8: Evaluation of RPA-tools maintainability

Evaluation of RPA-tools portability

Which integrations are supported by the RPA-tool?
• UiPath: See Table 3.
• Blue Prism: See Table 3.
• Automation Anywhere: See Table 3.

Is programming experience needed, and if so, to which extent and which capabilities are there for coding?
• UiPath: See Table 5.
• Blue Prism: See Table 5.
• Automation Anywhere: See Table 5.

Table 9: Evaluation of RPA-tools portability

6.2 Evaluation of the evaluative model
In this section, the evaluative model is analysed and evaluated based on Goldkuhl and Rostlinger's (2019) concept of digital resources, followed by an evaluation based on design science research evaluation.

6.2.1 Analysis and evaluation of RPA-tools as digital resources

The evaluative model was created through open/goal-free evaluation (ibid.), by generating evaluative criteria specifically fitted to the evaluation of RPA-tools. Seeing RPA-tools as digital resources, in line with Goldkuhl and Rostlinger (ibid.), the resources were first described in detail based on the implementation of each RPA-tool as well as the tools' official documentation. The evaluation was mainly open/goal-free but can be seen as partly goal-based, because the creation of the criteria was based on the established ISO/IEC 25010 standard. It remained open/goal-free because the standard was used only as a base while generating new evaluative criteria specific to the properties of RPA-tools.

RPA-tools, in line with Goldkuhl and Rostlinger (ibid.), are digital resources. To ground the validity of the evaluative model in scientific theory, the model can be shown to cover the relevant aspects of digital resource evaluation based on the nine dimensions of Goldkuhl and Rostlinger (ibid.).

The relational dimension is evaluated by covering which type of user the RPA-tool is meant for, specified as developer or non-developer. The semantic dimension was considered in the creation of the model but concluded to be irrelevant in the case of RPA-tools, as it is up to the implementer to implement processes following business concepts and terms. The functional dimension is covered in the creation of the model by co-existence with other software or systems, and also in the evaluation of functionality such as error handling and monitoring. The interactive dimension deals with sub-categories of usability and is covered in characteristics such as learnability and accessibility. The normative and regulative dimensions would be measured in the implemented processes themselves, in the same way as a user acting according to business goals, values and regulations; they are thereby not covered in the resulting evaluative model. The economic dimension is covered in the model by taking the cost of the RPA-tools into account. The evaluative model covers the architectural dimension by evaluating the possible integrations of the RPA-tools, because this was concluded to be the relevant point of evaluation regarding the interoperability of RPA-tools. The technical dimension is covered in the creation of the model through the characteristics performance efficiency, reliability, security, maintainability and portability, of which performance efficiency was not included in the model due to context dependency.

6.2.2 Design science research evaluation

The evaluative model is evaluated based on validity, utility, quality and efficacy, which are the criteria for evaluating design science research artefacts according to Hevner et al. (2004). This research and the evaluative model work as a proof of concept for evaluating and comparing RPA-tools. The validity of the evaluative model is shown in this research through the application of the model in section 6.1, when evaluating the three RPA-tools; the evaluative model thereby achieves the goal of being able to evaluate and compare RPA-tools successfully. Validity is also reached by connecting the evaluative model with Goldkuhl and Rostlinger's (2019) concept of digital resources and their evaluation in section 6.2.1. The evaluative model achieves utility by filling the gap of being able to evaluate and compare RPA-tools. Because the model evaluates the tools outside of an organisational context, it can be used for evaluation before acquiring an RPA-tool for any operational environment.

The model provides quality by being created through the specific research of applying a widely accepted and established software quality model (the ISO/IEC 25010 quality model for product quality) to RPA-tools, and by basing the evaluative questions on official documentation and actual implementations within the tools. The model provides efficacy through the ability to evaluate whether an RPA-tool is right for a potential acquirer, using questions specifically designed to be of the highest relevance when choosing an RPA-tool.

7 Discussion
This section begins by presenting a summary of what was learned through this research. Limitations of the research are presented, followed by the theoretical and practical significance of the research. Finally, areas requiring further work are described.

7.1 Summary of what was learned
When developing relevant evaluative questions for RPA-tools, many of the criteria found in generally accepted models for quality evaluation were concluded to be of low relevance. The low relevance of some characteristics stems from how RPA-tools are built. Because RPA-tools act on top of the systems with which they are integrated, as a user would, co-existence is inherently of a high degree and therefore of low relevance when evaluating RPA-tools. Sub-categories such as modularity, reusability and modifiability were concluded to be of low relevance when choosing between RPA-tools, because these are inherently of a high degree in any RPA-tool. The performance characteristic was discarded entirely, as performance is significantly dependent on the environment in which the RPA-solution is implemented. Because RPA-tools act as a user, they are inherently interoperable with systems in the way a user is; therefore, software integrations are the most relevant point of interest regarding interoperability.

All three of the tested RPA-tools share a similar form of process implementation. A process is created as a flowchart: nodes are placed for each action in the process, with connecting arrows deciding the order in which the actions are performed. The flowchart style of implementation leads to (1) simple manipulation of any part of the process and (2) a clear, yet sufficiently detailed, graphical representation of the process being developed.
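
The node-and-arrow model can be sketched as a tiny linked structure: each node holds one action, and the arrow is a link to the next node. This is illustrative Python under invented names, not any tool's internal representation.

```python
# Minimal sketch of the flowchart-style process model described above:
# each node holds an action, and arrows (here: 'next' links) decide the
# execution order. Names are illustrative, not taken from any RPA-tool.

class Node:
    def __init__(self, name, action):
        self.name = name
        self.action = action
        self.next = None       # connecting arrow to the following node

def run(start, context):
    """Walk the flowchart from the start node, applying each action."""
    node = start
    while node is not None:
        context = node.action(context)
        node = node.next
    return context

# Build a three-step flow: read -> transform -> write.
read = Node("read", lambda ctx: ctx + ["row"])
transform = Node("transform", lambda ctx: [s.upper() for s in ctx])
write = Node("write", lambda ctx: ctx)
read.next, transform.next = transform, write

result = run(read, [])
# result == ["ROW"]
```

Because each step is its own node, any part of the process can be replaced or reordered by relinking arrows, which is exactly the "simple manipulation" property noted above.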

The focus when evaluating relevant aspects of RPA-tools is on software integration support, required programming experience, error handling, documentation and secure handling of data. Most RPA-tools (at least the three covered in this research) are equal in terms of what they can achieve. However, the main differences between the RPA-tools are:

• how the processes are built; some RPA-tools (e.g. Blue Prism) require more programming knowledge than others (e.g. Automation Anywhere);

• learnability of the RPA-tools;

• software integration support of the RPA-tools;

• security regarding the storage of credentials and other sensitive data.

Simply classifying one RPA-tool as more appropriate than another for a specific process is not possible, because appropriateness depends significantly on who is going to implement the process. Instead, potential acquirers should choose an RPA-tool based mainly on learnability, software integrations, the users' programming experience and security.

Even though the model is created without regard for organisational context, its purpose is to evaluate and compare RPA-tools from the viewpoint of any specific organisational context. Evaluation for a specific context is achieved by potential acquirers adapting the model to the context in question, for example by ranking the importance of the evaluative questions for that context. The evaluative model works as a base into which potential acquirers of RPA-tools can insert specific organisational needs for evaluation and comparison.

Basing the evaluative model on the ISO/IEC 25010 quality model for product quality led to a model which can be analysed from the theoretical standpoint of digital resources as a concept, in line with Goldkuhl and Rostlinger (2019). The ability to analyse and evaluate the model based on the concept of digital resources was a way of showing its validity.

The model provides a starting point for understanding the relevant differences between RPA-tools and can potentially be used for further research into the subject. In practice, the evaluative model reaches the goal of being able to evaluate and compare RPA-tools; it can be used to assist in the choice of RPA-tool, but not as a definitive answer for every type of use.

7.2 Limitations
The main limitation is that the model leaves out many context-dependent characteristics. Given the model's purpose of evaluating RPA-tools outside a specific organisation, this limitation is necessary for this research. The model will not be sufficient in all cases for every purpose; RPA-tool implementers will have to consider the organisational context and thereby balance the evaluative model for the specific context.

One aspect that has not been successfully evaluated in this research is the cost of the RPA-tools. Many companies base software acquisition decisions on cost, and it is indeed an influential factor when selecting RPA-tools. Unfortunately, none of the three RPA-tools covered in this research specifies what it costs to acquire; to get pricing information, organisations need to contact the RPA-tool provider.

7.3 Areas requiring further work
The model can be expanded to include evaluation for specific organisational contexts, which is not covered in this research. Through further evaluation of characteristics for interacting with specific systems, the evaluative model would become fully applicable to a broader range of RPA-tool implementers. As it stands, the model is not meant to evaluate RPA-tools fully for every organisational context; it is meant to be partly applicable to the broadest possible range of implementers.

The evaluative model has only been demonstrated through its application in this research. To prove the model further and test its validity, it would need to be tested by other sources.

8 Conclusion
This research provided descriptions of three different implementations of the same typical process, using the three market-leading RPA-tools. The research question of how the market-leading RPA-tools can be implemented in a typical situation, describing a common area of use, is answered in the implementation of the instantiation artefacts in section 4.2. The three descriptions provide prescriptive knowledge on how to implement the typical process and general descriptive knowledge about the three RPA-tools.

The preferred RPA-tool for implementing the typical process described in this research depends significantly on who is going to implement the process rather than solely on what the RPA-tools can accomplish. However, its extensive documentation and active discussion forums, coupled with being appropriate for a broader range of users, give UiPath a significant edge in many cases.

This research presents a proof of concept for evaluating and comparing RPA-tools, in the form of an evaluative model. The model was proven through (1) its use in evaluating and comparing the three market-leading RPA-tools and (2) its connection to the concept of digital resource evaluation. The answer to the research question of how RPA-tools can be evaluated and compared is provided through the evaluative model in section 5.

References
Aalst, Wil MP van der, Martin Bichler, and Armin Heinzl (2018). Robotic process automation.
Aguirre, Santiago and Alejandro Rodriguez (2017). "Automation of a business process using robotic process automation (RPA): A case study". In: Workshop on Engineering Applications. Springer, pp. 65–71.
Anagnoste, Sorin (2017). "Robotic Automation Process - The next major revolution in terms of back office operations improvement". In: Proceedings of the International Conference on Business Excellence. Vol. 11. 1. De Gruyter Open, pp. 676–686.
Automation Anywhere (n.d.). URL: https://www.automationanywhere.com/ (accessed May 24, 2020).
Blue Prism (n.d.). URL: https://www.blueprism.com/ (accessed May 24, 2020).
Bowen, Glenn A et al. (2009). "Document analysis as a qualitative research method". In: Qualitative Research Journal 9.2, p. 27.
Carter, Nancy et al. (2014). "The use of triangulation in qualitative research." In: Oncology Nursing Forum. Vol. 41. 5.
Estdale, John and Elli Georgiadou (2018). "Applying the ISO/IEC 25010 Quality Models to Software Product". In: European Conference on Software Process Improvement. Springer, pp. 492–503.
Fernandez, Dahlia and Aini Aman (2018). "Impacts of robotic process automation on global accounting services". In: Asian Journal of Accounting and Governance 9, pp. 123–132.
Goldkuhl, Goran and Annie Rostlinger (2019). Digitala resurser i verksamheter.
Hevner, Alan R et al. (2004). "Design science in information systems research". In: MIS Quarterly, pp. 75–105.
ISO Central Secretary (2011). Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality models. Standard. International Organization for Standardization. URL: https://www.iso.org/standard/35733.html.
ISO Central Secretary (2014). Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Guide to SQuaRE. Standard. International Organization for Standardization. URL: https://www.iso.org/standard/64764.html.
Moffitt, Kevin C, Andrea M Rozario, and Miklos A Vasarhelyi (2018). "Robotic process automation for auditing". In: Journal of Emerging Technologies in Accounting 15.1, pp. 1–10.
Oates, Briony J (2006). Researching information systems and computing. Sage.
Pries-Heje, Jan, Richard Baskerville, and John R Venable (2008). "Strategies for Design Science Research Evaluation." In: ECIS, pp. 255–266.
Radke, Andreas M, Minh Trang Dang, and Albert Tan (2020). "Using robotic process automation (RPA) to enhance item master data maintenance process." In: LogForum 16.1.
Schensul, Stephen L, Jean J Schensul, and Margaret Diane LeCompte (1999). Essential ethnographic methods: Observations, interviews, and questionnaires. Vol. 2. Rowman Altamira.
UiPath (n.d.). URL: https://www.uipath.com/ (accessed May 24, 2020).
Willcocks, Leslie P, Mary Lacity, and Andrew Craig (2015a). "Robotic process automation at Telefonica O2".
Willcocks, Leslie P, Mary Lacity, and Andrew Craig (2015b). "The IT function and robotic process automation".
