
Enterprise Service Buses: A Comparison Regarding Reliable Message Transfer

MIKAEL AHLBERG

Master of Science Thesis Stockholm, Sweden 2010


Master's Thesis in Computer Science (30 ECTS credits)
at the School of Computer Science and Engineering
Royal Institute of Technology, year 2010
Supervisor at CSC was Alexander Baltatzis
Examiner was Stefan Arnborg

TRITA-CSC-E 2010:113
ISRN-KTH/CSC/E--10/113--SE
ISSN-1653-5715

Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL: www.kth.se/csc


Abstract

When it comes to integration solutions, and especially the integration of systems that require a high level of reliability, perhaps even critical systems, the platform that handles the data transport has to ensure that no data disappears from the system. If a transfer error occurs, there have to be very specific rules for handling these errors so that all messages can be traced back to their origin.

The task at hand has been to evaluate two comparable integration platforms to investigate what solutions they provide for maintaining a high level of reliability, what has to be implemented by hand, and whether there are any shortcomings in specific solutions. As a basis for this evaluation, a number of different test scenarios have been built, based on different types of transport protocols, to keep the work connected to the real world while remaining easy to survey.

The work shows that the results did not match the expectations set before it started. The systems lost messages even though functionality intended to handle platform instability was enabled. In other words, to be able to use these platforms in a critical environment you will have to implement functions by hand to ensure reliable message transfer in all scenarios.


Referat

Enterprise Service Buses: A comparison regarding reliable message transfer

When it comes to integration solutions, and above all the integration of systems that require higher reliability, perhaps even critical systems, the platform that handles the data transfer must ensure that no data disappears from the system. Should a transfer error occur, there must therefore be clear ways to handle it so that all messages can be traced.

The task has been to evaluate two comparable integration platforms in order to investigate what solutions exist for maintaining high reliability, what has to be implemented by hand, and whether a specific solution has any shortcomings. As a basis for this evaluation, a number of different test scenarios have been built, based on different types of transfer protocols, in order to keep the work connected to the real world while remaining relatively easy to survey.

It turns out that the results do not agree with the expectations on which the work was based. The systems lose messages even though functionality for handling platform instability is enabled. In other words, manual implementation is required to be able to use the platforms in a critical environment and at the same time be certain that no messages are lost.


Contents

Glossary

1 Introduction
1.1 Problem definition
1.2 Evaluation method
1.3 Delimitations

2 Background
2.1 The market for integration
2.2 Early adoptions of integration solutions
2.2.1 EAI and its problems
2.3 The Enterprise Service Bus
2.3.1 What is an ESB
2.3.2 Techniques included in the ESB family
2.3.3 Sonic ESB
2.3.4 Mule ESB
2.4 Messaging system
2.5 Reliable message transfers

3 Study of platform functionality
3.1 Persistent message queues
3.2 Transactions
3.3 Error handling
3.4 Other functionality

4 Implementation
4.1 Scenarios
4.2 Message flows
4.2.1 Database as sending access solution
4.2.2 File as sending access solution
4.2.3 Web service access solution
4.3 Environment setup
4.4 Sonic ESB
4.4.1 System installation
4.4.2 Database to database message flow
4.4.3 Database to file message flow
4.4.4 Database to multiple receivers message flow
4.4.5 File to database message flow
4.4.6 File to file message flow
4.4.7 File to multiple receivers message flow
4.4.8 Web service message flow
4.4.9 Persistent queue setup
4.5 Mule ESB
4.5.1 System installation
4.5.2 Database to database message flow
4.5.3 Database to file message flow
4.5.4 Database to multiple receivers message flow
4.5.5 File to database message flow
4.5.6 File to file message flow
4.5.7 File to multiple receivers message flow
4.5.8 Web service message flow
4.5.9 Persistent queue setup
4.5.10 Transactions

5 Results
5.1 Receiver disconnected scenario
5.1.1 Database as sending access solution
5.1.2 File as sending access solution
5.1.3 Web service access solution
5.2 Receiver temporarily disconnected scenario
5.2.1 Database as sending access solution
5.2.2 File as sending access solution
5.2.3 Web service access solution
5.3 Platform or message system crash scenario
5.3.1 Database as sending access solution
5.3.2 File as sending access solution
5.3.3 Web service access solution
5.4 Receiver disconnected in multi receiver flow
5.4.1 Database and file access solutions to multiple receivers
5.5 Summary of results
5.6 Persistent delivery performance hit
5.6.1 Sonic's database to database performance test
5.6.2 Mule's database to database performance test

6 Discussion
6.1 Receiver disconnected scenario
6.2 Receiver temporarily disconnected scenario
6.3 Platform or message system crash scenario
6.4 Receiver disconnected in multi receiver flow
6.5 Persistent delivery performance hit

7 Conclusions
7.1 Scenario results
7.2 Platform comparison
7.2.1 General
7.2.2 Reliable messaging
7.3 Problems that arose
7.4 Possible solutions
7.5 Final words on the platforms

8 Further work
8.1 Hardware
8.2 Organizational level
8.3 Security and Integrity

Bibliography

Appendices

A Performance tests
A.1 Sonic
A.2 Mule
A.3 Explanation


Glossary

CSV Comma-separated values, a simple text based data format.

CXF A Web service framework.

DLQ Dead Letter Queue.

ESB Enterprise Service Bus, software allowing integration of applications by providing a robust platform with a set of different types of tools and functions.

JDBC Java Database Connectivity, an API based on Java for accessing databases.

JMS Java Message Service, a Java based Message-oriented middleware API for sending messages.

JRE Java Runtime Environment, contains libraries and the Java Virtual Machine for executing and running Java applications.

MOM Message-oriented middleware, software handling transportation of data by providing asynchronous message transfer support.

POJO Plain Old Java Object.

RME Rejected Message Endpoint.

SOA Service-oriented architecture, a set of rules or a design pattern that emphasizes loose coupling.

SOAP An XML based protocol for exchanging data when using Web services.

SQL Structured Query Language, a database computer language.

WSDL Web Service Description Language, a model for describing a Web Service.

XML eXtensible Markup Language, a markup language designed for carrying data.


XPath XML Path Language, a language for selecting data from an XML document.

XSLT eXtensible Stylesheet Language Transformations, used for transforming XML documents.


Chapter 1

Introduction

Enterprises today more often than not have multiple applications and systems which have been constructed to do specialized tasks. To reuse these applications for new business logic, an integration solution is built around the applications that are to be integrated. Over time, however, these applications may have been written in a number of different languages, using different communication protocols, which can make the integration task a very difficult and time-consuming one.

To ease the burden for the integrator or system engineer, a number of applications or platforms exist today on the market for integration solutions, each providing solutions for both large and small enterprise systems. According to the article Getting on Board the Enterprise Service Bus [14], this market has evolved over the last decade, and new tools have been developed to cope with ever more complex enterprise environments. The Enterprise Service Bus, or ESB as we will call it, is one such tool trying to ease the task for integration specialists. These tools use standardized protocols to try to avoid vendor lock-in. Depending on the solution at hand, some integration solutions may need extra functionality to make the integration more robust. Each integration platform may have its own functions for providing this increased reliability, and they are not necessarily the same across platforms.

Figure 1.1. A message flow binding two applications together with the help of an ESB.


1.1 Problem definition

When these so-called ESBs are used in a more critical environment, an environment that may demand that no data be lost even if the system goes down, the ESB's functionality becomes a critical component to rely on. My work revolves around this fact and is an evaluation of two integration platforms, comparing their differences regarding reliable message transfer. As a starting point, I had access to the thesis employer Mogul AB's integration platform, which is based on the Sonic Enterprise Service Bus from Progress [16]. The alternative platform that the comparison was made against is the Mule Enterprise Service Bus. Mule ESB is an open source integration platform and is made available by Mulesoft [11].

1.2 Evaluation method

To be able to evaluate these two platforms regarding reliable message transfer in some sort of real-world-connected test, two test platforms were built for the task: one based on Sonic ESB and another based on Mule ESB, where the two test platforms performed similar tasks. At the same time, the systems were closely studied to shed light on what kind of functionality and solutions each platform could offer to increase the reliability of the total system. The solutions that were found were then tested in a number of different scenarios to see how the systems performed with and without the specific functions found during the study part of the work.

The scenarios, which shed light on the differences between the systems and how they handled messages in different situations, will be explained in greater detail at the beginning of the implementation part of this report. An important factor was that these scenarios had to have some real-world connection in order to provide a reasonable picture of how the platforms might act in a real situation. Therefore, the task performed in the different scenarios included a number of different access solutions, such as file transfer, database transfer and Web services. In this way I could also evaluate the impact that the different access solutions had on the platform at hand.

1.3 Delimitations

Focus was put on how the two platforms differ in functionality regarding reliable messaging, and how they perform in their default mode. With reliable transfer, one can go relatively deep into what can affect the system, down to the hardware level and so on. The work therefore had to be limited so that the covered area would not get too big. The question of reliability could even be asked on an organizational level: who has the task of checking possible error messages? It is all part of reliable message transfer, where even if the integration solution does not work properly, no messages should disappear unnoticed. You have to be able to track each error message, through the use of logs or similar, so that a message can be redelivered if it did not reach its destination.

Therefore a list of questions was made before the work started, to narrow the evaluation down to considering just the integration platforms. These questions have been the basis for the scenarios, mentioned earlier, that will be implemented later on in this report.

• What will happen if the receiving system is down when data is transferred?

• What will happen if the receiving system is only temporarily down during data transfer?

• What will happen if the receiving system crashes during data transfer?

• What will happen if the integration platform itself crashes during the processing of a message?

• How will the choice of access solution affect the problem of reliable message transfer?

• Does reliable messaging have an effect on the performance of the platform at hand, or does it cause other problems?


Chapter 2

Background

To get a better understanding of what an integration solution is and how such solutions have been used and have evolved over time, this background chapter will explain the history from earlier integration solutions up to what we have today in the form of the Enterprise Service Bus. This chapter will also explain the technologies that revolve around ESBs, as well as the area of reliable messaging and what solutions exist for this problem.

2.1 The market for integration

Integration solutions are by no means new inventions that have appeared during recent years. They have been on the market for a long time, mostly in larger companies and enterprises. However, the way you apply your integration solutions and which tools are available have changed on a larger scale to meet new demands.

In the books Enterprise Integration Patterns [7] and Patterns: Implementing an SOA Using an Enterprise Service Bus [9], the authors tell us why this integration market exists today and in what way it has evolved. It is not unusual for larger enterprises to have collected hundreds of applications over time. These applications can be anything from specially written programs performing a single task to larger web pages and/or Web services. Often these applications are not written in the same programming language or with the same tools, and no concern was given to integration when the programs were developed. It is also rarely the case that a single application covers the whole company business, which is one explanation for the number of smaller applications being developed. Companies also often come up with new areas of use for their services and programs, and instead of writing new ones or rewriting old ones they try to reuse as many old programs as possible. This is where integration comes in handy: by integrating the applications which need to talk to each other, the new functionality is created. In that way, a company can continue to concentrate on its core business while at the same time increasing its portfolio of services. Through integration they also get new products out at a faster pace than it would have taken to develop a completely new program for the task, which would probably have been more complex.

Even if the market for integration is apparent and has been so for a long time, there are certain difficulties regarding integration that need to be resolved. A number of aspects have to be taken into account; for example, a solution that integrates systems spread over a large physical distance must bear in mind that networks are slow and unreliable [7] compared to an internal data bus. Applications that are to be integrated may get replaced or upgraded over time, which means your integration solutions will need to be checked and modified in the future. Another difficulty is that more often than not you will not have control over the applications that are marked for integration. They are labeled as so-called legacy applications, and you may have to integrate these programs by sharing their data through their database access instead of rewriting the application to share information directly. This leads us into what developers have done to solve these difficulties and the four communication protocols or access solutions that have evolved. The four access solutions are file transfer, shared database, remote procedure invocation and messaging, according to Hohpe and Woolf [7]. These four protocols reflect the most common integration problems that arise in companies, such as information portals, data replication, shared business functionality, Service-oriented architecture (SOA), distributed business processes and business-to-business integration.

Even though today's tools have surely diminished the burden of integrating legacy applications, the impact of Web services has eased the burden even more. Web services are a part of SOA, and the advantage from an integration perspective is that you can invoke these services independently of each other and for the most part avoid exotic protocols which may not work over large distances. Web services use open standards such as XML [25], SOAP [19] and HTTP, and we shall later see that the integration tools rely on the use of open standards. However, sometimes it is not enough to write your own simple integration solutions with Web services, even though Web services have made this task relatively simple compared to using legacy applications. Larger demands may be placed upon the integration solution than a simple handmade solution can fulfill. It can also quite easily get out of hand to just let Web services integrate with each other directly, without the help of some sort of well-tested integration tool. In the next few chapters we will see how integration solutions were built and what they looked like before Enterprise Service Buses became popular on a large scale.

Finally, when speaking about integration solutions you often speak of message flows. These so-called message flows are the way data is distributed or sent in an integration solution, for example from the sender to the receiver. This keyword will be mentioned throughout this report.


2.2 Early adoptions of integration solutions

Before there was a standardized tool or framework for integration, you had to manually program solutions to stitch programs together. This could lead to complex solutions that became hard to maintain, and if any part had to be upgraded or replaced you had to rewrite the original integration solution. The costs and the time spent on these integration projects were, for understandable reasons, higher than they could have been with a more modern approach, which is also mentioned in Getting on Board the Enterprise Service Bus [14]. The solution at hand, or the temporary improvement, was called Enterprise Application Integration, or EAI for short [4], and was a step on the way towards the Enterprise Service Bus.

Figure 2.1. Applications talking to each other using EAI and hub-and-spoke.

EAI often used a so-called hub-and-spoke approach, where the adapters used for connecting each application to be integrated were placed at the endpoints, at the application, see figure 2.1. These adapters needed to be modified for each application connecting to the hub. The messages, or the data traffic between the applications residing in the integration solution, went through the central hub, as the figure shows.

Another improvement that helped integration was the earlier mentioned SOA. It started to be used, according to Ortiz [14], in the mid to late 1990s, and by using the SOA principles companies started to build their internal programs and services directly prepared for communication with other services. SOA also used standard protocols like XML, SOAP and HTTP, which lowered the costs for integration according to Ortiz [14]. However, SOA was still combined with hub-and-spoke, which had its disadvantages.


2.2.1 EAI and its problems

Even though EAI solved some of the problems and difficulties which earlier led to higher development costs, there were still problems with the solution. In both Open Source ESBs in Action [17] and Getting on Board the Enterprise Service Bus [14], the authors explain that the two biggest problems were that point-to-point and hub-and-spoke were used for integrating the applications. It was common that you, as a developer, started with point-to-point solutions for your EAI platform, and by doing so you had to know at the development stage which applications were to communicate with each other. For every new application that was later added to the mix, the workload increased because you had to write a translator for every application. Keeping in mind that the applications were rarely written in the same programming language, the task of translating between the different protocols that were used became complicated. Rademakers and Dirksen [17] also mention that EAI used more or less closed protocols for the transport of the messages. In that way you could easily get yourself into a vendor lock-in, which is also mentioned on the Wikipedia page regarding EAI solutions [4].

2.3 The Enterprise Service Bus

At the end of 2002, Gartner published an article regarding the prospects for the Enterprise Service Bus called Enterprise Service Buses Emerge [18]. This was around the time when ESBs were relatively new and more traditional solutions were used to integrate systems. According to Gartner, ESBs would have a great breakthrough during 2003, but it would start with the smaller companies before large enterprises embraced the new technology. Later on, around 2005 and onwards, the larger enterprises would start to use the ESB technology. The reason for the embrace of this new platform was that it would simplify the task of letting SOA applications, developed in different environments, talk to each other and use asynchronous data transfer. Another plus was the ESB's modularity, where you could easily extend the product. As mentioned, this article was published when ESBs were something new, and it could be interesting to know how analyst firms predicted the use of the product. In Getting on Board the Enterprise Service Bus [14] we can see that these future visions were not completely off the chart. According to 'industry observers' the ESB market had started to grow in 2007 and the technology had left the pilot projects and was now used in financial as well as telecom enterprises. More companies were also beginning to use this technology for their integration solutions.

2.3.1 What is an ESB

We have mentioned what the future visions of ESBs looked like before and during their early days, but what is an Enterprise Service Bus and how does it compare to the previously named EAI solution? An ESB can be seen as two things [17]: a pattern or a product which provides tools for integration. ESBs are today's 'buzzword' in the industry when it comes to integration, and for a product to be named an ESB it should have certain core functions to comply with the demands that enterprises place on these platforms. These core functions are location transparency, transport protocol conversion, message transformation, message routing, message enhancement, security, as well as monitoring and management [17]. Some of these ESB products are built on top of earlier EAI products that were used in the industry and which were mentioned in chapter 2.2. However, ESBs are more modular and use standard protocols like JMS [8] and XML [25], which solves some of the earlier problems around EAI. You could say that some knowledge was drawn from EAI when the definition of ESB platforms took place. In Enterprise Service Bus [3], Chappell explains that ESBs can be looked upon as an EAI solution but without the problems that hub-and-spoke had, and much more scalable. ESBs are also much more general in the way they use their tools and are not as centralized as the earlier solutions, where everything went through the 'hub'. The business logic is also not as integrated as it could be when, for example, only a message-oriented middleware was used, according to Chappell. The platform also supports connecting systems through the Internet, where you can let your platforms link together message flows even if the business is distributed all over the world. You can let your ESB software stand at the endpoints and let the ESB take care of the data that needs to be sent between the nodes over the Internet.

One of the benefits of using an ESB platform is that it is both modular and supports a wide variety of communication protocols from the start [14]. The platform deals with the task of converting from one data format to another and routing a message; it just needs protocols for getting the applications onto the bus, so to speak. To compare with EAI, when using hub-and-spoke you as an integration specialist had to build these translators for every new application that you wanted to connect to your integration solution. It can, however, be a time-consuming task to move your solution from, for example, an EAI platform to an ESB platform, even though many ESB products, as mentioned earlier in this chapter, are built on previous products. But as integration becomes more and more important for enterprises, the move to an ESB platform could be beneficial.

ESBs have also helped when working with SOA, since the platforms make the communication part of SOA easier. According to Patterns: Implementing an SOA Using an Enterprise Service Bus [9], 'The Enterprise Service Bus is to SOA as SOA is to e-business on demand'. The authors also mention that enterprises today demand more 'quality of service' than other techniques can offer. The ESB is described as an infrastructure that will handle all applications that follow the SOA principle, and its main task is to transport and route data to the correct address.

2.3.2 Techniques included in the ESB family

Enterprise Service Buses rely more heavily on some techniques than others. An example is the XML standard [25], which is used extensively inside an ESB; it could be called one of the ESB's cornerstones. The data, or messages, transferred through the ESB to fulfill your integration solution are typically sent as XML messages. This makes it very convenient to use XML transformations on the data if there is a need for data conversion in a message flow. Since ESBs rely on XML, they also support XSLT [26] transformations out of the box, which makes XML transformations a breeze. Another benefit of using XML as the standard message protocol is that the integration specialist can easily use content-based routing for messages, since the standard is open and easy to use.
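As an illustration of how small such a transformation step is in code, the following sketch performs an XSLT transformation with the standard Java API (JAXP), which is roughly what an ESB transformation service does internally; the file names are placeholders and the code is not taken from either platform.

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XsltTransformExample {
        public static void main(String[] args) throws Exception {
            // Compile the stylesheet (file names are placeholders).
            TransformerFactory factory = TransformerFactory.newInstance();
            Transformer transformer =
                    factory.newTransformer(new StreamSource(new File("userlist-to-email.xslt")));

            // Apply it to an incoming XML message and write the transformed result.
            transformer.transform(new StreamSource(new File("incoming-message.xml")),
                                  new StreamResult(new File("outgoing-message.xml")));
        }
    }

Inside an ESB the same operation is normally configured as a transformation service or transformer on the message flow rather than written by hand like this.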

To send these messages inside the ESB, the systems rely on so-called message-oriented middleware, or simply MOM, for the task. In chapter 2.4 a more detailed explanation will shed light on what this is, but you could say that it is the software responsible for transferring the data safely.

In Open Source ESBs in Action [17], Rademakers and Dirksen cover the different access types that an ESB supports, such as file transfer, Web services and the Java Message Service. File support simply means that a file can be fetched from or delivered to a folder by the ESB engine. The Java Message Service, JMS for short, is often used to transfer the data inside the ESB to different nodes in the message flow, but can also be used to connect to applications outside the ESB. JMS uses so-called queues or topics to deliver messages from one point to another. A more thorough explanation of queues and topics follows in chapter 2.4. The ESB also includes software for JDBC connections out of the box, which gives you the possibility to write or read data from a relational database. Other protocols that are supported are, for example, SMTP, POP3, and even FTP.

As mentioned earlier, ESBs also include message routing techniques to make sure that a message is sent to the correct node in the flow. Typically they support a wide range of routers built in from the start, like fixed routers or content-based routers. Some ESBs also support custom routers or other custom objects that intervene in the message flow. Rademakers and Dirksen also discuss message validation in the book [17], where messages are validated to ensure that a message does not contain errors or has not been routed to the wrong destination. That way the ESB can request a new message if it was corrupt, or notify an administrator that there is a potential problem.

Last, the platforms nowadays also support hosting their own Web services. This gives software designers a more robust platform to build their critical business projects on, and they can simply let the ESB take care of any data that needs to be transferred.

2.3.3 Sonic ESB

Sonic ESB is developed by the company Progress [16] and the platform is currently at version 7.6. This is the platform currently running on my thesis employer's servers. The system is shipped with a workbench built on top of Eclipse [5], a popular IDE for software development. The workbench consists of easy-to-use tools for developing message flows, i.e. processes, and has plenty of documentation at hand. It also supports a wide variety of operating systems and contains a great deal of functionality out of the box.

To briefly describe how Sonic ESB is built: the platform uses so-called containers. The message flows that are developed for the platform then run inside these containers. That way we can split up our message flows across different containers, and if one container becomes unstable, the message flows that are not included in the unstable container will not be affected. A message flow is in itself built up from so-called processes and services, and it is these processes and services that run inside the containers. A message flow can consist of one or many ESB processes. These processes typically contain nodes or endpoints that may transform a message or connect to a database, and that functionality is handled by the services. A typical service could be converting an XML message with XSLT. Figure 2.2 shows an example of what an ESB process consists of. The services are built with Java classes, and you can use predefined services or build your own.

Figure 2.2. An ESB process runs inside a container and can contain many different services in a message flow.

The platform has a management program called the Sonic Management Console, where configuration of containers and other platform settings can be made. Some of these configurations can also be made from the Sonic Workbench, which can be handy when developing ESB processes.

2.3.4 Mule ESB

Mule ESB is an open source platform from Mulesoft [11] and is built in a different way than, for example, the Sonic ESB platform. The software comes in two different versions, an Enterprise Edition and a Community Edition, and I will focus on the Community Edition throughout this thesis. The differences between them can be read about on Mulesoft's homepage. This is the platform that has been chosen for comparison against the Sonic ESB platform. The platform is not shipped with a messaging system, and you therefore have to locate and install one yourself. Since ActiveMQ [1] is used in Open Source ESBs in Action [17], we will look closer at this messaging system.

Mule does not use containers as Sonic does; instead, Mule is more lightweight and simply uses configuration files in which you set up a message flow. These configuration files are then used together with an executable to run the specified flow.

One of Mule's cornerstones is, according to Open Source ESBs in Action [17], services. These services can be a connection to an application that will be integrated, or a component inside a message flow. These components can be built using regular, simple Java classes, so-called POJOs (Plain Old Java Objects). To get into a little more detail on how Mule is composed, we can break down the flow inside the ESB as follows; all of this is explained in greater detail in the book [17]. The application that will be integrated is connected through a channel into the ESB. A channel can be a folder where files are stored, or a JMS connection. This channel is then connected to a transport component, which takes care of the connection to the ESB and performs the transformations that are necessary. From the transport component we get into the service component, which consists of an inbound router, possible POJO components and an outbound router. The chain is then completed by another transport component and finally a channel for the receiving system. Mule is also delivered with the most common connection methods (channels), like database connectivity, file transfer and of course JMS connectivity.
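To illustrate how lightweight such a component can be, the sketch below shows a plain Java class of the kind Mule can host as a service component; Mule invokes a public method whose parameter matches the message payload and uses the return value as the new payload. The class name, method name and address format are made up for this example, and the wiring into a Mule configuration file is omitted.

    // A POJO that could act as a Mule service component; no Mule-specific classes are needed.
    public class UserListEnricher {

        // Turns a "firstname,lastname" payload into a simple e-mail address.
        // The address format is only an assumption made for this example.
        public String toEmailAddress(String csvLine) {
            String[] fields = csvLine.split(",");
            return fields[0].trim().toLowerCase() + "."
                    + fields[1].trim().toLowerCase() + "@example.com";
        }
    }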

One of the reasons that the Mule platform was selected is that it is an open source project. Rademakers and Dirksen [17] discuss what open source could mean for an ESB project. They describe the so-called myth that open source programs are not up to the standards of corresponding commercial programs, and argue that this myth is false. The assumption, a false one according to the book, stems from the idea that open source projects tend to be developed in people's spare time, but today we know that this is not always the case. They do, however, mention that you would want an open source ESB with a reasonably active community, so that bugs that are found get fixed quickly.


2.4 Messaging system

Central to the Enterprise Service Bus and similar technologies, like the EAI systems explained above, is the messaging system, or the so-called message-oriented middleware. Its task is to make sure that the data sent between endpoints in a message flow is transferred correctly. Because the transport of data in a message flow tends to involve networks, the demands on the message-oriented middleware, or MOM, are somewhat higher than when just sending data between programs. In Enterprise Integration Patterns [7] you can, for example, read that networks are slow and can be unreliable compared to sending the data between applications on the same computer. This negative aspect leads us to the advantages of using a MOM. With a MOM you get access to asynchronous data transfer, which means that the MOM will take care of the transfer and deliver the data or message when the receiver or the sender is ready. This way, your programs do not have to wait for the receiver on the other end; you just trust the MOM to deliver the data, and the MOM makes sure that the data is transferred sooner or later. This technique, if you may call it that, is called fire-and-forget [21].

The authors of Enterprise Integration Patterns [7] and Using Message-oriented Middleware for Reliable Web Services Messaging [21] explain that there are essentially two different ways a message can be sent through a MOM: Point-to-Point and Publish/Subscribe. In P2P a message is sent, as the name reveals, from one point or endpoint to another endpoint through a message queue. With Publish/Subscribe a message is published on a message topic, from which multiple so-called subscribers can fetch it. The bottom line is that there are queues and topics in a message-oriented middleware, where queues are one-to-one and topics can be read by multiple receivers, or so-called subscribers.
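The difference between the two models is easiest to see in code. The sketch below uses the standard JMS API together with ActiveMQ, the broker used later in this work; the broker URL and destination names are placeholders.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class QueueVersusTopic {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage message = session.createTextMessage("<user>...</user>");

            // Point-to-point: exactly one consumer of the queue receives the message.
            MessageProducer queueProducer =
                    session.createProducer(session.createQueue("userlist.queue"));
            queueProducer.send(message);

            // Publish/subscribe: every active subscriber of the topic receives a copy.
            MessageProducer topicProducer =
                    session.createProducer(session.createTopic("userlist.topic"));
            topicProducer.send(message);

            connection.close();
        }
    }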

The connection to the MOM is handled by the ESB, and what you have to think about in your message flows or integration solutions is whether you need a queue or a topic to fulfill the integration solution's purpose.

2.5 Reliable message transfers

Today, when integration has become a larger part of companies and enterprises, downtime in these systems or business processes can lead to substantial costs. These integration solutions can also include business-critical services or functions which have to work without any interruptions. As mentioned before in chapter 2.4, networks are unreliable [7], and there is always the risk of hardware failures or electrical problems. To withstand these types of problems, the integration solution needs a robust and safe transfer technique between the services that are integrated. This is where reliable message transfer enters the picture.

An important cornerstone of the Enterprise Service Bus platform regarding reliability is the message-oriented middleware system. Because it supports asynchronous message transfer and can therefore send messages when the receiving part is available and ready, the management of messages becomes a critical point in the chain of reliability. This subject is touched on in the article Using Message-oriented Middleware for Reliable Web Services Messaging [21], but the authors split it into three separate problems, which they call Middleware endpoint-to-endpoint reliability, Application-to-middleware reliability and, last, Application-to-Application reliability. In the same article they discuss these problems and possible solutions for increasing reliability. One solution mentioned regarding Middleware endpoint-to-endpoint reliability is the ability to make the queues or endpoints in the message flows persistent. That way, the messages are stored to disk when reaching a queue to avoid loss of data if the platform becomes unstable. Between these endpoints or queues the messages then need to be transferred securely, for example with the Java Message Service [8], which is also commonly used in MOMs. Persistent queues are also discussed in Enterprise Integration Patterns [7]; typically the messages are stored in a database on disk and not directly on the file system at hand. Problems that could arise with this type of configuration are that the speed of the platform may decrease, or that messages may pile up and take up a great deal of space. In combination with store-and-forward, the guarantee of message delivery can be increased even more. Another interesting aspect regarding Application-to-Application reliability is that you could view the flow from one application to another as one transaction. If something goes wrong with the message transfer that is not in line with the rules that have been set up, both the sending and the receiving part are rolled back to their previous state. The transaction could also be split up so that the sender and the receiver belong to two different transactions. Something to keep in mind is that the messaging system can only guarantee reliable delivery inside its own system, to the endpoints. After that point, the applications or services connected to the endpoints have to do the rest.

However, there are more parts to reliable message transfer than just the transfer itself. The authors of Data provenance in SOA: security, reliability, and integrity [22] point out that you cannot always approach the reliability question the same way you do with traditional software. In an integration solution the data may have been sent through multiple nodes or endpoints, and these do not even have to be located in the same system or local network. It is enough for one of these nodes to become compromised for the whole chain in the message flow to become compromised. In traditional software, only the endpoints would need to be checked, since the rest is inside the software itself.

There are also other standards, for example for Web services, that can be used in an ESB to increase reliability, such as WS-ReliableMessaging and WS-Reliability, which Are Web Services Finally Ready to Deliver? [10] discusses briefly. That article also mentions that Web services have begun to be used more and more in business-critical projects, and that protocols such as HTTP do not have any built-in support for increasing the reliability of data transfer. Most ESBs support hosting of these Web services, since they are used a great deal in enterprises today, and the services can therefore use the platform's functionality. Both Sonic ESB and Mule ESB have this Web service hosting support.

When it comes to transactions and their involvement and contribution to reliability in safe data transfer, there can be difficulties if the message flow chain is rather long. In the worst case the nodes lie on completely different networks. The author of Web Services and Business Transactions [15] mentions these complex message flows and calls them business processes. The problem with these complex business processes and transactions arises if you are going to follow the ACID model for a transaction. ACID stands for Atomicity, Consistency, Isolation and Durability and is a well-known set of rules for handling database transactions. If the message flow chain is rather long it can be difficult to lock nodes in a transaction, and the possibility of deadlocks becomes apparent if networks included in the chain are distant from each other. The article is written to present the authors' new framework, but it also shows us what kind of problems transactions can cause if they are implemented wrongly, in a way that leads to deadlocks or long waiting times. The transactions, however, have to work as if they were used against a simple database: either the transaction is committed and all changes have been performed, or the transaction is rolled back and no changes have been made. The principles of ACID and transactions are also discussed in Open Source ESBs in Action [17], and an example of how to set up transactions with Mule ESB is shown there as well. In Mule you would configure a transaction only on the inbound endpoint, and the message will not be removed from the queue before the transaction has been committed.

Another important part of reliable messaging is how well the system handles possible errors when they appear. Rademakers and Dirksen [17] mention the so-called dead letter queue or invalid message queue. This is the usual approach: when a message cannot be delivered, or some other error situation occurs, the message is sent to a dead letter queue. However, you have to make sure that an administrator or maintenance worker checks these queues regularly to be made aware of the problems. These queues have different names depending on the platform they are used on.


Chapter 3

Study of platform functionality

Before we can implement possible enhancements to keep a high level of reliable message transfer, we first have to investigate both platforms to see what kind of functionality they provide. Earlier, in the background chapter, we talked about possible solutions and functionality for retaining high reliability throughout the platform. It remains to be seen whether these functions are available on both the Sonic ESB and the Mule ESB platform. The study of the two platforms was performed partly by reading the documentation at hand, provided by the companies behind the two platforms, and partly by implementing the test system to get a quick look at whether a function could be of any interest.

3.1 Persistent message queues

The most common functionality, and one that is repeatedly mentioned in the books on the subject, is persistent message queues. It is used on the platform at hand to be able to handle platform crashes as well as to make sure that a message is safely delivered to its endpoint. The availability of this function is tied to what kind of message system is available to the respective platform. We mentioned earlier, in chapter 2.3.4, that Mule ESB is loosely coupled from the message-oriented middleware and that you can easily choose which message software you want to handle the message transfer. In Sonic's case, the message software that is delivered is more connected to and integrated into the ESB. The documentation for the Sonic platform mentions persistent message queues and lets us know that the functionality at least is available. By default, this functionality is turned off, which is common practice in most message handling systems. But the functionality is there, and we can establish that it is a question of configuring the message handling system and message flows rather than something that needs to be implemented by hand.

When we get to Mule ESB, we have to look at the message system we will be using for our tests. Their Enterprise edition is shipped with IBM WebSphere; we, however, are using Apache ActiveMQ [1], which is used in the book Open Source ESBs in Action [17]. We also tested Sun's OpenMQ [13] as a message system, and both of these messaging systems have functionality for persistent message queues. It is also a matter of configuration, and nothing needs to be implemented by hand. A quick note, however, is that you can also configure individual message flows to use persistent queues, which is also the case for the Sonic ESB.

Figure 3.1. The ESB can use a database to store the messages to disk.

The ability to use an external database is available; however, both platforms' message systems come with an internal database that stores the messages on the file system.
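As a rough illustration of what this configuration amounts to at the JMS level, the sketch below marks messages as persistent when they are produced, so that a broker such as ActiveMQ writes them to its message store before acknowledging them. How the store itself is set up, file based or in an external database, is a broker setting and is not shown here; the broker URL and queue name are placeholders.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PersistentProducer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            MessageProducer producer =
                    session.createProducer(session.createQueue("userlist.queue"));
            // PERSISTENT messages survive a broker restart; NON_PERSISTENT messages do not.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("<user>...</user>"));

            connection.close();
        }
    }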

3.2 Transactions

When it comes to transactions, which we also mentioned earlier in chapter 2.5 along with persistent queues, things seem to be trickier. The documentation for the two platforms does not tell us much regarding the ability to use transactions in our message flows. By using some sort of transaction, our hope is that possible problems that occur when you have multiple receivers/endpoints can be avoided. In other words, if there is an error at one of the endpoints you may not want the other endpoints to still receive and process the message; instead you would want to roll back and resend it. The need for transactions may not be as large in our test when we only have a point-to-point message flow. The test results will hopefully show us in what way the transactions affect the message flows.

Mule ESB seems to support two types of transactions that could be interesting: normal transactions for the SQL and JMS connectors, but also so-called XA transactions [24]. The difference between them is that SQL transactions and JMS transactions can only be used between database endpoints and JMS endpoints respectively. XA transactions, on the other hand, are a wider transaction protocol and in theory could include both database and JMS endpoints in one transaction. In that case you could bridge endpoints using different access solutions or protocols and still have transaction capabilities, even when messages go to multiple endpoints.

Sonic ESB, on the other hand, seems only to support XA transactions for JMS queues, but this is not mentioned much in the documentation. There is, however, a sample program that uses XA transactions shipped with the platform, and we will see if we can use it in some way.
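To show roughly what such a transaction boundary looks like at the JMS level, the sketch below uses a transacted JMS session: the received message is only removed from the queue when commit() is called, and a rollback() makes it available for redelivery. This is plain JMS rather than the configuration mechanisms of either platform, and the broker URL and queue name are placeholders.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TransactedConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();

            // A transacted session: received messages are only acknowledged on commit().
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("userlist.queue"));

            Message message = consumer.receive(5000);
            try {
                // ... process the message, e.g. insert it into the receiving database ...
                session.commit();    // the message is now removed from the queue
            } catch (Exception e) {
                session.rollback();  // the message stays on the queue and is redelivered
            }

            connection.close();
        }
    }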

3.3 Error handling

Error handling is also central to reliable message transfer, because a message must never disappear unnoticed from the system. Usually something called a dead message queue or similar is used for messages that could not be delivered or when other errors occur. Both the Sonic ESB and the Mule ESB platform seem to have very good support for error handling, but it is implemented differently on the two platforms. In Sonic it is more or less a setting where you choose where you want to deliver messages that could not be sent to their destination when something goes wrong, and the setting applies to the whole process that is running on the Sonic platform. In Mule, on the other hand, you have to configure more settings if you want error handling for both message flows and connectors. You can also have individual dead letter queues for each service or connector in a message flow.
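As an example of what checking such a queue can look like, the sketch below browses ActiveMQ's default dead letter queue, which is named ActiveMQ.DLQ unless the broker has been configured otherwise; Sonic's messaging system uses its own dead message queue with a different name. The broker URL is a placeholder.

    import java.util.Enumeration;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.QueueBrowser;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DeadLetterQueueCheck {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Browse the default dead letter queue without consuming the messages.
            QueueBrowser browser = session.createBrowser(session.createQueue("ActiveMQ.DLQ"));
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message dead = (Message) messages.nextElement();
                System.out.println("Dead message: " + dead.getJMSMessageID());
            }

            connection.close();
        }
    }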

3.4 Other functionality

No functions other than those mentioned above regarding reliability were found for either the Mule or the Sonic platform. However, another setup possibility that increases reliability, but cannot really be regarded as a function or something that needs to be implemented, is that the message flows can run in separate containers, or in Mule's case, separate configuration files. That way a container or a program can crash without taking the whole platform with it. But the containers will still use the same messaging system, and if the messaging system is unstable or crashes, it will probably take the whole platform down.

Another function that briefly touches on the area is whether the systems have functionality for clustering the platforms. By using clustering, one could potentially avoid losing the system if just one part of the cluster goes down. In this report, however, we concentrate only on the software part and not on the hardware, which would be a factor when using clustering.


Chapter 4

Implementation

To be able to test the two platforms in a real-world scenario, two similar test benches were built from scratch, one for each platform. How this build-up was done and what difficulties and choices had to be made during development are presented in this chapter. Some light will also be shed on the design differences between the two platforms, to give a clearer picture of how Sonic ESB and Mule ESB differ in implementation detail.

4.1 Scenarios

As mentioned, a number of scenarios need to be tested to be able to answer the questions that were presented in the initial chapter. That way we can see how the two platforms perform in different situations and compare the results. The following five scenarios, which are connected to the questions asked in the introduction, were the ones chosen for this test.

• Receiver disconnected scenario

• Receiver temporarily disconnected scenario

• Platform or message system crash scenario

• Receiving part disconnected in multi receiver message flow

• Persistent delivery performance hit

In the first scenario a message is sent from a sending part through a message flow running on an ESB platform. At the same time, the receiving part is disconnected from that same message flow. The test will show us how the error handling is done, whether messages are lost from the system if they cannot be delivered, and how you get notified by the ESB platform when a connection is dropped.

For the second scenario, the disconnected receiver is reconnected after a short while to see whether the platform resumes its work and whether messages that were not delivered get re-sent.


The third scenario tests the platform's ability to handle a crash. As we have pointed out in the previous chapter, the platforms support persistent queues and this test will shed light on how that works. The platform is taken down during the time messages are being sent from one point to another in the message flow. When the platforms are restarted it will be interesting to see if the messages are still left in the ESB system and if the tasks are resumed.

So far the scenarios have only included simple message flows, from one sender to one receiver. The fourth scenario includes message flows with multiple receivers. In that kind of message flow you often want all or none of the receivers to get a message. If an error occurs at one of the endpoints the preferred way would be that the message is thrown away for the other receivers and then re-sent for all of them. Transactions, as we mentioned in chapter 3.2, could have a large impact on this scenario.

Last we have a performance scenario where the mentioned persistent queues are tested. Since the messages that get delivered to a persistent queue will typically be written to disk or to a database server, it is interesting to see how much of a performance hit this has on the system.

4.2 Message flows

To accommodate the scenarios we need a number of message flows to test with. Since there are many different access solutions that could be used with an ESB, the decision was made to implement the following three: file transfer, database transfer and Web services using SOAP. An application using a JMS connection was considered out of scope for this thesis, but since most message systems use JMS internally, JMS will in any case show up in our testing.

In order to make the message flows more interesting the ESB platform will transform the messages from one format to another. Typically, our message flow for our multiple receiver scenario needs transformations to let one message get delivered to multiple different receivers.

The message flows that were implemented for the scenarios above are described in the following sections.

4.2.1 Database as sending access solution

The database will store a table called 'userlist' including the columns first name, last name, social security number and id. When the message flow has a file endpoint as the receiving part, the first name and the last name will be fetched from the database. The data will then be transformed into an XML file which will be dropped in a folder on a USB memory device. The message flow with a database endpoint as the receiving part will also fetch the first name and the last name from the sending database. The data will then be transformed into an email address by the ESB system and placed into an email account table on the receiving database server.


4.2.2 File as sending access solution

In the folder, which the ESB system will be polling for files, a comma separated value (CSV) file will be placed. The file will include user data similar to the database table above. When a file endpoint is also used as the receiving part, the CSV file will be transformed by the ESB system to an XML file. The file will be dropped in a folder on a USB memory device. In the flow with a database as the receiving part, the data inside the CSV file will be inserted into the same 'userlist' table that was presented for the previous message flow in 4.2.1. The ESB will then typically need to transform the CSV format into a format suitable for inserting the data into the database server.

4.2.3 Web service access solution

Since, in the case of a Web service, the sender and the receiver typically are the same application, the Web service message flow will be implemented differently compared to the other flows. The message flow will host a Web service on the ESB platforms, and with the help of the tools that both platforms provide, a simple WSDL will be presented for Web service calls. The Web service will take a name as input and the platform will then ask a database server for the address corresponding to that name. The address will be delivered back to the caller if all goes well.

4.3 Environment setup

The scenarios described above for testing the platforms demanded more software than the two ESBs could provide. We needed at least one database server to be able to test message flows from and towards a database source. The sending part and the receiving part could be handled by the same database server, but the choice fell on using two different database servers. The recommendation was to use the H2 database [6] together with the Sonic platform. This database software is somewhat of a lightweight product compared to some of the bigger database servers. As database server number two the MySQL database server was chosen. MySQL is commonly known, and since the evaluation of the ESB platforms is not about database software, it seemed like a good decision to choose a database that was familiar.

The version of the H2 database that was installed was 1.2.121; the installation was straightforward and there is no need for advanced configuration or similar. For the MySQL database, a software package called WampServer [23] was used instead of a clean installation of the MySQL server. This software includes a preconfigured web server, phpMyAdmin and a MySQL server, which minimizes the configuration burden. The version of the database included in WampServer 2.0i is MySQL version 5.1.36.

When it comes to the connector drivers for the database servers, the software that the ESB platforms use to connect to the databases, they have to match the Java version that the ESBs support. For the H2 database, the same version of the connector (1.2.121) that was installed could be used for the Mule ESB. However, version 1.0.79, the last build before 1.1, had to be used for the Sonic platform because of the previously mentioned problems. Regarding the MySQL connector, both platforms could use version 5.1.10 to connect to the database.

Apart from the above-mentioned database servers, there was also a need for an operating system to run the two platforms on. One option was to run the tests on physically separate hardware to be able to simulate network problems, but the choice fell on testing on a single computer running Windows 7. This could have been a problem, or a risk, since the operating system was relatively new and the ESB software may not have been fully tested on it. It may have been preferable to run the platforms on a more server-focused operating system.

Last but not least, SoapUI [20] was used to connect to the Web service message flow. This simplifies the testing, and by using SoapUI there was no need to build a testing client for this simple purpose. The version of the software used is 3.0.1.

4.4 Sonic ESB

As mentioned in the background chapter, the latest version of the Sonic ESB is version 7.6, which was also the version supplied by Mogul AB. At the time the work on the thesis began, an earlier version of the platform was used in operation, but the choice was made to use the latest version for this thesis.

4.4.1 System installation

The Sonic ESB platform is installed through a traditional installation program. During the installation you get the choice to use the included JRE or the system-installed JRE. By recommendation the included JRE was chosen since it had been well tested to work with the Sonic platform. The included version is 1.4, which is a bit old compared to the latest version that can be retrieved from Sun. During the work a patch for the Sonic platform also appeared, which took the version from 7.6 to 7.6.2. Some configuration had to be done after the installation to make sure that the workbench and the management console could connect to the domain manager. In figures 4.1 and 4.2 you can see an overview of the workbench and the management console.

4.4.2 Database to database message flow

First we need to make sure that the message flow polls the database server and fetches the data at a given frequency. Since we are doing this by using built-in functionality from the platform, we use the DBService module. The DBService module is a service through which you can connect to databases. This service is configured from the Sonic Management Console and not from the Workbench; the database connections that can be set up from the Workbench are only for testing parts of the message flow, so-called unit testing. Our flow is going to use both the H2 database and the MySQL database and we therefore need to set up two services of the type DBService. These services will then run inside a container, which we mentioned in the background chapter, and will be available for our ESB processes to use. One thing that still remains, however, is to make sure that the jar files containing the drivers for the H2 and MySQL connectors are available to the platform. If you do not make them available, the containers where the services will be placed will not start, and error messages will be displayed in the log files. By opening the configuration window for the ESB container where our services will be running, we add the two jar files under the Resources tab. As previously mentioned, version 1.0.79 of the H2 connector, the last version before the 1.1 build, is needed because the Sonic platform uses Java version 1.4. Our two new services are now named TestBench.H2Service and TestBench.MySQLService and are placed inside our container dev_ESBTest.

Figure 4.1. The Sonic Workbench on top of the Eclipse IDE. Here you have the tools palette to the right and the current overview of a process in the center. To the left the containers containing the message flows and services can be seen.

Figure 4.2. The Sonic Management Console where you can configure the ESB platform.

Figure 4.3. The ESB platform collects data from a database, transforms it and routes the data to a receiving database server.

Figure 4.4. Setting up a SQL query in the Sonic Workbench. The variables are mapped by the developing tool.

To get back to the Workbench where our message flow is being developed, we create two new database operations called db-to-db-getdata.esbdb and db-to-db-insertdata.esbdb. In these files you can enter the SQL query that the operation will execute, as you can see in figure 4.4. The data that the operation fetches from the database is converted automatically to XML, which makes the use of XSLT and XPath convenient. The message flow needs two transformations between fetching the data from the database and storing the data, in the form of an email address, at the receiving database server. The first transformation splits up the XML message, since we fetch multiple database rows from the database. That way each user from our 'userlist' table in our H2 database ends up in a separate XML message. Thereafter, our transformation which assembles the data into an email address starts processing. Finally our message is sent to our TestBench.MySQLService and placed inside the database. An overview of the message flow can be seen in figure 4.5. Both transformations use XSLT with XPath expressions. With the expressions we set up where the message should be split and what data from the original XML message is going to be included. There are visual tools to aid you if you do not feel comfortable writing the code for an XSLT file yourself, as can be seen in figure 4.6.

Figure 4.5. An overview of the database to database message flow.

In the figure of the message flow there is, however, a database operation missing, where the data should be fetched into our flow. Originally there was such a database operation, but it was then discovered that if you want to poll against a database server, which we do, it has to be configured inside the Sonic Management Console and not in an ESB process. Inside the SMC, where we created our two database services, you can choose whether the service should execute a SQL query and how often the query should be executed. There is also the possibility to choose a validation query, which apparently is a way for the Sonic platform to test if a database connection is working. The query to be used has to return at least one row for it to work, however.

Figure 4.6. Visual tool to aid creating an XSLT transformation file. You can connect variables by drawing lines between the different variables or by altering the code directly.

When it comes to SQL queries you can also add multiple queries on the same database service, which could fetch the data and send it to different queues or topics, so-called endpoints. For our flow, however, we only need one query and we only need to set up the service so that it places the fetched message inside a queue for our ESB process to pick up. When our ESB process picks up the message from its entry endpoint, where our service has sent it, the process starts and finally our message gets inserted into our MySQL database. When a process has finished with a message the 'Exit Endpoint' is called and we could transfer the message to another process if we want. In our case, however, we insert the data into the database table.

4.4.3 Database to file message flow

Much from our earlier message flow in chapter 4.4.2 can be reused for this flow, except that we have to replace our last database operation with a file drop operation. We also have to replace one of the XML transformations to match the data output format. Again, we use XSLT and XPath expressions to transform our message to the correct XML format. The XML splitting part is still needed since we want a separate file for each user from our userlist table.

The file service for dropping files in a folder is also something that is built into the Sonic platform, and we can drag and drop a 'File Drop Service' from our graphical developer palette. All that the file drop service needs is a configuration file with the extension *.drop for it to function properly. In this configuration file, the folder where the file is dropped is set up, as is the file name for the output file. You can also set up a verification message which could be sent to a queue or a topic.


Figure 4.7. The ESB platform collects data from a database, transforms it and places the data in a new file.

One thing that has not been mentioned is that the Workbench has excellent tools to debug your message flows. You have the ability to set up listeners for your queues and topics, and you can listen on processes as well. With this help you can spot how far a message gets before an error occurs, see figure 4.8.

Figure 4.8. Message listening capabilities for the Sonic Workbench. Listeners can be added on a service or queue in a process. Any received message is displayed under Received Messages.

We have also not mentioned the fact that error handling is very straightforward in the Sonic platform. All you have to do is make sure you set up the endpoints for the so-called 'Fault Endpoint' and 'Rejected Message Endpoint' for each ESB process. When a fault occurs or a message is rejected, it should be delivered to the configured queue or topic.

4.4.4 Database to multiple receivers message flow

Figure 4.9. The ESB platform collects data from a database, transforms it and routes the data to multiple receivers.

The flow with multiple receivers is created easily thanks to the development tools for the Sonic platform. The only thing that needs to be done is to use a so-called 'Fanout' component, which will duplicate our message and send it through each fan that is connected to a service or process. That way, we only have to move our two previous flows, with our services inside, to our new message flow and connect them to the Fanout, as can be seen in figure 4.10.

4.4.5 File to database message flow

For starters, a completely new file polling service was built with the help of tutorials shipped with the platform. But as soon as the included file polling service was discovered it was used instead. How to build your own services is described further down in this chapter, since we need to build one for our CSV to XML transformation later on.

Back to the message flow: in the same way as we created the database message flows, we here use the Sonic Management Console to create a new service from the built-in File Service. The service TestBench.FileService is created and, in a similar way as for the file dropping service, it needs a configuration file to work. The last thing to do is to make sure that the Exit Endpoint for our service is connected to the Entry Endpoint for our ESB process containing the message flow. In our case it is the queue TestBench.FilePickup.

Figure 4.10. An overview of the multi-receiving message flow. Both message flows are combined with a Fanout component.

Figure 4.11. A file is collected and transformed by the ESB platform and the data is then routed to a receiving database server.

The service is then placed inside our dev_ESBTest container, and the files are fetched from the file system and delivered to our specified queue. However, the messages that are sent to the entry endpoint of our ESB process are in the format of a CSV file, and we need to make sure they have the format of an XML message instead. In the same way as for the Mule platform, handmade services are written in Java, and when you choose to create a new service you get a template which helps you get started with the coding. The Java service class implements the interface XQServiceEx, which contains a number of methods like init(), service(), start() and stop(), to mention a few. The important method for us is the service method, which is the one responsible for receiving and sending messages for our transformation class. Messages are delivered in the form of an XQEnvelope which contains XQMessages. A message that is being sent can also have multiple parts with different data, so you have to locate the correct XQPart of a message, which in our case contains the CSV data. The W3C DOM package is used to create a new XML part which switches place with the CSV data in our XQPart. The XQPart is then reattached to the XQMessage and the whole envelope is placed in the outbox. The service is then uploaded to any container to be able to be used by an ESB process. We place it in our dev_ESBTest container.
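As a rough illustration of the service method described above, the sketch below shows what such a CSV to XML service could look like. It is only a sketch: the class and interface names (XQServiceEx, XQEnvelope, XQMessage, XQPart) are those mentioned in the text, but the exact signatures, helper methods and package names of the Sonic XQ API are assumptions here and may differ from the real API.

// Sketch of a Sonic ESB service that replaces CSV content with XML.
// The com.sonicsw.xq types and method names below are assumed from the
// description in the text and are not verified against the real API.
import com.sonicsw.xq.XQEnvelope;
import com.sonicsw.xq.XQInitContext;
import com.sonicsw.xq.XQMessage;
import com.sonicsw.xq.XQPart;
import com.sonicsw.xq.XQServiceContext;
import com.sonicsw.xq.XQServiceEx;
import com.sonicsw.xq.XQServiceException;

public class CsvToXmlService implements XQServiceEx {

    public void service(XQServiceContext context) throws XQServiceException {
        try {
            while (context.hasNextIncoming()) {                 // assumed iteration style
                XQEnvelope envelope = context.getNextIncoming();
                XQMessage message = envelope.getMessage();
                XQPart part = message.getPart(0);               // part holding the CSV data
                String csv = part.getContent().toString();
                // Build the XML replacement and put it back into the same part,
                // then hand the envelope onwards ("place it in the outbox").
                part.setContent(toXml(csv), "text/xml");
                context.addOutgoing(envelope);
            }
        } catch (Exception e) {
            throw new XQServiceException(e);
        }
    }

    private String toXml(String csv) {
        // Simplified stand-in for the real DOM-based transformation.
        StringBuilder xml = new StringBuilder("<users>");
        for (String line : csv.split("\\r?\\n")) {
            String[] fields = line.split(",");
            if (fields.length >= 2) {
                xml.append("<user><firstname>").append(fields[0].trim())
                   .append("</firstname><lastname>").append(fields[1].trim())
                   .append("</lastname></user>");
            }
        }
        return xml.append("</users>").toString();
    }

    // Lifecycle methods required by the interface; left empty in this sketch.
    public void init(XQInitContext initContext) { }
    public void start() { }
    public void stop() { }
    public void destroy() { }
}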

Figure 4.12. An overview of the file to database message flow.

After that we only need to place our new CSV to XML service inside our message flow and place our XML splitter after it. Last in the message flow we have our database operation that inserts the users from the original CSV file into our H2 database. Since we have already configured the H2 database service we can use it without any problems. The final overview of the message flow can be seen in figure 4.12. One thing that also needs to be mentioned is that the flows, or the ESB processes, need to be uploaded to a container.

4.4.6 File to file message flow

This message flow is easy to set up since we have completed our previous message flows. We only need to replace our last database operation, from the file to database flow, with the built-in File Drop Service and remove the transformation that split our XML message into multiple parts.


Figure 4.13. A file is collected and transformed by the ESB platform and then placed in a new folder.

4.4.7 File to multiple receivers message flow

Figure 4.14. A file is collected and transformed by the ESB platform and then routed to two different receivers.

Nothing new shows up here, as it is the same procedure for this message flow as it was for the database to multiple receivers flow in chapter 4.4.4. We only need to drag and drop a Fanout component from our development tool and connect our previous flows to this Fanout.

4.4.8 Web service message flow

Hosting a Web service, or in this case exposing an ESB process, is easily done with the Sonic platform. There is a setting for the ESB process called 'Expose as Web Service' which makes the ESB process accessible from the outside like a typical Web service. In the same way you can also generate a WSDL file, which will use the ESB process' interface to create the needed code for the WSDL file.


Figure 4.15. A Web service is hosted by the ESB platform where a client can access it from the outside.

For the message flow, or the ESB process itself, the Sonic platform provides a service called 'Unwrap SOAP'. Since Web services use SOAP, that information needs to be stripped from the input data. As figure 4.16 shows, the database operation is then used to fetch the address for the name that has been sent in as input data. However, we need to make sure that the correct part of the data is sent to the database. For each database operation there is a tool called Request and Response Mapping. These are visual tools where you can specify what part of a message should be used as input data for the database operation. On this part we can also apply an XPath expression to extract the text we need for our operation. For the Response Mapping we choose to replace the previous message completely with our new message containing the address data that has been fetched from the database. Finally we have a Web service in our message flow which will send the data back to the caller. In the same way as for the database operation there is also a mapping tool where we, for this message flow, can remove database-specific parts and just send the address string back to the caller.

Nothing more than the above is needed to get a fully working Web service from an ESB process, but it should be mentioned that the work did not go as smoothly as described above.

Figure 4.16. An overview of the Web service message flow.

4.4.9 Persistent queue setup

Persistency works in the following way for the Sonic platform. You have to make all queues or topics in a message flow chain persistent by changing a configuration variable to 'At least once' or 'Exactly once'. The default setting is 'Best effort', which is the same as non-persistent. The settings refer to how the platform should handle message transfer when sending a message to the next queue or topic in line. In other words, if the last service in a process uses a topic which has a non-persistent configuration, messages sent from this service to the Exit Endpoint will be non-persistent. The queue used for the Exit Endpoint may have a persistent configuration, but the messages lying in that queue will be of non-persistent type. The setting you can choose for an ESB process in the Workbench does not apply to the entire process, and for the first message to be persistent the 'Entry Endpoint' has to be persistent. If you use the setting 'Exactly once', all queues or topics in the chain have to have the same setting, or messages will be sent to the dead letter queue, the so-called Rejected Message Endpoint.

4.5 Mule ESB

Mule is shipped in two versions, as we described in the background chapter 2.3.4, a Community Edition and an Enterprise Edition, and as also mentioned the Community Edition was chosen for the task. During the progress of the work, different versions of this edition were tested since there were problems regarding the transactions. To rule out that a bug was causing the problem, different versions of the platform were installed, but the problems were not solved by changing version. The Enterprise Edition, which could be freely tested for thirty days, was also tried, but the version used during the tests was version 2.2.1 of the Community Edition.


The Community Edition does not include a message system like the Sonic platform does. The choice was made to use Apache ActiveMQ as the messaging software, since it was used in the literature Open Source ESBs in Action [17] that was studied for this thesis. At the time of writing, the current stable version is 5.3.0, which was the version used in our tests. However, we should point out that OpenMQ 4.3 [13] was also briefly tested before ActiveMQ was chosen.

4.5.1 System installation

The installation of the Mule ESB platform is also very straightforward. Mule is shipped as an archived zip file and the installation simply consists of extracting the files in the package to a suitable folder. After that, some environment variables were configured so that the Mule platform can be started regardless of where you are in the folder structure at the command prompt.

Apache ActiveMQ is installed in a similar way because it is also shipped as an archived zip file. No further configuration was needed for the ActiveMQ software, but we had to make sure, for the message flows, that the Mule ESB could connect to the ActiveMQ software.

4.5.2 Database to database message flow

Figure 4.17. The ESB platform collects data from a database, transforms it and routes the data to a receiving database server.

First, the connectors which this message flow uses have to be set up, in our case the two database connectors and the ActiveMQ connector. With the help of Open Source ESBs in Action [17] we get a simple example of what such a configuration can look like. The configuration is done inside XML files and uses Spring to be able to set up all settings in a smooth and easy way.

Early on we could see that if a connector was shut down in our message flow and then restarted, it did not reconnect to the platform. This is apparently because you need something called a retry policy for your connectors. Retry policies are not built in for the Community Edition of Mule; you have to create one yourself. At this early stage we therefore have to make our own Java class, which we call InfiniteRetryPolicyTemplate. The class extends AbstractPolicyTemplate and overrides the method createRetryInstance, which returns yet another class. This class, which also has to be created, needs to implement the interface RetryPolicy. The task for this class is only to run a thread sleep for a short amount of time. When the thread reactivates, the policyOk method is called, which starts the reconnection phase for the Mule platform on the specific connector. This retry policy class can then be used by all our connectors, and our final configuration for the ActiveMQ part can be seen in figure 4.18.
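A minimal sketch of such a retry policy pair is shown below. It follows the class and method names given in the text (AbstractPolicyTemplate, createRetryInstance, RetryPolicy, policyOk); the package names and the applyPolicy signature are assumptions based on the Mule 2.x retry API and may differ between Mule versions, and the five-second sleep is just an illustrative value.

// Sketch of a "never give up" retry policy for Mule connectors.
// Package names and the applyPolicy signature are assumed from the Mule 2.x API.
import org.mule.api.retry.RetryPolicy;
import org.mule.retry.PolicyStatus;
import org.mule.retry.policies.AbstractPolicyTemplate;

public class InfiniteRetryPolicyTemplate extends AbstractPolicyTemplate {

    public RetryPolicy createRetryInstance() {
        return new InfiniteRetryPolicy();
    }

    private static class InfiniteRetryPolicy implements RetryPolicy {
        public PolicyStatus applyPolicy(Throwable cause) {
            try {
                // Wait a short while before Mule makes the next reconnection attempt.
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // policyOk() tells Mule that it should try to reconnect the connector again.
            return PolicyStatus.policyOk();
        }
    }
}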

Figure 4.18. The retry policy is added to the connector configuration in the form of a simple Spring property.

Figure 4.19. A new data source which can be used by the database connectors.

For our database connectors we have to set up a Spring bean which handles the connection towards the database, see figure 4.19. Our JDBC connector then uses this data source to execute the SQL queries which we have configured in the JDBC connector. We now have the building blocks needed to create our message flow. As described earlier in the background chapter, a Mule service consists, in the simplest case, of an inbound and an outbound endpoint. The outbound endpoint also has a router to determine where a message should be routed, if there for example are multiple receivers. Since we want to use JMS queues for later testing with persistent queues, we split a Mule service into two different services. The first service uses our polling JDBC inbound endpoint and sends the message through a pass-through-router onto a JMS outbound endpoint using the queue db.storage on our ActiveMQ connector. Our other service then uses a JMS inbound endpoint, fetches the message from db.storage and sends it through a pass-through-router onto our JDBC outbound endpoint which uses the insert SQL query defined earlier. In figure 4.20 we can see what our second service looks like.
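A rough sketch of how these two services could be wired up in the Mule configuration file is shown below. The element names follow the Mule 2.x XML schema as described above; the service names, query keys and connector names are illustrative assumptions.

<!-- Sketch only: service names, query keys and connector names are made up for illustration. -->
<model name="testBenchModel">

    <!-- Service 1: poll the H2 database and put each result on the db.storage queue. -->
    <service name="dbReaderService">
        <inbound>
            <jdbc:inbound-endpoint queryKey="selectUsers" connector-ref="jdbcH2Connector"/>
        </inbound>
        <outbound>
            <pass-through-router>
                <jms:outbound-endpoint queue="db.storage" connector-ref="jmsConnector"/>
            </pass-through-router>
        </outbound>
    </service>

    <!-- Service 2: read from db.storage and insert the transformed data into MySQL. -->
    <service name="dbWriterService">
        <inbound>
            <jms:inbound-endpoint queue="db.storage" connector-ref="jmsConnector"/>
        </inbound>
        <outbound>
            <pass-through-router>
                <jdbc:outbound-endpoint queryKey="insertEmail" connector-ref="jdbcMySqlConnector"/>
            </pass-through-router>
        </outbound>
    </service>

</model>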

Figure 4.20. The second part of the service component for the message flow. This example shows a database writer service.

However, we also need to transform the message somewhere along the way to get our email address from the fetched data. In Sonic's case we used XSLT, but for the Mule platform we instead use so-called 'Plain Old Java Objects', in short POJOs, to do the transformation. First we have to declare that we are going to use a custom transformer by adding the tag <custom-transformer> in our configuration file. Thereafter we can add the custom transformer onto our JDBC inbound endpoint by placing the tag <transformer> and referring back to our custom transformer tag by using the keyword ref.

Creating transformers in Java is easy after you have done it the first time. All that is needed is that your class extends AbstractTransformer, which is mentioned in Open Source ESBs in Action [17] as well as in the online documentation. After that you also have to override the method doTransform, which returns an Object and takes an Object and a String as parameters. The method throws a TransformerException for error handling. Inside the method you can transform your Object message to the requested format, in our case a simple string. This is done by typecasting the Object to a Map, because the data that has been fetched from the database is a Map container. After that you can fetch the data for the first name and the last name, attach an email string, and return it as a string.
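A minimal sketch of such a transformer is shown below; the doTransform signature is the one described above, while the Map keys and the email domain are illustrative assumptions.

// Sketch of a Mule 2.x custom transformer that turns a database row (Map) into an email address.
import java.util.Map;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractTransformer;

public class RowToEmailTransformer extends AbstractTransformer {

    protected Object doTransform(Object src, String encoding) throws TransformerException {
        // The JDBC inbound endpoint delivers each fetched row as a Map keyed by column name.
        Map row = (Map) src;
        // The column keys below are assumptions; they must match the SQL query's column names.
        String firstName = String.valueOf(row.get("firstname"));
        String lastName = String.valueOf(row.get("lastname"));
        // Assemble a simple email address from the two names (domain chosen for illustration).
        return (firstName + "." + lastName + "@example.com").toLowerCase();
    }
}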

An early observation is that Mule throws away messages that are not delivered, for example if a connection is down. You therefore have to add some sort of error handling or exception strategy which will take care of messages that have been rejected. After browsing through the online documentation and the references that concerned Mule, two exception strategies were discovered: one for the connectors and one for the entire service. In figure 4.21 you can see where the exception strategies are placed and where the messages are meant to be sent if an exception occurs.

A warning regarding transformations is that when you use custom transformers on an endpoint, all automatically performed transformations disappear, like Object to JMS or JMS to Object. We therefore have to be careful when we add our own transformations. In this flow, however, we do not have to think about it since we are not using custom transformations on our JMS connections.

Figure 4.21. An exception strategy which sends the messages to the dead letter queue specified.

One last thing to keep in mind is that we have to place the drivers for the database connectors in a suitable place and include them in the Eclipse build. They will then be found when the configuration file is executed by Mule.

4.5.3 Database to file message flow

Figure 4.22. The ESB platform collects data from a database, transforms it and places the data in a new file.

Large parts of the message flow above can be reused for the database to file message flow, but with some minor adjustments. The largest adjustment is that for our second service we need to replace the JDBC outbound endpoint with a file outbound endpoint. We also need to set up output patterns so that all files do not get the same name. Still, we use a pass-through-router since the message is only going to be delivered to one receiver.

Something that was discovered after a while, when we found out that exception strategies were needed for the connectors, was that we had to create a file connector as well. In the beginning of the development phase we only had simple file connectors directly in our message flow, but we could not add exception strategies when it was configured that way. We also had to have retry policies on all connectors, except for the file connector.

A new transformation class also needs to be developed, since in this message flow the data that we fetch from the database will be converted to an XML file. In the same way as before, we create a new Java class which extends AbstractTransformer. In the method doTransform we use the W3C DOM package to create an XML document, which we then fill with the data retrieved from the database. The XML document is created by the use of the DocumentBuilderFactory class, and an example of how an XML document is built up in this way can be found at W3C's XML page [25]. When we have all our data stored in our new XML document, we return it in the form of a string back to the message flow and, if all goes well, we end up with an XML file in our output folder.
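The DOM-building step could look roughly like the sketch below. It is independent of Mule and only shows the DocumentBuilderFactory part; the element names and Map keys are assumptions for illustration.

// Sketch: build an XML document from a database row (Map) and serialize it to a string.
import java.io.StringWriter;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class UserXmlBuilder {

    public static String toXml(Map row) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

        // Root element with one child per column we are interested in (names are assumptions).
        Element user = doc.createElement("user");
        doc.appendChild(user);
        appendField(doc, user, "firstname", row.get("firstname"));
        appendField(doc, user, "lastname", row.get("lastname"));

        // Serialize the DOM tree to a string so it can be returned from doTransform().
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    private static void appendField(Document doc, Element parent, String name, Object value) {
        Element field = doc.createElement(name);
        field.appendChild(doc.createTextNode(String.valueOf(value)));
        parent.appendChild(field);
    }
}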

4.5.4 Database to multiple receivers message flow

Figure 4.23. The ESB platform collects data from a database, transforms it and routes the data to multiple receivers.

The difference between this message flow and the two earlier ones is that we now have to make sure both the file and the database endpoints receive correctly formatted data transformed from one and the same message. This is solved by first using topics instead of queues in our services. That way we can have multiple services subscribing to a topic and receiving the messages that are placed there. We then copy our previous two flows, move them inside our new message flow and make sure that they are subscribed to the JMS topic instead of the JMS queue which was used before. We now have two subscribers for a message and we can make the corresponding transformations for each service. We could probably also use a multicast router instead of the pass-through-router to send a message to multiple endpoints, but the chosen solution is easier to implement by a small margin. The transformations are not to be forgotten, and they now have to be performed on the outbound endpoint or the inbound endpoint for the JMS topic. But as previously mentioned, if we use a transformation on a JMS endpoint we have to add the JMS to Object transformer since it will no longer be performed automatically.


4.5.5 File to database message flow

Figure 4.24. A file is collected and transformed by the ESB platform and the data is then routed to a receiving database server.

There are no surprises regarding the setup of this message flow, except that we again have to create a new transformation class, which will convert the CSV data to a Map data structure. The Map can then be used to insert the data into the database by explicitly selecting data from the Map. This is done with the help of the SQL queries which you define for your connectors. See figure 4.25 for an example of what this looks like.

Figure 4.25. A SQL query for inserting the data, taken from a map container, into a database server.

When we fetch, or read, our file from the specified folder we also choose to run one of the built-in transformations. This transformation converts a byte array to a string object. This way, our CSV file will become a string containing the contents of the file, which is convenient for us later on. However, since we can only insert one user from the original CSV file at a time into the database, we have to split our message, or string object. This is done by implementing a custom router which will split our message into multiple distinct parts. As we did before with the custom transformers, we create a custom router class, this time by extending AbstractMessageSplitter. The method getMessageParts is then overridden; it splits our message, which is a string, on every new line. We then make sure that each part is sent to the correct endpoint and return the new messages.
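A rough sketch of such a splitting router is shown below. The class name AbstractMessageSplitter and the overridden method getMessageParts are those mentioned above; the exact signature, the SplitMessage helper type and the package names are assumptions based on the Mule 2.x router API and may differ slightly.

// Sketch of a custom Mule 2.x outbound router that splits a CSV string into one message per line.
// The getMessageParts signature and the SplitMessage type are assumed from the Mule 2.x API.
import java.util.List;
import org.mule.api.MuleMessage;
import org.mule.api.endpoint.OutboundEndpoint;
import org.mule.routing.outbound.AbstractMessageSplitter;
import org.mule.routing.outbound.SplitMessage;

public class CsvLineSplitter extends AbstractMessageSplitter {

    protected SplitMessage getMessageParts(MuleMessage message, List endpoints) {
        SplitMessage parts = new SplitMessage();
        // The payload is the whole CSV file as a string (the byte array to string
        // transformer has already run before this router).
        String csv = message.getPayload().toString();
        // All parts go to the first (and in our flow only) outbound endpoint.
        OutboundEndpoint endpoint = (OutboundEndpoint) endpoints.get(0);
        for (String line : csv.split("\\r?\\n")) {
            if (line.trim().length() > 0) {
                parts.addPart(line, endpoint);
            }
        }
        return parts;
    }
}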

All that is left to do is to make sure that the polling frequency is not set too high and that the file connector is used, as described before, so that exception strategies can be used for the connector.

One thing that we have not mentioned earlier regarding Mule and the way the configuration file works is that you have to include the keywords 'jms', 'file' and so on in the Mule header. If you do not do this, Mule will not understand what a 'file:connector' or a 'jms:connector' is. The keywords in the header are linked to XML schema documents containing the definitions for those keywords.

4.5.6 File to file message flow

Figure 4.26. A file is collected and transformed by the ESB platform and then placed in a new folder.

Another new transformation class is created for this message flow, which will convert our CSV file to an XML document. More than that is not necessary for this flow to function, and the W3C DOM package is used to create our new XML document. The byte array to string transformer is also used and placed before our new transformer.

4.5.7 File to multiple receivers message flow

Figure 4.27. A file is collected and transformed by the ESB platform and then routed to two different receivers.

Since both our message flows that have a file connector as the inbound endpoint are using the built-in transformation, byte array to string, the transformation can be placed on the inbound endpoint for this multi message flow. Other than that, we do exactly as in the previous multi message flows. We simply move the two flows above into the configuration file so we get three services. We then change the use of JMS queues to JMS topics so that each subscriber, the two services, can receive a message that is being sent through the chain.

4.5.8 Web service message flow

Figure 4.28. A Web service is hosted by the ESB platform where a client can access it from the outside.

There are a number of different approaches to this message flow, which are described by Rademakers and Dirksen in Open Source ESBs in Action [17], but the road that was chosen was one of the simplest ones. Our Web service is created in Java, and to host it on the Mule platform the CXF [2] connector is used. For the inbound endpoint we use a CXF endpoint where we can choose what URL it should listen on for receiving SOAP requests to our Web service. After that, in our message flow chain, we add a component class which is linked to a Spring bean where we specify which Java class we use as the component class. These component classes have been mentioned briefly in the background chapter. They can be placed between an inbound and an outbound endpoint, and in this case it will be our new Web service class. However, we will not be needing any outbound endpoint for this message flow since our component class takes care of the response back to the original caller. Figure 4.29 shows how little code is needed to host a Web service in Mule.

For our Web service class, we can then define methods which can be called from outside the platform. However, we must not forget that the methods should throw an Exception so that the Mule platform can take care of possible errors. This way, failed messages are placed on the dead letter queue instead of disappearing from the system. When we launch our message flow a WSDL file will automatically be generated, which makes it easy to host your own Web service.


Figure 4.29. Code to host a Web service in Mule using CXF.

The method we create in our Web service class uses a database connection where we fetch the data, in the form of an address, and return it as a string to the caller. A name is used as input, to find the corresponding home address. We also make sure that if no address is found for a specific name, the string 'address unknown' will be returned, but we could also throw an exception.
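As an illustration, such a component class could look roughly like the sketch below. It is a plain Java class, as described above; the JDBC URL, credentials, table and column names are assumptions made for the example, not the actual test bench values.

// Sketch of the Web service component class hosted behind the CXF endpoint.
// Connection details, table and column names are illustrative assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AddressLookupService {

    // Called by Mule for each incoming SOAP request; throwing lets Mule handle errors.
    public String getAddress(String name) throws Exception {
        Connection connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testbench", "user", "password");
        try {
            PreparedStatement statement = connection.prepareStatement(
                    "SELECT address FROM addresses WHERE name = ?");
            statement.setString(1, name);
            ResultSet result = statement.executeQuery();
            if (result.next()) {
                return result.getString("address");
            }
            // No row found for the given name.
            return "address unknown";
        } finally {
            connection.close();
        }
    }
}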

4.5.9 Persistent queue setup

To set up persistent queues for our message flows, a setting needs to be added. On our ActiveMQ connector configuration, the keyword persistentDelivery can be used, and if it is set to true the messages which end up in a JMS queue will be stored on disk. By default the setting is false, but as you can see in figure 4.30 it is a simple task to change it. More than that is not necessary to use persistent queues, unless you want to use an external database for storing the messages or similar.

Figure 4.30. Adding persistent delivery to the ActiveMQ connector in Mule.

4.5.10 Transactions

For the Mule platform, the ability to use transactions was also examined. If you start a transaction on a JDBC endpoint it seems that you can only bind this transaction to other JDBC endpoints. If you have a JMS endpoint as the outbound endpoint you will get an exception. But as we discovered in chapter 3.2, Mule also supports XA transactions, which can handle both JDBC and JMS endpoints.


To get the XA transactions to work you have to do a number of configurations. First, transactions are set up on the inbound endpoint, but the outbound endpoint still needs to have transactions 'available'. The outbound endpoint does not have to join the transaction started on the inbound endpoint. Because of this we have to set up XA transaction support for our ActiveMQ messaging system, since we use JMS transfer between our two services in our Mule message flows.

We also need a transaction manager, and the recommendation is to use the built-in JBoss transaction manager. It is initiated by adding the tag <jbossts:transaction-manager/> to our configuration file. To set up the ActiveMQ connector, the only thing required is to change its tag to <jms:activemq-xa-connector>. We also have to specify which JMS specification we are going to use with the keyword specification, and we will be using version 1.1, as can be seen in figure 4.31. By adding the tag <xa-transaction> to an endpoint in the Mule service, the transaction is started on that endpoint. The tag has the keyword 'action' that needs to be set, where we can choose: NONE, ALWAYS_BEGIN, ALWAYS_JOIN, BEGIN_OR_JOIN and JOIN_IF_POSSIBLE. We will be starting the transactions on the service which has the outbound endpoint connected to our receiving service, because the receiver will be disconnected in our tests. We therefore use the setting ALWAYS_BEGIN on the inbound endpoint of that service, and select the ALWAYS_JOIN setting for our outbound endpoint.

Figure 4.31. XA capable configuration for the ActiveMQ connector.
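Figure 4.31 shows the connector side of this. Stitched together from the tags named above, the endpoint side of the writer service could look roughly like the sketch below; the connector names, queue name and query key are illustrative assumptions.

<!-- Sketch only: names and values are illustrative, the tags are those described in the text. -->
<jbossts:transaction-manager/>

<jms:activemq-xa-connector name="jmsXaConnector" specification="1.1"/>

<service name="dbWriterService">
    <inbound>
        <jms:inbound-endpoint queue="db.storage" connector-ref="jmsXaConnector">
            <!-- Start the XA transaction when the message is picked up from the queue. -->
            <xa-transaction action="ALWAYS_BEGIN"/>
        </jms:inbound-endpoint>
    </inbound>
    <outbound>
        <pass-through-router>
            <jdbc:outbound-endpoint queryKey="insertEmail" connector-ref="jdbcXaConnector">
                <!-- Let the database insert join the transaction started on the inbound endpoint. -->
                <xa-transaction action="ALWAYS_JOIN"/>
            </jdbc:outbound-endpoint>
        </pass-through-router>
    </outbound>
</service>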

The database connectors also have to be adjusted so that they support XA transactions. This is done by altering the connector class being used.


Chapter 5

Results

In this chapter the results from the different scenarios, which we built up in the implementation phase, are presented. Discussion and conclusions drawn from these results are presented in the following chapters.

5.1 Receiver disconnected scenario

Figure 5.1. The receiving part in a message flow is disconnected while the system is up and running.

As mentioned earlier, we disconnected the receiving part from the message flow to see how the platforms reacted. That way we could get an overview of how the error handling worked for the two systems.

5.1.1 Database as sending access solution

We begin with the Sonic platform. We can establish that the containers, which the message flows are running inside, start directly even though a service is not connected, in this case the receiving part. Error messages do however show up in the container logs, showing that a connection cannot be established to a service. When we then start to send messages through our message flow, where we have a database as the receiving part, and disconnect that service, a couple of things occur. First, checking the log files shows clear Java exceptions telling us that something has gone wrong. These exceptions describe that a message could not be delivered to the specified part because the connection is not online. The other interesting observation is that the messages are sent to the dead letter queue, or Rejected Message Endpoint, which we configured in the implementation phase. No messages were therefore lost from the system when the receiving part was disconnected.

When it comes to the database to file message flow, it did not differ much compared to the database to database flow. An important note, however, is that when the folder is unreachable, or in our case when we remove the USB memory device, there are no error messages showing up in the container logs. In the database to database message flow there were clear error messages when the connection disappeared, and another difference is that it takes less time for a message to go from the entry point to the exit point in our message flow. The messages do however end up in the dead letter queue when we start sending messages through our message flow, just as before.

The Mule platform is stricter when it comes to starting the system without all the services up and running. But as long as we have a retry policy on our connections, which we use in our configuration, the system starts as soon as all the connections are available. First we have our message flow with the database server as the receiving part. When the database gets disconnected, nothing happens in Mule that indicates that we have a crash or disconnection scenario. There are no warnings or error messages in the logs either, as there were with the Sonic platform. However, when messages start to be delivered through our message flow the warnings start to appear, and our implemented retry policy starts to do its work by trying to reconnect to the database server. In the logs where the errors appear you can also clearly see that the default connection exception strategy gets called and sends our messages, which could not be delivered, to the dead letter queue, in our case the db.error queue. If there are multiple messages being sent, it will take some time before they are processed and wind up in the dead letter queue which we have specified.

Something interesting happens when we try the transaction configurations on this message flow. When we disconnect the receiving database server and send a message, instead of calling the default connection exception strategy the default service exception strategy is called. The error messages which show up in the log are also of a different character and are more closely connected to the transaction part than to our disconnected database server. The messages, ready for processing, appear at first glance to stay put in the queue, unprocessed, but after a while they start to disappear. However, these messages do not end up in the dead letter queue which we have configured; they disappear completely from the system. Another aspect is that the messages stay a much longer time in the queue before they disappear, compared with the message flow where transactions are deactivated. An interesting thing did occur when we later tested persistent queues, which led to retesting our earlier scenarios. When persistent queues are activated, the messages which get lost when using transactions are sent to ActiveMQ's own dead letter queue, called ActiveMQ.DLQ.

Regarding the database to file message flow, the only thing that is different compared to the database to database flow is that when a folder becomes unavailable, files may already be open for writing. These files become corrupted and end up on our USB memory device with a size of zero bytes. However, all the messages which did not get sent properly end up in the dead letter queue, even those which got corrupted, and this must be considered a positive thing. We also get clear and visible Java exceptions in the logs for each message which did not get delivered, explaining that the path was unavailable.

When the transaction configuration is tested on this message flow, with a file endpoint as the receiving part, it does not differ much compared to the database to database message flow with transactions. The messages still disappear without getting sent to the dead letter queue, even though we have configured one. But as soon as we activate the persistent queues the messages are delivered to the ActiveMQ.DLQ.

5.1.2 File as sending access solution

For the Sonic platform, this test did not behave any differently from the previous integration solution. The messages for both the file to database and the file to file message flows ended up in the dead letter queue when the receiving part was disconnected. Error messages are also displayed in the container logs for easy viewing.

Mule’s solution or outcome of this test did also not behave any different com-paring to earlier or towards the Sonic platform. As soon as there is an error onthe receiving part, both database or folder, the messages gets delivered to the deadletter queue file.error. They also gets delivered to the dead letter queue if there areproblems with for example putting data into the database, e.g. if the primary keyalready exists in the database table when running the file to database message flow.

Regarding the transaction message flows, we are not in for any surprises compared to earlier results. Messages keep getting lost if we have not activated persistent queues.

5.1.3 Web service access solution

We begin with the test for the Sonic platform. Because Web services are both senders and receivers, we decided to disconnect the database server used by the message flow from the ESB platform, to see what would happen. When we disconnect the database and send a request through our Web service with the help of our tool SoapUI [20], the message gets intercepted and delivered to the dead letter queue (Rejected Message Endpoint). However, we are left hanging at the end, waiting for a reply from our Web service. We do not get any message back indicating that something went wrong, but after a while a socket timeout occurs.

The Mule platform acts differently compared to Sonic. When unplugging the database server from the message flow and sending a message, a Java exception is thrown from our component class. Instead of waiting for a socket timeout with our SoapUI tool, we immediately get a SOAP:Fault message back with error codes describing the problem. The error code lets us determine what went wrong, namely that the database connection is down. As pointed out earlier, we did not get this type of error message when we used the Sonic Web service, even though our message was sent to the dead letter queue.

5.2 Receiver temporary disconnected scenario

Figure 5.2. The receiving part in a message flow is disconnected and then reconnected while the system is up and running.

The difference between this scenario and the one above is that here we reconnect the disconnected receiver after a short while, to see if the messages eventually are re-sent.

5.2.1 Database as sending access solution

For the Sonic platform we directly notice that messages which have already been sent to the dead letter queue (RME) do not get re-sent after the receiver is reconnected. It does not matter whether it is the database or the file message flow that is tested. Since we also split our messages in our database to database process, this can result in some messages or database rows getting delivered to the receiving part while others get sent to the dead letter queue.

Regarding the Mule platform, for both the database to database and the database to file message flow the same outcome as with the Sonic platform occurs. The messages which have reached the dead letter queue do not get re-sent. The other messages, which have not yet been processed and sent to the dead letter queue, are sent to the receiver as soon as it is reconnected. You will have to look out for corrupted files when using a folder as the receiving part, since many files of size zero bytes appear in the folder. Nothing similar, such as corrupted rows, showed up in the database to database message flow.

When the transaction configurations were tested for each message flow, nothing new showed up. As long as we have persistent queues activated, messages which fail to be sent are sent to the ActiveMQ.DLQ queue instead. However, an interesting aspect regarding this scenario is that there is a variable called max redelivery for ActiveMQ connections. If this variable is increased it takes much longer for a message to be sent to the dead letter queue, and the message may get re-sent before it has been redelivered the maximum number of times.

5.2.2 File as sending access solution

For the Sonic platform, nothing new occurred compared to when a database server is the sending part, except that for the file to file message flow the messages got sent at a higher speed. A small down time then results in more messages getting sent to the dead letter queue, compared to our database to database message flow.

When we take the Mule platform into consideration, common to the message flows that have a file endpoint as the sending part is that it takes less time to fetch the data into the system. Therefore it takes less time before the messages end up in the dead letter queue. As pointed out earlier, max redelivery only works together with the use of transactions. When the transactions are activated you can notice that the max redelivery variable has an effect on the message flow, even though file endpoints are not supported by XA transactions. Setting the variable max redelivery to around 200 gives two to three seconds of time before a message gets sent to the dead letter queue. This would be enough time for a minor connection drop-out.

5.2.3 Web service access solution

The Web service message flow for the Sonic platform works just the same as before. If an error has occurred the message is not re-sent, and we therefore have to wait for a socket timeout before we can send a new request with our tool SoapUI [20].

When we then use the Mule platform, nothing has changed compared to when the Sonic platform was tested. Because the message flow uses synchronous message transfer, the database server has to be online when the request occurs. It does not matter if it is just a short down time, because the request is not sent again. However, with the Mule platform you at least get a SOAP:Fault message back so you can resend the request directly.

5.3 Platform or message system crash scenario

Figure 5.3. The platform is taken down while a message flow is running and sending messages through the system.


In this scenario we tested how the platforms reacted to the use of persistent queues by letting them crash while messages were being transferred. The interesting part was to see if the platforms resumed their work where they were abruptly stopped.

5.3.1 Database as sending access solution

First out is the Sonic platform. With the 'Exactly once' setting on all services, which makes the messages persistent, the messages should still be there after the ESB platform crashes. This also seems to be the case, but the work is not resumed where it stopped when the Sonic ESB platform is restarted. It could be that the topics, which are used between the services, are marked in some way that would imply that a crash has occurred and that the work should therefore not resume. We can clearly see that the Sonic platform recognizes that the system was not shut down properly and takes action. The different message flows do not seem to make any difference either.

Regarding the Mule platform, if the ESB platform or the message system, in our case ActiveMQ, crashes, we instantly notice that the messages are still left where they were when we restart the platform, as long as we have persistent delivery activated. However, data that has been fetched from the database server gets split up and there is a risk that it has not reached the persistent JMS queue. These messages are lost when the platform crashes. Fetching many database rows per polling cycle means that many messages are lost when the system goes down.

There are different problems depending on whether the ESB platform crashes or the message system crashes. If the ESB platform goes down there is a possibility that a few messages get lost, because part of the data that was fetched may not have reached a persistent JMS queue. If, however, the message system drops, the ESB platform continues to process and fetch data from the database. But as there are no queues to deliver these messages to, nearly all messages are lost and only a very few get sent to the correct part.

Transactions do not seem to have any effect on how persistent queues are handled by Mule.

5.3.2 File as sending access solution

As mentioned earlier, the access solution has no effect on how the ESB handles a crash for the Sonic platform.

Regarding the message flows for the Mule platform with file endpoints, the only difference is that the messages are transferred more quickly. Less time is therefore spent fetching the messages, which means that a message spends more time inside a persistent queue than anywhere else inside the message flow.

5.3.3 Web service access solution

There are problems with our Web service solution when our ESB platform crashes, since Web services use synchronous message transferring. This is true for both the Sonic platform and the Mule platform. If you have already sent a request to the ESB platform there will not be any response back to the caller. Instead you get a connection reset exception after a while when the platform crashes.

5.4 Receiver disconnected in multi receiver flow

Figure 5.4. The sending part sends a message which is redistributed by the platform to two different receivers, where one receiver is disconnected from the ESB platform.

Within this scenario the system is tested with a message flow containing multiple receivers. The preferred outcome is that either every receiver gets the message or none of the receivers gets the message that was sent through the message flow.

5.4.1 Database and file access solutions to multiple receivers

Because we did not have any transactions to try with on the Sonic platform, the results were expected. When one of the receivers was disconnected, the other ones still received the message. The test was also performed with the ’Exactly once’ setting, which should trigger a rollback if something goes wrong, but it seems only to work if you have multiple addresses in the JMS message header part.

For the Mule platform, when we disconnected one of the two receivers, the other receiver still got the message. This is pretty much expected when transactions are deactivated. The messages that did not reach their receiving part ended up in the dead letter queue. With the help of the dead letter queue we can then easily locate which messages did not get sent. With some manual implementation you could resend just those messages if needed.
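
As a sketch of what such a manual implementation could look like, the following Java snippet drains ActiveMQ.DLQ and pushes each failed message back to the receiver that missed it. The target queue name is hypothetical, and a transacted session is used so that a message is only removed from the dead letter queue once the resend has been committed.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class DlqResendExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Transacted session: the dequeue from the DLQ and the enqueue to the
        // target commit atomically, so nothing is lost if the resend fails.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Queue dlq = session.createQueue("ActiveMQ.DLQ");
        Queue target = session.createQueue("receiver.two.queue"); // hypothetical receiver queue

        MessageConsumer consumer = session.createConsumer(dlq);
        MessageProducer producer = session.createProducer(target);

        Message failed;
        // Drain the dead letter queue; receive(1000) returns null when it is empty.
        while ((failed = consumer.receive(1000)) != null) {
            producer.send(failed);
            session.commit();
        }

        session.close();
        connection.close();
    }
}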

However, when we activate the transactions to handle cases like this and go through the same procedure as before, the outcome is exactly the same. The transactions should be rolled back when a problem occurs, but that does not seem to be the case.


Regarding the file to multiple receivers message flow, something abnormal occurred. The messages were multiplied, and one message could result in up to four messages at the receiving end.

5.5 Summary of results

The table below summarizes the results from the scenarios above for both the Sonic and the Mule platform.

Scenario: Receiver disconnected
Sonic ESB: Java Exceptions in log files; No messages were lost; Failed messages found in DLQ; Corrupted files for file receiver; Socket timeout for Web service
Mule ESB: Java Exceptions only when sending data; No messages were lost except when XA transactions were used without persistent queues on; Failed messages found in DLQ; Corrupted files for file receiver; SOAP:Fault message for Web service

Scenario: Receiver temporarily disconnected
Sonic ESB: No messages were lost; Failed messages found in DLQ; Messages in DLQ did not get re-sent; Socket timeout for Web service
Mule ESB: No messages were lost except when XA transactions were used without persistent queues on; Failed messages found in DLQ; Messages in DLQ did not get re-sent; Redelivery function with XA transactions; SOAP:Fault message for Web service

Scenario: Platform or message system crash
Sonic ESB: Messages outside queues were lost; Did not resume work after restart; Connection reset for Web service
Mule ESB: Messages outside queues were lost; Resumed work after restart; Connection reset for Web service

Scenario: Receiver disconnected in multi receiver flow
Sonic ESB: No messages were lost; Messages reached receiver one when receiver two was disconnected
Mule ESB: No messages were lost; Messages reached receiver one when receiver two was disconnected; No rollback on receiver one with XA transactions


5.6 Persistent delivery performance hit

Figure 5.5. The ESB stores the messages in the queues and topics to disk.

The results in this section should not be compared between the platforms, because the message flows are not implemented exactly the same way and may not have been implemented in an optimal way. A more detailed view can be seen in Appendix A.

5.6.1 Sonic’s database to database performance test

It was a little difficult to test the performance hit of persistent delivery for the Sonic platform, compared to the Mule platform, when using the database to database message flow. The database connections, or the JDBC connectors, are significantly slower when the message flow is running on the Sonic ESB platform compared to the Mule ESB platform. To compensate for the slow database access, we changed our message flow to fetch 200 rows instead of the two rows we have in our database. We also tried increasing the polling interval from the low 1 ms to a more substantial 100 ms. This way, more processor time would be given to the actual process and not to the polling thread.

At the start, the message flow is tested without persistent queues activated. For the 100 ms interval, this resulted in around 300 messages reaching the MySQL database after 60 seconds, and a total of circa 1300 messages going through all transformations and waiting to be delivered to the database. With persistent queues activated, the number of messages which reached our MySQL database after 60 seconds went from 300 down to 200, and the number of messages which went through all transformations in our ESB process was 1150. In other words, a small but noticeable difference between the results. The exact numbers can be viewed in Appendix A.

5.6.2 Mule’s database to database performance test

The problems which showed up above with the database access were not apparent when we tested the message flow on the Mule platform, because the database connections were quicker and not the bottleneck of the message flow. Here we could perform the test as we had originally planned, polling the database every millisecond to fetch the two rows into our message flow.

The test started with the persistent queue settings off, and we could clearly see that the message queues grew as new data was fetched into the system while the ESB tried to process it. After a while the number of unprocessed messages stabilized at around 20 to 30 messages in the input queue. When 60 seconds had passed we could see that circa 4500 messages had been processed and delivered to our MySQL database. We also tried increasing the original two rows, as we did for Sonic, to see if it had an effect on the result, but we got a similar result, which indicates that the database connection was not our bottleneck in this message flow. The test was then re-done with persistent queues activated, which resulted in around 2500 messages being processed and delivered after 60 seconds. A quite substantial difference compared to the 4500 messages that were sent with persistent queues deactivated.


Chapter 6

Discussion

In this chapter a short discussion of the results presented in the previous chapter is held to shed more light on their causes. There are also discussions of possible solutions or enhancements to certain problems.

6.1 Receiver disconnected scenario

Figure 6.1. The receiving part in a message flow is disconnected while the system is up and running.

The preferred outcome in this scenario would have been that the messages get sent to the dead letter queue that we specified in the implementation part, and as the results show this seemed to be the case. In one case it did not go as expected, however, namely when transactions were used. More on that later on.

As the test results showed, there were no large differences between the Sonic and the Mule platform, despite the system differences. This could be seen as a positive thing because it could facilitate a move from one platform to the other. Both platforms also clearly displayed errors in the logs if a connection which was used in the message flow was inaccessible. The Mule platform is however a little more strict concerning the start up procedure if a service in the flow is unavailable. This is not a problem as long as the retry policy is on. By not allowing the system to go online when a connection is down, some debugging time may be saved. It is possible that the problem with a disconnected connection would otherwise have popped up later on in a large message flow with rarely used nodes. We also got simple Java Exceptions with both platforms, which could ease the burden of trying to locate an error or bug in the integration solution.

In the test we also saw that no error message is produced if a connected service crashes on the Mule platform. This is partly because that service is not used at that moment. If, for example, the sending part went down, we would have got an error message because we are constantly polling that service. In Sonic, however, we also get an instant error message in the log if the receiving part disconnects, and this would have been preferred for Mule as well. Regarding the scenario where the receiving part is a file transfer located on a USB memory device, we would probably want some kind of error message when the USB stick is removed from the system.

When it comes to the tests with Mule’s different message flows, we have the option to use XA transactions, and our results show that the outcome is a little different when transactions were used. The first thing that was noticed when the receiving part was disconnected is that instead of calling the default connection exception strategy, the default service exception strategy was called. We can assume that this is because the error occurs in the transaction handling part, a service, which has a somewhat higher priority than the connection part, even though the problem lies within the connection territory. The other thing that was noticed, and of great interest, is that messages which could not be delivered, because the receiving part was disconnected, did not get sent to the dead letter queue which we had specified. Instead they were all lost, even though we have a dead letter queue for both connection exceptions and service exceptions. Whether this happens because of something I have missed when configuring these dead letter queues is unclear, but I could not find any information that would have proved otherwise. It is curious, though, to see the messages get delivered to another dead letter queue, namely ActiveMQ.DLQ, when persistent queues are activated. It is possible that the messaging system takes over the error handling when transactions are activated, since the ActiveMQ.DLQ queue is specified there, but it seems rather unlikely. This is not the optimal way of handling this problem and it is not good that messages disappear from the system. It would have been better if the messages were sent to the specified dead letter queue or re-sent at a later time.

Another thing which was noticed is that the messages stayed much longer in the queue before they got sent to the dead letter queue, or disappeared, when transactions were activated. Probably the transaction handling part tries to resend these messages a number of times, or the handling of messages simply takes longer when transactions are on. We could clearly see that when the database to file message flow was tested the messages got sent straight away to the dead letter queue, but it took longer for them to get there in the database to database message flow. The file endpoints do not support XA transactions, and that is probably why those messages got sent to the dead letter queue at a quicker pace.

The last thing of interest was when the USB memory device was disconnected. When files were written to the file system in our database to file scenario or in our file to file scenario, some files got corrupted with a size of zero bytes. A theory regarding this is that the underlying file system is responsible for the file corruption: at the same time as the operating system prepared files for writing to the USB memory device, the memory device got disconnected, resulting in files with zero bytes. However, if this was the case, the messages or files which did get corrupted should not have been found in the dead letter queue, which they were. It could be the case that Sonic and Mule prepare to write files to the file system, but the data has not been flushed to disk before the disconnect occurs. The platform therefore notices an error with the destination and sends the message responsible for the error to the dead letter queue. Regardless, it is positive that the messages are found in the queue, since you would definitely not want corrupted messages in your message flow.

About the Web service message flows, there were some differences in how they were implemented and how the results turned out when comparing Sonic ESB to Mule ESB. Mule ESB has a good way of handling an error by sending a SOAP:Fault message back to the sender when something goes wrong. In Sonic’s case you got the socket timeout after a while, but you have no idea of what went wrong. If a message gets sent to the dead letter queue, or the Rejected Message queue in Sonic’s case, the system should be able to send a Fault message back to the sender. This would probably have to be implemented by hand and is likely the only way to accommodate it.
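
A minimal sketch of how such a fault could be constructed by hand with the standard SAAJ API is shown below; the fault code and the reason text are examples and are not taken from the Sonic implementation.

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPConstants;
import javax.xml.soap.SOAPFault;
import javax.xml.soap.SOAPMessage;

public class FaultResponseExample {
    public static SOAPMessage buildFault(String reason) throws Exception {
        MessageFactory factory =
                MessageFactory.newInstance(SOAPConstants.SOAP_1_1_PROTOCOL);
        SOAPMessage response = factory.createMessage();
        SOAPBody body = response.getSOAPBody();

        // SOAP 1.1 fault with the standard Server fault code and a human
        // readable explanation of why the request could not be processed.
        SOAPFault fault = body.addFault();
        fault.setFaultCode(new QName(SOAPConstants.URI_NS_SOAP_1_1_ENVELOPE,
                "Server", "soap"));
        fault.setFaultString(reason);

        response.saveChanges();
        return response;
    }

    public static void main(String[] args) throws Exception {
        SOAPMessage fault = buildFault("Message could not be delivered; "
                + "it was moved to the Rejected Message queue");
        fault.writeTo(System.out); // prints the SOAP envelope containing the fault
    }
}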

6.2 Receiver temporarily disconnected scenario

Figure 6.2. The receiving part in a message flow is disconnected and then reconnected while the system is up and running.

Our hope was that no messages would disappear from the system and that the message flow would resume its work when the link to the receiving part was reestablished. Another hope was that messages that did not get delivered while the connection was down would be redelivered. However, this is not the case, since neither Sonic’s nor Mule’s messages are re-sent when the flow is resumed. This could very well lead to problems, as pointed out in the result chapter, since we for example split several rows from our database polling message flows. Some rows may have reached the destination while others ended up in the dead letter queue. In our simple case it would have been enough to re-read the rows that ended up in the dead letter queue, but in a more complex message flow, where the order of the messages is critical, a more advanced solution will probably have to be implemented by hand.

An interesting observation from the result chapter is that with persistent queues activated and the max redelivery variable for the ActiveMQ connection increased, it took longer before the messages got delivered to the ActiveMQ.DLQ queue. This happens because, when transactions are used, the messages are re-sent as many times as the max redelivery variable suggests when a failure occurs. As suggested by Open Source ESBs in Action [17], the max redelivery variable should work with both regular transactions, i.e. JMS to JMS or JDBC to JDBC, and XA transactions, which is what we have seen here.

As pointed out earlier, the file access tests were faster than the database access solutions, and thereby the smallest downtime for a file connection could make you lose your entire batch of files ready for delivery. However, in Mule the file endpoints do not support transactions, yet when we have transactions activated the max redelivery variable still seems to work. It could be the case that the redelivery function is somehow activated even though XA transactions are not supported for the file connectors.

Web services use, in comparison with the other message flows, synchronous message transferring, and you could not expect that everything would go smoothly if a service went down. But as mentioned earlier, a simple error message would have been preferred, which we got with the Mule platform but not with the Sonic platform.

As we can see, no messages disappeared from the system, and the ones that ended up in the dead letter queue did not get re-sent when the message flow was resumed. The preferred way of handling this could possibly vary from flow to flow, but for a message flow where the order of the messages being sent matters, things could get tricky. In most cases you would probably have to implement a solution by hand that controls the messages so they get delivered in the correct order, or aborts the transfer if an error occurs. The calculated risk, though, is that it could lead to having to manually correct database tables on the receiving part if there were errors in the transfer. But as long as no messages disappear from the system, the problem can always be fixed in some way or another.

6.3 Platform or message system crash scenario

Figure 6.3. The platform is taken down while a message flow is running and sending messages through the system.


The best outcome for this scenario would have been that no messages were lost even though the platform crashed. As we have determined, both platforms support persistent queues, which should solve the problem with a crashing platform, but as the results show this solution is not enough to reach the goal. In Sonic’s case the messages are still there when the system has restarted after a crash, but the services are not resumed where they left off before the crash. Since the topics are used internally between the services in an ESB process, it could be the case that the services do not know which messages have been processed.

In Mule’s case we have the problem that when large data quantities are collected from a database and the system goes down, the data could already be in the message flow but not yet placed in a persistent queue. A simple solution to this could be to fetch only one row at a time and send an acknowledgment back to the database, by altering a processed column in the table, or some other manual solution.
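
A sketch of that pattern, assuming a hypothetical outbox table with an id, a payload and a processed column, could look as follows; the JDBC URL, the credentials and the JMS publish are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AcknowledgedPollingExample {
    // Table and column names are assumptions; adapt to the real schema.
    private static final String SELECT_ONE =
            "SELECT id, payload FROM outbox WHERE processed = 0 LIMIT 1";
    private static final String MARK_DONE =
            "UPDATE outbox SET processed = 1 WHERE id = ?";

    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost/source", "user", "password")) {
            db.setAutoCommit(false);

            try (PreparedStatement select = db.prepareStatement(SELECT_ONE);
                 ResultSet row = select.executeQuery()) {
                if (row.next()) {
                    long id = row.getLong("id");
                    String payload = row.getString("payload");

                    sendToPersistentQueue(payload); // hypothetical JMS publish

                    // Acknowledge the row only after the message has been handed
                    // to a persistent queue, so a crash in between leads to a
                    // re-read rather than a lost message.
                    try (PreparedStatement mark = db.prepareStatement(MARK_DONE)) {
                        mark.setLong(1, id);
                        mark.executeUpdate();
                    }
                    db.commit();
                }
            }
        }
    }

    private static void sendToPersistentQueue(String payload) {
        // Placeholder for a JMS send with DeliveryMode.PERSISTENT.
    }
}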

The differences between the platform crashing and the message system crashing were also examined. If the platform goes down, the messages which do not reside in a persistent queue are gone when the platform restarts, and some messages are lost which had been fetched by the system but had not reached a persistent queue. A solution for this could be to acknowledge each message only once it resides in a persistent queue, and not as soon as the message has been fetched by the platform. With Mule we split the messages before they get delivered to the persistent queue, and we therefore get a larger error margin, with more messages disappearing from the system. If the message system crashes, however, the platform continues to fetch messages from the sender, but all these messages are lost since there is no underlying message system to collect them. The messages cannot be sent to a dead letter queue, since there is no queue if the message system is down.

Regarding the resuming of previous tasks when the platform restarted, the message flows resumed work in Mule’s case. All messages that were lying in the persistent queues were transferred after the restart, which is not the case for the Sonic platform. This would have been a good thing if no messages were lost in the crash, but our tests show that messages were indeed lost and did not show up in the dead letter queue. One can argue about which is the best solution, to resume the work or not, but if you resume the work you could get a time consuming job trying to distinguish what data was delivered and what is missing in a complex message flow.

6.4 Receiver disconnected in multi receiver flow

Figure 6.4. The sending part sends a message which is redistributed by the platform to two different receivers, where one receiver is disconnected from the ESB platform.

As pointed out earlier in chapter 4.1, the preferred outcome would have been that if an error occurs in the flow, no receiver should receive the message. The test shows itself to be rather insipid, however, because the XA transactions that were investigated were only implemented and working on the Mule platform. Therefore the results when the scenario is run on the Sonic platform were rather expected: the message was delivered to the other receivers. To get around this problem you would probably have to manually implement some kind of service in the process which would then have the responsibility to make sure that if an error occurs, no receivers get the message. The ’exactly once’ setting was tested, but as the message got delivered to the services responsible for the access to our database or folder before an error occurred, there were no indications that would have resulted in a rollback. This is however just a theory, and the ’exactly once’ setting should be investigated more closely to get exact knowledge of how it is supposed to function.
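
One way to approximate such a service, sketched below, is to publish to both receiver queues inside a single transacted JMS session so that the two sends are committed or rolled back together. Note that this is a local JMS transaction between two queues with hypothetical names, not a full XA solution covering file or database receivers.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class AllOrNothingFanoutExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // A transacted session makes the two sends a single unit of work.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Queue receiverOne = session.createQueue("receiver.one"); // hypothetical names
        Queue receiverTwo = session.createQueue("receiver.two");

        TextMessage message = session.createTextMessage("<order>...</order>");
        try {
            MessageProducer toOne = session.createProducer(receiverOne);
            MessageProducer toTwo = session.createProducer(receiverTwo);
            toOne.send(message);
            toTwo.send(message);
            session.commit();      // both receivers see the message
        } catch (Exception e) {
            session.rollback();    // neither receiver sees the message
        } finally {
            session.close();
            connection.close();
        }
    }
}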

Regarding Mule, we have transactions in place, but it seems that XA transactions with multiple endpoints did not work as they would have with regular transactions with multiple database receivers or multiple JMS receivers. The file endpoints, on the other hand, do not support XA transactions, and we did not gain anything from using transactions for those message flows even though we used a common queue in our implementation. The errors that occurred when we had file endpoints in our message flow are a little more difficult to explain. As described in the result part, multiple copies of the same message showed up at the receiving part, but the simple answer to this has to be related to the fact that file endpoints do not support XA transactions.

During the implementation part, a number of different versions of the Mule software were tested to see if the different versions had any effect on the XA transactions. There were a few differences, but they had little or no effect on the outcome for the scenario.

6.5 Persistent delivery performance hit

Figure 6.5. The ESB stores the messages in the queues and topics to disk.

Even though one should not read too much into these numbers, since the tests which were used are very simple, we could see that activating persistent queues had an effect on the performance. This could clearly be seen when testing the Mule platform, where we did not have the same problems evaluating persistent queues as we had with the Sonic platform. A performance hit from around 4500 messages to around 2500 messages is quite a big jump. Most platforms are probably not running on the verge of their limit and would have no problems when turning on persistent queues, but if you have message flows that demand this service you will have to take the performance into account. It is probably safe to say that the disk access slows things down, but then again no messages are lost from the queues when the platform fails.
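
For reference, the setting that these tests toggle roughly corresponds to the JMS delivery mode on the producer, as in the minimal sketch below; the broker URL and queue name are placeholders.

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentDeliveryExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("flow.input"); // hypothetical queue name
        MessageProducer producer = session.createProducer(queue);

        // PERSISTENT makes the broker write each message to disk before the
        // send is acknowledged; NON_PERSISTENT keeps it in memory only, which
        // is faster but loses the message if the broker crashes.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        // producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

        producer.send(session.createTextMessage("payload"));

        session.close();
        connection.close();
    }
}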

Unfortunately we could not really compare the Sonic platform with the Mule platform in this case, since the Sonic platform did not support millisecond polling with the built-in database service. We had to implement our own simple version, where the polling towards the database occurred when a message was received from a queue. However, this did not increase the load on the database servers as much as we had hoped, and we were still nowhere near the data throughput that the Mule platform had when fetching data from the database server. There is a possibility that something was overlooked or that Sonic and the database servers were not optimally configured. One thing that can also be taken into consideration is that between every service in a Sonic ESB process, JMS queues, or rather JMS topics, are used. In Mule’s case only one JMS queue was used. In other words, there were a larger number of JMS queues in the Sonic message flows compared to the message flows in Mule.

Another thing which should be taken into consideration is that different services in a Sonic ESB process take different amounts of CPU time. If a heavy conversion of a message is running, messages which have completed that step may not be delivered at the same speed to the next node in the message flow, since the heavy conversion is eating up all the CPU time. There are probably optimizations which could be done to push the limit of the number of messages that can be delivered from the sender to the receiver.


Chapter 7

Conclusions

In this chapter, the results and the discussions are taken into consideration and reconnected to the questions asked in the beginning of the thesis. Possible recommendations, depending on the outcome of the results, will also be discussed.

7.1 Scenario results

As we have discussed in the previous chapter, the results were quite expected in most of the test cases, and both platforms followed each other in the results that were presented. However, one could be a little critical regarding the scenarios, since there are several permutations of tests that could have been performed. In retrospect one can also say that the message flows used could have been of a more complex type, but that would probably have led to a longer time to complete the work. With more complex message flows it is possible that the platforms would have been pushed harder, because of more transformations running or multiple nodes between the start and the exit endpoint.

7.2 Platform comparison

During the work, differences between the two platforms have been shown, some larger than others. The question is how important the conclusions are that can be drawn from the tests that were performed. Earlier, in the introduction part, we described that we wanted to conclude how the platforms handled themselves in different situations and whether functionality was included to keep a high reliability for a message flow. We also asked ourselves whether both platforms were equal regarding both performance and functionality, but let us begin with the more general differences.

7.2.1 General

The first and probably the clearest difference between the platforms was how the development was done. In Sonic’s case we had the Workbench, which allowed us, through the use of graphical tools, to create both complex and simple message flows. Through the Workbench you also got a good overview of the message flows that were developed. With Mule, on the other hand, the message flows were developed by writing code in configuration files. Bearing in mind that the message flows which were developed for the tests in this thesis were rather simple, it was easy to get an overview of the code. The feeling throughout the implementation part was also that it was a little bit cumbersome to build the message flows in Sonic compared to Mule. This could have been because of the graphical tools and the many settings and configurations that you had to look for by browsing the graphical environment. In Mule’s case, all the settings and possibilities were gathered together in the same place. The only drawback was that you had to have the documentation close at hand to know what kind of configurations could be used for the Mule platform.

You can also use Spring configuration for the Sonic platform, in a similar way to what you use on the Mule platform; whether it is recommended or not I will leave unsaid. The impression is also that the Sonic platform has more prebuilt functions which can be used, compared to Mule where you have to look things up and may have to implement functions yourself. If the message flows are simple this will not be a problem, since the base functions like support for file polling, JDBC connections and so on are supported, but the Sonic platform also has support for other connectivity. In the Mule Community Edition, you had to implement retry policies by hand to get your connections to reconnect to endpoints in case of a failure. Building these components was however easy compared to building components for Sonic, since more overhead code had to be included to get a service working under the Sonic platform.
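
A retry policy of that kind does not need to be complicated; the sketch below shows a generic retry wrapper of the sort that had to be written by hand, with the attempt limit and delay chosen arbitrarily.

import java.util.concurrent.Callable;

public class RetryPolicyExample {

    // Retries the given action until it succeeds or the attempt limit is
    // reached, sleeping a fixed delay between attempts. A stand-in for the
    // reconnection retry policies that the Mule Community Edition left to
    // the developer.
    public static <T> T withRetry(Callable<T> action, int maxAttempts, long delayMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis); // back off before the next attempt
                }
            }
        }
        throw last; // every attempt failed; let the caller dead-letter the message
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical use: keep trying to open a connection to an endpoint.
        String result = withRetry(() -> "connected", 5, 1000);
        System.out.println(result);
    }
}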

The final impression regarding the more general parts is however leaning towards the Sonic platform, because it feels slightly more robust than its counterpart. However, because of its robustness it is more cumbersome in some cases. The Mule platform feels a little smoother to develop for, but could not offer exactly the same solutions as the Sonic platform does from the start.

7.2.2 Reliable messaging

When it comes to the main part, reliability, it gets even harder to compare the platforms. We had hoped that no messages under any circumstances would disappear from the system, but according to the tests which were performed, both platforms lost messages in some of the scenarios. This is, as pointed out, not a good outcome and is not acceptable at all if the platforms were to be used in a critical environment. It only occurred when the platforms crashed, but some integration solutions have to work every day, and losing data could have catastrophic consequences. In other words, there is a need for manually implemented functions to be used together with persistent queues to avoid any messages getting lost. During my thesis, the platforms never crashed on their own, which may or may not be an indication that this does not happen frequently.


Other than that, both platforms handled errors in a good way, where an administrator or similar person could easily get informed in case of an error by looking through the log files or the dead letter queue. A good way to handle errors is something that every system should require to be able to call itself reliable, since there is always the possibility of a failure and it needs to be covered in an error handling plan.

A positive point for the Mule platform was that transactions could easily be implemented on the message flows. In that way we got access to resending of messages, which meant that for our message flows where the receiving part was only temporarily disconnected, all messages could be sent safely. Resending messages should also be available in the Sonic platform for regular message flows, but it was something that could not be found and implemented during the short period of time.

7.3 Problems that arose

The biggest problems that occurred during the tests were all connected to the platform crash test. Messages being sent to the dead letter queue when a service fails in a message flow is of course undesirable, but it is still better than messages totally disappearing from the system. The latter happened when the data which the platforms were sending was between the persistent queues that were used in the message flows. The hope that no messages would get lost from the system could, as pointed out above, not be satisfied by either Mule or the Sonic platform with the configurations that were used in the test. There also did not seem to be any extra built-in functionality which could remedy this.

The second problem occurred when the message flows included multiple receivers. Before the implementation of these scenarios took place, the possibility to use transactions was investigated, to be able to get all or no messages to the endpoints. Sadly this turned out not to be the case, and our transactions could not satisfy this goal. The Mule platform could however use the so-called XA transactions, but not for all types of access solutions. File endpoints were one such access solution that did not have support for XA transactions and could not use the benefit the transactions provided.

For the other scenarios the platforms worked as expected, with messages being sent to the so-called dead letter queue when problems regarding the transfer occurred. It is also important that the messages are still stored in this dead letter queue if the platform crashes, which was the case when persistent queues were turned on.

Another problem, if you could call it that, is the way errors were handled when Web services were tested. The Mule way of handling errors, by sending an error message back to the caller, spontaneously felt like the better solution compared to the socket timeout received from the Sonic platform. There is however the possibility to implement a service that would do the same thing, or close to the same thing, for the Sonic platform, but it is always good to see this kind of functionality included from the start. The same can be said about the retry policy for the Mule platform, where for the Sonic platform this was handled automatically.

7.4 Possible solutions

The solutions to the problems mentioned above are quite hard to predict, but a guess is that it takes some sort of manual implementation of components or services to aid the platforms in reaching the set-up goal. That messages disappear from the system could probably be solved by checking or controlling every message that reaches the receiver. The problem, however, is if a message that has failed to reach its destination is no longer available at the source. This could be the case if you poll a folder to fetch files, but could be solved by always storing messages until it has been confirmed that they have reached their destination. Regarding database servers, it could be the case that the data inside the database has changed since the last time you tried to send a message, and this is something that needs to be kept in mind when designing manual solutions.
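
The idea of storing messages until delivery is confirmed can be sketched as follows for the file polling case: a file stays in a pending folder and is only moved to an archive once the receiver has confirmed it. The folder names are assumptions and are not part of the tested message flows.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class StoreUntilConfirmedExample {
    // Folder names are assumptions; the pattern is what matters.
    private static final Path PENDING = Paths.get("pending");
    private static final Path ARCHIVE = Paths.get("archive");

    // Called when the receiver has confirmed that the file arrived.
    public static void confirm(String fileName) throws IOException {
        Files.createDirectories(ARCHIVE);
        // Only now is the source copy moved away; until confirmation the file
        // stays in 'pending' and can be re-sent after a failure.
        Files.move(PENDING.resolve(fileName), ARCHIVE.resolve(fileName),
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Files.createDirectories(PENDING);
        Path sample = PENDING.resolve("message-1.xml");
        Files.write(sample, "<msg>payload</msg>".getBytes());

        // ... send the file to the receiver here ...

        confirm("message-1.xml"); // move it only after delivery is confirmed
    }
}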

Platforms crashing was also something that was of interest, and our guess that there would not be any problems regarding message reliability as long as persistent queues were used was not correct. It could be the case that something has been overlooked in the configurations, but it seems that persistent queues are not enough to avoid losing messages from the system. Some manual implementation is probably needed to check for missing messages or confirm retrieved ones. As it stands now, the platforms are, with the current configurations, not ready for a critical environment.

In the same way, you would probably have to implement functionality by hand when it comes to multiple receivers using different access solutions. This feels like a common scenario and something that the platforms should have functionality for in the base package. But the risk is that the solution on the receiving or sending end has something that requires a specific solution to be able to determine whether a message has reached its destination or not, which will end up in implementing functionality by hand either way. Another question for the solution at the receiving end would probably be how to perform a rollback if one receiver of a multi receiver message flow has not received the message. Rollback with a database server is one thing; rollback on a folder containing files or some other access solution could lead to problems. As mentioned in the report, XA transactions could be a solution, if the access solution supports the protocol. However, data rollback was something that did not work for our message flows. Whether that is because of a configuration miss or something else is hard to say, since the documentation regarding this part was a little deficient.


7.5 Final words on the platforms

Finally, one can say that both platforms have their respective strong and weak points, but both platforms give a good impression of being able to handle the parts they were designed for. The platforms also include much functionality from the start, which an Enterprise Service Bus product should. The fact that Mule ESB is built on open source is, according to my short experience with this platform, neither a disadvantage nor an advantage, which was also briefly mentioned in Open Source ESBs in Action [17]. We can also see from our test results that the different access solutions play a minor part in how the platforms behave in different situations. However, if you have some obscure access solution which you want to connect to the platform bus, you might hit some problems that were not shown here.

When it comes to the minor performance tests that were performed, we should again point out that no big conclusions should be drawn from the results. Surely it can look like a remarkable effort by the Mule platform to have such a high flow of messages compared to the Sonic platform when the message flows were quite similar. The answer, however, probably lies in the JDBC connections that were used, which slowed down the Sonic process and may not have been optimized correctly.

Regarding critical environments, it is quite clear that the platforms are not ready without proper configurations and manual implementations on the side to cope with the demands of such an environment. There should however be no doubt that they can be used in such an environment; they just need to be adjusted for the situation with manual implementation.


Chapter 8

Further work

In this report I have, as mentioned earlier, only touched on the topic and concentrated on the software itself, the Enterprise Service Bus platform. There is however much more to study, and more aspects that could increase the reliability of message transferring for the platform than were brought up in this report. You could also take these questions regarding reliability to other levels which could have an effect on reliable message transferring.

8.1 Hardware

An interesting aspect which could be looked at more closely is the hardware aspect of reliable message transferring: how large an impact does the hardware part of a platform have on the Enterprise Service Bus? Most ESBs have, as mentioned in the report, some form of clustering capability to be able to handle large data quantities, but does it also work to increase stability? It would have been interesting to study how the cluster gets affected if, for example, a part of the cluster goes down, so-called redundancy.

• What part does hardware play in the stability of an Enterprise Service Bus platform?

• Can you cluster important/critical points in an ESB network to increase scalability and/or reliability/redundancy?

• Are there any negative aspects that appear when clustering an ESB platform?

• Is clustering used frequently in the integration market when it comes to ESB platforms, or is it just functionality that looks good on paper?

• Do the integration implementations need any modifications to be able to take part in a clustered ESB?


8.2 Organizational level

On an organizational level there are other angles and aspects to take into consideration in a study of this problem. If an error occurs and the message is sent to a so-called dead letter queue, which was mentioned in the report, who has the responsibility to check these messages? Is there someone monitoring the platforms to guarantee reliability around the clock?

• Study how the organization around the integration solution is built to handle possible message or transfer failures.

• Who has the responsibility, the customer or the company that delivered the integration implementation?

• What is the technical knowledge of the integration solution if it was delivered by a third party?

8.3 Security and Integrity

Security and integrity are things that eminently affect the reliability and stability of an integration platform. No one, I think, would call a platform or a message transport reliable when messages get corrupted or modified by an external party. There are many questions that could be asked around this subject regarding Enterprise Service Buses. In Data provenance in SOA: security, reliability, and integrity [22] the authors mention that you have to look upon a large message flow with different eyes than you would a small system. It is enough for one node in the flow to get compromised for the security and reliability to fail.

• What kind of functionality exists today to handle security issues around an Enterprise Service Bus?

• Is there functionality for handling the integrity of data packages as well?

• If functionality for security and integrity exists in modern ESBs, is it on per default and does it have an effect on the platform itself?

• How should testing of a corrupt message be done, and can the ESB discover a modified package?

There are of course other areas one can study regarding reliable message transferring, but these subjects were closest to the work that was done in this report.


Bibliography

[1] Apache ActiveMQ website. http://activemq.apache.org/, 2010.

[2] Apache CXF website. http://cxf.apache.org/, 2010.

[3] Chappell, David A., Enterprise Service Bus, O'Reilly Media, Inc., Sebastopol, California, 2004.

[4] EAI Wikipedia website. http://en.wikipedia.org/wiki/Enterprise_application_integration, 2010.

[5] The Eclipse Foundation website. http://www.eclipse.org/, 2010.

[6] H2 Database Engine website. http://www.h2database.com/html/main.html, 2010.

[7] Hohpe, G., and Woolf, B., Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Addison-Wesley, Pearson Education, Inc., Reading, Massachusetts, 2008.

[8] JMS Sun website. http://java.sun.com/products/jms/, 2010.

[9] Keen, M., Acharya, A., Bishop, S., Hopkins, A., Milinski, S., Nott, C., Robinson, R., Adams, J., and Verschueren, P., Patterns: Implementing an SOA Using an Enterprise Service Bus, International Business Machines Corporation, 2004.

[10] Leavitt, N., "Are Web Services Finally Ready to Deliver?," Computer (IEEE Computer Society), vol. 37, no. 11, pp. 14-18, November 2004.

[11] MuleSoft website. http://www.mulesoft.com/, 2010.

[12] MySQL website. http://www.mysql.com/, 2010.

[13] OpenMQ website. https://mq.dev.java.net/, 2010.

[14] Ortiz Jr., S., "Getting on Board the Enterprise Service Bus," Computer (IEEE Computer Society), vol. 40, no. 4, pp. 15-17, April 2007.


[15] Papazoglou, M. P., "Web Services and Business Transactions," World Wide Web, vol. 6, no. 1, pp. 49-91, March 2003.

[16] Progress Software website. http://www.progress.com/, 2010.

[17] Rademakers, T., and Dirksen, J., Open Source ESBs in Action, Manning Publications Co., Greenwich, CT, 2009.

[18] Schulte, R., Predicts 2003: Enterprise Service Buses Emerge, Gartner Research, December 2002.

[19] SOAP W3C website. http://www.w3.org/TR/soap/, 2010.

[20] SoapUI website. http://www.soapui.org/, 2010.

[21] Tai, S., Mikalsen, T. A., and Rouvello, I., "Using Message-oriented Middleware for Reliable Web Services Messaging," Web Services, E-Business, and the Semantic Web, pp. 89-104, Springer-Verlag, Berlin, Heidelberg, 2004.

[22] Tsai, W. T., Wei, X., Chen, Y., Paul, R., Chung, J., and Zhang, D., "Data provenance in SOA: security, reliability, and integrity," Service Oriented Computing and Applications, vol. 1, no. 4, pp. 223-247, December 2007.

[23] WampServer website. http://www.wampserver.com/, 2010.

[24] XA Wikipedia website. http://en.wikipedia.org/wiki/X/Open_XA, 2010.

[25] XML W3C website. http://www.w3.org/TR/xml/, 2010.

[26] XSLT W3C website. http://www.w3.org/TR/xslt20/, 2010.


Appendix A

Performance tests

The values reported in the tables below are rounded mean values from running the tests ten times in a row for 60 seconds each. The tests were started directly after a system cold start.

A.1 Sonic

Persistent   Frequency   Rows per cycle   Messages processed   Messages inserted
No           1 ms        2 rows           720                  170
No           1 ms        200 rows         1290                 300
No           100 ms      2 rows           700                  170
No           100 ms      200 rows         1320                 320
Yes          1 ms        2 rows           680                  150
Yes          1 ms        200 rows         1030                 190
Yes          100 ms      2 rows           650                  150
Yes          100 ms      200 rows         1150                 210

A.2 Mule

Persistent   Frequency   Rows per cycle   Messages processed   Messages inserted
No           1 ms        2 rows           5850                 4430
No           1 ms        200 rows         8570                 4510
No           100 ms      2 rows           1220                 1220
No           100 ms      200 rows         8450                 4730
Yes          1 ms        2 rows           2470                 2460
Yes          1 ms        200 rows         2450                 2450
Yes          100 ms      2 rows           1190                 1190
Yes          100 ms      200 rows         2490                 2480


A.3 Explanation

Persistent
Whether persistent queues are used or not.

Frequency
The interval at which the ESB platform polls/fetches data from the sending database server.

Rows per cycle
The number of rows of data that are fetched from the sending database server each cycle.

Messages processed
The number of messages that have passed all transformation services inside the ESB and are waiting to be, or have been, sent to the receiving database server.

Messages inserted
The number of messages that have been inserted at the receiving database server, in other words, the messages that have completed the message flow.

