
COMPARATIVE ANALYSIS FOR PERFORMANCE MEASUREMENTS OF

SOFTWARE TESTING BETWEEN MOBILE APPLICATIONS AND WEB

APPLICATIONS

ZAINAB HASSAN MUHAMAD

A dissertation submitted in

fulfilment of the requirements for the award of the

Degree of Master of Computer Science (Software Engineering)

Faculty of Computer Science and Information Technology

Universiti Tun Hussein Onn Malaysia

JULY 2015


ABSTRACT

Software testing has an important role in software engineering and is fundamental to Software Quality Assurance (SQA). Alongside the popularity of web applications, mobile applications have advanced in parallel despite their increasing complexity. On one hand, this trend reflects rising concerns about ensuring the performance of both web and mobile applications. On the other hand, a comparative analysis of software testing issues between web and mobile applications has not yet been completed. Thus, this study aims to employ an effective testing approach that can be adapted to both web and mobile application testing to detect possible failures. To achieve this, UML activity diagrams were developed from four case studies of web and mobile applications to describe the behaviour of those applications. Test cases were then generated from the developed UML activity diagrams using the Model-Based Testing (MBT) technique. The performance measurements Hits per Second, Throughput and Memory Utilization were evaluated for each case study by executing the generated test cases with the HP LoadRunner 12.02 tool. Finally, the Mean Square Error (MSE) of the performance measurements was compared and analysed across the four case studies. The experimental results showed a clear disparity between the mobile applications and the web applications: across the four case studies, the web applications recorded lower values than the mobile applications for each of Hits per Second, Throughput and Memory Utilization. Consequently, mobile applications need more attention in the testing process.


ABSTRAK

Software testing plays an important role in software engineering and is the foundation of Software Quality Assurance (SQA). Besides web applications, which are increasingly well known, mobile applications are also increasingly popular despite introducing some complexity. From one perspective, this issue raises concern about ensuring the performance of both web and mobile applications. However, a comparative analysis of software testing issues between web and mobile applications has been attempted but remains incomplete. For that reason, this study aims to apply an effective testing approach that can be adapted to both web and mobile applications in order to detect possible failures. To achieve this goal, UML activity diagrams were developed for four case studies, covering both web and mobile applications, to describe the behaviour of those applications. Test cases were then generated from the developed UML activity diagrams using the MBT technique. The performance measurements Hits per Second, Throughput and Memory Utilization for each case study were evaluated by running the tests generated using the HP LoadRunner 12.02 tool. Finally, the MSE of the performance measurements was compared and analysed across the four case studies. The experimental results show that the difference between mobile applications and web applications is very clear. Based on the comparative analysis of software testing for mobile and web applications, the results show that the web applications were lower than the mobile applications across the four case studies in terms of Hits per Second, Throughput and Memory Utilization. Therefore, mobile applications require more attention in the testing process.


CONTENTS

TITLE i
DECLARATION ii
DEDICATION iii
ACKNOWLEDGEMENT iv
ABSTRACT v
ABSTRAK vi
CONTENTS vii
LIST OF TABLES xi
LIST OF FIGURES xiv
LIST OF SYMBOLS AND ABBREVIATIONS xviii
LIST OF APPENDICES xx

CHAPTER 1 INTRODUCTION 1
1.1 Research Background 1
1.2 Problem Statement 3
1.3 Research Objectives 4
1.4 Research Scope 4
1.5 Dissertation Outline 5

CHAPTER 2 LITERATURE REVIEW 7
2.1 Introduction 7
2.2 Software Testing 7
2.3 Software Testing Methods 9
2.3.1 Black-box Testing 9
2.3.2 White-box Testing 9
2.3.3 Gray-box Testing 10
2.4 Mobile Application 10
2.4.1 Mobile Applications Types 11
2.4.2 Mobile Application Testing 12
2.5 Web Application 14
2.5.1 Web Applications Types 14
2.5.2 Web Application Testing 15
2.6 Model-based Testing (MBT) 16
2.7 Unified Modelling Language 18
2.8 Activity Diagram 19
2.9 Performance Testing Measurements 20
2.10 HP LoadRunner Tool 22
2.11 Mean Square Error (MSE) 23
2.12 Related Works 23
2.13 Summary 27

CHAPTER 3 METHODOLOGY 29
3.1 Introduction 29
3.2 The Proposed Framework 29
3.3 Phase 1: Developing UML Activity Diagrams 32
3.4 Phase 2: Generating Test Cases Using MBT Technique based on Activity Diagram 32
3.4.1 Step 1: The Generation of ADT 32
3.4.2 Step 2: The Generation of ADG 33
3.4.3 Step 3: The Generation of Test Paths and Validation of Their Number 33
3.4.4 Step 4: The Generation of Test Cases 35
3.5 Phase 3: Evaluation of Performance Measurements 35
3.6 Phase 4: Comparative Analysis 37
3.7 Summary 38

CHAPTER 4 IMPLEMENTATION AND RESULTS ANALYSIS 39
4.1 Introduction 39
4.2 Implementation of MBT Technique of Four Case Studies: Mobile Applications and Web Applications 39
4.2.1 Implementation of MBT Technique for the Case Study 1: Inktera Books of Mobile Applications 40
4.2.2 Implementation of MBT Technique for the Case Study 2: Kobo Books of Mobile Applications 57
4.2.3 Implementation of MBT Technique for the Case Study 3: Inktera of Web Applications 74
4.2.4 Implementation of MBT Technique for the Case Study 4: Kobo of Web Applications 91
4.3 Comparative Analysis 108
4.4 Summary 111

CHAPTER 5 CONCLUSIONS AND FUTURE WORK 112
5.1 Introduction 112
5.2 Objectives Achievement 112
5.2.1 To Develop UML Activity Diagrams from Four Case Studies for Web and Mobile Applications in order to Describe the Behaviour of those Applications 113
5.2.2 To Generate Test Cases by Using the MBT Technique from Activity Diagrams Developed in (5.2.1) for each Case Study 113
5.2.3 To Evaluate Performance Measurements (Hits per Second, Throughput and Memory Utilization) for each Case Study by Execution of Test Cases Generated in (5.2.2) by Using HP LoadRunner Tool 114
5.2.4 To Compare and Analyse the Results Obtained in (5.2.3) based on Performance Measurements (Hits per Second, Throughput and Memory Utilization) 114
5.3 Conclusion 115
5.4 Future Work 115

REFERENCES 117
APPENDICES 129
VITA 142

LIST OF TABLES

2.1 Activity diagram notations. 20
2.2 Performance measurements definitions and formulas. 21
2.3 Review of the comparison of testing techniques of the related works regarding mobile applications. 26
2.4 Review of the comparison of testing techniques of the related works regarding web applications. 27
3.1 Activity dependency table. 33
4.1 The ADT of the Case Study 1: Inktera Books of Mobile Applications. 42
4.2 The test paths of the Case Study 1: Inktera Books of Mobile Applications. 45
4.3 The test cases of the Case Study 1: Inktera Books of Mobile Applications. 46
4.4 The experimental results of the average of the Hits per Second for the Case Study 1: Inktera Books of Mobile Applications. 50
4.5 The experimental results of the average of the Throughput for the Case Study 1: Inktera Books of Mobile Applications. 53
4.6 The experimental results of the average of the Memory Utilization for the Case Study 1: Inktera Books of Mobile Applications. 56
4.7 The ADT of the Case Study 2: Kobo Books of Mobile Applications. 59
4.8 The test paths of the Case Study 2: Kobo Books of Mobile Applications. 62
4.9 The test cases of the Case Study 2: Kobo Books of Mobile Applications. 63
4.10 The experimental results of the average of the Hits per Second for the Case Study 2: Kobo Books of Mobile Applications. 67
4.11 The experimental results of the average of the Throughput for the Case Study 2: Kobo Books of Mobile Applications. 70
4.12 The experimental results of the average of the Memory Utilization for the Case Study 2: Kobo Books of Mobile Applications. 73
4.13 The ADT of the Case Study 3: Inktera of Web Applications. 76
4.14 The test paths of the Case Study 3: Inktera of Web Applications. 79
4.15 The test cases of the Case Study 3: Inktera of Web Applications. 80
4.16 The experimental results of the average of the Hits per Second for the Case Study 3: Inktera of Web Applications. 84
4.17 The experimental results of the average of the Throughput for the Case Study 3: Inktera of Web Applications. 87
4.18 The experimental results of the average of the Memory Utilization for the Case Study 3: Inktera of Web Applications. 90
4.19 The ADT of the Case Study 4: Kobo of Web Applications. 93
4.20 The test paths of the Case Study 4: Kobo of Web Applications. 96
4.21 The test cases of the Case Study 4: Kobo of Web Applications. 97
4.22 The experimental results of the average of the Hits per Second for the Case Study 4: Kobo of Web Applications. 101
4.23 The experimental results of the average of the Throughput for the Case Study 4: Kobo of Web Applications. 104
4.24 The experimental results of the average of the Memory Utilization for the Case Study 4: Kobo of Web Applications. 107
4.25 The comparative analysis for case studies. 108
4.26 The comparative analysis between Case Study 1 (mobile application) and Case Study 3 (web application). 111
4.27 The comparative analysis between Case Study 2 (mobile application) and Case Study 4 (web application). 111

LIST OF FIGURES

2.1 LoadRunner components. 22
3.1 The four phases for the study. 30
3.2 Research framework. 31
3.3 TCBAD algorithm for generating the test paths. 34
3.4 The HP LoadRunner mobile recorder. 36
3.5 The HP LoadRunner web recorder. 36
3.6 HP LoadRunner 12.02 tool controller. 37
4.1 The activity diagram for the Case Study 1: Inktera Books of Mobile Applications. 41
4.2 The ADG of the Case Study 1: Inktera Books of Mobile Applications. 44
4.3 The experimental results of the averages of the Hits per Second of 19 test cases for the Case Study 1: Inktera Books of Mobile Applications. 48
4.4 The distribution diagram of the Hits per Second of TC_1 of the Sign In (the minimum value) for the Case Study 1: Inktera Books of Mobile Applications. 49
4.5 The distribution diagram of the Hits per Second of TC_4 of the Browse (the maximum value) for the Case Study 1: Inktera Books of Mobile Applications. 49
4.6 The experimental results of the averages of the Throughput of 19 test cases for the Case Study 1: Inktera Books of Mobile Applications. 51
4.7 The distribution diagram of the Throughput of TC_1 of the Sign In (the minimum value) for the Case Study 1: Inktera Books of Mobile Applications. 52
4.8 The distribution diagram of the Throughput of TC_4 of the Browse (the maximum value) for the Case Study 1: Inktera Books of Mobile Applications. 52
4.9 The experimental results of the averages of the Memory Utilization of 19 test cases for the Case Study 1: Inktera Books of Mobile Applications. 54
4.10 The distribution diagram of the Memory Utilization of TC_1 of the Sign In (the minimum value) for the Case Study 1: Inktera Books of Mobile Applications. 55
4.11 The distribution diagram of the Memory Utilization of TC_4 of the Browse (the maximum value) for the Case Study 1: Inktera Books of Mobile Applications. 55
4.12 The activity diagram for the Case Study 2: Kobo Books of Mobile Applications. 58
4.13 The ADG for the Case Study 2: Kobo Books of Mobile Applications. 61
4.14 The experimental results of the averages of the Hits per Second of 21 test cases for the Case Study 2: Kobo Books of Mobile Applications. 65
4.15 The distribution diagram of the Hits per Second of TC_1 of the Sign In (the minimum value) for the Case Study 2: Kobo Books of Mobile Applications. 66
4.16 The distribution diagram of the Hits per Second of TC_6 of the Search (the maximum value) for the Case Study 2: Kobo Books of Mobile Applications. 66
4.17 The experimental results of the averages of the Throughput of 21 test cases for the Case Study 2: Kobo Books of Mobile Applications. 68
4.18 The distribution diagram of the Throughput of TC_1 of the Sign In (the minimum value) for the Case Study 2: Kobo Books of Mobile Applications. 69
4.19 The distribution diagram of the Throughput of TC_6 of the Search (the maximum value) for the Case Study 2: Kobo Books of Mobile Applications. 69
4.20 The experimental results of the averages of the Memory Utilization of 21 test cases for the Case Study 2: Kobo Books of Mobile Applications. 71
4.21 The distribution diagram of the Memory Utilization of TC_1 of the Sign In (the minimum value) for the Case Study 2: Kobo Books of Mobile Applications. 72
4.22 The distribution diagram of the Memory Utilization of TC_6 of the Search (the maximum value) for the Case Study 2: Kobo Books of Mobile Applications. 72
4.23 The activity diagram for the Case Study 3: Inktera of Web Applications. 75
4.24 The ADG for the Case Study 3: Inktera of Web Applications. 78
4.25 The experimental results of the averages of the Hits per Second of 23 test cases for the Case Study 3: Inktera of Web Applications. 82
4.26 The distribution diagram of the Hits per Second of TC_1 of the Sign In (the minimum value) for the Case Study 3: Inktera of Web Applications. 83
4.27 The distribution diagram of the Hits per Second of TC_6 of the Browse (the maximum value) for the Case Study 3: Inktera of Web Applications. 83
4.28 The experimental results of the averages of the Throughput of 23 test cases for the Case Study 3: Inktera of Web Applications. 85
4.29 The distribution diagram of the Throughput of TC_1 of the Sign In (the minimum value) for the Case Study 3: Inktera of Web Applications. 86
4.30 The distribution diagram of the Throughput of TC_6 of the Browse (the maximum value) for the Case Study 3: Inktera of Web Applications. 86
4.31 The experimental results of the averages of the Memory Utilization of 23 test cases for the Case Study 3: Inktera of Web Applications. 88
4.32 The distribution diagram of the Memory Utilization of TC_1 of the Sign In (the minimum value) for the Case Study 3: Inktera of Web Applications. 89
4.33 The distribution diagram of the Memory Utilization of TC_6 of the Browse (the maximum value) for the Case Study 3: Inktera of Web Applications. 89
4.34 The activity diagram for the Case Study 4: Kobo of Web Applications. 92
4.35 The ADG for the Case Study 4: Kobo of Web Applications. 95
4.36 The experimental results of the averages of the Hits per Second of 22 test cases for the Case Study 4: Kobo of Web Applications. 99
4.37 The distribution diagram of the Hits per Second of TC_1 of the Sign In (the minimum value) for the Case Study 4: Kobo of Web Applications. 100
4.38 The distribution diagram of the Hits per Second of TC_7 of the Search (the maximum value) for the Case Study 4: Kobo of Web Applications. 100
4.39 The experimental results of the averages of the Throughput of 22 test cases for the Case Study 4: Kobo of Web Applications. 102
4.40 The distribution diagram of the Throughput of TC_1 of the Sign In (the minimum value) for the Case Study 4: Kobo of Web Applications. 103
4.41 The distribution diagram of the Throughput of TC_7 of the Search (the maximum value) for the Case Study 4: Kobo of Web Applications. 103
4.42 The experimental results of the averages of the Memory Utilization of 22 test cases for the Case Study 4: Kobo of Web Applications. 105
4.43 The distribution diagram of the Memory Utilization of TC_1 of the Sign In (the minimum value) for the Case Study 4: Kobo of Web Applications. 106
4.44 The distribution diagram of the Memory Utilization of TC_7 of the Search (the maximum value) for the Case Study 4: Kobo of Web Applications. 106
4.45 Comparative analysis diagram for case studies of mobile and web applications based on the MSE value of the Hits per Second. 108
4.46 Comparative analysis diagram for case studies of mobile and web applications based on the MSE value of the Throughput. 109
4.47 Comparative analysis diagram for case studies of mobile and web applications based on the MSE value of the Memory Utilization. 110


LIST OF SYMBOLS AND ABBREVIATIONS

SQA - Software Quality Assurance

SUT - System Under Test

SDLC - Software Development Life Cycle

GUI - Graphical User Interface

MBT - Model Based Testing

FSM - Finite State Machine

UML - Unified Modelling Language

QA - Quality Assurance

OS - Operating System

iOS - iPhone OS

GPS - Global Positioning System

HTML - Hypertext Markup Language

SQL - Structured Query Language

EDS - Event-Driven Systems

API - Application Programming Interface

I/O - Input/Output

CSS - Cascading Style Sheets

PHP - Personal Home Page

ASP - Active Server Pages

JSP - Java Server Pages


WWW - World Wide Web

SBST - Search Based Software Test case generation

FBA - Flow Based Approach

SBT - Search Based Testing

UCTM - Use Case Transition Model

TPG - Test Paths Generating Tool

LTS - Labelled Transition Systems

MSC - Message Sequence Charts

OO - Object-Oriented

HTTP - Hypertext Transfer Protocol

HP - Hewlett Packard

MSE - Mean Square Error

TCBAD - Test Case Generation based on Activity Diagram

ADT - Activity Dependency Table

ADG - Activity Dependency Graph

UI - User Interface

CC - Cyclomatic Complexity

VuGen - Virtual user Generator

LIST OF APPENDICES

A The test cases of the Case Study 1: Inktera Books of Mobile Applications. 129
B The test cases of the Case Study 2: Kobo Books of Mobile Applications. 132
C The test cases of the Case Study 3: Inktera of Web Applications. 135
D The test cases of the Case Study 4: Kobo of Web Applications. 139


CHAPTER 1

INTRODUCTION

1.1 Research Background

Software testing has an important role in software engineering, and it is fundamental

to Software Quality Assurance (SQA). The objective of software testing is to show

the differences between the expected and actual behaviours of the system under test

(SUT). The goal of software testing is to detect whether the behaviour of the system

implemented has visible differences from the expected behaviour stated in the

specification (Sumit & Narendra, 2014).

A web application is a distributed interactive system, which provides a new view for users to deploy software applications. Web applications are based on a multi-tier architecture. With the popularity of web applications, their reliability and quality have become a very critical problem, so they require testing in each and every phase, in an automated manner, to obtain accurate results. Software testing is a primary way of improving software reliability and assuring software quality. Testing is a major phase in software development because it improves the software by finding errors and enabling improvements to the system. Testing web applications is a very important process, as they are the fastest growing area in software engineering and support activities such as business transactions, academic studies, etc. (Arora & Sinha, 2012; Song & Chen, 2012).

Along with the fast growing market of smart mobile devices such as smartphones

and tablet computers, the availability and popularity of smartphone applications have

dramatically increased. Since more and more people are relying upon smartphone

applications to manage their bills, schedules, emails, shopping, and so on, it is

required that smartphone applications be user-friendly and reliable.


The development of mobile applications is based on the Software Development Life Cycle (SDLC). Testing is an essential part of the software development lifecycle and impacts the popularity of any software and hardware product (Ang et al., 2014). Moreover, this trend has prompted an explosive growth in the number and

variety of mobile applications being developed. Thus, developers are required to

develop high quality applications in order to be competitive. On the other hand,

mobile applications are usually developed in relatively small-scale projects which

may not be able to support extensive and expensive manual testing. Thus, it is

particularly important to develop automated testing tools for mobile applications

(Yang et al., 2013). The dynamics of web applications have motivated researchers to develop and introduce numerous approaches to testing web applications. Naturally, the main aim of these approaches is to discover failures in the required functionality, in order to verify user behaviour through the Graphical User Interface (GUI) of the application (Suhaila & Wan, 2011). Similarly, under increasing complexity and time-to-market pressures, performance validation is becoming a major issue for mobile applications. Due to their GUI-intensive nature, the execution of mobile applications relies heavily on user interactions with the GUI, and few existing testing approaches effectively and automatically exercise the GUI (Ang et al., 2014).

Testing is a very costly and time-consuming process. So, in order to cut down on costs, save time, and increase reliability, the Model Based Testing (MBT) approach was used in this study. MBT is a process of generating test cases and evaluating test results based on the design and analysis of models. Recently, MBT has gained attention with the popularization of modelling in software development. Several software models are useful, among them the Finite State Machine (FSM) and the Unified Modelling Language (UML). FSM models have a long history in design and testing; however, complex applications imply large state machines that are difficult to construct and maintain. The UML modelling based testing approach intends to solve this problem (Sandeep et al., 2012; Ang et al., 2014). UML has now

become the de facto standard for object oriented modelling and design. UML models

are an important source of information for test case design, which, if satisfactorily

exploited, can go a long way in reducing testing cost and effort and at the same time

improving software quality. UML based automatic test generation is a practically

important and theoretically challenging topic and is receiving increasing attention


from researchers (Santosh & Durga, 2010). MBT is one of the most significant techniques applied to generate test cases using UML diagrams for web applications (Zhang et al., 2007; Ke et al., 2010; Prachet & Abhishek, 2013). Moreover, MBT has been applied to mobile applications (Chouhan et al., 2012; Tobias & Volker, 2014; Ang et al., 2014).

Currently, UML activity diagrams support GUI modelling, automated test

case generation and error diagnosis. This approach can reduce the overall test time,

and can effectively detect fatal faults in mobile applications (Ang et al., 2014). In

addition, the UML activity diagram is one of the most important of the thirteen UML diagrams. It is characterized by a high level of abstraction compared to other diagrams such as sequence diagrams, class diagrams, etc. Furthermore, it is able to

represent loops and concurrent activities. UML activity diagrams capture the key

system behaviour. Besides this, they are used for business modelling, control and

object flow modelling, complex operation modelling, etc. The main advantage of this

model is its simplicity and ease of understanding the flow of logic of the system as

well. For all these reasons, activity diagrams are well suited for treating system level

testing of web applications (Aye & Myat, 2014).

1.2 Problem Statement

Currently, mobile applications are advancing in parallel with web applications. This issue reflects the rising concerns for ensuring the performance of both web and mobile applications (Maryam & Rosziati, 2014). As both web and mobile applications are growing rapidly, this issue has attracted researchers: some (Vikas & Rajesh, 2014; Prachet & Abhishek, 2013) have focused on web application testing, while others (Tobias & Volker, 2014; Ang et al., 2014) have focused on mobile application testing. However, a comparative analysis between web application testing and mobile application testing is an issue that has not yet been resolved. Thus, the motivation of this research is to employ an effective testing approach which can be adapted to both web and mobile application testing to discover failures in the required performance.

Therefore, UML activity diagrams will be developed from four case studies for web and mobile applications to describe the behaviour of those applications. Test cases will then be generated from the developed UML activity diagrams using the MBT technique based on the Test Case Generation based on Activity Diagram (TCBAD) model. In addition, the performance measurements Hits per Second, Throughput and Memory Utilization will be evaluated for each case study by executing the generated test cases using the HP LoadRunner tool. Finally, these performance measurements will be compared and analysed among the case studies.
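As a minimal illustration of this comparison step, the following sketch computes the MSE of a series of sampled measurements. The sample values are invented, and the use of the sample mean as the reference value is an assumption for illustration, not necessarily the exact definition used by the analysis tooling in this study.

    def mse(samples, reference=None):
        # Mean Square Error of sampled measurements. If no explicit
        # reference is given, the sample mean is used (an illustrative
        # assumption, not necessarily the thesis' exact definition).
        if reference is None:
            reference = sum(samples) / len(samples)
        return sum((x - reference) ** 2 for x in samples) / len(samples)

    # Hypothetical Hits per Second samples for one test case run.
    hits_per_second = [3.1, 2.8, 3.4, 3.0, 2.9]
    print(mse(hits_per_second))  # spread of the measurement around its mean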

1.3 Research Objectives

The main objectives of this study are to:

(i) Develop UML activity diagrams from four case studies for web and mobile

applications in order to describe the behaviour of those applications.

(ii) Generate test cases by using the MBT technique from activity diagrams

developed in (i) for each case study.

(iii) Evaluate performance measurements (Hits per Second, Throughput and Memory Utilization) for each case study by executing the test cases generated in (ii) using the HP LoadRunner tool.

(iv) Compare and analyse the results obtained in (iii) based on performance

measurements (Hits per Second, Throughput and Memory Utilization).

1.4 Research Scope

This study focuses on the MBT technique based on TCBAD model proposed by

Chouhan et al. (2012) via modelling four case studies for web and mobile

applications by using UML activity diagrams to describe the behaviour of those

applications. The EdrawMax 7.9 tool (Edraw, 2014) is used to construct the activity

diagram in the context of both web applications and mobile applications. This study concentrates on automated testing using the HP LoadRunner 12.02 tool (HP LoadRunner, 2015) to evaluate application performance in terms of Hits per Second, Throughput and Memory Utilization (a rough sketch of how the first two measurements are derived follows the list below). For the purposes of this study, performance evaluation of web applications was conducted on Windows 8, while performance evaluation of mobile applications was conducted on Android 4.2.2. The four selected case studies are the following:

(i) Inktera Books is an e-book store Android application that allows users to register, sign in, search for e-books by author name or e-book title, browse e-books by categories, and access their own library. It is available on Google Play.

(ii) Kobo Books is an e-book store Android application that allows users to register, sign in, search for e-books by author name or e-book title, browse e-books by categories, and access their own library. It is available on Google Play.

(iii) The Inktera web-based app is an online e-book store that allows users to register, sign in, search for e-books by author name or e-book title, browse e-books by categories, and access their own library. It is available at https://www.inktera.com.

(iv) The Kobo web-based app is an online e-book store that allows users to register, sign in, search for e-books by author name or e-book title, browse e-books by categories, and access their own library. It is available at http://www.kobobooks.com.
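As the rough sketch promised above, the following code derives Hits per Second and Throughput from a list of timestamped request records. The record format is invented for illustration (in practice the HP LoadRunner controller collects and reports these metrics itself during a scenario run), and Memory Utilization is omitted because it is read from operating-system counters rather than from a request log.

    # Each record: (timestamp in seconds, response size in bytes).
    # Hypothetical data, purely for illustrating the two definitions.
    requests = [(0.2, 5120), (0.9, 2048), (1.4, 10240), (2.1, 4096)]

    duration = max(t for t, _ in requests) - min(t for t, _ in requests)
    hits_per_second = len(requests) / duration                 # HTTP hits per second
    throughput = sum(size for _, size in requests) / duration  # bytes per second

    print("Hits/s: %.2f, Throughput: %.0f B/s" % (hits_per_second, throughput))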

1.5 Dissertation Outline

This dissertation provides a description of and report on the effort that was carried

out throughout the duration of the project in order to achieve the project scope and

objectives. This dissertation is divided into five chapters that cover the whole

research. Chapter 1 provides a brief introduction to software testing and its role in

improving reliability and assuring the quality of mobile and web applications.

Furthermore, this chapter explains the problem statement, objectives and scope of

this project. Chapter 2 introduces the different views of the authors through an

overview of the literature relating to testing for mobile applications and web

applications. In this chapter, previous methods and techniques are discussed. In

addition, related topics and works relevant to the study based on various journals and

publications are reviewed and summarized. Chapter 3 contains a discussion of the

research framework. The sections in this chapter cover the framework with attached


discussions of each phase conducted throughout the study. Chapter 4 discusses the

implementation, analysis and comparison of the outcomes obtained for each case

study of web and mobile applications in this study. Lastly, Chapter 5 concludes the

research. It also discusses the achievement of the objectives and future work in the software testing domain.


CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

This chapter looks into different views of previous researchers and their conclusions

through an overview of the literature relating to testing for both mobile applications

and web applications. The greater part of this chapter is about critical evaluation of

different methodologies used in this field so as to identify the appropriate approach

for achieving the objectives of this study.

2.2 Software Testing

Software testing is an important stage in the software lifecycle. Testing is a process of evaluating a system or its components with the intention of determining whether or not it satisfies its specifications and requirements. The testing process is used to

determine the difference between the expected and actual results. In simple words, testing means executing a system in order to identify any gaps, errors, or missing requirements contrary to the actual requirements. Different companies

have different designations for people who test the software on the basis of their

experience and knowledge, such as a software tester, SQA engineer, Quality

Assurance (QA) Analyst, etc. Testing is applied to find bugs and used to calculate

software bug density. In typical software projects, the percentage of software testing

workload is around 40% (Shilpa & Meenakshi, 2014).


The variety of definitions of software testing found in the literature reveals

the varied scope of the process, which may be constricted or broadened. On one

hand, software testing is defined as an activity performed for evaluating software

quality and for improving it. Hence, the goal of testing is the systematic detection of different classes of errors (an error being a human action that produces an incorrect result) in a minimum amount of time and with a minimum amount of effort (Myers, 2004). On the other hand, software testing is defined as a process of

executing a program with the goal of finding errors. In short, testing means that one

inspects the behaviour of a program on a finite set of test cases with a set of inputs,

execution preconditions, and expected outcomes developed for a particular objective,

such as to exercise a particular program path or to verify compliance with a specific

requirement for which valued inputs always exist (IEEE, 2004).

Software testing is a technique aimed at evaluating an attribute or capability of a program or product and determining that it meets its quality requirements. Testing is a process of analysing the behaviour of the product to identify the differences between the product's performance and its specifications. It is a process to identify the completeness, correctness and effectiveness of the software. Consequently, software testing is an important technique for the improvement and measurement of a software system's qualification (Anju, 2014). It provides a means to reduce errors and to cut maintenance and overall software costs. It has become the most important parameter in the case of the SDLC (Gaurav & Kestina, 2013).

Testing efforts can be divided into test case generation, test execution, and

test evaluation. To reduce the cost of the testing process, an efficient test case generation technique is required; it is therefore an important part of software testing (Pakinam et al., 2011b). The main objective of software testing is to affirm the quality of a software system by systematically testing the software in carefully controlled circumstances; another objective is to identify the completeness and correctness of the software; and finally it uncovers previously undiscovered errors (Mohd, 2010).


2.3 Software Testing Methods

Approaches for software testing have traditionally been classified as black-box

testing method and white-box testing method. However, recent works have seen an

introduction of a new testing method that combines both black-box and white-box

testing to produce a hybrid approach known as gray-box testing (Mohd, 2010).

2.3.1 Black-box Testing

Black-box testing is so called because the testing strategy does not require the tester

to possess an internal knowledge of the SUT as a prerequisite of testing. Also known

as data-driven or input/output-driven testing, the test data are derived solely from the

specifications. That is, the testing process is conducted without taking advantage of knowledge of the internal structure of the program (Shivani & Vidhi, 2013). Black-

box testing has little or no regard for the internal logical structure of SUT, and only

examines the fundamental aspects of the system. It ensures that input is properly

accepted and output is correctly produced. In other words, a black-box is any device

whose workings are not understood by or accessible to its user. For example, in

telecommunications, a resistor connected to a phone line makes it impossible for the

telephone company’s equipment to detect when a call has been answered (Mohd,

2011a).
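As a minimal sketch of the idea, a black-box test exercises only the input/output contract of the unit under test; the sign_in function below and its expected behaviour are assumed purely for illustration.

    import unittest

    def sign_in(username, password):
        # Stand-in for the system under test; a black-box test cares
        # only about its inputs and outputs, never its internals.
        return username == "reader" and password == "s3cret"

    class SignInBlackBoxTest(unittest.TestCase):
        def test_valid_credentials_accepted(self):
            self.assertTrue(sign_in("reader", "s3cret"))

        def test_invalid_credentials_rejected(self):
            self.assertFalse(sign_in("reader", "wrong"))

    if __name__ == "__main__":
        unittest.main()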

2.3.2 White-box Testing

White-box, also known as code-based, glass-box or logic-driven testing, requires the

tester to possess internal knowledge of the SUT in order to implement this strategy.

The internal knowledge possessed by the tester may be in the form of the design of

the SUT, control flow or data flow, hyperlinks or the code structure of the SUT.

White-box testing strategy derives test data from an examination of the program’s

logic. It neglects the specification (Mohd, 2011b). It can uncover implementation

errors such as poor key management by analysing internal workings and structure of

a piece of software. Therefore, it is applicable at integration, unit and system levels

of the software testing process, where the tester needs to have a look in the source

code and find out which unit of code is behaving inappropriately (Shivani & Vidhi,


2013). This method can be used for web applications, but is rarely practical for

debugging in large systems and networks. Additionally, white-box testing is considered a form of security testing (Mohd, 2010).

2.3.3 Gray-box Testing

Gray-box testing is also known as hybrid testing, as testers combine both black-box and white-box strategies to find failures that would otherwise go undetected by either strategy alone. Its aim is to test a piece of software against its specification while using some knowledge of its internal workings (Mohd & Farmeena, 2012). It relies on the internal data structures and algorithms for designing test cases more than black-box testing does, but less than clear-box testing (Mohd, 2010). Gray-box testing is efficient for web application testing (Shivani & Vidhi, 2013).

The methodology used in this study focuses on the black-box testing approach. This approach is used to assess four case studies of web and mobile applications based on the MBT technique, through test case generation and automated testing.

2.4 Mobile Application

Mobile applications, also known as mobile apps, are software applications that can be installed on handheld devices, such as mobile phones, tablets, e-readers, or other portable devices. They are supported by operating systems and are able to connect to wireless networks (Gahran, 2011). As with computer software, the Operating System (OS) of the mobile phone should be considered before installing a mobile application. Currently there are mainly five kinds of

mobile operating systems, such as Android, iOS, Windows Phone, Symbian, and

BlackBerry OS. Corresponding to various mobile operating systems, different types

of application distribution platforms have emerged, such as Google Play, Windows

Phone store, App Store, etc. Those distribution platforms are able to provide the

specific version of applications that run in specific system environments for a

specific device (Pogue, 2009). Android is the most widely used mobile OS and has


more users, more phones and more tablets worldwide than any other mobile

operating system (Pallavi & Satyaveer, 2014).

Recently, there has been enormous growth in the popularity and visibility of smartphones. Smartphones have become a vital part of daily life. Users can enjoy a

countless number of native apps installed on their mobile phones (Tarkoma, 2009),

such as weather forecast, dictionary, Google Map, movie player, social networking,

currency converter, calculator, and games. Furthermore, a great number of mobile

web applications hosted on web servers deliver web content to mobile phones and serve mobile users in a different way than native apps do (Cramer et al., 2010). Indeed,

it is very important to understand the types of mobile application and mobile

application testing techniques, which are discussed in the following subsections.

2.4.1 Mobile Applications Types

Mobile applications fall broadly into three categories: native, web-based, and hybrid.

Native applications run on a device’s OS and are required to be adapted for different

devices. Web-based applications require a web browser on a mobile device. Hybrid applications combine characteristics of native applications and web-based applications (IBM, 2012; Masi et al., 2012).

(i) Native Application

Mobile native applications are programs specifically developed to execute on a specific device platform and machine firmware, and cannot be used on other device platforms without modification (Kao et al., 2011). Native applications can take full advantage of a device's capabilities, such as gyroscopes, cameras, microphones,

speakers, and GPS (Charland & Leroux, 2011). Native applications are actual

applications that are downloaded and installed on mobile devices. Users use Apple’s

App Store, Android Market, or Blackberry App World to find and download

applications for a specific operating system. Native applications give everything expected from the company that built the device; they are easy to find in the store, fast to start, have high performance, and tell the user when updates are needed (Mario & Eugene, 2014).


(ii) Web-based Application

Mobile web-based applications reduce the workload at the client’s side very

effectively, solving many issues of a mobile phone, such as battery and computing

resource limitations (Siy et al., 2010). Additionally, users do not have to install any software application or pay any fee for downloading and deploying the

applications. Instead, they can simply access mobile web applications by using a

mobile browser. Mobile web applications are distinguished from standard websites

because they are designed for a smaller handheld display and touchscreen interface.

The other prominent advantage of a web application is its multi-platform support and

low cost of development (IBM, 2012).

(iii) Hybrid Application

Hybrid mobile applications are also called HTML5 mobile applications. The hybrid approach combines native development with web technology. A hybrid application is

downloaded from an app store and stored on the device. A system that combines the

two approaches is designed to host the Hypertext Markup Language (HTML)

resources on a web server for flexibility. But this approach eliminates any offline

availability, as the content is not accessible when the device is not connected to the

network (IBM, 2012). On the other hand, HTML 5 has several features that support

building mobile web applications that work offline (Lubbers et al., 2010). These

include support for a client-side Structured Query Language (SQL) database and for

offline application and data caching. The application cache benefits the mobile

application in two ways. The first is a reduction in the amount of time transferring

and fetching data from the server to mobile devices and the second is enabling web

applications to work as native applications, even without an internet connection

(Huang et al., 2010). For the purposes of this study, the two mobile application case studies used were hybrid applications.

2.4.2 Mobile Application Testing

Mobile application testing techniques should be able to reveal observable failures in software applications. Besides the traditional failures due to application logic bugs, Android applications often show failures that are specific to their development platform (Cuixiong & Iulian, 2011). However, the mobile application testing process depends on the traditional approach, whose characteristics must be kept in mind when deciding which methods to use, such as functional testing, performance testing, usability testing, security testing, load/stress testing, etc. (Selvam, 2011). There are several technical issues specific to mobile applications that need to be considered. Thus, mobile application testing requires special test cases and techniques because it differs from traditional testing. Therefore, it is more complex than desktop or web application testing (Wasserman, 2010).

Testing techniques can be broadly categorized based on the techniques for mobile application testing presented in the current literature. Some

event-based techniques test the GUI of Android applications (Cuixiong & Iulian,

2011; Amalfitano et al., 2012). In event-based testing of Event-Driven Systems

(EDS), the behaviour of the system is checked with input consisting of specific event

sequences (Belli et al., 2012; Cuixiong & Iulian, 2011). Moreover, researchers have

applied techniques for random testing and adaptive random testing to Android

system testing (Liu et al., 2010; Chen et al., 2010). Random testing is a dynamic

black-box testing technique in which the software is tested with non-correlating or

unpredictable test data from the specified input domain (Chan et al., 2002).
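A minimal sketch of random testing, assuming a hypothetical unit under test and an invented property to check, might look like the following: unpredictable inputs are drawn from the specified input domain and an expected property of the output is asserted on every run.

    import random
    import string

    def normalise_query(q):
        # Hypothetical unit under test: trims and lower-cases a search query.
        return q.strip().lower()

    random.seed(42)  # fixed seed so the random run is reproducible
    for _ in range(1000):
        # Unpredictable input drawn from the specified domain: short
        # strings over the printable characters.
        length = random.randint(0, 20)
        q = "".join(random.choice(string.printable) for _ in range(length))
        out = normalise_query(q)
        # Property checks: no surrounding whitespace, no upper-case letters.
        assert out == out.strip()
        assert out == out.lower()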

The other approach, GUI-based techniques, uses the GUI to define system tests (Takala et al., 2011; Chouhan et al., 2012). During GUI testing for mobile applications, user behaviours play an important role in the key process of the

GUI test case generation. Therefore, automated GUI testing requires accurate

modelling of user behaviours to simulate the interactions between users and GUIs.

Currently, MBT is widely investigated to enable the automation of GUI testing

(Takala et al., 2011). It aims to improve the test efficiency and sufficiency, and is

able to check the consistency between the specifications and implementations (Chen

et al., 2009). UML is a de facto modeling language that is used in MBT techniques,

which can specify, visualize and construct various artifacts of software systems

(Rumbaugh et al., 2001). The use of UML activity diagrams for test has been

extensively studied in MBT (Wang et al., 2004; Chen et al., 2006; Chen et al., 2008;

Chen et al., 2009; Qin et al., 2012; Chouhan et al., 2012). In this context, the MBT technique based on the UML activity diagram has a significant role in mobile application testing. This research uses the MBT technique based on the UML activity diagram; the generated test cases were then used to test the performance of the two mobile application case studies.

2.5 Web Application

The term web application refers to a program or application that is accessed through a network (Bhupendra & Shashank, 2014). A web application is an

application that is invoked by a client web browser over the Internet or an Intranet. A

web application allows the information processing functions to be initiated remotely

from a client and executed partly on a web server, application server, and/or database

server. These applications are specifically designed to be executed in a web-based

environment. Web applications are playing a very important role in many business

domains like retail, finance, sales, marketing and management (Imran & Roopa,

2012).

2.5.1 Web Applications Types

In general, web based applications are classified into two major categories: static

web applications and dynamic web applications. The first type is based on HTML.

The second type enables end users to interact with the web application and uses a browser (Vipin & Ajay, 2012). The two types are illustrated as follows:

(i) Static Web Applications

Static web applications contain a fixed number of pages, and the format of the web pages is fixed, delivering information to the client. The pages run in the client's browser. This type of website is created with HTML and CSS coding in a simple text editor like Notepad. Such web applications behave like simple printed newspapers or magazines, publishing material for end users. Examples include the websites of newspapers such as the New York Times, the BBC, etc. (Vipin & Ajay, 2012).


(ii) Dynamic Web Applications

Dynamic web applications can change the web page contents dynamically while the page is running in a client's browser. This kind of website uses server-side programming, such as PHP, ASP, JSP, etc., in order to modify page contents in real time. Dynamic web applications use client-side scripting to prepare the dynamic design and server-side code to handle events, manage sessions and cookies, and store and retrieve data from databases. Examples include e-commerce sites, e-book stores, online form applications, e-government sites, social networking sites, etc. These websites work as software and utilities. Dynamic web applications, also called web apps, run on servers, and end users access them through web browsers (Vipin & Ajay, 2012). For the purposes of this study, the two web application case studies used were dynamic web applications.

2.5.2 Web Application Testing

In recent years, with quick expansion of the Internet and the WWW, web

applications have become common. As more and more businesses rely on web

applications, the value and consistency of the applications become essential. Testing is able to uncover the majority of errors in software. Therefore, web applications should be carefully tested to make sure that the applications fulfil their specifications. Testing web applications is more complex than testing ordinary programs because of their characteristics of distribution, heterogeneity, platform independence and concurrency (Pourya et al., 2013). It is an effective technique for revealing errors; it is used to give confidence that the implementation of a web application meets its specifications, and it ensures the quality of web applications. With the development and implementation of web applications in a variety of industries, testing web systems becomes more and more important and difficult (Kulvinder & Semmi, 2013). Given the wide coverage of web applications, researchers are aware that appropriate testing techniques have to be developed to satisfy every functional requirement of a web application. Test case development is the most time-consuming part of testing. Hence, automated testing is essential to save test time.

Several approaches have been introduced for automatic test case generation, such as Search Based Software Test case generation (SBST). Search based testing (SBT) has been widely used in testing of web applications. However, despite much work on SBT, search based test data generation had not previously been applied to automate test data generation for web applications (Nadia & Mark, 2011). Another approach is MBT, which uses GUI modelling. Some researchers used FSM (Song et al., 2011) and the Flow Based Approach (FBA) (Guangzhou & Shujuan, 2009). Moreover, researchers have applied MBT by using UML in web application testing. A UML-based approach that models a web application as hierarchical use case diagrams, called the Use Case Transition Model (UCTM), was proposed by Li et al. (2008). In addition, an automated web application testing approach using a tool for generating test paths from an activity diagram, called TPG, was presented by Rosziati & Tan (2009). UML is gaining popularity due to its easy representation (Li et al., 2008; Prachet & Abhishek, 2013). UML activity diagrams capture the key system behaviour, and the UML activity diagram is well suited for system level testing (Ranjita et al., 2013). Therefore, the MBT technique based on the UML activity diagram has a significant role in web application testing. Thus, this study uses the MBT technique based on the UML activity diagram in order to conduct performance testing for the two web application case studies. MBT will be discussed in the next section.

2.6 Model-based Testing (MBT)

According to Myers (2004), the main factor influencing the cost associated with

testing is the total number of test cases. Each test case requires an allocation of

resources to plan, design, and execute it. Therefore, the automation of test case

generation could be an interesting mechanism to reduce cost and effort of testing. In

this context, MBT is a feasible approach to controlling software quality and reducing

cost and effort related to testing process because test cases are generated from

software models. This technique provides several benefits concerning testing process

for a software organization, such as shorter schedules, lower cost and effort, better

quality, capability to automatically generate many non-repetitive and useful tests, test

harness to automatically run generated tests. This eases updating of test suites for

changed requirements, and these different attributes make the identification and

characterization of MBT approaches an important activity (Aye & Myat, 2014).

MBT is a type of testing strategy that depends on extracting test cases from different models. It is an important approach with many advantages, being able to reduce cost and increase the effectiveness and quality of the testing procedure. As a result,

test cases can be derived from behavioural models, which makes MBT a black-box testing approach: the code is not taken into consideration, only the functionality of the application, and test cases are automatically generated from a behavioural model of the SUT. This testing technique provides an

approach for software quality control and for reducing the costs of a test process

because test cases can be generated from the software specifications (Zeng et al.,

2009). This approach can be described with the three actions which are the notation

used for the model, the test generation algorithm, and the tools that generate

supporting infrastructure for the tests (Cartaxo et al., 2007).

Generally, MBT refers to deriving a suite of test cases from a model that

represents the behaviour of a software system. By executing a set of model-based test

cases, the conformance of the target system to its specification can be validated

(Kim, 2013). MBT creates tests from an abstract model of the software, including

formal specifications and semi-formal design descriptions (Kansomkeat et al., 2008).

It relies on behaviour models based on input and expected output for test case

generation in its implementation. MBT has evolved out of techniques like FSM,

Labelled Transition Systems (LTS), Message Sequence Charts (MSC) and Petri nets

to generate test cases for systems (Shirole & Kumar, 2013). One commonly used

modelling diagram for that purpose is a UML specification (Swain et al., 2010; Ke et

al., 2010; Prachet & Abhishek, 2013; Tobias & Volker, 2014).

The Test Case Generation Based on Activity Diagram (TCBAD) model proposed by Chouhan et al. (2012) uses the MBT technique. The TCBAD model transforms the workflows of stepwise activities and actions in the user interface (UI) into a UML activity diagram in order to generate test cases from that diagram. The proposed model is characterized by coverage and efficiency; it saves time and effort and also increases the quality of the generated test cases. Therefore, this study used the TCBAD model for both mobile application testing and web application testing, generating test cases from the developed activity diagrams; a minimal sketch of the underlying idea follows.
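The sketch below illustrates the general idea of deriving test cases from an activity diagram (a simplified assumption-laden illustration, not the TCBAD algorithm itself; the toy login workflow and node names are hypothetical): each path from the initial node to a final node is one candidate test case.

```python
# Minimal sketch: enumerating candidate test cases from an activity
# diagram modelled as a directed graph (adjacency list). Every path
# from the initial node to a final node yields one test case.
# The workflow and node names below are hypothetical.

def test_paths(graph, node, path=None):
    """Depth-first enumeration of initial-to-final paths."""
    path = (path or []) + [node]
    if not graph.get(node):            # no outgoing edges: final node
        yield path
    for successor in graph.get(node, []):
        if successor not in path:      # guard against revisiting (loops)
            yield from test_paths(graph, successor, path)

# Toy login workflow with one decision node (two outgoing branches).
activity_diagram = {
    "initial": ["enter credentials"],
    "enter credentials": ["valid credentials?"],
    "valid credentials?": ["show home page", "show error message"],
    "show home page": ["final"],
    "show error message": ["final"],
    "final": [],
}

for case_id, path in enumerate(test_paths(activity_diagram, "initial"), 1):
    print(f"Test case {case_id}: " + " -> ".join(path))
```

The decision node contributes one branch per outgoing edge, so the sketch prints two test cases: the successful-login path and the error path.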


2.7 Unified Modelling Language

UML is one of the best-known modelling languages, used for defining the structure of a system as well as for managing large and complex systems. UML offers a standard way to visualize a system's architectural blueprint, including elements such as activities, actors, business processes, database schemas, components, programming language statements, and reusable software components. It can be used with all processes, throughout the software development lifecycle, and across different implementation technologies. UML diagrams represent two different views of a system model: the static (structural) view and the dynamic view. The structural view includes class diagrams and composite structure diagrams, while the dynamic view includes sequence diagrams, activity diagrams and state machine diagrams (Bipin & Rituraj, 2014).

UML is a standard language that helps customers and developers to develop software, and it provides various types of diagrams for specifying tasks, visualizing results or models, construction, documentation, business modelling and communication. UML provides notations in the form of diagrams and can show a prototype of the model without implementation or coding. UML diagrams are used in the analysis and design phases of the SDLC, and the selection of a UML diagram is based on the user's perspective (Pakinam et al., 2011a).

UML is an object-oriented (OO) modelling language that has gained

widespread use in the software development industry. It has become the de-facto

standard for OO modelling (Li et al., 2008). There are many UML diagrams that are

most commonly used for test case generation, such as the UML state machine

diagram, the UML activity diagram, and the UML sequence diagram (Khandai et al.,

2011). This research focuses on a UML activity diagram that provides a view of the

behaviour of a system by describing the sequence of actions in a process (Fan et al.,

2009). In this study, an appropriate diagram of UML diagrams will be assigned in

order to practically make the MBT technique more rigorous. To achieve this purpose,

one possible approach consists in the utilization of the UML activity diagram

developing from four case studies for web and mobile applications to describe the

behaviour of those applications. Activity diagrams will be discussed in the next

section.


2.8 Activity Diagram

Activity diagrams are one of the important UML models used in representing the

workflows of stepwise activities and actions with support for choice, iteration and

concurrency. Moreover, activity diagrams can be utilized to describe the business

and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control between activities as well as the activity-based relationships among objects, and it has the characteristics needed to improve the quality of automatically generated test cases and to use those test cases for system, integration, and regression testing. The sets of test cases used in those types of testing should have certain parameters or characteristics: they normally consist of a unique identifier, preconditions, a series of steps to follow, inputs, expected outputs, and sometimes postconditions (Pakinam et al., 2011a).

The activity diagram is the design artifact best suited to describing the flow of control in an OO system. For this reason, activity diagrams are treated as one of the most important design artifacts among the several UML diagrams. Because UML activity diagrams capture the key system behaviour, they are well suited for system-level testing. Activity diagrams are important among the 13 diagram types supported by UML; they are used for business modelling, control and object flow modelling, complex operation modelling, and so on. The main advantages of this model are its simplicity and the ease of understanding the flow of logic of the system (Aye & Myat, 2014).

A UML activity diagram has two kinds of modelling elements: activity nodes and activity edges, as shown in Table 2.1. There are two main kinds of nodes in activity diagrams: action nodes and control nodes. Action nodes consume all input data/control tokens when they are ready, generate new tokens, and send them to the output activity edges. A control node, in contrast, is an activity node used to coordinate the flows between other nodes; control nodes comprise initial nodes, flow final nodes, activity final nodes, decision nodes, merge nodes, fork nodes, and join nodes. Activity edges represent the flow of control through the activity and connect the individual components of activity diagrams (Shanthi & Mohan, 2012). In this study, the UML activity diagram is used for both mobile application testing and web application testing to show the behaviour and the stepwise activities and actions of the UI. The UML activity diagram notations used in this study are shown in Table 2.1.

Table 2.1: Activity diagram notations (Shanthi & Mohan, 2012)

Notation             Represents
Activity             Activity Node
Decision & Merge     Control Node
Fork & Join          Control Node
Initial Node         Control Node
Final Node           Control Node
Control Flow Edge    Activity Edge
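To make the node and edge kinds of Table 2.1 concrete, the following is one possible in-memory representation (a simplified sketch; the class design and the login fragment are hypothetical, not taken from the cited work):

```python
# Sketch: an in-memory mirror of the Table 2.1 element kinds. Action
# nodes perform the work, control nodes coordinate flow, and activity
# edges connect the individual components of the diagram.
from dataclasses import dataclass

CONTROL_KINDS = {"initial", "final", "flow-final", "decision",
                 "merge", "fork", "join"}

@dataclass
class ActivityNode:
    name: str
    kind: str = "action"       # "action" or one of CONTROL_KINDS

    @property
    def is_control(self) -> bool:
        return self.kind in CONTROL_KINDS

@dataclass
class ActivityEdge:
    source: ActivityNode
    target: ActivityNode
    guard: str = ""            # optional guard, e.g. "[valid input]"

# Hypothetical fragment: initial node -> action node -> decision node.
start = ActivityNode("start", "initial")
login = ActivityNode("validate login")              # action node
branch = ActivityNode("credentials ok?", "decision")
edges = [ActivityEdge(start, login), ActivityEdge(login, branch)]
print([f"{e.source.name} -> {e.target.name}" for e in edges])
```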

2.9 Performance Testing Measurements

In software engineering, performance testing is in general testing performed to

determine how a system performs in terms of responsiveness and stability under a

particular workload. It can also serve to investigate, measure, validate or verify other

quality attributes of the system, such as scalability, reliability and resource usage

(Shilpa & Meenakshi, 2014).

Performance is concerned with achieving the Hits per Second, Throughput, and resource-utilization levels that meet the performance objectives of the project or product, and performance testing represents the superset of all of the other subcategories of performance-related testing. Many tools can be used to simulate load in terms of users and connections and to capture data related to Hits per Second, Throughput, and resource utilization; among the most important is HP LoadRunner (Sheetal & Joshi, 2012).



There are various parameters based on which the performance of a system is measured; they are known as performance measurements. One is resource utilization, which is the amount of resources used to serve user requests. These resources can be memory, processors, disk I/O and the network. To achieve better system performance, resources should be allocated efficiently, since efficient resource utilization leads to better system performance (Kalpan & Ramakanth, 2012); this study focuses solely on the utilization of memory. Throughput is an important indicator for measuring server performance, representing the throughput capacity of the server at any time; generally, the higher the Throughput, the better the server performance will be. Hits per Second is the number of HTTP requests per second that virtual users submit to the web servers. It can reflect whether or not the system is stable: when the number of users increases, the Hits per Second should increase accordingly (Weibiao et al., 2014).

Overall, the definitions and formulas of the performance measurements used in this study are given in Table 2.2.

Table 2.2: Performance measurements definitions and formulas (Gouri & Meetu, 2012; Weibiao et al., 2014; Pavol et al., 2012; Imran et al., 2012).

Hits per Second: the number of HTTP requests per second that virtual users submit to the web servers, measured in hits per second:

    Hits per Second = Nvu / (R + T)    (2.1)

    where Nvu = the number of vusers, R = average response time,
    and T = average think time.

Throughput: the total number of business transactions completed by the application per unit time (per second or per hour), measured in megabytes per second. It is computed from Nvu (the number of vusers), R (average response time) and T (performance testing time), as given in Eq. (2.2).

Memory Utilization: the amount of memory used by the web service while processing requests, measured as a percentage; utilization of 80% is considered an acceptable limit:

    Memory Utilization = (UR / TR) × 100    (2.3)

    where TR = total resource and UR = used resource.
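As a quick worked example (a minimal sketch with made-up numbers, following Eqs. (2.1) and (2.3) above):

```python
# Worked example of the Table 2.2 measurements (illustrative values).
n_vu = 50        # Nvu: number of virtual users (vusers)
r = 0.8          # R: average response time, in seconds
t = 4.2          # T: average think time, in seconds

# Eq. (2.1): each vuser issues one request every (R + T) seconds,
# so Nvu vusers produce Nvu / (R + T) hits per second.
hits_per_second = n_vu / (r + t)
print(f"Hits per Second: {hits_per_second:.1f}")         # 50 / 5.0 = 10.0

# Eq. (2.3): Memory Utilization = (UR / TR) * 100, as a percentage.
total_memory_mb = 2048.0    # TR: total memory available
used_memory_mb = 1433.6     # UR: memory used while serving requests
memory_utilization = used_memory_mb / total_memory_mb * 100
print(f"Memory Utilization: {memory_utilization:.0f}%")  # 70%, under the 80% limit
```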


2.10 HP LoadRunner Tool

HP LoadRunner is an automated performance testing product from Hewlett Packard (HP) for examining system behaviour and performance under actual load. The HP LoadRunner tool can simulate many concurrent users, so that applications can be subjected to real-life user loads while information is collected from key infrastructure components. The tool supports various environments and a wide range of technologies, allowing users to start load testing an application readily. It is a de-facto standard integrated software testing tool, built for ease of use and with powerful flexibility (Prakash & Gopala, 2012).

The HP LoadRunner tool allows users to prevent application performance

problems by detecting bottlenecks. Moreover, it enables users to test web

applications and mobile applications. It gives a picture of end-to-end system

performance so that users can verify that new or upgraded applications meet

performance requirements. It also reduces hardware and software costs by accurately

predicting application scalability and capacity (Rahul & Prince, 2013).

According to Priyanka & Mansi (2014), HP LoadRunner consists of three components: the Virtual User Generator (VuGen), which captures end-user business processes and creates an automated performance testing script, also known as a vuser script; the Controller, which organizes, manages and monitors the load test and can execute it across various geographical areas; and Analysis, which is used to generate the result graphs after execution of the test (see Figure 2.1).

Figure 2.1: LoadRunner components. (HP LoadRunner, 2015)


2.11 Mean Square Error (MSE)

Mean Square Error (MSE) is the mean of the overall squared prediction errors. It takes into account the bias, or the tendency of the estimator to overestimate or underestimate the actual values, and the variability of the estimator, or the standard error. Suppose that Ŷ is an estimate of a population parameter Y. The MSE of an estimator is the expected value of (Ŷ - Y)². An MSE of zero means that the estimator predicts observations of the parameter Y perfectly. Different values of MSE can be compared to determine how well different models explain a given data set: the smaller the MSE, the closer the fit is to the data. The MSE is obtained from the following equation (Salkind, 2010):

    MSE = (1/N) Σ (Y - Ŷ)²    (2.4)

Where:
N = Number of Values.
Y = Actual Value.
Ŷ = Estimated Value.
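For instance, Eq. (2.4) can be computed directly in a few lines; a minimal sketch with made-up numbers:

```python
# Direct computation of MSE per Eq. (2.4): the mean of the squared
# differences between actual values Y and estimated values Y-hat.
# The values below are illustrative only.
actual = [12.0, 15.0, 9.0, 20.0]      # Y: actual values
estimated = [11.0, 16.0, 9.0, 18.0]   # Y-hat: estimated values

n = len(actual)
mse = sum((y - y_hat) ** 2 for y, y_hat in zip(actual, estimated)) / n
print(f"MSE = {mse:.2f}")   # (1 + 1 + 0 + 4) / 4 = 1.50
```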

2.12 Related Works

Due to the lack of research comparing web and mobile application testing, this section reviews research related to automated testing and to techniques for generating test cases for web and mobile applications. In addition to the background required for this dissertation, other related works representing similar efforts, which demonstrate the state of the art in test case generation, are also presented.

Ang et al. (2014) proposed the ADAutomation framework, which enables automated GUI testing of smartphone applications based on UML activity diagrams and supports user behaviour modelling, automated GUI test case generation, and post-test analysis and debugging. A tool chain based on this framework has been developed. Experimental results on two industrial designs showed that this approach can reduce the overall test time and testing effort as well as improve test adequacy. Moreover, the approach effectively detects fatal faults in complex GUI implementations, improving the quality of designs.

Tobias & Volker (2014) proposed an approach to test case generation and

automated execution by using MBT to improve the testing of context-aware mobile

applications by reducing test cases from design-time system models. This approach

includes a four-tier process, and UML activity diagrams are enriched with context

parameters and test data using a UML supporting for context-awareness modelling.

By using model transformation, from the activity diagram a Petri Net representation

is obtained. The Petri Net representation sample is analysing the model for parallel

and cyclic structures that need to be considered when choosing adequate test

coverage criteria. From the reachability tree on the Petri Net all paths from root node

to leaf nodes are extracted that eventually form test cases. From the Petri Net

markings in each path activity diagram elements are obtained; those irrelevant for

test case execution are removed from the path. Using model-to-code transformation,

from the platform-independent model, platform-specific test cases are derived apt for

automated execution. This approach has been validated with a mobile context-aware

application from which a set of test cases satisfying the coverage criteria has been

generated and automatically executed.
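A minimal sketch of that path-extraction step (an illustration under an assumed tree structure, not the authors' implementation; the tree and marking names are hypothetical): every root-to-leaf path of the reachability tree becomes one abstract test case.

```python
# Sketch of the reachability-tree step: extract every root-to-leaf
# path; each path later becomes one platform-specific test case.
# The dict below is a hypothetical stand-in for a Petri net's
# reachability tree (markings abbreviated as strings).

def root_to_leaf_paths(tree, node):
    children = tree.get(node, [])
    if not children:                       # leaf marking
        return [[node]]
    return [[node] + rest
            for child in children
            for rest in root_to_leaf_paths(tree, child)]

reachability_tree = {
    "m0": ["m1", "m2"],    # initial marking branches on context
    "m1": ["m3"],
    "m2": [],
    "m3": [],
}
for path in root_to_leaf_paths(reachability_tree, "m0"):
    print(" -> ".join(path))   # m0 -> m1 -> m3, and m0 -> m2
```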

Vikas & Rajesh (2014) proposed a technique for MBT to test case generation

technique for web applications. The technique used UML diagrams, namely

sequence and web diagrams for test case generation of web applications. Web

diagrams provide functional requirements such as attributes, operation, form and

frame set and sequence diagram provide the most important part of the navigational

web application, which is dynamic behavior of the application under test. Web and

Sequence diagrams were mapped to databases where test cases generated by

applying data mining principles of SQL. The technique had been verified and

validated with case studies ranging from simple to complex considering the real

situations that will arise during testing process. Test cases generated provide pre-

condition and post-condition for the test execution. The proposed approach included

major contributions as test cases were automatically generated from sequence and

web diagrams, problems related to omit data have been overcome. Moreover, use of

sequence diagrams helped to capture dynamic behavior of web applications.

Furthermore, the extraction of data from a combination of the web and sequence

diagrams has improved the test cases information.


REFERENCES

Agarwal, B., Tayal, S. and Gupta, M. (2010). Software Engineering and Testing: An Introduction. Hingham, Toronto: Infinity Science Press / Jones and Bartlett. ISBN: 978-1-934015-55-1.

Amalfitano, D., Fasolino, A., Tramontana, P. and Amatucci, N. (2013). Considering Context Events in Event-based Testing of Mobile Applications. Proceedings of the 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation Workshops (ICSTW), March 18-22, 2013. Luxembourg: IEEE. 126-133.

Amalfitano, D., Fasolino, A., Tramontana, P., De Carmine, S. and Memon, A. (2012). Using GUI ripping for automated testing of Android applications. Proceedings of the 27th ACM International Conference on Automated Software Engineering. New York, NY, USA: ACM. 258-261.

Ang, L., Zishan, Q., Mingsong, C. and Jing, L. (2014). ADAutomation: An Activity Diagram Based Automated GUI Testing Framework for Smartphone Applications. Proceedings of the Eighth IEEE International Conference on Software Security and Reliability (SERE), June 30 - July 2, 2014. San Francisco, CA: IEEE. 68-77.

Anju, B. (2014). A Comparative Study of Software Testing Techniques.

International Journal of Computer Science and Mobile Computing, 3(6),

579-584.

Arora, A. & Sinha, M. (2012). Web Application Testing. International Journal of

Scientific & Engineering Research, 3(2), 1-6, ISSN 2229-5518.

Aye, K. & Myat, M. (2014). An Efficient Approach for Model Based Test Path

Generation. International Journal of Information and Education

Technology, 5(10), 763- 767. doi: 10.7763/IJIET.2015.V5.607.


Belli, F., Beyazit, M. and Memon, A. (2012). Testing is an Event-Centric Activity.

Proceedings of the IEEE Sixth International Conference on Software

Security and Reliability Companion (SERE-C), 20-22 June 2012.

Gaithersburg, MD: IEEE. 198 - 206.

Ben (2014). Characterizing and Measuring Hits per Second for Web Applications. Retrieved on December 12, 2014, from http://blog.nexcess.net/2011/06/29/characterizing-and-measuring-hits-per-second-for-web-applications-2/.

Bhupendra, S. & Shashank, S. (2014). A Model For Performance Testing Of Ajax

Based Web Applications. International Journal of Research in Engineering

and Technology (Ijret), 3(4), 889- 893.

Bipin, P. & Rituraj, J. (2014). Importance of Unified Modelling Language for Test

Case Generation in Software Testing. International Journal of Computer

Technology & Applications., 5 (2), 345-350. ISSN: 2229-6093.

Cartaxo, E., Neto, F. and Machado, P. (2007). Test Case Generation By Means Of

UML Sequence Diagrams and Labeled Transition Systems. Proceedings of

the ISIC. IEEE International Conference on Systems, Man and Cybernetics,

7-10 Oct. 2007. Montreal: IEEE. 1292 – 1297.

Chan, K., Chen, T. and Towey, D. (2002). Restricted random testing. Proceedings of the 7th European Conference on Software Quality (ECSQ 2002), Helsinki, Finland, June 9-13, 2002. London, UK: Springer. 321-330.

Charland, A. & Leroux, B. (2011). Mobile application development: Web vs. Native.

Magazine Communications of the ACM, 54(5), 49-53.

doi:10.1145/1941487.1941504.

Chen, M., Mishra, P. and Kalita, D. (2008). Coverage-driven automatic test generation for UML activity diagrams. Proceedings of the ACM 18th Great Lakes Symposium on VLSI (GLSVLSI). New York, NY, USA: ACM. 139-142.


Chen, M., Qin, X. and Li, X. (2006). Automatic test case generation for UML

activity diagrams. Proceedings of the 2006 international workshop on

Automation of software test (AST). New York, NY, USA: ACM. 2–8.

Chen, M., Qin, X., Xu, W., Wang, L., Zhao, J. and Li, X. (2009). UML Activity

Diagram-Based Automatic Test Case Generation for Java Programs. The

Computer Journal, 52(5), 545–556, 2009. doi: 10.1093/comjnl/bxm057.

Chen, T., Kuo, F., Robert, G. and Tse, T. (2010). Adaptive Random Testing: The

ART of test case diversity. Journal of Systems and Software archive, 83(1),

60-66. doi:10.1016/j.jss.2009.02.022.

Cheng, B., Xiaoyan, W., Xiaoxiao, H. and Chen, J. (2011). Multimedia Conferencing Management Using Web Services Orchestration over Public Networks. Journal of Convergence Information Technology (JCIT), 6(6), 361-375.

Chouhan, C., Shrivastava, V. and Parminder, S. (2012). Test Case Generation based

on Activity Diagram for Mobile Application. International Journal of

Computer Applications, 57(23), 4-9. doi: 10.5120/9436-3563.

Chouhan, C., Shrivastava, V., Parminder, S. and Soni, P. (2013). Test Case

Generation on the Origin of Activity Diagram for Navigational Mobiles.

International Journal of Advanced Computational Engineering and

Networking, 1(2), 32-36. ISSN (p): 2320-2106.

Cramer, H., Rost, M., Belloni, N., Bentley, F. and Chincholle, D. (2010). Research in the Large: Using App Stores, Markets, and Other Wide Distribution Channels in Ubicomp Research. Proceedings of the ACM 12th International Conference Adjunct Papers on Ubiquitous Computing - Adjunct. New York, NY, USA: ACM. 511-514.

Cuixiong, H. & Iulian, N. (2011). Automating GUI testing for Android applications.

In Proceedings of the ACM 6th

International Workshop on Automation of

Software Test (AST '11). New York, NY, USA: ACM. 77-83.

doi:10.1145/1982595.1982612.

Edraw (2014). Edraw Max Pro. Retrieved on October 09, 2014, from

https://www.edrawsoft.com/EDrawMax.php.


Fan, X., Shu, J., Liu, L. and Liang, Q.J. (2009). Test Case Generation from UML Subactivity and Activity Diagram. Proceedings of the IEEE 2nd International Symposium on Electronic Commerce and Security, 22-24 May 2009. Nanchang: IEEE. 244-248.

Gahran, A. (2011). What's a Mobile App? Retrieved on October 02, 2014, from http://www.contentious.com/2011/03/02/whats-a-mobile-app/.

Gaurav, S. & Kestina, R. (2013). Software Testing Techniques for Test Cases

Generation. International Journal of Advanced Research in Computer

Science and Software Engineering, 3(9), 261-265.

Gouri, S. & Meetu, A. (2012). Software Performance Engineering. International

Journal of Research in IT & Management, 2(2), 364-371. ISSN 2231-4334.

Guangzhu, J. & Shujuan, J. (2009). A Quick Testing Model of Web Performance

Based on Testing Flow and its Application. Proceedings of the IEEE 6th

International Conference on Web Information Systems and Applications,

18-20 Sept. 2009. Xuzhou, Jiangsu: IEEE. 57-61.

doi:10.1109/WISA.2009.16.

HP LoadRunner (2015). HP LoadRunner 12.02. Retrieved on March 01, 2015, from http://www8.hp.com/us/en/software-solutions/LoadRunner-load-testing/index.html.

Huang, Y., Cao, J., Jin, B., Tao, X. and Lu, J. (2010). Cooperative cache consistency

maintenance for pervasive internet access. Wireless Communications and

Mobile Computing, 10(3), 436-450. doi:10.1002/wcm.819.

IBM (2012). Native, web or hybrid mobile-app development. IBM Software, Thought Leadership White Paper. Retrieved on September 10, 2014, from http://www.computerworld.com.au/whitepaper/371126/native-web-or-hybrid-mobile-app-development/download/.

IEEE (2004). Guide to the Software Engineering Body of Knowledge. United States

of America: Angela Burgess. ISBN 0-7695-2330-7.

Imran, A. & Roopa, S. (2012). Quality Assurance And Integration Testing Aspects In

Web Based Applications. International Journal of Computer Science

Page 46: comparative analysis for performance measurements of software

121

Engineering and Applications (IJCSEA), 2(3), 109-116. doi:

10.5121/ijcsea.2012.2310.

Imran, A., Gias, A.U. and Sakib, K. (2012). An Empirical Investigation of Cost-Resource Optimization for Running Real-Life Applications in Open Source Cloud. Proceedings of the International Conference on High Performance Computing and Simulation (HPCS), 2-6 July 2012. Madrid: IEEE. 718-723. doi:10.1109/HPCSim.2012.6267002.

Kalpan, B. & Ramakanth, K. (2012). A Survey on Performance Testing Approaches

of Web Application and Importance of WAN Simulation in Performance

Testing .International Journal on Computer Science and Engineering

(IJCSE), 4(5), 883-885. ISSN: 0975-3397.

Kansomkeat, S., Offutt, J., Abdurazik, A. and Baldini, A. (2008). A Comparative Evaluation of Tests Generated from Different UML Diagrams. Proceedings of the IEEE 9th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, 6-8 Aug. 2008. Phuket: IEEE. 867-872. doi: 10.1109/SNPD.2008.48.

Kao, Y., Lin, C., Yang, K. and Yuan, S. (2011). A Cross-Platform Runtime Environment for Mobile Widget-Based Application. Proceedings of the IEEE International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, 10-12 Oct. 2011. Beijing: IEEE. 68-71. doi:10.1109/CyberC.2011.20.

Ke, H., Xiao-Hong, L. and Zhi-Yong, F. (2010). Approach to activity diagram model

driven testing for Web applications. Journal of Computer Applications,

30(9), 2365-2369.

Khandai, M., Acharya, A. and Mohapatra, D.P. (2011). A Novel Approach of Test Case Generation for Concurrent Systems Using UML Sequence Diagram. Proceedings of the IEEE 3rd International Conference on Electronics Computer Technology (ICECT), 8-10 April 2011. Kanyakumari: IEEE. 157-161. doi:10.1109/ICECTECH.2011.5941581.


Kim, H. (2013). Hybrid Model Based Testing for Mobile Applications. International

Journal of Software Engineering and Its Applications, 7(3), 223-238.

Kulvinder, S. & Semmi (2013). Finite State Machine based Testing of Web

Applications. International Journal of Software and Web Sciences (IJSWS),

4(1), 41- 46. ISSN: 2279-0063.

Li, L., Miao, H. and Qian, Z. (2008). A UML-Based Approach to Testing Web Applications. Proceedings of the IEEE International Symposium on Computer Science and Computational Technology, 20-22 Dec. 2008. Shanghai: IEEE. 397-401.

Liu, Z., Gao, X. and Long, X. (2010). Adaptive Random Testing of Mobile Application. Proceedings of the IEEE 2nd International Conference on Computer Engineering and Technology, 16-18 April 2010. Washington, DC, USA: IEEE. 297-301. doi: 10.1109/ICCET.2010.5485442.

Mario, K. & Eugene, O. (2014). Native, HTML5, or Hybrid: Understanding Your

Mobile Application Development Options. Retrieved on September 20,

2014, from: https://developer.salesforce.com/page/Native,_HTML5,_or_Hybrid:_

Understanding_Your_Mobile_Application_Development_Options.

Maryam, A. & Rosziati, I. (2014). A Comparative Study of Web Application Testing

and Mobile Application Testing. Advanced Computer and Communication

Engineering Technology, Lecture Notes in Electrical Engineering. Springer

International Publishing, 315, 491-500. doi: 10.1007/978-3-319-07674-

4_48.

Masi, E., Cantone, G., Mastrofini, M., Calavaro, G. and Subiaco, P. (2012). Mobile apps development: A framework for technology decision making. Proceedings of the International Conference on Mobile Computing, Applications, and Services. Springer Berlin Heidelberg. 64-79. ISSN 1867-8211.

Mohd, E. & Farmeena, K. (2012). A Comparative Study of White Box, Black Box

and Grey Box Testing Techniques. International Journal of Advanced

Computer Science and Applications, 3(6), 12-15.


Mohd, E. (2010). Different Forms of Software Testing Techniques for Finding

Errors. IJCSI International Journal of Computer Science Issues, 7(3), 11-16.

Mohd, E. (2011a). Different Approaches to Black Box Testing Technique for

Finding Errors. International Journal of Software Engineering &

Application (IJSEA), 2(4), 31-40.

Mohd, E. (2011b). Different Approaches to White Box testing Technique for

Finding Errors. International Journal of Software Engineering &

Application (IJSEIA), 5(3), 1-13.

Myers, G. (2004). The Art of Software Testing. 3rd ed. Hoboken, NJ, USA: John Wiley & Sons.

Nadia, A. & Mark, H. (2011). Automated web application testing using search based

software engineering. Proceedings of the IEEE 26th

International

Conference on Automated Software Engineering (ASE), 6 - 10 November

2011. Lawrence, Kansas, USA: IEEE. 3 – 12.

Nazish, R., Nadia, R., Saba, A. and Zainab, N. (2014). Model Based Testing in Web

Applications. International Journal of Scientific Engineering and Research

(IJSER), 2(1), 56-60.

Pakinam, N., Nagwa, L., Mohamed H. and Mohamed F. (2011a). A Proposed Test

Case Generation Technique Based on Activity Diagrams. International

Journal of Engineering & Technology, 11(3), 35-52.

Pakinam, N., Nagwa, L., Mohamed, H. and Mohamed, F. (2011b). Test Case

Generation and Test Data Extraction Techniques. International Journal of

Electrical & Computer Sciences, 11(3), 82-89.

Pallavi, R. & Satyaveer, T. (2014). Android Mobile Automation Framework.

International Journal of Engineering and Computer Science, 3(10), 8555-

8560.

Pavol, T., Ondrej, V. and Lukas, S. (2012). The Usage of Performance Testing for Information Systems. International Conference on Information and Computer Applications (ICICA), 24(12), 317-321.


Pogue, D. (2009). A Place to Put Your Apps. Retrieved on October 02, 2014, from http://www.nytimes.com/2009/11/05/technology/personaltech/05pogue.html?pagewanted=all.

Pourya, N., Suhaimi, B., Mohammad, H. and Abolghasem, Z. (2013). A Comparative Evaluation of Approaches for Web Application Testing. The International Journal of Soft Computing and Software Engineering (JSCSE), 3(3), 333-341. doi: 10.7321/jscse.v3.n3.50.

Prachet, B. & Abhishek, K. (2013). Model Based Regression Testing Approach of

Service oriented Architecture (SOA) Based Application: A Case Study.

International Journal of Computer Science and Informatics, 3(2), 11-16.

Prakash, V. & Gopala, S. (2012). Cloud Computing Solution- Benefits and Testing

Challenges. Journal of Theoretical and Applied Information Technology,

39(2), 114-118. ISSN: 1992-8645.

Pratibha, F. & Manju, K. (2014). Research of Load Testing and Result Based on

LoadRunner. SSRG International Journal of Civil Engineering (SSRG-

IJCE), 1, 1-4. ISSN: 2348 – 8352.

Priya, S. & Sheba, P. (2013). Test Path Generation Using UML Sequence Diagram.

International Journal of Advanced Research in Computer Science and

Software Engineering, 3(4), 1069-1076.

Priyanka, J. & Mansi, G. (2014). Why Performance Testing. International Journal of

Advance Research in Computer Science and Management Studies, 5(2), 45-

49. ISSN 2222-1719.

Qin, X., Mishra, P. and Koo, H. (2012). System-Level Validation: High-Level

Modeling and Directed Test Generation Techniques. Springer Science &

Business Media, 2012. ISBN: 1461413583, 9781461413585.

Rahul, M. & Prince, J. (2013). Novel Testing Tools for a Cloud Computing

Environment- A Review. The SIJ Transactions on Computer Science

Engineering & its Applications (CSEA), 1(3), 83-87. ISSN: 2321 – 2381.


Ranjita, K., Vikas, P. and Prafulla, K. (2013). Generation of Test Cases Using

Activity Diagram. International Journal of Computer Science and

Informatics, 3(2), 1-10. ISSN: 2231 –5292.

Ravi, R. (2013). Mobile Application Testing and Challenges. International Journal

of Science and Research (IJSR), 2(7), 56-58.

Rosziati I. & Tan, S. (2009), Automatic Test Path Generation Based on UML

Activity Diagram. IEEE 5th International Conference on Computer,

Telecommunication and System (ICTS 2009), Surabaya, Indonesia, August

2009. 61-66. ISSN 2085-1944.

Rumbaugh, J., Jacoboson, I. and Booch, G. (2001). The Unified Modeling Language

User Guide. Addison-Wesley.

Salkind, N. (2010). Encyclopedia of Research Design. SAGE. ISBN:

9781412961271.

Sandeep, K., Sangeeta, S. and Gupta, J. (2012). A Novel Approach for Deriving Test

Scenarios and Test Cases from Events. Journal of Information Processing

Systems, 8(2), 213-240. doi: 10.3745/JIPS.2012.8.2.213.

Santosh, K. & Durga, P. (2010). Test Case Generation from Behavioural UML

Models. International Journal of Computer Applications, 6(8), 5-11.

Selvam, R. & Kartikeyani,V. (2011). Mobile Software Testing-Automated Test Case

Design Strategies. International Journal on computer Science and

engineering (IJCSE). 3(4), 1450-1461. ISSN: 0975-3397.

Shanthi, A. & Mohankumar, G. (2012). A Novel Approach for Automated Test Path

Generation Using TABU Search Algorithm. International Journal of

Computer Applications, 48(13), 222-224. ISSN 2250-0987.

Sheetal, S. & Joshi, S. (2012). Identification of Performance Improving Factors for

Web Application by Performance Testing. International Journal of

Emerging Technology and Advanced Engineering, 2(8), 433-436. ISSN

2250-2459.


Shilpa, S. & Meenakshi, S. (2014). Enhanced Grinder Framework with Scheduling

and Improved Agents. International Journal of Computer Science and

Information Technologies, 5(1), 55-59.

Shirole, M. & Kumar, R. (2013). UML Behavioural Model Based Test Case

Generation: A Survey. ACM SIGSOFT Software Engineering Notes, 38(4),

1–13.

Shivani, A. & Vidhi, P. (2013). Bridge between Black Box and White Box – Gray

Box Testing Technique. International Journal of Electronics and Computer

Science Engineering, 2(1), 175-185. ISSN- 2277-1956.

Siy, M., Zhijie, Q. and Lei, L. (2010). Research on Mobile Web Applications End to End Technology. Proceedings of the IEEE 10th International Conference on Computer and Information Technology (CIT), June 29 - July 1, 2010. Bradford: IEEE. 2061-2065. doi:10.1109/CIT.2010.350.

Song, B. & Chen, S. (2012). Coverage Criteria Guided Web Application Interactions

Testing. Proceedings of the IEEE 3rd

World Congress on Software

Engineering (WCSE), 6-8 Nov. 2012. Wuhan: IEEE. 46 – 50.

doi:10.1109/WCSE.2012.17.

Song, B., Gong, S. and Chen, S. (2011). Model Composition and Generating Tests for Web Applications. Proceedings of the Seventh IEEE International Conference on Computational Intelligence and Security, 2011.

Suhaila, M. & Wan, M. N. (2011). An Outlook of State-of-the-Art Approaches in

Functional Testing of Web Application. International MultiConference of

Engineers and Computer Scientists, 744 -749. ISBN 978-988182103-4.

Sumit, M. & Narendra, K. (2014). Model Based Testing Of Website. International

Journal on Computational Sciences & Applications (IJCSA), 4(1), 143-152.

doi:10.5121/ijcsa.2014.4114.

Swain, S.K., Pani, S.K. and Mohapatra, D.P. (2010). Model Based Object-Oriented

Software Testing. Journal of Theoretical & Applied Information

Technology, 14(1/2), 30-36.


Takala, T., Katara, M. and Harty, J. (2011). Experiences of System-Level Model-Based GUI Testing of an Android Application. Proceedings of the IEEE 4th International Conference on Software Testing, Verification, and Validation (ICST 2011), March 2011. Los Alamitos, CA, USA: IEEE. 377-386.

Tarkoma, S. (2009). Mobile Middleware: Architectures, Patterns, and Practice. UK: John Wiley & Sons.

Thirumalai, R. & Balasubramanian, N. (2013). Performance Measurement of Web

Applications Using Automated Tools. Proceedings of the International

MultiConference of Engineers and Computer Scientists, 13 – 15 March,

2013. Hong Kong: IMECS. ISBN: 978-988-19251-8-3.

Tobias, G. & Volker, G. (2014). A Model-Based Approach to Test Automation for

Context-Aware Mobile Applications. Proceedings of the ACM 29th

Annual

Symposium on Applied Computing. New York, NY, USA: ACM. 420-427.

doi:10.1145/2554850.2554942.

Vikas, S. & Rajesh, B. (2014). Model based Test Cases Generation for Web

Applications. International Journal of Computer Applications (0975 –

8887), 92(3), 23-31.

Vipin, S. & Ajay, P. (2012). Representation of Object-Oriented Database for the

Development of Web Based Application Using Db4o. Journal of Software

Engineering and Applications, 5(9), 687-694. doi: 10.4236/jsea.2012.59082.

Wang, L., Yuan, J., Yu, X., Hu, J., Li, X. and Zheng, G. (2004). Generating test

cases from UML activity diagram based on gray-box method. Proceedings

of the IEEE 11th Asia-Pacific Software Engineering Conference (APSEC),

30 Nov.-3 Dec. 2004. IEEE. 284–291. doi:10.1109/APSEC.2004.55.

Wasserman, A. (2010). Software Engineering Issues for Mobile Application

Development. Proceedings of the ACM FSE/SDP workshop on Future of

software engineering research. New York, NY, USA: ACM. 397–400.

doi:10.1145/1882362.1882443.

Weibiao, Z., Fang, H., Shuanshuan, B., Lei, D., Jie, F. and Ling, J. (2014). Performance Evaluation of Concurrent-access-intensive WebGIS Applications Based on CloudStack. International Journal of Advancements in Computing Technology (IJACT), 6(5), 41-52.

Yang, W., Prasad, M. and Xie, T. (2013). A Grey-Box Approach for Automated GUI-Model Generation of Mobile Applications. Proceedings of the 16th International Conference on Fundamental Approaches to Software Engineering (FASE 2013), Held as Part of the European Joint Conferences on Theory and Practice of Software (ETAPS), March 16-24, 2013. Rome, Italy: Springer. 250-265. doi:10.1007/978-3-642-37057-1_19.

Yasir, S. & Olvisa, D. (2011). Web application performance modeling using layered

queuing networks. Electronic notes in theoretical Computer science,

275,123-142.

Zeng, F., Chen, Z., Cao, Q. and Mao, L. (2009). Research On Method Of Object-

oriented Test Cases Generation Based on UML and LTS. Proceedings of the

IEEE 1st International Conference on Information Science and Engineering

(ICISE), 26-28 Dec. 2009. Nanjing: IEEE. 5055 – 5058.

doi:10.1109/ICISE.2009.965.

Zhang, G., Rong, M. and Zhang, J. (2007). A Business Process of Web Services

Testing Method Based on UML2.0 Activity Diagram. IEEE Workshop on

Intelligent Information Technology Application. Zhang Jiajie: IEEE. 59- 65.

doi:10.1109/IITA.2007.83.