scientiairanica.sharif.edu/article_4528_e96df86a6aae84ce3fb28a1163aceff6.pdf

Scientia Iranica D (2017) 24(6), 3132-3147

Sharif University of Technology
Scientia Iranica

Transactions D: Computer Science & Engineering and Electrical Engineering
www.scientiairanica.com

A novel approach to automatic model-based test case generation

A. Rezaee and B. Zamani*

MDSE Research Group, Department of Software Engineering, University of Isfahan, Isfahan, Iran.

Received 24 May 2016; received in revised form 31 December 2016; accepted 17 July 2017

KEYWORDS: Model-based testing; Automated testing; UML state machine; AMPL; Constraint solver.

Abstract. This paper proposes a new method for automatic generation of test cases using model-based testing. Class and state diagrams are used as the test model, and constraints are expressed in the Object Constraint Language (OCL). First, the state machine is converted into a mathematical representation in AMPL (A Mathematical Programming Language). Then, using a search algorithm guided by coverage criteria, abstract paths are selected from the state machine. Second, using symbolic execution, each generated abstract path, along with the constraints on that path, is converted into the data of the generated mathematical model. Third, the resulting mathematical problem is solved with solvers that have an interface with AMPL, and test data are produced for each abstract test case. Finally, the generated test data and abstract paths are transformed into executable test cases. The All-Transitions and All-States coverage criteria are used both to guide the search algorithm and to evaluate the quality of the generated test cases. To validate the work, test cases are generated for various problems using various solvers. The proposed technique is implemented as a tool named MoBaTeG. The tool shows good results in terms of test case generation time, test goal satisfaction rate, source code instruction coverage, and boundary value generation.

© 2017 Sharif University of Technology. All rights reserved.

1. Introduction

Given the complexity of today's computer systems, modeling is considered a necessity, particularly in software engineering. Unlike traditional software development processes that focus on the code, in Model Driven Development (MDD), the main artifact is the model that drives the process. The final goal of MDD is to automatically build software from the model [1].

*. Corresponding author. Tel.: +98-31-37934537; Fax: +98-31-36699529; E-mail addresses: [email protected] (A. Rezaee); [email protected] (B. Zamani)

doi: 10.24200/sci.2017.4528

The basis of all model driven approaches is model transformation [2]. This means that a model with a high level of abstraction is converted into another model with a lower level of abstraction and, eventually, into code. Model transformation is the distinguishing factor between the traditional and the MDD approaches.

Software testing is one of the important parts of any software development process. Approximately 50% of the project budget is spent on software testing [3]. Traditionally, testing is performed by a tester using manual approaches. Testing software systems in this manner is complex, time consuming, and error prone [4]. One of the systematic approaches to test case generation is Model-Based Testing (MBT). In MBT, the conformance of the System Under Test


(SUT) to the test model is evaluated. The test model can be created using one of the modeling languages: UML, automata, and Petri nets, to name a few [5].

In this work, a new method is presented for automatic generation of test cases using MBT techniques. In this approach, a UML class diagram is used for modeling the static dimension of the system, and a UML state machine is used for modeling its dynamic behavior. Constraints of the system are expressed using OCL. The OCL constraints can be used for defining the guards of the state machine's transitions, the state machine's state invariants, and the pre- and post-conditions of the class diagram's operations. The operations can be used as the effects of the UML state machine's transitions. The proposed method comprises several steps. First, the state machine is converted into a mathematical representation in AMPL (A Mathematical Programming Language). In this step, the AMPL model is generated using model transformations. Then, with a forward, depth-first search algorithm based upon the All-Transitions or All-States coverage criteria, abstract paths are selected from the state machine. Second, using symbolic execution, each generated abstract path and the constraints on its elements are converted into the data of the generated mathematical model. Third, the generated mathematical problem is solved with state-of-the-art solvers that have an interface with AMPL, and test data are produced for each abstract test case. Finally, the generated test data and abstract paths are transformed into executable Java unit tests.

It should be noted that each solver implements one or more specific algorithms or heuristics. These algorithms differ from solver to solver; hence, each solver can solve only certain types of problems. MoBaTeG can solve different types of problems using various solvers. Kurth [6] listed some of the AMPL solvers and the problem types they can solve. The following are examples of such solvers: Cplex, Minos, Gecode, Jacop, and Couenne.

In this work, the All-Transitions and All-States coverage criteria are used both to guide the search algorithm and to evaluate the quality of the generated test cases. In order to increase the error detection power of the algorithm, boundary value analysis is used for boundary test data generation. In this research, by utilizing various solvers, test cases are generated for various problems, e.g., linear, non-linear, decidable, and undecidable, that are defined by OCL constraints in the test model.

This paper is organized as follows. Section 2 covers the related work in the context of MBT and automatically generated test cases. Abstract test case generation using different algorithms, as well as test data generation using different technologies, will be discussed in this section. Section 3 describes

the proposed approach for automatic generation of test cases, as well as the algorithm, data structures, and meta-model used for building the tool, called MoBaTeG (Model-Based Test Generator). MoBaTeG is developed based upon ParTeG (both tools are available at http://parteg.sourceforge.net). In Section 4, we evaluate our proposed approach via several case studies; MoBaTeG is also compared with ParTeG. Section 5 contains the conclusion, in which we review the contributions of this research and address future work and threats to validity.

2. Related work

This section addresses the related work in the domain of automatic generation of test cases. Utting [5] categorized different approaches to MBT based upon several criteria, as indicated in Figure 1. His taxonomy was based on the modeling language used for building the test model, the different algorithms and techniques used for test case generation, and the ways the test cases are executed against the SUT. These criteria are shown in Figure 1 as the three main branches of the taxonomy. Since our research is focused on test generation, we only consider the second branch of the figure.

As illustrated in Figure 1, the "Test Generation" branch has two sub-branches. The first one ("Test Selection Criteria") categorizes the coverage criteria, which are used as the basis for the selection of abstract test cases. The second one ("Technology") categorizes several technologies for concrete test data generation.

In the following sections, we first describe the related work on abstract test case generation in Section 2.1. Then, the related work on concrete test data generation is described in Section 2.2.

Figure 1. Utting's taxonomy of different MBT approaches [5].


Finally, the differences between our tool, MoBaTeG, and two other tools are discussed in Section 2.3.

2.1. Abstract test case generation

The coverage criteria are used for guiding the search algorithm as well as for evaluating the quality of the generated test cases. After a coverage criterion is selected for test case generation, the abstract test case generation algorithm is executed, which tries to satisfy the chosen criterion. The coverage criteria satisfaction rate also determines the quality of the test cases generated by our approach.

Weißleder [7] proposed several coverage criteria based on data flow and model structure. In his approach, each coverage criterion is converted into a set of test goals. Then, a search algorithm is used to satisfy the generated test goals. The search algorithm traverses the state machine's transitions in a backward manner and considers guard conditions in order to satisfy them. Since a brute-force search of the search space was impossible, a data-flow-based approach was employed to direct the backward search algorithm towards data definitions. The proposed approach was implemented as a free and open source tool called ParTeG.

We propose a forward search algorithm over the state machine for abstract test case generation and test goal satisfaction. Our forward search algorithm traverses the state machine in a forward manner and, in each state, checks the satisfaction of unsatisfied test goals. In other words, our algorithm runs once and, for each found path, tries to satisfy the test goals, whereas the backward search proposed in [7] runs once per test goal. Furthermore, our algorithm supports composite states and the different ways through which the system can enter or exit a composite state. In contrast, some researchers proposed approaches that did not consider composite states and only handled restricted parts of the state machine, or required an additional step for flattening the state machine before executing the test case generation algorithm. We do not flatten state machines.

In [8,9], some graph search algorithms are introduced as a technique for finding proper test cases. Kurth [6] used depth-first and breadth-first search for abstract test case generation from UML activity diagrams. He also introduced two parameterized termination conditions to ensure the termination of the search algorithms: the user can specify the maximum number of test cases to be generated as well as the maximum model depth to be searched. He did not use coverage criteria and test goals for guiding the search. We design and implement a depth-first search algorithm with a forward approach on UML state machines. Furthermore, we

use structural coverage criteria to guide the search algorithm. The proposed search algorithm tries to satisfy the test goals related to the selected coverage criterion.

2.2. Concrete test data generation

As indicated in the "Technology" branch in Figure 1, several technologies can be used for test data generation: random test data generation from the feasible set, search-based test data generation, model checking, symbolic execution, theorem proving, and constraint solving. Random selection of test data cannot be considered a proper method for test data generation, because it may not satisfy all the constraints within an abstract path, and the generated test data may be of low quality. However, it can be used for comparison with other approaches.

Weißleder and Sokenou [10] proposed a data-oriented approach for generating test data. In this approach, they used abstract interpretation and partitioned the input data into several parts based on the data domain. Then, the boundary value of each part is selected as test data. In contrast to data partitioning for boundary test data generation, we use mathematical optimization for boundary value analysis. Also, we use symbolic execution for test data generation instead of data-oriented test data generation.

Ali et al. [11] defined some fitness functions based on OCL and generated test data using search algorithms. They used a genetic algorithm and implemented their approach as an OCL-based constraint solver. However, to date, this solver has not been made publicly available. Malburg and Fraser [12] used a mixed approach: they used a genetic algorithm for improving test data and, for guiding the genetic algorithm, special mutation operators performing dynamic symbolic execution. Their approach focused on white-box testing using the program's source code. Also, PEX [13] from Microsoft® can be used for white-box unit testing. It uses dynamic symbolic execution based on source code and a constraint solver for solving path conditions. We use static symbolic execution and do not need source code for test case generation. Hence, our tool can also be used for black-box testing.

Krieger and Knapp [14] proposed a technique for converting OCL constraints into Boolean formulae and specified some states of the system that could satisfy the constraints using SAT solvers. We model OCL constraints in the AMPL programming language. Hence, our approach is not limited to one special heuristic or solver, and we are able to use the several heuristics and solvers that have an interface with AMPL. Each solver is able to solve one or some special types


of problems. We are able to solve different types of problems using different solvers.

In [6], Kurth proposed an algorithm for converting an activity diagram, its control flow paths, and OCL constraints into a mathematical model in the AMPL modeling language. After generating the AMPL model, the test data are generated using solvers. His approach was implemented as an open source tool called AcT (Activity Tester).

We use an AMPL- and solver-based technique, too. The difference between our work and [6] is that we use a state machine as the input model, whereas in [6] an activity diagram was used as the test model. In [6], the OCL constraints could be used for defining the guards of control flows and the post-conditions of each activity. In our work, OCL constraints can be used for defining the guards of transitions, the pre- or post-conditions of the class diagram's operations (these operations can be used as the effects of state machine transitions), and state invariants. The semantics of these constraints are different from those used in [6]; hence, the algorithm for converting them into AMPL is also different.

2.3. Summary

To summarize the properties of our tool, MoBaTeG, versus ParTeG [7] and AcT [6], we have prepared Table 1. In this table, the three tools are compared from five aspects: test model, coverage criteria, abstract test case generation algorithm, test data generation, and boundary value analysis.

3. MoBaTeG: Model-Based Test Generator

In Model-Based Testing (MBT), after the model is built, it is used for test generation. Usually, the modeling is done manually by a modeler or tester using a modeling tool. Afterwards, the test

cases are generated automatically from the test model using a model-based test generation tool. In this paper, the test generation problem is converted into a mathematical representation in the AMPL language. Then, with a forward, depth-first search algorithm based upon the All-Transitions or All-States coverage criteria, abstract paths are selected from the state machine. Second, using symbolic execution, each generated abstract path and the constraints on its elements are converted into the data of the generated mathematical model. Third, the generated mathematical problem is solved with state-of-the-art solvers that have an interface with AMPL, and test data are produced for each abstract test case. Finally, the generated test data and abstract paths are transformed into executable test cases. An overview of our approach is shown in Figure 2. The four steps in this figure are described in Sections 3.1 to 3.4, respectively.

3.1. Normalization

This step accepts a UML model as input and generates a test case graph as output. The goal of this part is checking the correctness of the OCL constraints. If all constraints are syntactically correct and supported by our approach, an abstract syntax tree is generated for them. Also, we copy the required parts of the input UML model and the related OCL constraints into an internal structure called the test case graph. We then use this structure for test case generation and discard the input UML model. Also, in this part, we transform the coverage criteria into test goals. All of these tasks are performed using the ParTeG tool. It should be noted that we have modified the internal structure of this tool in some parts so that it is able to support our approach's requirements. A partial meta-model of the test case graph is shown in Figure 3.

Table 1. Comparing MoBaTeG with ParTeG [7] and AcT [6].

- Test model: ParTeG: UML state machine and class diagram; AcT: UML activity and class diagram; MoBaTeG: UML state machine and class diagram.
- Coverage criteria: ParTeG: structural-based and data-flow-based; AcT: none; MoBaTeG: structural-based.
- Abstract test case generation algorithm: ParTeG: backward search; AcT: backward and forward search; MoBaTeG: forward search.
- Test data generation: ParTeG: abstract interpretation; AcT: symbolic execution; MoBaTeG: symbolic execution.
- Boundary value analysis: ParTeG: data partitioning; AcT: mathematical optimization; MoBaTeG: mathematical optimization.

Figure 3. Partial test case graph meta-model (adapted from [7]).

The properties of the TCGTransition class are the properties added to the test case graph meta-model in this paper. Note that ParTeG does not support multiple pre- or post-conditions; in our approach, however, we support several pre- or post-conditions for each transition. The guard constraint of a UML state machine transition and the pre-condition(s) of the operation called in the effect of that transition, if any, are converted into transition pre-condition(s) in the test case graph. Likewise, the post-condition(s) of the operation called in the effect of a UML state machine transition, if any, are converted into transition post-condition(s) in the test case graph.

In the proposed approach, the Integer, Real, and Boolean data types are supported. We also support if-then-else statements for defining OCL constraints. Table 2 shows the operations supported in this approach.

3.2. Mathematical programming

In this step, we transform the input model into a mathematical representation in AMPL. This part accepts the test case graph as input and generates an AMPL model. In this section, we describe the transformation of the test case generation problem into a mathematical constraint satisfaction problem. As illustrated in Figure 2, this part consists of transforming model variables into AMPL variables, transforming the pre-conditions of state machine transitions into AMPL constraints, transforming the post-conditions of state machine transitions into AMPL constraints, adding continuity constraints to the AMPL model, and transforming state invariants into AMPL constraints. Each of these steps is described in the following sections.

Table 2. Supported operations for defining OCL constraints.

- Mathematical operations: -, +, *, /
- Relational operations: <, <=, >, >=, =, <>
- Boolean operations: and, or, not, xor, implies

3.2.1. Transforming test model variables into AMPL variables

In this step, all of the test model's variables involved in defining the OCL constraints are identified. For this purpose, we design and implement a model visitor that extracts all variables and parameters used in OCL constraints; these variables are then transformed into AMPL variables.

The data structure used for holding variables is shown in Figure 4. The model visitor takes the test case graph, traverses all of its transitions, and visits all of its states. If the model visitor finds any constraint in any of these elements, it extracts the constraint and passes it to the constraint visitor. The constraint visitor accesses the atomic elements of a constraint by traversing the constraint's abstract syntax tree with regard to the constraint type. Then, by checking these atomic elements, it recognizes the parameters and variables

Figure 4. The data structure for storing data (from [6]).


and copies them into the data structure shown inFigure 4.
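The two-visitor extraction described above can be sketched as follows. All class, field, and method names here are illustrative assumptions; the paper does not show MoBaTeG's internal API, only the roles of the two visitors.

```python
# Sketch of the model-visitor / constraint-visitor pair described above.
# The test case graph and constraint ASTs are modeled as plain dicts.

class TCGVariable:
    """A variable or parameter found in an OCL constraint."""
    def __init__(self, name, var_type, is_parameter=False):
        self.name = name                  # e.g. "a"
        self.var_type = var_type          # "Integer", "Real", or "Boolean"
        self.is_parameter = is_parameter  # constant during execution if True

class ConstraintVisitor:
    """Walks a constraint's abstract syntax tree and records every
    variable or parameter it encounters."""
    def __init__(self):
        self.variables = {}

    def visit(self, node):
        if node.get("kind") == "variable":
            self.variables.setdefault(
                node["name"],
                TCGVariable(node["name"], node["type"],
                            node.get("is_parameter", False)))
        for child in node.get("children", []):
            self.visit(child)

class ModelVisitor:
    """Traverses all transitions and states of the test case graph and
    hands every constraint it finds to the constraint visitor."""
    def __init__(self):
        self.constraint_visitor = ConstraintVisitor()

    def collect(self, graph):
        for element in graph["transitions"] + graph["states"]:
            for constraint in element.get("constraints", []):
                self.constraint_visitor.visit(constraint)
        return self.constraint_visitor.variables
```

For a guard such as "a >= 12", the constraint visitor records a single Integer variable "a"; every collected entry is later turned into an AMPL variable.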

After determining the system's parameters and variables, it is necessary to convert them into AMPL. In the AMPL model, the type of each variable can be Integer, Real, or Boolean. The type of an AMPL variable is Real by default. We need to define upper and lower bounds for Integer and Real variables; we set the lower and upper bounds of these variables to -10000 and +10000, respectively. Without defined bounds on the data types, some solvers cannot find a correct answer to the problem at hand. Furthermore, we use 0 and 1 as the lower and upper bounds of the Boolean data type, equivalent to "false" and "true," respectively.

Since a variable can have a different value in each level of execution, we define each variable as an array of variables with constant length, i.e., "pathlength," in AMPL. The "pathlength" parameter specifies the number of execution levels in one abstract path; it is declared in the model section of AMPL and its value is determined in the data section. If the "isParameter" property of a TCGVariable is "true," then the TCGVariable is a parameter. In this case, its value is constant during execution, so we define it as a single variable.
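The declaration step above can be sketched as a small emitter. The paper states the conventions (bounds of -10000/+10000, 0/1 Booleans, one array cell per execution level) but not the exact text MoBaTeG writes, so the emitted AMPL lines below are an assumption that follows those conventions.

```python
def emit_ampl_variable(name, var_type, is_parameter):
    """Emit an AMPL declaration for one test model variable: Integer and
    Real variables are bounded by [-10000, 10000], Booleans by [0, 1], and
    non-parameter variables become arrays indexed by execution level."""
    if var_type == "Boolean":
        attrs = ">= 0, <= 1, integer"
    elif var_type == "Integer":
        attrs = ">= -10000, <= 10000, integer"
    else:  # Real
        attrs = ">= -10000, <= 10000"
    if is_parameter:
        # a parameter keeps one constant value during execution
        return f"var {name} {attrs};"
    # an ordinary variable gets one value per execution level
    return f"var {name} {{0..pathlength}} {attrs};"

print(emit_ampl_variable("a", "Integer", False))
# var a {0..pathlength} >= -10000, <= 10000, integer;
```

A matching `param pathlength;` declaration in the model section, with its value supplied in the data section, completes the picture.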

3.2.2. Transforming pre-conditions of state machine transitions into AMPL

For each transition in the state machine that has at least one pre-condition, we define one activation set with a random name and copy this name into the "PreSetName" property of the current transition. The activation set specifies at which levels of execution the pre-condition must be considered. The activation set is defined in the AMPL model as a subset of {0, ..., pathlength} and is empty by default.

The pre-conditions of the current transition are defined as a set of constraints indexed over the activation set. In AMPL, each constraint must have a unique name; we use a random name with a "_pre" suffix and a number provided by a counter as the name of each pre-condition. Also, we use the constraint visitor to traverse the abstract syntax tree of each pre-condition and extract the pre-condition as a string. For each atomic constraint in the abstract syntax tree, if the atomic constraint is a parameter, we only need to copy the name of this parameter into the AMPL model. If the atomic constraint is a variable, we need to specify the index of the variable: if the variable does not carry the "@pre" label, we access its value in the current level of execution, and if it does carry the "@pre" label, we access its value in the previous level of execution.
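This indexing rule can be sketched as follows, assuming the pre-condition has already been flattened into atomic pieces by the constraint visitor. The set and constraint names follow the naming scheme in the text; everything else is illustrative, not MoBaTeG's actual output.

```python
def emit_precondition(pre_set_name, atoms, counter):
    """Emit one pre-condition as an AMPL constraint indexed over the
    transition's activation set. `atoms` is a list of (kind, text) pairs:
    plain variables are indexed at the current level i, variables labeled
    '@pre' at level i-1, and parameters, operators, and literals are
    copied verbatim."""
    rendered = []
    for kind, text in atoms:
        if kind == "variable":          # value at the current execution level
            rendered.append(f"{text}[i]")
        elif kind == "variable@pre":    # value at the previous execution level
            rendered.append(f"{text}[i-1]")
        else:                           # parameter, operator, or literal
            rendered.append(text)
    constraint_name = f"{pre_set_name}_pre{counter}"
    return (f"subject to {constraint_name} {{i in {pre_set_name}}}: "
            + " ".join(rendered) + ";")

# the guard "a >= 12" on a transition whose activation set is named "tset1":
print(emit_precondition("tset1",
                        [("variable", "a"), ("op", ">="), ("literal", "12")],
                        1))
# subject to tset1_pre1 {i in tset1}: a[i] >= 12;
```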

3.2.3. Transforming post-conditions of state machine transitions into AMPL

The transformation of transitions' post-conditions is similar to that of pre-conditions. The only difference is that, for some transitions, we need to add continuity constraints, as described in the next section.

3.2.4. Adding continuity constraints to the AMPL model

In a transition's pre-conditions, the values of variables are determined based upon the current execution level. If the transition has post-conditions, the execution level of the algorithm increases by one unit, and the constraints must be evaluated based upon the values of the variables in the new level of execution, except for variables carrying the "@pre" label. For each variable that participates in the post-conditions without an "@pre" label, a new value must be set so that all constraints in the current execution level evaluate to "true." If some of the model variables do not participate in describing the post-conditions, or all of the participating variables carry the "@pre" label, we need to add a continuity constraint for them. The continuity constraint says that the variable must hold its previous value. For example, for variable "x", the continuity constraint is "x = x@pre" in OCL; the AMPL equivalent of this constraint is "x[i] = x[i-1]". Each continuity constraint is defined as a constraint indexed over the activation set of the transition's post-conditions ("PostSetName"). To ensure the uniqueness of the constraint's name, we use the "PostSetName" property of the current transition with a "_post" suffix and a number provided by a counter.
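A minimal sketch of this step, under the same assumptions as the previous snippets (the naming scheme comes from the text; the function shape is illustrative):

```python
def emit_continuity(post_set_name, model_variables, assigned, counter):
    """Emit continuity constraints for a transition's post-condition step:
    every model variable that is not assigned a new value by the
    post-conditions must keep its previous value, i.e. x[i] = x[i-1] on
    the post-conditions' activation set."""
    lines = []
    for var in model_variables:
        if var in assigned:
            continue  # the post-conditions already fix this variable's value
        name = f"{post_set_name}_post{counter}"
        counter += 1
        lines.append(f"subject to {name} {{i in {post_set_name}}}: "
                     f"{var}[i] = {var}[i-1];")
    return lines

# an effect that assigns nothing leaves every variable unchanged:
for line in emit_continuity("pset2", ["a"], set(), 1):
    print(line)
# subject to pset2_post1 {i in pset2}: a[i] = a[i-1];
```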

3.2.5. Transforming state invariants into AMPL constraints

In a UML state machine, each state can have a state invariant. While the system is in a state, that state's invariant must be "true." To convert state invariants into AMPL constraints, the implemented algorithm takes the test case graph as input and visits all states. If a state is an instance of "TCGRealNode," it can have a state invariant; if it does, this constraint is transformed into an AMPL constraint. If the state does not have a name, a random name is assigned to it; this name is used for the name of the state invariant's activation set. Then, the constraint is defined as a set indexed over the activation set. The name of the constraint is built by appending the "_invariant" suffix to the name of the state.

3.3. Abstract test cases and concrete test data generation

Abstract test cases are paths in the test model that are selected according to specific criteria. We use structural coverage criteria for guiding the test generation algorithm. After the test goals are


generated according to these coverage criteria using ParTeG, we use a depth-first, forward search algorithm to extract abstract paths from the test model. Each abstract path satisfies at least one test goal. The search algorithm designed and implemented in this paper starts execution from the initial state of the state machine and traverses the state machine's transitions in depth using a forward search approach. On visiting each state, the algorithm checks whether any of the test goals can be satisfied by the current abstract path. Furthermore, the satisfiability of the constraints within the path is checked. For this purpose, the current abstract path is converted into AMPL data using symbolic execution. Then, we try to solve the AMPL model using different state-of-the-art solvers that have an interface with AMPL. If the solver cannot solve the constraints within the abstract path, that path is recognized as an "Infeasible Path," and the search algorithm must backtrack and continue execution on another unvisited path. If the solver can solve all the constraints within the path, the search algorithm continues execution along that path in depth. Furthermore, if this abstract path satisfies at least one not-yet-satisfied test goal, the data generated by the solver are used as test data.

To ensure that the algorithm terminates and is not trapped in an infinite loop, the algorithm uses some fuel as a search limit. The fuel determines how many transitions the algorithm is allowed to traverse. If the algorithm runs out of fuel in some state, the search algorithm backtracks and continues execution on another unvisited path. For each backtrack, the fuel of the algorithm is increased by one unit.
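The forward depth-first search with fuel and infeasible-path pruning can be sketched as follows. Here `solve(path)` stands in for the whole symbolic-execution/AMPL round trip (it returns test data, or None for an infeasible path), and `goals` maps goal names to predicates over the abstract path; this is an illustrative sketch under those assumptions, not MoBaTeG's actual implementation, which works on the test case graph.

```python
def generate_test_cases(start, outgoing, goals, solve, fuel):
    """Forward depth-first search with a fuel limit. `outgoing` maps each
    state to its (transition, target) pairs. Infeasible paths prune their
    whole subtree, and each backtrack refunds one unit of fuel."""
    test_cases = []                       # (abstract path, test data) pairs
    unsatisfied = set(goals)

    def visit(state, path, fuel):
        data = solve(path)
        if data is None:                  # infeasible: prune the whole subtree
            return fuel + 1               # backtrack refunds one unit
        for goal in list(unsatisfied):
            if goals[goal](path):         # this path satisfies a new goal
                unsatisfied.discard(goal)
                test_cases.append((list(path), data))
        if fuel <= 0:                     # search limit reached: backtrack
            return fuel + 1
        for transition, target in outgoing.get(state, []):
            path.append(transition)
            fuel = visit(target, path, fuel - 1)
            path.pop()
        return fuel

    visit(start, [], fuel)
    return test_cases

# a two-branch machine where every path through "t2" is infeasible:
outgoing = {"S0": [("t1", "S1"), ("t2", "S2")]}
goals = {"reach_S1": lambda path: "t1" in path}
solve = lambda path: None if "t2" in path else {"a": 12}
print(generate_test_cases("S0", outgoing, goals, solve, fuel=10))
# [(['t1'], {'a': 12})]
```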

The search algorithm supports composite states and the different ways the system can enter or exit a composite state. We consider three different ways for entering a composite state as well as for exiting it. The three ways for entering a composite state are described below.

1. The target state of the transition is an entry point, and with another transition, the system enters the composite state;

2. The target state of the transition is a composite state, and after that, the system must continue execution from the initial state of each of the composite state's regions;

3. The source of the transition is outside and the target of the transition is inside the composite state.

The three ways for exiting a composite state are as follows:

1. The target state of the transition is an exit point, and by another transition, the system exits the composite state;

2. The target state of the transition is the final state of the composite state, and after that, the system must continue execution for each transition whose source state is the composite state;

3. The source of the transition is inside and the target of the transition is outside the composite state.

We consider and handle all of these ways of entering or exiting a composite state. We describe three features of our algorithm for abstract path and test data generation in the next three sections.

3.3.1. Infeasible path detection

One of the important features of the presented algorithm is its capability of infeasible path detection. An infeasible path is a path for which no data can satisfy all of the constraints along the path. For example, if an abstract path contains a >= 12 and a <= 5 as constraints, and there is no post-condition between these two constraints that changes the value of a, these constraints cannot be satisfied; thus, this abstract path is recognized as an "Infeasible Path" and the search algorithm must backtrack. By detecting such paths and preventing the search algorithm from progressing along them, most of the search space can be eliminated, which lowers the execution time of the algorithm.

As an example, consider the state machine in Figure 5, which includes an infeasible path indicated in red. This path is infeasible since no test data can satisfy it. Each path needs to start with a value for variable "a" that is greater

Figure 5. Feasible and infeasible paths.

Page 9: A novel approach to automatic model-based test case generationscientiairanica.sharif.edu/article_4528_e96df86a6aae84ce3fb28a1163aceff6.pdf · using manual approaches. Testing software

3140 A. Rezaee and B. Zamani/Scientia Iranica, Transactions D: Computer Science & ... 24 (2017) 3132-3147

than or equal to 12. With this value, the transition from "State0" to "State1" can be satisfied. If "Event1" occurs, the third transition of the green path will be satisfied and "Operation1" will be run as an effect of the transition. Calling "Operation1" will set variable "a" to 3, based upon the post-condition of "Operation1". However, the occurrence of "Event2" leads to traversing the third transition of the red path and results in calling "Operation2". In this case, the value of variable "a" is not changed. The guard condition of the fourth transition in both paths (red and green) is defined as "a <= 5". In the third transition of the green path, the value of variable "a" is set to 3. Hence, the fourth transition can be satisfied. On the other side, the third transition of the red path does not change the value of variable "a", and it keeps its previous value (a >= 12). Therefore, the guard condition of the fourth transition cannot be satisfied. This is why the red path is an infeasible path.

If it is impossible to satisfy a path, all of the extensions of that path will be infeasible, too. With an increase in the depth of the search path, the number of detected abstract paths grows exponentially. For a large state machine in which each state has two outgoing transitions and the depth of the state machine is 20, the total number of abstract paths will be 2^20 = 1,048,576. If one abstract path of length 6 is recognized as an infeasible path, 2^14 = 16,384 abstract paths will be removed from the search space. Thus, with the "Infeasible Path Detection" feature of the search algorithm, we avoid searching many useless paths.
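As a minimal sketch of the idea (not MoBaTeG's actual implementation), infeasible-path detection for a single integer variable whose path constraints are simple bounds and post-condition assignments can be written as follows; the class and method names are illustrative:

```java
// Illustrative sketch of infeasible-path detection for one integer
// variable, assuming every constraint along the path is a lower bound,
// an upper bound, or an assignment coming from a post-condition.
class IntervalTracker {
    long lo = Long.MIN_VALUE;  // tightest lower bound seen so far
    long hi = Long.MAX_VALUE;  // tightest upper bound seen so far

    void addLowerBound(long v) { lo = Math.max(lo, v); }  // e.g. guard a >= 12
    void addUpperBound(long v) { hi = Math.min(hi, v); }  // e.g. guard a <= 5
    void assign(long v)        { lo = v; hi = v; }        // post-condition a = v

    // An empty interval means no test data can satisfy the path,
    // so the search algorithm should backtrack immediately.
    boolean infeasible() { return lo > hi; }
}
```

On the red path of Figure 5, the tracker sees a >= 12 followed by a <= 5 with no assignment in between, so the interval becomes empty and the path (with all of its extensions) is pruned; on the green path, the assignment a = 3 resets the interval and the path remains feasible.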

3.3.2. Warm start
Warm start is a feature of the AMPL programming language and of most of its solvers. If a problem is solved several times and different data are used in each execution, the knowledge gained in previous executions can be utilized to solve the problem faster; this feature is called "Warm Start." When using the "Warm Start" feature, the AMPL model must be fixed and the AMPL data must change in each execution. We achieve this by transforming the state machine into an AMPL model and, for each abstract path, generating AMPL data using symbolic execution. Therefore, the AMPL model is fixed and the AMPL data are reset in each execution.

Also, if the solver is changed between executions, or the solver cannot gain any knowledge from previous executions, AMPL saves the last solution. Many algorithms can use the last solution as a starting point. Furthermore, in this paper, by utilizing a depth-first search algorithm, the newly found abstract path is, in most cases, an extension of the last found path. Hence, the previous solution can be very useful, because the new path only has a few additional constraints compared to the last found path.

3.3.3. Boundary value analysis
Any data that satisfy all of the constraints within an abstract path can be used as test data. However, data at the boundary of the feasible set can be more useful than other data inside this set. Therefore, we use boundary value analysis for boundary data generation in our work.

If all constraints of the test model are linear, using the simplex algorithm leads to automatic generation of boundary values. The simplex algorithm is a well-known algorithm used by many solvers for linear problems. However, if some of the constraints of the test model are non-linear and the problem does not include any objective function, the solver returns the first solution it finds. Such a solution does not guarantee the generation of boundary test data. Hence, before calling the solver, we add a proper objective function that leads to boundary data generation.
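As a hypothetical single-variable illustration: suppose the only constraint collected along an abstract path is a >= 12. Without an objective, the solver may return any feasible value (e.g., a = 57). Adding the objective

    minimize a
    subject to a >= 12

drives the solver to the optimum a = 12, a value on the boundary of the feasible set; maximizing instead would target the opposite boundary when one exists.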

3.4. Unit test generation
As described in Section 3.3, the abstract paths are generated according to coverage criteria, and the test data are generated for each abstract path using the AMPL programming language and related solvers. The generated abstract test cases as well as the concrete test data are stored in data structures. Figure 6 shows this data structure.

As shown in Figure 6, a test suite may contain zero or more test cases, and each test case contains one abstract path as well as zero or more variables with their values. Furthermore, each abstract path contains several transitions. Each transition has one target state and may have several pre- and post-conditions. Also, each transition may or may not have some events, with or without parameters. Furthermore, each state may or may not have one state invariant. The test suite is associated with the SUT, which has two properties that specify the class and package on which the test must be executed. Finally, the function that handles the events is specified in the SUT class.
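The structure described above can be sketched as plain Java data classes; the class and field names below are illustrative assumptions and do not reflect MoBaTeG's actual source code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the unit test meta-model of Figure 6.
class TestSuite {
    SUT sut;                                       // the system under test
    List<TestCase> testCases = new ArrayList<>();  // zero or more test cases
}

class SUT {
    String testClass;    // class on which the test must be executed
    String testPackage;  // package containing that class
}

class TestCase {
    AbstractPath path;                              // exactly one abstract path
    List<Variable> variables = new ArrayList<>();   // concrete test data (may be empty)
}

class Variable {
    String name;
    String value;
}

class AbstractPath {
    List<Transition> transitions = new ArrayList<>();  // several transitions
}

class Transition {
    State target;                                      // one target state
    List<String> preConditions = new ArrayList<>();    // zero or more
    List<String> postConditions = new ArrayList<>();   // zero or more
    List<Event> events = new ArrayList<>();            // may be empty
}

class State {
    String name;
    String invariant;  // optional state invariant (null if absent)
}

class Event {
    String name;
    List<String> parameters = new ArrayList<>();  // optional parameters
}
```

A test generator would fill one `TestSuite` per run: one `TestCase` per satisfied abstract path, with the solver's output stored in `variables`.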

4. Evaluation of MoBaTeG

The proposed approach for test case generation is implemented as an Eclipse-based, open source tool, named MoBaTeG, using the Java programming language. We use five case studies for evaluating the MoBaTeG tool. Using different solvers is one of the main advantages of our work. The evaluation framework is described in Section 4.1, and the evaluation of MoBaTeG is given in Section 4.2.

4.1. Evaluation framework
We have evaluated MoBaTeG from different aspects using five case studies. All of the evaluations have been executed on a computer with an Intel® Core™ 2


Figure 6. Unit test meta-model.

Duo processor, 4 GB of memory, and the 64-bit version of the Microsoft® Windows 7 operating system. In the following sections, we will describe the evaluation criteria.

4.1.1. Test case generation execution time
The test case generation execution time is one of the main criteria for automatic testing of software systems. A lower test case generation execution time can decrease the test case generation cost. The execution time of the presented approach depends upon two main factors: the required time for solving a problem and the number of problems to be solved.

The test case generation execution time is calculated using different solvers. The test case generation algorithm will be executed 20 times for each coverage criterion and for each solver. Then, the average test case generation execution time will be calculated.

4.1.2. Test goals satisfaction rate
The coverage criteria can be used as the main criteria for evaluating the quality of the generated test cases; i.e., the percentage of test goals satisfied using different solvers will be used for evaluating the quality of the generated test cases. We have used two kinds of coverage criteria: All-Transitions and All-States.

The quality of the generated test cases that will be evaluated by this criterion depends upon the algorithm that is designed for abstract test case generation, the algorithm for transforming the test case generation problem into a mathematical problem, and the ability of the used solver to solve the generated mathematical problem.

4.1.3. The coverage of source code instructions
The coverage of source code instructions is another criterion that can be used for evaluating the quality of the generated test cases. Higher coverage of source code instructions indicates that the quality of the generated test cases is better. EclEmma [15] is used for calculating the coverage of the generated test cases against the source code. EclEmma is a tool for calculating the coverage of Java source code. In this evaluation criterion, source code instructions refer to Java bytecode instructions.

4.1.4. Comparing with ParTeG
ParTeG is the only available MBT tool that uses a UML class diagram and a UML state machine annotated with OCL constraints as the test model. Hence, we will compare our tool, MoBaTeG, with ParTeG. The comparison between MoBaTeG and ParTeG will be done based on the "Test Case Generation Execution Time," "Test Goals Satisfaction Rate," and "The Coverage of Source Code Instructions" criteria.

4.1.5. Boundary value analysis
Generating boundary values will increase the quality of the generated test cases and the fault detection power of such test cases. Hence, the boundary values of the generated test data will be evaluated.

4.1.6. Using different problems
A good solution must be able to solve problems of different complexity. Therefore, the ability of the proposed approach for test case generation will be evaluated using five different problems, which differ in terms of complexity. We will use linear and non-linear problems as well as decidable and undecidable problems.

4.2. Case studies
Five case studies have been used in this section for evaluating our tool, MoBaTeG. The first two problems are two versions of a triangle classifier. The classifier gets three numbers as the sizes of a triangle's edges. Then, it


classifies the triangle under one of the titles: "invalid," "scalene," "isosceles," and "equilateral." For modeling the first problem (version one of the classifier), we do not use any Boolean operator in the definition of the OCL constraints. For the second problem (version two of the classifier), we use a Boolean operator in the definition of the OCL constraints.

The motivation for using the triangle classifier with and without Boolean operations is that the generated mathematical problems related to these two case studies fall into two different kinds of problems in terms of complexity. The generated mathematical problem of "Triangle Classifier with Boolean Operations" is a sample of Satisfiability Modulo Theories (SMT) problems, and the generated mathematical problem of "Triangle Classifier without Boolean Operations" is a sample of Integer Linear Programming (ILP) problems. Each of these problems needs a different effort for solving. Hence, generating test cases using these case studies shows the ability of our approach to solve these two different kinds of problems.

The third problem models the behavior of an air pump and shows the relationship between "air," "temperature," "volume," and "pressure." The ideal gas equation has been used for describing the relation between these variables. The class diagram and its operations' pre- and post-condition(s) for this case study are shown in Figure 7. The generated mathematical problem related to this case study is a sample of Mixed Integer Non-Linear Programming (MINLP) problems.
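For reference, the ideal gas equation relates these quantities as

    P * V = n * R * T

where P is the pressure, V the volume, T the absolute temperature, n the amount of gas, and R the universal gas constant; the product of the variables P and V is what makes the resulting constraints non-linear.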

The fourth problem models the behavior of an ATM system. This case study contains 4 composite states; 2 out of the 4 composite states have an entry point pseudo-state for entering the composite state. To generate test data for the ATM case study, a sample of SMT problems needs to be solved.

The last problem models the behavior of a Continuous Casting Machine (CCM). A CCM runs the process whereby molten steel is solidified into a billet, bloom, or slab.

Table 3 shows the specifications of our case studies. The first two problems are samples of linear and decidable problems. The third problem is neither linear nor decidable. The fourth and fifth problems are both linear and decidable. In the state machines of the case studies, the numbers of states are 17, 10, 20, 47, and 25, respectively. Also, the numbers of transitions are 30, 12, 21, 60, and 29, respectively. The test data generation problem complexity for all of the generated

Figure 7. The class diagram and operations' pre- and post-conditions.


Table 3. Specifications of five case studies used for evaluating MoBaTeG.

#  Problem                                         Linear  Decidable  States  Transitions  Problem type  Complexity  Data types
1  Triangle classifier without Boolean operations  Yes     Yes        17      30           ILP           NP-hard     Integer
2  Triangle classifier with Boolean operations     Yes     Yes        10      12           SMT           NP-hard     Integer
3  Air pump                                        No      No         20      21           MINLP         NP-hard     Integer, Real, Boolean
4  ATM                                             Yes     Yes        47      60           SMT           NP-hard     Integer, Boolean
5  CCM                                             Yes     Yes        25      29           ILP           NP-hard     Integer, Boolean

mathematical problems related to the case studies is NP-hard. In the first two problems, only the Integer data type is used, whereas the third problem uses three data types in its model, namely, Integer, Real, and Boolean. Finally, the last two problems use the Integer and Boolean data types for modeling.

4.2.1. Test case generation execution time
The average execution time of test case generation for both the MoBaTeG and ParTeG tools has been calculated based on 20 executions. We consider the All-Transitions and All-States coverage criteria. The average execution times are shown in Table 4. Two solvers have been used for calculating the average execution time of the MoBaTeG tool for each problem.

For the first problem, the execution time of ParTeG is better than that of both of the solvers used by MoBaTeG. In the next section, we will see that, for the first problem, ParTeG cannot generate any test data; this is why the execution time of this tool is low. For the second problem, the execution time of ParTeG is significantly higher than that of MoBaTeG for both solvers. In the class diagram of the third problem, there are some operations that have several post-conditions. As mentioned in Section 3.1, ParTeG does not support several pre- or post-conditions. Thus, for this problem, ParTeG considers only one of the post-conditions of each operation. Despite this, ParTeG cannot generate any test case for the considered constraints. However, our tool, MoBaTeG, generates test cases within an appropriate execution time. Therefore, this problem shows the superiority of MoBaTeG over ParTeG.

In the fourth case study, we use GeCoDE and

Table 4. The test case generation execution time of MoBaTeG and ParTeG.

                                                              Execution time (ms) of MoBaTeG     Execution time (ms) of ParTeG
#  Problem                                         Solver     All-States   All-Transitions       All-States   All-Transitions
1  Triangle classifier without Boolean operations  LpSolve    312          329                   171          241
                                                   Minos      307          363
2  Triangle classifier with Boolean operations     GeCoDE     242          526                   15484        15849
                                                   JaCoP      2019         2366
3  Air pump                                        Bonmin     8117         8291                  Not supported
                                                   Couenne    9537         11897
4  ATM                                             GeCoDE     1667         1739                  Not supported
                                                   IlogCP     221          171
5  CCM                                             GeCoDE     1660         1655                  Not supported
                                                   JaCoP      28679        30207


IlogCP for solving the mathematical problem and generating test data. ParTeG cannot generate any test cases for this case study. When using the All-States coverage criterion, the execution time of MoBaTeG using the GeCoDE and IlogCP solvers is 1667 ms and 221 ms, respectively. These values are 1739 ms and 171 ms when using the All-Transitions coverage criterion. Therefore, MoBaTeG generates test cases in appropriate time.

In the last case study, when using the All-States coverage criterion, the execution time of MoBaTeG using the GeCoDE and JaCoP solvers is 1660 ms and 28679 ms, respectively. These values are 1655 ms and 30207 ms when using the All-Transitions coverage criterion. ParTeG cannot generate any test cases for this case study either. The superiority of MoBaTeG over ParTeG is again shown in this evaluation.

4.2.2. Test goals satisfaction rate
The test goals satisfaction rate has been calculated for both the MoBaTeG and ParTeG tools, for each problem and coverage criterion. This evaluation reflects the power of the abstract test case generation algorithm, the proposed algorithm for transforming test case generation problems into mathematical ones, and the solvers that solve the generated mathematical problems.

The test goals satisfaction rates for all of the case studies are shown in Table 5. We have used two solvers for solving each problem. The first problem is solved using the MoBaTeG tool with both the LpSolve and Minos solvers. This evaluation shows 100% satisfaction for both the All-States and All-Transitions coverage criteria. However, these values for the ParTeG tool are 12.5% and 7.14%. For the second problem, when MoBaTeG is used for test case generation and the selected solver is GeCoDE, all of the test goals generated using All-Transitions and All-States are satisfied (i.e., the result is 100%). However, using the All-States and All-Transitions coverage criteria for MoBaTeG with the JaCoP solver leads to 87.5% and 90.90% satisfaction, respectively. These values are 100% and 90.90% for ParTeG. In this problem, MoBaTeG and ParTeG have similar behavior. While ParTeG cannot generate any test cases for the third problem, MoBaTeG satisfies all of the generated test goals for both coverage criteria when using the Couenne solver. The results for the All-States and All-Transitions coverage criteria for the MoBaTeG tool with the Bonmin solver are 85.71% and 100%, respectively. As mentioned in Section 4.2.1, ParTeG cannot generate any test cases for the fourth case study (ATM). Therefore, the test goals satisfaction rate of ParTeG is 0% for both the All-States and All-Transitions coverage criteria. However, MoBaTeG satisfies 92% of the test goals using the All-States coverage criterion with both solvers. The test goals satisfaction rate of MoBaTeG when using GeCoDE and IlogCP for the All-Transitions coverage criterion is 79% and 20%, respectively. In the last case study, using the MoBaTeG tool with both the GeCoDE and JaCoP solvers results in 100% satisfaction for the All-States coverage criterion and 93.1% satisfaction for the All-Transitions coverage criterion. However, ParTeG cannot generate any test cases.

4.2.3. The coverage of source code instructions
After the test cases have been generated, they are run against the source code, and the source code instructions

Table 5. The test goals satisfaction rates of MoBaTeG and ParTeG.

                                                              Satisfaction rate of MoBaTeG      Satisfaction rate of ParTeG
#  Problem                                         Solver     All-States   All-Transitions      All-States   All-Transitions
1  Triangle classifier without Boolean operations  LpSolve    100%         100%                 12.5%        7.14%
                                                   Minos      100%         100%
2  Triangle classifier with Boolean operations     GeCoDE     100%         100%                 100%         90.90%
                                                   JaCoP      87.5%        90.90%
3  Air pump                                        Bonmin     85.71%       100%                 Not supported
                                                   Couenne    100%         100%
4  ATM                                             GeCoDE     92%          79%                  Not supported
                                                   IlogCP     92%          20%
5  CCM                                             GeCoDE     100%         93.1%                Not supported
                                                   JaCoP      100%         93.1%


Table 6. The coverage of source code instructions using MoBaTeG and ParTeG.

                                                   MoBaTeG                              ParTeG
#  Problem                                         All-States       All-Transitions     All-States       All-Transitions
1  Triangle classifier without Boolean operations  102/110 (92.7%)  106/110 (96.4%)     15/110 (13.6%)   83/110 (75.5%)
2  Triangle classifier with Boolean operations     138/138 (100%)   138/138 (100%)      138/138 (100%)   138/138 (100%)
3  Air pump                                        209/212 (98.6%)  212/212 (100%)      Not supported    Not supported
4  ATM                                             229/258 (88.8%)  229/258 (88.8%)     Not supported    Not supported
5  CCM                                             489/582 (84%)    558/582 (95.9%)     Not supported    Not supported

coverage will be calculated using the "EclEmma" tool. The results of the "Coverage of Source Code Instructions" criterion for all case studies are shown in Table 6.

The source code of the first problem consists of 110 instructions. When the test cases generated using MoBaTeG with the All-States coverage criterion are run against the source code, 102 out of 110 instructions are covered (92.7% coverage). The value of this criterion when using ParTeG with the All-States coverage criterion is 15 out of 110 instructions (13.6%). When the All-Transitions coverage criterion is used for generating test cases for the first problem, 106 out of 110 instructions are covered using MoBaTeG (96.4%) and 83 out of 110 instructions are covered using ParTeG (75.5% coverage). This problem shows the superiority of MoBaTeG over ParTeG.

The second problem's source code consists of 138 instructions. For both coverage criteria, when using each of the tools, the generated test cases cover all instructions of the source code. Hence, in this problem, both tools have the same behavior.

The third problem's source code consists of 212 instructions. When the test cases generated by MoBaTeG with the All-States coverage criterion are run against the source code, 209 out of 212 instructions are covered (98.6% coverage). The result of using the All-Transitions coverage criterion with MoBaTeG is 100%. As mentioned earlier, ParTeG cannot generate any test cases for the third problem.

The fourth problem's source code consists of 258 instructions. Running the test cases generated by MoBaTeG against the source code covers 229 out of 258 (88.8%) source code instructions. This value is equal for both the All-States and All-Transitions coverage criteria. Running the test cases generated by MoBaTeG against the source code of the last problem results in 84% (489 out of 582) coverage for the All-States coverage criterion and 95.9% (558 out of 582) coverage for the All-Transitions coverage criterion. As mentioned earlier, ParTeG cannot generate any test cases for the fourth and fifth problems.

4.2.4. Boundary value analysis
In this work, boundary value analysis is performed for boundary data generation. As mentioned in Section 3.3.3, if all of the model's constraints are linear, using linear solvers can generate boundary test data; but, if the model's constraints are non-linear, using a proper objective function can generate boundary test data.

As can be seen in Table 3, the first, second, fourth, and fifth problems are linear. Hence, the proposed approach must generate boundary test data automatically, and the results of our experiments in this section confirm this claim. The third problem is non-linear. Thus, we use a proper linear objective function and send it to AMPL. Then, we check the generated test data for some abstract paths and observe that all generated test data are boundary data.

5. Conclusion

This paper proposed an algorithm for the automatic generation of test cases, as executable Java unit tests, from UML class diagrams and UML state machines annotated with OCL constraints. The detailed elaboration of tasks and subtasks for test case generation from UML models is given in Section 3. The main part of these tasks is the transformation of the test model and its constraints into the corresponding model in a mathematical programming language, AMPL, and the use of solvers for solving the model. Section 4 was dedicated to the evaluation of the proposed algorithm via several case studies. In the following, we first review


the contributions of this research and then discuss the future work and threats to validity.

5.1. Contributions
In this paper, a new depth-first, forward search algorithm is proposed, which is based on the UML state machine and aims at satisfying the test goals that are generated from the coverage criteria. The abstract test cases are generated using this search algorithm. The proposed forward search algorithm is run once, and for each abstract path that is found, the satisfaction of unsatisfied test goals is checked. In contrast, ParTeG runs a backward search once for each test goal. For example, for 20 test goals, ParTeG needs to run its backward search 20 times, whilst MoBaTeG runs its forward search algorithm only once.

MoBaTeG supports composite states and all the possible ways for entering and exiting such states. However, some other approaches do not support composite states, or they need to flatten state machines before the execution of the test case generation algorithm, which is an extra task. Also, they may support only some of the elements of the UML state machine. As an example, ParTeG does not support entry and exit point pseudo-states, while MoBaTeG supports these pseudo-states.

To prevent the search algorithm from traversing infinite paths or getting trapped in loops that may exist in the state machine, we consider fuel for our search algorithm. This fuel is equal to the number of transitions that the search algorithm is allowed to traverse. If the fuel of the algorithm is finished, it must backtrack and continue its search in another path that it has not visited so far. With each backtrack, the fuel of the algorithm is increased by one unit. This feature of MoBaTeG prevents the search algorithm from getting trapped in infinite loops.
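The fuel mechanism can be sketched as follows; this is a simplified illustration of the stated rules (one unit spent per traversed transition, one unit refunded per backtrack), and the names and data structures are assumptions, not MoBaTeG's actual code:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of a fuel-limited depth-first search over a state machine.
class FuelDfs {
    Map<String, List<String>> transitions = new HashMap<>(); // state -> target states
    List<List<String>> paths = new ArrayList<>();            // recorded abstract paths
    int fuel;                                                // transitions still allowed

    FuelDfs(int initialFuel) { fuel = initialFuel; }

    void search(String state, Deque<String> path) {
        path.addLast(state);
        List<String> targets = transitions.getOrDefault(state, List.of());
        if (fuel == 0 || targets.isEmpty()) {
            paths.add(new ArrayList<>(path)); // out of fuel or at a leaf: record the path
            fuel++;                           // refund one unit as the search backs out
        } else {
            for (String target : targets) {
                fuel--;                       // traversing a transition costs one unit
                search(target, path);
            }
        }
        path.removeLast();
    }
}
```

With an initial fuel of 2 and a state machine consisting of a single self-looping state, the search records one abstract path of three states and then terminates, instead of following the loop forever.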

As another achievement of our approach, we may refer to the solver-based algorithm for test data generation from the UML state machine, using symbolic execution and a constraint programming system. Different kinds of problems, e.g., linear, non-linear, and mixed integer non-linear, as well as satisfiability modulo theories, can be solved using the proposed approach, facilitated by the capability of using different state-of-the-art and powerful solvers.

In the proposed approach, we consider the Boolean, Integer, and Real data types. Also, we use early detection of infeasible paths during the generation of abstract test cases and remove infeasible paths from the search space. Pruning the search space improves the execution time of the search algorithm.

Furthermore, we can mention the boundary test data generation, which has a higher fault detection rate: the test data at the boundaries of the feasible set are more valuable than those within the feasible set. On the other hand, the coverage criteria are used for evaluating the quality of the generated test cases as well as for conducting the search algorithm. Coverage criteria are converted into test goals, and the search algorithm tries to satisfy them. Also, the satisfaction rate of these test goals can be used for evaluating the quality of the generated test cases (a higher test goals satisfaction rate means higher test case quality).

In Section 4, several evaluations were performed for evaluating our approach. MoBaTeG shows good results for all of the evaluation criteria. Also, we compared our tool with ParTeG, and the superiority of our tool over ParTeG has been shown. However, it should be noted that ParTeG supports several kinds of coverage criteria that our tool does not support; for example, we can refer to "Modified Condition/Decision Coverage," which is focused on the isolated impact of each atomic expression on the whole condition value [7].

Finally, we can mention that, as a result of our study, MoBaTeG has been made available as a free, open source, Eclipse-based tool.

5.2. Future work
In this paper, two coverage criteria have been used for conducting the search algorithm as well as evaluating the quality of the generated test cases. It would be better to consider more coverage criteria for test case generation, "Condition/Decision Coverage" and "All Transition Pairs" to name a few.

Also, we plan to consider other kinds of unit tests as the output format of the generated test cases. Hence, users can define the output format of test cases based on the programming language of the implementation. The meta-model that is proposed for test case generation can be used for this purpose.

We considered only some parts of the OCL operations and expressions. We plan to add other OCL operations and expressions to the proposed approach; for example, we can mention set expressions. Another idea is to consider some operations and functions of AMPL that do not exist in OCL, but we must handle them during the parsing of constraints and the generation of the abstract syntax tree. Furthermore, we plan to support the "Enumerated" data type by using the indexed set of AMPL.

Several evaluations were carried out for evaluating our approach using academic and industrial case studies. We plan to evaluate our approach on more complex problems from industry in the future.

Finally, we plan to consider parallel state machines as well as history states for modeling the dynamic behavior of systems.

5.2.1. Threats to validity
Threats to the proposed approach and its validity are divided into internal and external threats. An


internal threat is that, for input models, the proposed approach uses a UML class diagram and a UML state machine annotated with OCL. While most modelers are familiar with UML and can generate UML models easily, it may be difficult for other people. To mitigate this threat, several sample models have been provided along with our tool. An external threat to this study is the problem of generalizing the results. To mitigate this threat, several academic and industrial case studies with different levels of complexity were carried out, but it may not be enough to argue that the proposed approach can be used for any type of software. Therefore, as mentioned in Section 5.2, the approach must be evaluated with more complex problems from industry.

Acknowledgements

We are very grateful to Stephan Weißleder and Felix Kurth for their valuable comments and guidance.

References

1. Selic, B. "The pragmatics of model-driven development", IEEE Software, 20(5), pp. 19-25 (2003).

2. Sendall, S. and Kozaczynski, W. "Model transformation: The heart and soul of model-driven software development", IEEE Software, 20(5), pp. 42-45 (2003).

3. Korel, B. "Automated software test data generation", IEEE Transactions on Software Engineering, 16(8), pp. 870-879 (1990).

4. El-Far, I.K. and Whittaker, J.A. "Model-based software testing", Encyclopedia of Software Engineering (2001).

5. Utting, M., Pretschner, A. and Legeard, B. "A taxonomy of model-based testing approaches", Software Testing, Verification and Reliability, 22(5), pp. 297-312 (2012).

6. Kurth, F. "Automated generation of unit tests from UML activity diagrams using the AMPL interface for constraint solvers", Master's Thesis, Hamburg University of Technology, Hamburg, Germany (Jan. 2014).

7. Weißleder, S. "Test models and coverage criteria for automatic model-based test generation with UML state machines", PhD Thesis, Humboldt University of Berlin (2010).

8. Kundu, D. and Samanta, D. "A novel approach to generate test cases from UML activity diagrams", Journal of Object Technology, 8(3), pp. 65-83 (2009).

9. Linzhang, W., Jiesong, Y., Xiaofeng, Y., Jun, H., Xuandong, L. and Guoliang, Z. "Generating test cases from UML activity diagram based on gray-box method", In Software Engineering Conference, 2004. 11th Asia-Pacific, pp. 284-291, IEEE (2004).

10. Weißleder, S. and Sokenou, D. "Automatic test case generation from UML models and OCL expressions", In Software Engineering (Workshops), pp. 423-426 (2008).

11. Ali, S., Iqbal, M.Z., Arcuri, A. and Briand, L. "A search-based OCL constraint solver for model-based test data generation", In Quality Software (QSIC), 2011 11th International Conference on, pp. 41-50, IEEE (2011).

12. Malburg, J. and Fraser, G. "Combining search-based and constraint-based testing", In Proceedings of the 2011 26th IEEE/ACM International Conference on Automated Software Engineering, pp. 436-439, IEEE Computer Society (2011).

13. Tillmann, N. and De Halleux, J. "Pex - white box test generation for .NET", In Tests and Proofs, pp. 134-153, Springer (2008).

14. Krieger, M.P. and Knapp, A. "Executing underspecified OCL operation contracts with a SAT solver", Electronic Communications of the EASST, 15 (2008). https://journal.ub.tu-berlin.de/eceasst/article/view/176

15. Eclipse Foundation, "EclEmma", http://www.eclemma.org/ [Online; accessed 10-December-2013].

Biographies

Amin Rezaee received his BSc from the University of Fasa, Fars, Iran, in 2011, and his MSc from the University of Isfahan, Isfahan, Iran, in 2014, both in Computer Engineering (Software).

He was a software testing researcher at the International System Engineering & Automation Company (Irisa) in 2012. Amin Rezaee joined the Automation Systems Group of Irisa in 2014 as a software designer and programmer.

Bahman Zamani received his BSc from the University of Isfahan, Isfahan, Iran, in 1991, and his MSc from Sharif University of Technology, Tehran, Iran, in 1997, both in Computer Engineering (Software). He obtained his PhD degree in Computer Science from Concordia University, Montreal, QC, Canada, in 2009.

From 1998 to 2003, he was a researcher and faculty member of the Iranian Research Organization for Science and Technology (IROST), Isfahan branch. Dr. Zamani joined the Department of Computer Engineering at the University of Isfahan in 2009 as an Assistant Professor. His main research interest is Model-Driven Software Engineering (MDSE). He is the founder and director of the MDSE research group at the University of Isfahan.