

Engineering Applications of Artificial Intelligence 24 (2011) 1461–1471

doi:10.1016/j.engappai.2011.05.004

Experiments on semantic interoperability of agent reputation models using the SOARI architecture

Luis G. Nardin, Anarosa A.F. Brandão, Jaime S. Sichman

Laboratório de Técnicas Inteligentes, Escola Politécnica, Universidade de São Paulo (USP), Av. Luciano Gualberto, 158 trav. 3, 05508-970 São Paulo SP, Brazil

Article info

Available online 17 June 2011

Keywords: Semantic interoperability; Service oriented architecture; Reputation; Multiagent systems; Artificial intelligence


Abstract

In the last decades, we have experienced a rapid increase in the number of available online e-services. Agent-based computing has been advocated as a natural computational model to automate the interaction with those services, thus enabling the formation of multiagent systems. In the latter, agents may use trust and reputation as the main control mechanism, and they usually exchange such information in order to accelerate reputation evaluation. However, due to the semantic heterogeneity of the different reputation models, agents' interaction about reputation has to deal with interoperability issues. Therefore, this paper presents some experiments using SOARI, an architecture that enables semantic interoperability among agents that have heterogeneous reputation models. Such experiments were conducted using two reputation testbeds and three agent reputation models in order to analyze the accuracy of the agents' reputation evaluation in the presence of a more expressive communication apparatus, as well as the effect of the heterogeneity among reputation models on this accuracy.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

The growing use of the Internet in the last decades stimulated the increase in the number of available online e-services, such as e-Commerce, e-Government and e-Science (Rowley, 2006). Nowadays, most of the search, selection and use of these services are directly performed by humans. However, in the near future, people are expected to delegate these tasks to software components, requiring them to cooperate and negotiate among themselves.

In order for this scenario to become practical, there is a need for autonomous components that act and interact in flexible ways to achieve their objectives in such uncertain, dynamic and open environments. Agent-based computing has been advocated as the natural computational model for such systems (Jennings, 2001), since it provides the capabilities for both acting autonomously and engaging in social activities such as cooperation, negotiation and collaboration (Wooldridge, 2009).

Nevertheless, in open environments, where agents can enter or leave at any time, taking part in social activities may expose them to risks, for instance, when making decisions based on information provided by non-knowledgeable or malevolent agents. Therefore, as occurs in human societies, agents in a virtual society may become susceptible to the emergence of social dilemmas.


A social dilemma occurs whenever individuals in interdependent situations face choices in which the maximization of short-term self-interest yields outcomes that leave all participants worse off than feasible alternatives (Ostrom, 1998).

Some solutions to this problem are based on trust models, which serve as a decision criterion for an agent to act and engage in social activities. In recent years, several computational agent trust models have been proposed, for instance, Histos and Sporas (Zacharia and Maes, 2000), MMH (Mui, Mohtashemi and Halberstadt) (Mui et al., 2002a), ReGreT (Sabater-Mir and Sierra, 2002), FIRE (Huynh et al., 2004), Repage (Sabater-Mir et al., 2006) and L.I.A.R. (Liar Identification for Agent Reputation) (Muller and Vercouter, 2008). These models are mostly based on the concept of reputation borrowed from the social sciences, in which reputation can be considered a social property or a social process (Conte and Paolucci, 2002). In order to accelerate and to improve the robustness of their reputation evaluations, agents generally exchange reputation information about third parties. However, since there is no consensus about a single unifying reputation definition, the semantics associated with reputation concepts differ from one model to another, which raises an interoperability problem, as depicted in Fig. 1.

Fig. 1 shows three agents named Alice, Bob and Clara. Suppose that Alice and Bob have directly interacted with Clara (dashed line) and internally represented her reputation evaluation. In order to improve her reputation evaluation of Clara, Alice asks Bob for his reputation evaluation of Clara. Since Alice and Bob use different reputation models, respectively RM1 and RM2, they are unable to communicate directly using their internal reputation model concepts as depicted, which requires some mechanism or architecture to overcome such limitation and support agent reputation interoperability.

Fig. 1. Reputation interaction scenario.

In some of our previous work (Nardin, 2009; Nardin et al., 2008a,b; Nardin et al., 2009), the service oriented architecture named SOARI (Service Oriented Architecture for Reputation Interaction) was presented and preliminary experiments were conducted. SOARI is an architecture that aims at providing semantic interoperability among agent reputation models and, thus, at enabling a more expressive communication about reputation among agents. Here, that work is followed up with new experiments using SOARI and their associated results, in order to answer two questions: (1) is there any improvement in the reputation evaluation accuracy when enabling a more expressive communication? and (2) is there any improvement in the reputation evaluation accuracy when considering the heterogeneity of reputation models? Such questions were already addressed in Nardin (2009) and Nardin et al. (2009) considering just two agent reputation models (Repage and L.I.A.R.); the analysis is now extended by rebuilding the experiments with the inclusion of a third model (MMH).

The rest of the paper is structured as follows. Section 2 provides some background information on semantic interoperability and contextualizes our work while comparing it with the literature. In Section 3, the SOARI architecture and some implementation decisions are discussed. Some experiments in the art appraisal domain are presented in Section 4, followed by their results and analysis in Section 5. Finally, some conclusions and future work are presented in Section 6.

2. Semantic interoperability issues and related work

Interoperability is the ability of two or more systems or components to exchange and use shared information (IEEE, 1991). In order to do that, systems should be able to access, process and interpret such information. Therefore, issues related to the information heterogeneity among these systems or components may endanger the activities that must be executed to achieve interoperability. Such activities can be classified as integration activities and can be defined along three dimensions (Sheth, 1998): structural, syntactic and semantic. The semantic dimension refers to integration activities related to the need of solving semantic conflicts among heterogeneous data sources; for instance, this occurs when different applications mean different things by using similar terms. In this work, the focus is on semantic interoperability, which is closely tied to the systems' or components' domain.

Semantic interoperability is the ability that two or more heterogeneous and distributed systems or components have of working together while sharing information with a common understanding of its meaning (Buranarach, 2001). Several communities are dealing with this problem: Service Oriented Architecture (SOA) (Vetere and Lenzerini, 2005), geographical information systems (GIS) (Visser et al., 2002), health care (Ryan and Eklund, 2008) and enterprise application integration (EAI) (Contreras and Sheremetov, 2008), among others.

According to Visser et al. (2000), there are three approaches for dealing with semantic interoperability: centralized, decentralized and hybrid. In the centralized approach, all agents use the same common ontology to internally represent information and to interact with other agents. Interoperability is not a problem in this approach, since all agents share the same ontology to describe the domain in which they are interacting. In the decentralized approach, each agent has its own ontology to internally represent information and to interact with other agents. In this case, full semantic interoperability is achieved if each agent has the mapping from its own ontology to the ontologies of all other agents it interacts with. In the hybrid approach, each agent has its own internal ontology to represent the domain and uses a common domain ontology to interact with other agents. Interoperability occurs if each agent has the mapping of its own ontology to the common domain ontology.

Whenever multiagent systems (MAS) are considered, especially the ones characterized as open and distributed, semantic interoperability is an issue that is still unsolved. Nonetheless, the existence of heterogeneous reputation models brings the problem of semantic interoperability to the reputation domain (Vercouter et al., 2007). This specific problem is the one we have been working on: semantic interoperability concerning agent interaction about reputation in MAS. This issue has been gaining the attention of the research community but, to the best of our knowledge, there are just a few related works about this specific subject in the literature.

Alnemr et al. (2010) deal with reputation interoperability issues through the definition of an object named reputation object (RO). They adopt the RO to represent the reputation of someone or something using reputation information related to several domain contexts. They propose an OWL ontology to describe their RO model and show some applications in rule-based open systems. Their approach allows domain independent interaction concerning reputation, but they do not mention how such an interaction could occur among agents that are not rule-based specified, which is the case our approach deals with.

Trivellato et al. (2009a) define POLIPO, an ontology-based framework to enable interoperability, portability and autonomy in dynamic coalitions of heterogeneous systems. Such a framework adopts trust management through credentials and authorization rules from a global and shared ontology. They then define a reputation-based ontology alignment that enhances POLIPO to deal with different ontologies from the same domain (Trivellato et al., 2009b). Their focus is on access control in dynamic coalitions, while ours is on reputation interaction.

3. SOARI: Service Oriented Architecture for Reputation Interaction

SOARI (Fig. 2) is a service oriented architecture that provides support to semantic interoperability among agents that implement heterogeneous reputation models. Its main underlying idea is that the mapping among different reputation models, represented as ontologies, may be executed externally to the agents and be available online as a service for agents to use.

The advantage of using a service oriented architecture, from a design/programming perspective, is that the agents become simpler and have their dynamic workload alleviated, since they do not need to perform the ontology mapping function internally. Moreover, since the ontology mapping results are stored in an external service, they may be reused by other agents that enter the system and have an internal reputation model that is available in the service.

The architecture design considers that different agents may have heterogeneous reputation models and that the interoperation among them is performed using the hybrid semantic interoperability approach proposed by Visser et al. (2000). Adopting the hybrid approach prevents agents from knowing other agents' internal reputation models and from using such information in a hazardous way. As the common domain ontology, we adopted the functional ontology of reputation (FORe) (Casare and Sichman, 2005a), since it subsumes several of the available reputation models (Casare and Sichman, 2005b) and it was developed in our research group. Other candidates for the common reputation ontology were proposed in Chang et al. (2006) and Alnemr et al. (2010); however, they cover a narrower scope of the reputation domain than FORe. Due to space limitations, details about FORe are not presented here; interested readers are pointed to Casare and Sichman (2005a,b).

SOARI is an extension of the general agent architecture for reputation interoperability proposed in Vercouter et al. (2007). It extends the previous architecture by splitting its reputation mapping module (RMM) into two distinct and specialized modules: the ontology mapping service (OMS), an external service which performs ontology mapping and translation functions (Kalfoglou and Schorlemmer, 2003), and the translator module (TM), an agent's internal module which performs message translation functions.

Fig. 2. Service oriented architecture for reputation interaction.

Fig. 3. Communication between SOARI-based agents.

Communication about reputation between SOARI-based agents occurs using the common ontology and follows the sequence presented in Fig. 3: (1) when Agent A wants to communicate with Agent B, it uses the TM to query the OMS in order to have its message's internal reputation ontology concepts translated into common ontology concepts; (2) after receiving the translation, it uses its interaction module (IM) to send the translated message to Agent B; (3) Agent B receives the message through its IM and uses the TM to query the OMS to translate the message's concepts into its internal reputation ontology. Below, we briefly describe the ontology mapping service and the translator module.
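To make this flow concrete, the following minimal Java sketch illustrates the three steps; the OmsClient and InteractionModule interfaces are hypothetical stand-ins for the OMS query interface and the agents' IMs, not the actual SOARI API.

public final class SoariFlowSketch {

    interface OmsClient {                        // stands in for the OMS query interface
        String toCommon(String concept, String modelId);
        String toModel(String concept, String modelId);
    }

    interface InteractionModule {                // stands in for an agent's IM
        void send(String targetAgent, String message);
    }

    static void tell(OmsClient oms, InteractionModule im,
                     String senderModel, String target, String internalConcept) {
        // (1) the TM asks the OMS to translate the internal concept to the common ontology
        String common = oms.toCommon(internalConcept, senderModel);
        // (2) the IM ships the translated message to the target agent
        im.send(target, common);
    }

    static String receive(OmsClient oms, String receiverModel, String commonConcept) {
        // (3) the receiver's TM translates back into its own reputation ontology
        return oms.toModel(commonConcept, receiverModel);
    }
}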

3.1. Ontology mapping service

The OMS (Fig. 4) is a service outside the agent that implements two main functionalities: (i) to map concepts from the target reputation model ontology to the concepts of the common ontology and vice-versa; and (ii) to answer ontology concept translation requests from the TM. The latter is performed via its web services (Booth et al., 2004) interface. Therefore, the OMS existence is independent of the agents, since it is provided as a service. Moreover, it also inherits the technological benefits of web services, for instance, scalability and fault tolerance.

Fig. 4. The ontology mapping service components.

The OMS is composed of the ontology repository, the classifier module, the inference engine interface, the inference engine, the translation repository and the query interface.

The classifier module is the core component of the OMS and its functioning follows a three-step process, which is required to be performed only once for each reputation model ontology. The classifier module continually observes the ontology repository. When it detects the insertion of a new reputation model ontology in the repository, it reads the ontology and classifies it using the inference engine through the inference engine interface. In the sequence, it identifies the mapping between the FORe concepts and the reputation model ontology concepts by analyzing their subsumption relation and, finally, it stores the mapping in the translation repository, a relational database. Such mapping is then available for querying through the query interface.
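As an illustration of the mapping step, the sketch below uses the OWL API's reasoner interface (which Pellet implements; see Section 3.3) to collect, for each reputation model concept, the FORe concepts that subsume it; this is a minimal sketch rather than the actual SOARI code, and TranslationRepository is a hypothetical stand-in for the relational translation repository.

import java.util.Set;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class ClassifierSketch {

    interface TranslationRepository {            // hypothetical relational store
        void store(IRI modelConcept, IRI foreConcept);
    }

    // Maps each reputation model concept to the FORe concepts that subsume it
    static void mapOntology(OWLReasoner reasoner, Set<OWLClass> modelConcepts,
                            Set<OWLClass> foreConcepts, TranslationRepository repo) {
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY); // classify once
        for (OWLClass concept : modelConcepts) {
            // every named superclass of the concept, direct or inherited
            for (OWLClass sup : reasoner.getSuperClasses(concept, false).getFlattened()) {
                if (foreConcepts.contains(sup)) {
                    repo.store(concept.getIRI(), sup.getIRI()); // persist the mapping
                }
            }
        }
    }
}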

So that the classifier module is able to classify the reputation model ontology, the latter must be described in OWL DL (Web Ontology Language Description Logic) (McGuinness and van Harmelen, 2004). This constraint is explained in Section 3.3. Moreover, its concepts must be described using the terms of the common vocabulary, the same terms used to describe the concepts of the common reputation ontology. More details about the function of each component can be found in Nardin et al. (2008a).

3.2. Translator module

The translator module (Fig. 5) resides inside the agent and translates reputation messages. It has four main activities: (i) to translate the concepts in reputation messages from the common ontology into the internal agent's reputation model ontology whenever the message comes from the interaction module (IM); (ii) to translate the concepts in reputation messages from the internal agent's reputation model ontology into the common ontology whenever the message is sent to the IM; (iii) to trigger a function in the reputation reasoner module (RRM) based on the interpretation of messages written using the reputation model ontology; and (iv) to create a message using the reputation model ontology concepts whenever requested by the RRM.

Fig. 5. The translator module components.

The TM utilizes the mappings available at the OMS translation repository in order to perform the message translation. Among its functionalities, two important ones are: (i) dealing with translation issues and (ii) reputation value transformation.

A translation issue is any concept translation from a source ontology to a target ontology whose mapping is not one-to-one. For instance, incompleteness is a translation issue in which a concept from the source ontology is not mapped to any concept of the target ontology, while ambiguity is a translation issue in which a concept from the source ontology maps to more than one concept of the target ontology.

The TM translation strategy interface is the module responsible for implementing the strategies that handle translation issues. When a concept translation incurs a translation issue, the strategy must not only solve the issue but also guarantee logical equivalence (A ⇔ B): if concept A is translated into concept B, then concept B must be translated back into concept A. Examples of strategies to deal with translation issues can be found in Nardin et al. (2008b).
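As a rough illustration, the sketch below shows one possible shape for such a strategy together with the round-trip check; the interface and the strategy are illustrative, not the actual SOARI translation strategy interface.

import java.util.List;

public class TranslationStrategySketch {

    interface TranslationStrategy {
        // Picks a target concept when the mapping is missing or not one-to-one
        String resolve(String sourceConcept, List<String> candidateTargets);
    }

    // Ambiguity: choose a canonical candidate; incompleteness: signal "no mapping"
    static final TranslationStrategy FIRST_CANDIDATE =
        (source, candidates) -> candidates.isEmpty() ? null : candidates.get(0);

    // Checks the A <=> B requirement: translating B back must yield A again
    static boolean roundTrips(TranslationStrategy strategy, String a,
                              List<String> forward, List<String> backward) {
        String b = strategy.resolve(a, forward);
        return b != null && a.equals(strategy.resolve(b, backward));
    }
}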

Another important TM functionality is the reputation value transformation, which is implemented through the value transform interface module. It transforms the reputation value represented in a source ontology into the value representation of a target ontology.

Flexibility and configurability were considered the main required characteristics for this module's design, since not all agents implement the same IM and RRM, and each agent may have different translation strategies. Flexibility allows the TM to adapt to interact with different kinds of IM and RRM, while configurability allows the selection of specific translation strategies and value transformations to satisfy the agent's needs. A more detailed description of its components and operation can be found in Nardin et al. (2008a).

3.3. Implementation considerations

The OWL DL language was chosen as the default ontology description language in this work. Even though its full capabilities are not used, it was selected because OWL is the World Wide Web Consortium (W3C) recommendation for creating and sharing ontologies, there are sound and complete DL reasoners (Sirin et al., 2007; Tsarkov and Horrocks, 2006; Haarslev and Moller, 2003) that provide the classification service necessary for the classifier module, and FORe was already described in OWL DL.

Besides OWL DL reasoning, rule-based OWL reasoning is another approach that could be used by the classifier module to perform the ontology mapping function. The idea of this approach is to map OWL to a rule formalism that applies (a subset of) the OWL semantics in the knowledge base of a rule engine (Antoniou et al., 2005; Meditskos and Bassiliades, 2008). This approach is more advantageous than OWL DL reasoning when applied to ontologies with a very large number of instances (Antoniou et al., 2005; Meditskos and Bassiliades, 2008). SOARI, however, deals only with ontology concepts and the reasoning needed to classify them. Moreover, the adoption of a rule-based approach would increase the OMS processing requirements and the risk of inserting translation errors, since the ontology would need to be translated into a set of rules. Therefore, there is no advantage in adopting such an approach.

In order to perform the classification service required by the OMS, the inference engine chosen was Pellet (Sirin et al., 2007). Its selection was motivated by the fact that it is completely developed in Java, is open-source and integrates via method calls with the Protégé-OWL Plugin. Additionally, the OMS was designed with the inference engine interface to allow the replacement of the inference engine if required, guaranteeing flexibility to the architecture.

A probability distribution was chosen to represent the reputation value in SOARI, since it is more expressive than the boolean, bounded real and discrete sets representations proposed in Pinyol et al. (2007). Due to space limitations, details about the transformation functions among these representations implemented in SOARI are not presented here; they can be found in Pinyol et al. (2007).
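To illustrate such a transformation, the sketch below maps a bounded-real reputation value onto a distribution over a fixed number of evaluation levels; the number of levels and the linear mass-splitting are illustrative choices, not the Pinyol et al. (2007) functions.

public class ValueTransformSketch {

    // Maps v in [lo, hi] onto a probability distribution over n evaluation levels
    static double[] boundedRealToDistribution(double v, double lo, double hi, int n) {
        double t = (v - lo) / (hi - lo);        // normalize to [0,1]
        double pos = t * (n - 1);               // fractional level index
        int below = (int) Math.floor(pos);
        int above = Math.min(below + 1, n - 1);
        double[] dist = new double[n];
        dist[above] = pos - below;              // split the probability mass between
        dist[below] += 1.0 - (pos - below);     // the two nearest levels; sums to one
        return dist;
    }

    public static void main(String[] args) {
        // e.g. a L.I.A.R. value of 0.4 in [-1,+1] over five levels
        double[] d = boundedRealToDistribution(0.4, -1.0, 1.0, 5);
        // d == [0.0, 0.0, 0.2, 0.8, 0.0] (up to floating point)
    }
}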

SOARI is operational and available at http://soari.sourceforge.net.

4. Experiments

In order to demonstrate the usefulness of SOARI, this section describes some experiments and presents the impact on the accuracy of the agents' reputation evaluation when using (i) a more expressive reputation communication and (ii) heterogeneous reputation models. More specifically, two questions are to be answered:

1. Is there any improvement in the accuracy of the agents' reputation evaluation when enabling a more expressive communication about reputation?

2. Is there any improvement in the accuracy of the agents' reputation evaluation when considering the heterogeneity of reputation models?

In order to answer those questions, some experiments were performed using the ART (Agent Reputation and Trust) (Fullam et al., 2005a,b) and FOReART (FORe Agent Reputation and Trust) (Vercouter et al., 2007; Brandão et al., 2007) testbeds, considering agents with three reputation models (Repage, L.I.A.R. and MMH). These experiments extend the ones presented in our previous work (Nardin et al., 2009; Nardin, 2009), since they were rebuilt and reconducted including a third reputation model (MMH).

4.1. Reputation models

The agent reputation models used in this work are Repage, L.I.A.R. and MMH. The selection of those reputation models was motivated by the fact that, although they have some similarities, such as performing direct and indirect reputation evaluations of third parties, they differ significantly regarding the semantics of their reputation concepts.

Next, the terminologies identified as concepts in each of the reputation models are described.

• L.I.A.R. is a model for the implementation of social control during agent interaction, proposed by Muller and Vercouter (2008). It has five different types of reputation: direct interaction-based reputation (DIbRp), which is built on messages directly exchanged between the evaluator agent and the target; indirect interaction-based reputation (IIbRp), which is built on messages observed by the evaluator agent about the target; observation recommendation-based reputation (ObsRcbRp), which is built on messages received by the evaluator agent from third parties concerning the target; evaluation recommendation-based reputation (EvRcbRp), which is built on evaluations received by the evaluator agent from third parties concerning the target; and reputation recommendation-based reputation (RpRcbRp), which is built on reputation evaluations received by the evaluator agent from third parties concerning the target. L.I.A.R. reputation values are represented by a single real number in the domain [−1,+1] ∪ {unknown}.

• Repage is a computational system based on a reputation model proposed by Conte and Paolucci (2002). It is composed of two main concepts: image, which is an evaluative belief formed using information acquired by agent experience or propagated third-party images; and reputation, which is a meta-belief formed based on anonymous reputation values transmitted in the social network concerning the target agent. Repage models its reputation values as a fuzzy set, represented by a tuple of real values that sum to one.

• MMH: Mui et al. (2002b) propose an intuitive typology of reputation. It distinguishes 10 different types of reputation, structured as a tree. At the topmost level of the tree lies the reputation (RepRep) concept, which is divided into individual reputation (IndRep) and group reputation (GrpRep) based on the nature of the target entity. Individual reputation is considered to derive either (1) from direct encounters or observations, or (2) from inferences based on information gathered indirectly, and it is further divided into direct reputation (DirRep) and indirect reputation (InrRep). Direct reputation considers the existence of direct experience with another agent and is divided into encounter derived reputation (EncRep), which is based on actual encounters between a reputed agent and the evaluator agent, and observed reputation (ObsRep), which is based on the observation of other agents' encounters. Indirect reputation is inferred from information gathered indirectly and is divided into prior derived reputation (PriRep), which is the prior belief the agent has about strangers, group derived reputation (GrDRep), which consists of prior reputation estimates for agents in social groups, and propagated reputation (PropRep), which is the reputation provided by other agents concerning third parties. MMH reputation values are represented by a single real number in the domain [0,+1].
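For concreteness, the sketch below encodes the three native value domains just listed; it is merely illustrative, with L.I.A.R.'s unknown rendered as null.

public class ReputationValueDomains {

    // L.I.A.R.: a single real in [-1,+1], or unknown (here: null)
    static boolean isValidLiar(Double v) {
        return v == null || (v >= -1.0 && v <= 1.0);
    }

    // MMH: a single real in [0,+1]
    static boolean isValidMmh(double v) {
        return v >= 0.0 && v <= 1.0;
    }

    // Repage: a fuzzy set, i.e. a tuple of non-negative reals summing to one
    static boolean isValidRepage(double[] fuzzySet) {
        double sum = 0.0;
        for (double x : fuzzySet) {
            if (x < 0.0) return false;
            sum += x;
        }
        return Math.abs(sum - 1.0) < 1e-9;
    }
}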

4.2. Simulation testbeds

The ART testbed (Fullam et al., 2005a) is a platform that provides a common environment to compare different agent reputation models and implementations. It simulates an iterative art appraisal game, in which agents evaluate paintings for clients and gather reputations and opinions from other agents to produce accurate appraisals. In this game, there are two types of agents: appraisal agents and client agents. The client agents contract and pay appraisal agents in order to have their paintings evaluated. The appraisal agents have different knowledge (expertise) about different painting eras, and they can buy reputation and opinion information about third parties from other appraisal agents in order to improve their reputation and painting evaluations. At the end of the game, the winner is the agent that obtains the highest final bank balance.

In this scenario, the need for reputation comes from the duality between the need for cooperation, to evaluate some of the paintings because the agents are competent in only some painting eras, and the competition to earn the largest share of the client pool.

In the ART testbed, interoperability among agents is obtained by mapping the reputation model evaluations into a single value in the domain [0,1]. Although not explicitly defined, it is assumed that the value 0 refers to the lowest reputation and 1 to the highest. A more detailed description of the ART testbed can be found in Fullam et al. (2005b).

The ART platform, despite enabling interoperability among agents, has some drawbacks: (i) it limits the expressiveness of communication among agents to values in the domain [0,1], and (ii) it generates initial random values for each simulation run, making it impossible to use the same input data in other simulations for comparison purposes.

In order to overcome those drawbacks, Vercouter et al. (2007) proposed an extension to the ART testbed named FOReART. Such extension enables the agents to communicate through symbolic messages. Moreover, the function that generates the initial random values was replaced by one that loads initial data from a text file, thus allowing the use of the same initial data in different simulations. Nevertheless, the game is the same played in the ART testbed. More details about the FOReART testbed can be found in Brandão et al. (2007) and Vercouter et al. (2007).

4.3. Agent model

In the ART and FOReART testbeds, the game proceeds as a series of time steps and, at each iteration, the simulation engine triggers a predefined set of methods in a synchronous and ordered way. Such predefined set of methods represents the agent model, which is used to implement the agent's strategy. In this section, the general strategy used by all the agents in the experiments performed with both testbeds is described.

This general strategy allows the representation of two types of appraisal agents: honest and dishonest. Honest agents answer the requests from other appraisal agents only when they have expertise about the requested painting era, and their answers contain information coherent with their internal beliefs. Dishonest agents answer all the requests from other appraisal agents, even when they do not have enough expertise about that painting era, and they never answer the requests with information coherent with their internal beliefs.

Agent models in the ART and FOReART testbeds are implemented by extending the abstract agent class and implementing the predefined methods that describe the agent's strategy (Fullam et al., 2005a). The pseudo-code representing the general strategy is presented in Algorithm 1.

Algorithm 1 (General agent model strategy).

prepareReputationRequests() {
    // Request the reputation of all other agents
}

prepareReputationAcceptsAndDeclines() {
    if (lie() || (expertise > expertiseThreshold))
        return true;
    else
        return false;
}

prepareReputationReplies() {
    if (lie())
        return reputation - 0.4;
    else
        return reputation;
}

prepareCertaintyRequests() {
    // Request the certainty of the first agent in the list
}

prepareCertaintyReplies() {
    if (lie())
        return expertise + 0.5;
    else if (expertise > expertiseThreshold)
        return expertise;
}

prepareOpinionRequests() {
    if (TrustFor() || (certainty > certaintyThreshold)) {
        // Request opinion
    }
}

prepareOpinionCreationOrders() {
    // Inform the simulator how much to spend on the opinion request
}

prepareOpinionProviderWeights() {
    // Inform the simulator about the weight to assign to each requested opinion
}

prepareOpinionReplies() {
    // Send opinion information to the requesting agents
}

At the beginning of each game iteration, a set of client paintings is assigned to an appraisal agent for appraisal. For each painting assigned, the appraisal agent performs reputation transactions. First, it requests from the other agents in the testbed the reputation of possible appraisers of that painting era (prepareReputationRequests()). Then, it answers the reputation requests received from other agents (prepareReputationAcceptsAndDeclines() and prepareReputationReplies()). If it is a dishonest agent, it accepts all the requests; otherwise, it only accepts the requests for which its expertise is greater than a predefined expertise threshold (expertiseThreshold = 0.7). To all the accepted requests, the dishonest agent answers with a reputation value that does not reflect its internal reputation evaluation.

After performing the reputation transactions, the agent performs certainty transactions. It first selects a group of agents and requests their certainty about a specific painting era (prepareCertaintyRequests()). Next, it answers the certainty requests received from other agents (prepareCertaintyReplies()). An honest agent whose expertise is greater than a predefined expertise threshold (expertiseThreshold = 0.7) answers the request with its expertise value. However, a dishonest agent answers the request with the maximum between 1 and its expertise value plus 0.5.

After performing the certainty transactions, the agent requests the opinion of the agents it trusts¹ or of the agents from which it received a certainty value greater than a predefined certainty threshold (certaintyThreshold = 0.5) (prepareOpinionRequests()). Such trust thresholds were defined arbitrarily; however, they do not endanger the experiments, since all the agents use the same general values.

¹ An agent is trusted if the associated reputation attribute value is greater than a threshold: in Repage, Image ≥ 0.5 and/or Reputation ≥ 0.8; in L.I.A.R., X ≥ 0.7, where X ∈ {DIbRp, IIbRp, RpRcbRp}; and in MMH, Y ≥ 0.7, where Y ∈ {EncRep, ObsRep, PropRep}.

In order for the simulator to compute the final opinion values, agents provide it with the opinions (prepareOpinionCreationOrders()) and the weights (prepareOpinionProviderWeights()). Finally, the opinion values are also provided to the requester agents (prepareOpinionReplies()).

4.4. SOARI integration and usage

In order to integrate with SOARI and use it, the agent must adapt its IM and RRM modules, and have its reputation model ontology mapping available in the OMS.

Albeit required, the adaptations in the IM and RRM modules are reduced to a minimum, since the translator module was designed guided by the flexibility and configurability characteristics. The IM must be adapted (i) to identify messages about reputation that are expressed using common reputation ontology concepts; (ii) to extract the messages' content; and (iii) to send it to the interaction module interface (IMI) via the translator controller (TC).


It must also be capable of receiving messages from the IMI and of sending them to the target agents. The adaptation required in the RRM is simpler, since it only needs to register itself in the reputation reasoner interface (RRI) to allow its callback. However, a specific RRI instance must be developed for each reputation model in order to enable the message exchange from the TM to the RRM and vice-versa.

On the other hand, if the agent reputation model ontology mapping is not available in the OMS, a three-step process must be performed in order to make it available. First, the reputation model concepts have to be identified; in Section 4.1, all the Repage, L.I.A.R. and MMH reputation model concepts were identified.

After that, it is necessary to describe in OWL (Horridge et al., 2004) each of the reputation model concepts identified in the previous step, in terms of the same common vocabulary adopted to describe the common reputation model concepts; in our case, FORe concepts. For example, the L.I.A.R. direct interaction-based reputation (DIbRp) concept, which has at least one association through the hasInformationSource relation to the DirectExperience concept from the common vocabulary, is formally described as:

∃ hasInformationSource.DirectExperience

Finally, the ontology must be uploaded to the OMS, which will execute the classifier module to process it and generate an ontology mapping as a result. For example, the mapping result of L.I.A.R. and FORe associated the L.I.A.R. DIbRp concept with the FORe DirectReputation concept.
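As an illustration of such a description, the sketch below builds the corresponding axiom with the OWL API; the namespace IRI and class names are illustrative placeholders, not the actual FORe or L.I.A.R. ontology code.

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class DIbRpAxiomSketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        String ns = "http://example.org/reputation#";        // illustrative namespace
        OWLOntology ontology = manager.createOntology(IRI.create(ns));

        OWLClass dibrp = factory.getOWLClass(IRI.create(ns + "DIbRp"));
        OWLClass directExperience = factory.getOWLClass(IRI.create(ns + "DirectExperience"));
        OWLObjectProperty hasSource =
            factory.getOWLObjectProperty(IRI.create(ns + "hasInformationSource"));

        // DIbRp SubClassOf (hasInformationSource some DirectExperience)
        OWLClassExpression someDirectExperience =
            factory.getOWLObjectSomeValuesFrom(hasSource, directExperience);
        manager.addAxiom(ontology, factory.getOWLSubClassOfAxiom(dibrp, someDirectExperience));
    }
}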

It is worth noting that the first two steps were performed manually, while the third is automated by SOARI. A more detailed description of the SOARI usage and of the mapping results can be found in Nardin et al. (2008a).

4.5. Experiments description

The experiments consist of executing the art appraisal game previously described, using the ART and FOReART testbeds with 20 honest agents and 1 dishonest agent.

The main objective of these experiments was to identify the mean value of the reputation assigned by the honest agents to the dishonest agent. In order to enable comparison between the experiments, the initial painting era knowledge and the client distribution were identical in all of them. Moreover, all agents used the same configuration parameters (Table 1) and agent model (see Section 4.3) in all the simulations.

Table 1. Testbed configuration parameters.

Parameter                     Description                                           Value
numberOfTimeStepsPerSession   Number of time steps per game                         100
averageClientsPerAgent        Initial number of paintings for appraisal per agent   4
numberOfPaintingEras          Number of painting eras in the game                   20
f_clientFee                   Amount received for each painting appraisal           100
cr_reputationCost             Cost per reputation request                           1
nb_opinionMsg                 Maximum number of opinion messages                    5
cp_opinionCost                Cost per opinion request                              10
nb_certaintyMsg               Maximum number of certainty messages                  20
cp_certaintyCost              Cost per certainty request                            2

To attain this goal, we considered the execution of 10 simulations ($p = 10$) for each experiment, with 100 cycles each. Each simulation was composed of 21 agents ($n = 21$): 20 honest agents and 1 dishonest agent ($i = [1,20]$ and $j = 21$). The mean value of the reputation assigned to the dishonest agent by each honest agent ($r_j$) considered only the value obtained in the last simulation cycle ($l = 100$ and $m = 100$), because we considered it the most accurate reputation evaluation.

Formally, consider a set of $n$ agents, where $i \in \{1,2,\ldots,n-1\}$ are honest agents and $j = n$ is the dishonest agent. Moreover, consider that $r_{ij}^{sk}$ is the reputation value assigned by agent $i$ to agent $j$ in cycle $k$ of simulation $s$. Typically, the reputation value assigned by agent $i$ to agent $j$ in simulation $s$ corresponds to the mean reputation value over a set of cycles. Thus,

$$r_{ij}^{s} = \frac{\sum_{k=l}^{m} r_{ij}^{sk}}{m-l+1},$$

where $l$ and $m$ represent, respectively, the lower and upper cycle limits. The mean reputation value assigned by the honest agents to the dishonest agent in simulation $s$ is

$$r_{j}^{s} = \frac{\sum_{i=1}^{n-1} r_{ij}^{s}}{n-1}.$$

Finally, given the set of simulations $s = 1,\ldots,p$ that compose an experiment, the mean reputation value of the dishonest agent is

$$r_{j} = \frac{\sum_{s=1}^{p} r_{j}^{s}}{p}.$$
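The averaging above can be read directly as code. The sketch below is a minimal Java rendering of the three formulas, where reps[s][i][k] holds $r_{ij}^{sk}$ with 0-based indices (so the paper's $l = m = 100$ becomes array index 99).

public class DishonestMeanSketch {

    // Computes r_j: the mean over simulations of the mean over honest agents
    // of each agent's mean over cycles l..m (inclusive, 0-based indices)
    static double meanDishonestReputation(double[][][] reps, int l, int m) {
        double sumOverRuns = 0.0;
        for (double[][] run : reps) {                         // simulations s = 1..p
            double sumOverAgents = 0.0;
            for (double[] cycles : run) {                     // honest agents i = 1..n-1
                double sumOverCycles = 0.0;
                for (int k = l; k <= m; k++) {
                    sumOverCycles += cycles[k];               // r^{sk}_{ij}
                }
                sumOverAgents += sumOverCycles / (m - l + 1); // r^{s}_{ij}
            }
            sumOverRuns += sumOverAgents / run.length;        // r^{s}_{j}
        }
        return sumOverRuns / reps.length;                     // r_{j}
    }
}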

The experiments performed were classified based on three dimensions: (1) the reputation model(s) used by the honest agents (Repage, L.I.A.R., MMH or mixed), (2) the reputation model used by the dishonest agent (Repage, L.I.A.R. or MMH), and (3) the platform used (ART or FOReART) (Table 2). The mixed experiments (exp4 and exp8) are split into three sub-experiments based on the reputation model of the dishonest agent. In the other experiments, the dishonest agent uses the same reputation model as the honest agents. In addition, in the mixed experiments, each of the 3 reputation models is used by 7 of the 21 agents.

Table 2. Summary of experiments.

Id      Honest agents' reputation model   Dishonest agent's reputation model   Platform
exp1    L.I.A.R.                          L.I.A.R.                             ART
exp2    Repage                            Repage                               ART
exp3    MMH                               MMH                                  ART
exp4.1  L.I.A.R., Repage and MMH          L.I.A.R.                             ART
exp4.2  L.I.A.R., Repage and MMH          Repage                               ART
exp4.3  L.I.A.R., Repage and MMH          MMH                                  ART
exp5    L.I.A.R.                          L.I.A.R.                             FOReART
exp6    Repage                            Repage                               FOReART
exp7    MMH                               MMH                                  FOReART
exp8.1  L.I.A.R., Repage and MMH          L.I.A.R.                             FOReART
exp8.2  L.I.A.R., Repage and MMH          Repage                               FOReART
exp8.3  L.I.A.R., Repage and MMH          MMH                                  FOReART

5. Experiments analysis

The purpose of this section is to provide an analysis of the experiments' results. First, a brief explanation of the methodology used to analyze them is presented; then, the actual analysis and some discussion about it are provided.

The analysis presented in this section improves the ones presented in Nardin et al. (2009), where the experiments were analyzed under the assumption that the resulting data followed the normal distribution. However, this was a strong constraint that could not be guaranteed to hold all the time, and conducting a new analysis without such a constraint was imperative to validate the analysis results. It also extends the results presented in Nardin (2009), in which a nonparametric hypothesis test, Wilcoxon's rank sum test (Boslaugh and Watters, 2008), was adopted to analyze the resulting data of experiments conducted using just two reputation models (Repage and L.I.A.R.).

The methodology used to analyze the data generated by the experiments described in Section 4 is based on statistical hypothesis testing. The statistical hypothesis test adopted is the nonparametric Wilcoxon's rank sum test. As aforementioned, this test was selected because the experiments' resulting data did not follow the normal distribution and we could not ignore this data characteristic.
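As an illustration of how such a one-sided comparison can be run programmatically, the snippet below uses Apache Commons Math's MannWhitneyUTest (the Mann-Whitney U test is equivalent to Wilcoxon's rank sum test); this is a sketch rather than the tooling actually used in the paper, and the one-sided decision is approximated by halving the two-sided p-value and checking the direction of the effect.

import org.apache.commons.math3.stat.StatUtils;
import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

public class RankSumSketch {

    // Rejects H0: Q_A <= Q_B in favor of H1: Q_A > Q_B at alpha = 0.05
    static boolean rejectH0(double[] groupA, double[] groupB) {
        MannWhitneyUTest test = new MannWhitneyUTest();
        double pTwoSided = test.mannWhitneyUTest(groupA, groupB); // two-sided p-value
        boolean aGreater = StatUtils.mean(groupA) > StatUtils.mean(groupB);
        return aGreater && (pTwoSided / 2.0) < 0.05;              // one-sided approximation
    }
}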

The analysis performed in this section is based on the L.I.A.R., Repage and MMH reputation model attributes. By reputation model attribute, we mean the different concepts of reputation defined in each reputation model. The analyzed attributes are presented in Table 3.

5.1. Analyzing the effects of the communication expressiveness

In order to analyze the effects of the more expressive communication enabled by SOARI, it was verified whether the mean values of the dishonest agent's reputation model attributes ($r_j$) obtained using numerical reputation values (ART experiments) were greater than the similar ones obtained in the symbolic testbed (FOReART experiments). The underlying idea was that if these results were statistically different, then the dishonest agent would be better identified when reputation is expressed and exchanged in a more expressive way. Thus, using Wilcoxon's rank sum test, a set of hypotheses was required to demonstrate it. The general form of the hypothesis is:

H1: The mean value of the reputation model attribute from the ART experiments is greater than the mean value of the same attribute from the FOReART experiments, where a greater reputation mean value means a worse detection of the dishonest agent.

From the point of view of a reputation model attribute, this hypothesis is expressed mathematically as $Q^{X}_{ART} > Q^{X}_{FOReART}$, where $X$ is a L.I.A.R., Repage or MMH reputation model attribute. In order to validate this hypothesis using Wilcoxon's rank sum test, the following test was performed:

$$H_0: Q^{X}_{ART} \leq Q^{X}_{FOReART}$$
$$H_1: Q^{X}_{ART} > Q^{X}_{FOReART}$$

The complete set of hypotheses to demonstrate the effects of the more expressive communication is composed of hypotheses A to H, each related to a reputation model attribute, respectively DIbRp, IIbRp, RpRcbRp, Image, Reputation, EncRep, ObsRep and PropRep.

Table 3. Analyzed reputation model attributes.

Reputation model   Attribute name
L.I.A.R.           Direct Interaction-based Reputation (DIbRp)
                   Indirect Interaction-based Reputation (IIbRp)
                   Reputation Recommendation-based Reputation (RpRcbRp)
Repage             Image
                   Reputation
MMH                Encounter Derived Reputation (EncRep)
                   Observed Reputation (ObsRep)
                   Propagated Reputation (PropRep)

The hypothesis test was applied to the results of the pairs of experiments presented in Table 2, considering a risk level (α) of 0.05 and 18 degrees of freedom; the hypotheses generated the results presented in Table 4 (✓ means that H0 was rejected, which confirms the hypothesis; ✗ means that H0 was not rejected, thus the hypothesis cannot be confirmed; and – (dash) means that the hypothesis is not applicable for the pair of experiments).

Table 4. Expressiveness hypotheses results.

Pair              A  B  C  D  E  F  G  H
(exp1, exp5)      ✗  ✗  ✓  –  –  –  –  –
(exp2, exp6)      –  –  –  ✓  ✓  –  –  –
(exp3, exp7)      –  –  –  –  –  ✗  ✗  ✓
(exp4.1, exp8.1)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓
(exp4.1, exp8.2)  ✗  ✗  ✓  ✗  ✓  ✗  ✗  ✓
(exp4.1, exp8.3)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓
(exp4.2, exp8.1)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓
(exp4.2, exp8.2)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓
(exp4.2, exp8.3)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓
(exp4.3, exp8.1)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓
(exp4.3, exp8.2)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓
(exp4.3, exp8.3)  ✗  ✗  ✓  ✓  ✓  ✗  ✗  ✓

Analyzing the information in Table 4, hypotheses C, D, E and H reject H0 (indicated by ✓), except in the case of (exp4.1, exp8.2) for hypothesis D, confirming that there is some gain in using more expressive communication, while hypotheses A, B, F and G do not (indicated by ✗). From the reputation model point of view, hypotheses A, B and C are associated with the L.I.A.R. reputation model (DIbRp, IIbRp and RpRcbRp attributes), hypotheses D and E are associated with the Repage reputation model (Image and Reputation attributes), while hypotheses F, G and H are associated with the MMH reputation model (EncRep, ObsRep and PropRep attributes).

Considering that the Repage agents update the Image and Reputation attributes with received information, while the L.I.A.R. and MMH agents update only the RpRcbRp and PropRep attributes, respectively, one can conclude that a more expressive communication about reputation has a statistically significant impact on agents using all three reputation models, since the detection of the dishonest agent is improved for the hypotheses that consider the attributes impacted by communication.

Hence, there is an improvement in the accuracy of the agents' reputation evaluation when enabling more expressive communication concerning reputation.

This conclusion is the same as that obtained in Nardin (2009); however, in the latter the same set of experiments was performed using only two reputation models (L.I.A.R. and Repage). This suggests that, even with more than two reputation models, a more expressive communication improves the accuracy in the detection of a dishonest agent. Therefore, our architecture, while dealing with semantic interoperability through an ontology-based approach, brings benefits by supporting a more expressive communication among agents with heterogeneous reputation models.

5.2. Analyzing the effect of the reputation model heterogeneity

The analysis of the effect of reputation model heterogeneity was performed by testing whether the mean values of the dishonest agent's reputation model attributes ($r_j$) obtained from experiments with a homogeneous reputation model were higher than the similar ones obtained from the mixed experiments. The underlying idea was that if these results were statistically different, it would mean that heterogeneous environments, composed of agents with different reputation models, better identify the dishonest agent, since different aspects of the latter's behavior could be better captured in the presence of different reputation models. Thus, to demonstrate it using Wilcoxon's rank sum test, a set of hypotheses was required. The general form of the hypothesis is:

H1: The mean value of the reputation model attribute from experiments with a homogeneous reputation model is greater than the mean value of the same attribute from the mixed experiments, where a greater reputation mean value means a worse detection of the dishonest agent.

From the point of view of a reputation model attribute, this hypothesis is expressed mathematically as $Q^{X}_{P/M} > Q^{X}_{P/Mixed}$, where $M$ is the reputation model (L.I.A.R., Repage or MMH), $X$ is an attribute and $P$ is the testbed platform (ART or FOReART). In order to validate this hypothesis using Wilcoxon's rank sum test, the following test was performed:

$$H_0: Q^{X}_{P/M} \leq Q^{X}_{P/Mixed}$$
$$H_1: Q^{X}_{P/M} > Q^{X}_{P/Mixed}$$

The complete set of hypotheses to demonstrate the effects of heterogeneous reputation models is composed of hypotheses I to X. Hypotheses I to P are related to the ART platform (Table 5), where each one is related to a reputation model attribute, respectively DIbRp, IIbRp, RpRcbRp, Image, Reputation, EncRep, ObsRep and PropRep. Hypotheses Q to X are related to the FOReART platform (Table 6), using the same attributes in the same order as in Table 5.

Table 5. ART hypotheses results.

Pair              I  J  K  L  M  N  O  P
(exp1, exp4.1)    ✗  ✗  ✓  –  –  –  –  –
(exp1, exp4.2)    ✗  ✗  ✗  –  –  –  –  –
(exp1, exp4.3)    ✗  ✗  ✗  –  –  –  –  –
(exp2, exp4.1)    –  –  –  ✗  ✗  –  –  –
(exp2, exp4.2)    –  –  –  ✗  ✗  –  –  –
(exp2, exp4.3)    –  –  –  ✗  ✗  –  –  –
(exp3, exp4.1)    –  –  –  –  –  ✗  ✗  ✗
(exp3, exp4.2)    –  –  –  –  –  ✗  ✗  ✗
(exp3, exp4.3)    –  –  –  –  –  ✗  ✗  ✓

Table 6. FOReART hypotheses results.

Pair              Q  R  S  T  U  V  W  X
(exp5, exp8.1)    ✗  ✗  ✓  –  –  –  –  –
(exp5, exp8.2)    ✗  ✗  ✓  –  –  –  –  –
(exp5, exp8.3)    ✗  ✗  ✓  –  –  –  –  –
(exp6, exp8.1)    –  –  –  ✗  ✗  –  –  –
(exp6, exp8.2)    –  –  –  ✗  ✗  –  –  –
(exp6, exp8.3)    –  –  –  ✗  ✗  –  –  –
(exp7, exp8.1)    –  –  –  –  –  ✓  ✗  ✗
(exp7, exp8.2)    –  –  –  –  –  ✓  ✓  ✗
(exp7, exp8.3)    –  –  –  –  –  ✓  ✓  ✗

When applied to the results of the pairs of experiments presented in Table 2, considering a risk level (α) of 0.05 and 18 degrees of freedom, those hypotheses generate the results presented in Tables 5 and 6 (✓ means that H0 was rejected, which confirms the hypothesis; ✗ means that H0 was not rejected, thus the hypothesis cannot be confirmed; and – (dash) means that the hypothesis is not applicable for the pair of experiments).

Analyzing Table 5, which presents the ART testbed results, one can see that almost all the hypotheses did not reject H0 (indicated by ✗). This indicates that heterogeneity, in most cases, does not improve the detection of a dishonest agent. This result differs from the one obtained in Nardin (2009), since in the latter the hypothesis related to the RpRcbRp attribute rejected H0 (indicated by ✓), while the other L.I.A.R. and Repage hypotheses did not (indicated by ✗).

Although in Nardin (2009) there was a suspicion that some intrinsic or implementation model characteristics provided benefits to the L.I.A.R. reputation model, this could not be confirmed when including a third reputation model. Since a quantitative analysis was not enough to provide a conclusion about how heterogeneity influences the detection accuracy of a dishonest agent in the ART testbed, a detailed qualitative analysis should be conducted in order to understand the interdependence among the reputation models.

On the other hand, analyzing Table 6, one can see that there are hypotheses that rejected H0 (indicated by ✓), which are related to the RpRcbRp attribute from L.I.A.R. (hypothesis S) and to the EncRep and ObsRep attributes from MMH (hypotheses V and W).

Therefore, reputation model heterogeneity can be concluded to have a statistically significant impact on agents using the L.I.A.R. and MMH reputation models. Conversely, the Repage reputation model does not present a statistically significant impact on agents in a heterogeneous environment when using a more expressive communication.

The L.I.A.R. and Repage results corroborate the results obtained in Nardin (2009). However, when comparing the results obtained for MMH and L.I.A.R., they can be considered inverses of each other: the attribute characteristics of both reputation models are similar (DIbRp ≈ EncRep, IIbRp ≈ ObsRep and RpRcbRp ≈ PropRep), yet while in L.I.A.R. the RpRcbRp attribute, the only one affected by communication, has a statistically significant impact, in MMH the attributes with a statistically significant impact, EncRep and ObsRep, are the two that are not affected by communication.

Even though a result similar to Nardin (2009) was obtained, it is not possible to conclude that the statistically significant impact on agents is related to heterogeneity, since the models do not follow the same behavior pattern. Therefore, it is necessary to perform a detailed qualitative analysis in order to understand the interdependence among the reputation models and how heterogeneity influences them.

6. Conclusions and future work

Some experiments on semantic interoperability among agent reputation models using the SOARI architecture in the art appraisal domain were presented. Simulations were conducted considering the L.I.A.R., Repage and MMH reputation models in order to answer two questions: (1) is there any improvement in the reputation evaluation accuracy when enabling a more expressive communication? and (2) how does the heterogeneity of reputation models influence the accuracy of evaluating dishonest agents' reputation? These questions were defined considering that the inclusion of semantics within communicative acts among agents gives more expressive power to communication.

The results obtained using a more expressive communication concerning reputation showed an improvement in the accuracy of the reputation evaluation of other agents and corroborated the results presented in Nardin (2009). The results on reputation model heterogeneity, however, did not allow us to conclude such an improvement or to corroborate the conclusions presented in Nardin (2009). A detailed qualitative analysis is thus necessary to explain the quantitative results and to better understand the interdependence of the reputation models.

Although our approach and architecture were implemented and tested in a specific problem domain, we believe they are applicable to other domains. Therefore, we intend to evaluate the application of the general approach, i.e., providing agent interoperability through an ontology-based service oriented architecture, to aspects of multiagent systems other than reputation, for instance, organizational models (Coutinho et al., 2008).

Additionally, we intend to perform experiments considering dishonest agents that do not lie all the time and to compare their results with those of agents that always lie.
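To make this planned setup concrete, the following sketch shows one possible way to parameterize such an agent. The class, the probability parameter and the inverted-report lying strategy are illustrative assumptions, not the testbeds' actual agent interface.

```python
import random

class PartialLiar:
    """Illustrative dishonest agent that lies with probability p.

    p = 1.0 reproduces the always-lying agent used in the experiments
    reported here; 0 < p < 1 gives the occasional liar we plan to study.
    """
    def __init__(self, lie_probability: float):
        self.lie_probability = lie_probability

    def report_reputation(self, true_value: float) -> float:
        # Reputation values assumed normalized to [0, 1]; a lie is
        # modeled here, arbitrarily, as reporting the inverted value.
        if random.random() < self.lie_probability:
            return 1.0 - true_value
        return true_value

always_liar = PartialLiar(1.0)   # current experimental condition
half_liar = PartialLiar(0.5)     # planned condition
print(always_liar.report_reputation(0.75))  # -> 0.25
```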

We advocate that the proposed approach, once the associated technologies are fully developed, will have a strong impact on any application area that can benefit from accurate information about the reputation of autonomous agents, for instance, e-Commerce, e-Government and e-Services.

The possibility of automating the process of aligning ontologies for further mapping is also among our future work objectives. In this case, both the ontology mapping service (OMS) and the translator module (TM) would be redesigned to enable the use of automatic ontology alignment approaches (Jean-Mary et al., 2009; Noy and Musen, 2001; Wang and Xu, 2009).
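One possible shape for that redesign is sketched below: the OMS would accept pluggable alignment strategies (e.g., ASMOV, Anchor-PROMPT or Lily, as cited above). All class and method names are hypothetical, since the redesign itself is left as future work.

```python
# Hypothetical sketch of a pluggable alignment interface for the OMS;
# every name here is illustrative, as the redesign is future work.
from abc import ABC, abstractmethod

class AlignmentStrategy(ABC):
    """One automatic ontology alignment approach (ASMOV, Lily, ...)."""
    @abstractmethod
    def align(self, source_ontology: str, target_ontology: str) -> dict:
        """Return a mapping from source concepts to target concepts."""

class OntologyMappingService:
    """OMS variant that delegates mapping to a swappable strategy."""
    def __init__(self, strategy: AlignmentStrategy):
        self.strategy = strategy

    def map_concept(self, concept: str, source: str, target: str) -> str:
        mapping = self.strategy.align(source, target)
        return mapping.get(concept, concept)  # unmapped terms pass through

class StaticAlignment(AlignmentStrategy):
    """Toy stand-in for a real matcher, for demonstration only."""
    def align(self, source_ontology, target_ontology):
        return {"RpRcbRp": "PropRep"}

oms = OntologyMappingService(StaticAlignment())
print(oms.map_concept("RpRcbRp", "liar.owl", "mmh.owl"))  # -> PropRep
```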

Acknowledgments

This project is partially supported by FAPESP/Brazil. Jaime S. Sichman is partially supported by CNPq/Brazil. We would like to thank Tomas Monteiro Chaib for the implementation of the MMH reputation model and its integration into SOARI.

References

Alnemr, R., Paschke, A., Meinel, C., 2010. Enabling reputation interoperability through semantic technologies. In: Proceedings of the Sixth International Conference on Semantic Systems, I-SEMANTICS '10, ACM, New York, USA, pp. 13:1–13:9.

Antoniou, G., Damásio, C.V., Grosof, B., Horrocks, I., Kifer, M., Maluszynski, J., Patel-Schneider, P.F., 2005. Combining Rules and Ontologies: A Survey. Deliverable I3-D3, Institutionen för Datavetenskap, Linköpings Universitet.

Booth, D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C., Orchard, D., 2004. Web Services Architecture. W3C Recommendation, W3C. <http://www.w3.org/TR/ws-arch/>.

Boslaugh, S., Watters, P.A., 2008. Statistics in a Nutshell. O'Reilly & Associates, Inc., Sebastopol, CA, USA.

Brandão, A.A.F., Vercouter, L., Casare, S.J., Sichman, J.S., 2007. Extending the ART testbed to deal with heterogeneous agent reputation models. In: Castelfranchi, C., Barber, S., Sabater-Mir, J., Singh, M.P. (Eds.), Proceedings of the 10th International Workshop on Trust in Agent Societies, Honolulu, USA.

Buranarach, M., 2001. The Foundation for Semantic Interoperability on the World Wide Web. Ph.D. Thesis, University of Pittsburgh.

Casare, S., Sichman, J.S., 2005. Towards a functional ontology of reputation. In: AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, ACM, Utrecht, The Netherlands, pp. 505–511.

Casare, S., Sichman, J.S., 2005b. Using a functional ontology of reputation to interoperate different agent reputation models. Journal of the Brazilian Computer Society 11 (2), 79–94.

Chang, E., Hussain, F.K., Dillon, T.S., 2006. Reputation ontology for reputation systems. In: Meersman, R., Tari, Z., Herrero, P. (Eds.), SWWS '06: Proceedings of the International Workshop on Web Semantics, Lecture Notes in Computer Science, vol. 4278. Springer, Montpellier, France, pp. 1724–1733.

Conte, R., Paolucci, M., 2002. Reputation in Artificial Societies: Social Beliefs for Social Order. Kluwer, Boston.

Contreras, M., Sheremetov, L., 2008. Industrial application integration using the unification approach to agent-enabled semantic SOA. Robotics and Computer-Integrated Manufacturing 24 (5), 680–695.

Coutinho, L., Brandão, A.A.F., Sichman, J.S., Boissier, O., 2008. Model-driven integration of organizational models. In: AOSE '08: Proceedings of the Ninth International Workshop on Agent Oriented Software Engineering, Lisbon, Portugal.

Fullam, K.K., Klos, T.B., Muller, G., Sabater-Mir, J., Schlosser, A., Topol, Z., Barber, K.S., Rosenschein, J.S., Vercouter, L., Voss, M., 2005. A specification of the Agent Reputation and Trust (ART) testbed: experimentation and competition for trust in agent societies. In: AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, ACM, New York, USA, pp. 512–518.

Fullam, K.K., Klos, T.B., Muller, G., Sabater-Mir, J., Topol, Z., Barber, K.S., Rosenschein, J., Vercouter, L., 2005b. The Agent Reputation and Trust (ART) testbed architecture. In: Proceedings of the 2005 Conference on Artificial Intelligence Research and Development. IOS Press, Amsterdam, The Netherlands, pp. 389–396.

Haarslev, V., Möller, R., 2003. Racer: a core inference engine for the Semantic Web. In: Proceedings of the International Workshop on Evaluation of Ontology-based Tools, pp. 27–36.

Horridge, M., Knublauch, H., Rector, A., Stevens, R., Wroe, C., 2004. A Practical Guide to Building OWL Ontologies Using the Protégé-OWL Plugin and CO-ODE Tools, Edition 1.0. Technical Report, The University of Manchester and Stanford University.

Huynh, T.D., Jennings, N.R., Shadbolt, N., 2004. FIRE: an integrated trust and reputation model for open multi-agent systems. In: Proceedings of the 16th European Conference on Artificial Intelligence, Valencia, Spain, pp. 18–22.

IEEE, 1991. IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. IEEE Computer Society Press, New York, USA.

Jean-Mary, Y.R., Shironoshita, P., Kabuka, M.R., 2009. ASMOV: results for OAEI 2009. In: Proceedings of the ISWC 2009 Workshop on Ontology Matching, CEUR Workshop Proceedings, vol. 551. CEUR-WS.org, Linköping, Sweden, pp. 152–159.

Jennings, N.R., 2001. An agent-based approach for building complex software systems. Communications of the ACM 44 (4), 35–41.

Kalfoglou, Y., Schorlemmer, M., 2003. Ontology mapping: the state of the art. The Knowledge Engineering Review 18 (1), 1–31.

McGuinness, D.L., van Harmelen, F., 2004. OWL Web Ontology Language Overview. W3C Recommendation, W3C. <http://www.w3.org/TR/owl-features/>.

Meditskos, G., Bassiliades, N., 2008. Combining a DL reasoner and a rule engine for improving entailment-based OWL reasoning. In: The Semantic Web – ISWC 2008: Proceedings of the Seventh International Semantic Web Conference, Lecture Notes in Computer Science, vol. 5318. Springer, Karlsruhe, Germany, 26–30 October 2008, pp. 277–292.

Mui, L., Halberstadt, A., Mohtashemi, M., 2002a. Evaluating reputation in multi-agents systems. In: Falcone, R., Barber, K.S., Korba, L., Singh, M.P. (Eds.), Trust, Reputation, and Security: Theories and Practice, AAMAS 2002 International Workshop, Bologna, Italy, July 15, 2002, Selected and Invited Papers. Springer, pp. 123–137.

Mui, L., Mohtashemi, M., Halberstadt, A., 2002. A computational model of trust and reputation. In: Proceedings of the 35th Hawaii International Conference on System Sciences (HICSS), pp. 2431–2439.

Muller, G., Vercouter, L., 2008. L.I.A.R.: Achieving Social Control in Open and Decentralised Multi-Agent Systems. Technical Report, École Nationale Supérieure des Mines de Saint-Étienne, Saint-Étienne, France.

Nardin, L.G., 2009. Uma arquitetura de apoio à interoperabilidade de modelos de reputação de agentes. Master's Thesis, Escola Politécnica, Universidade de São Paulo, 146 pp.

Nardin, L.G., Brandão, A.A.F., Sichman, J.S., Vercouter, L., 2008. SOARI: a service oriented architecture to support agent reputation models interoperability. Lecture Notes in Artificial Intelligence, vol. 5396. Springer-Verlag, Berlin, Heidelberg, pp. 292–307.

Nardin, L.G., Brandão, A.A.F., Sichman, J.S., Vercouter, L., 2008. A service-oriented architecture to support agent reputation models interoperability. In: Proceedings of the Third Workshop on Ontologies and Their Applications, CEUR Workshop Proceedings, vol. 427. CEUR-WS.org, Salvador, Brazil.

Nardin, L.G., Muller, G., Brandão, A.A.F., Sichman, J.S., Vercouter, L., 2009. Effects of expressiveness and heterogeneity of reputation models in the ART testbed: some preliminary experiments using the SOARI architecture. In: Proceedings of the 12th International Workshop on Trust in Agent Societies, Budapest, Hungary.

Noy, N.F., Musen, M.A., 2001. Anchor-PROMPT: using non-local context for semantic matching. In: Proceedings of the IJCAI Workshop on Ontologies and Information Sharing, pp. 63–70.

Ostrom, E., 1998. A behavioral approach to the rational choice theory of collective action: Presidential address, American Political Science Association, 1997. The American Political Science Review 92 (1), 1–22.

Pinyol, I., Sabater-Mir, J., Cuní, G., 2007. How to talk about reputation using a common ontology: from definition to implementation. In: Proceedings of the Ninth International Workshop on Trust in Agent Societies, Honolulu, USA, pp. 90–101.

Rowley, J., 2006. An analysis of the e-service literature: towards a research agenda. Internet Research 16 (3), 339–359.

Ryan, A., Eklund, P.W., 2008. A framework for semantic interoperability in healthcare: a service oriented architecture based on health informatics standards. In: Andersen, S.K., Klein, G.O., Schulz, S., Aarts, J. (Eds.), MIE. Studies in Health Technology and Informatics, vol. 136. IOS Press, pp. 759–764.

Sabater-Mir, J., Paolucci, M., Conte, R., 2006. Repage: REPutation and ImAGE among limited autonomous partners. Journal of Artificial Societies and Social Simulation 9 (2). <http://jasss.soc.surrey.ac.uk/9/2/3.html>.

Sabater-Mir, J., Sierra, C., 2002. Social ReGreT, a reputation model based on social relations. SIGecom Exchanges 3 (1), 44–56.

Sheth, A.P., 1998. Changing focus on interoperability in information systems: from system, syntax, structure to semantics. In: Interoperating Geographic Information Systems. Kluwer Academic Publishers, pp. 5–30.

Sirin, E., Parsia, B., Grau, B.C., Kalyanpur, A., Katz, Y., 2007. Pellet: a practical OWL-DL reasoner. Web Semantics: Science, Services and Agents on the World Wide Web 5 (2), 51–53.

Trivellato, D., Spiessens, F., Zannone, N., Etalle, S., 2009. POLIPO: policies & ontologies for interoperability, portability, and autonomy. In: IEEE International Symposium on Policies for Distributed Systems and Networks, IEEE, London, UK, pp. 110–113.

Trivellato, D., Spiessens, F., Zannone, N., Etalle, S., 2009. Reputation-based ontology alignment for autonomy and interoperability in distributed access control. In: Proceedings of the International Conference on Computational Science and Engineering, CSE '09, vol. 3, IEEE, pp. 252–258.

Tsarkov, D., Horrocks, I., 2006. FaCT++ description logic reasoner: system description. In: Proceedings of Automated Reasoning. Springer, pp. 292–297.

Vercouter, L., Casare, S.J., Sichman, J.S., Brandão, A., 2007. An experience on reputation models interoperability based on a functional ontology. In: IJCAI '07: Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, pp. 617–622.

Vetere, G., Lenzerini, M., 2005. Models for semantic interoperability in service-oriented architectures. IBM Systems Journal 44 (4), 887–903.

Visser, U., Stuckenschmidt, H., Schuster, G., Vögele, T., 2002. Ontologies for geographic information processing. Computers & Geosciences 28 (1), 103–117.

Visser, U., Stuckenschmidt, H., Wache, H., Vögele, T., 2000. Enabling technologies for interoperability. In: Visser, U., Pundt, H. (Eds.), Proceedings of the Workshop on the 14th International Symposium of Computer Science for Environmental Protection, TZI, Bonn, pp. 35–46.

Wang, P., Xu, B., 2009. Lily: ontology alignment results for OAEI 2009. In: Proceedings of the ISWC 2009 Workshop on Ontology Matching, CEUR Workshop Proceedings, vol. 551. CEUR-WS.org, Linköping, Sweden, pp. 186–192.

Wooldridge, M., 2009. An Introduction to MultiAgent Systems, second ed. John Wiley & Sons Ltd.

Zacharia, G., Maes, P., 2000. Trust management through reputation mechanisms. Journal of Applied Artificial Intelligence 14 (9), 881–907.