
Evaluation of Agent Building Tools and Implementation of a Prototype for Information Gathering
Leif M. Koch, University of Waterloo, August 2001



Contents

– Advantages of Agent Building Toolkits
– Benchmark for ABTs
– Information Gathering System
– Conclusions and Future Work


Multi-Agent Systems

– Dynamic environments
  – # of agents can change
  – agents can be specialized
  – resource type and location can change
– Scalability
– Modularity


MAS Problem Domains

– Application domain, e.g.:
  – retrieve information
  – compute relevance
– Agent domain, e.g.:
  – find other agents
  – compose messages
  – process messages


Agent Building Toolkits

– Libraries for the agent-specific domain
– GUI for rapid MAS creation
– Runtime analysis for agencies
– Standardized communication
  – KQML (Knowledge Query and Manipulation Language)
  – FIPA ACL (Agent Communication Language)


Selection of an ABT

– Large # of ABTs (http://www.agentbuilder.com/AgentTools)
– Different concepts
– Different standards
– Different performance

=> Benchmark for ABTs


Benchmark for ABTs

[Diagram: feature quality of the ABT plus performance of the constructed MAS yield the benchmark result]


Feature Quality

Comprises several categories:
– Coordination (assignment, negotiation)
– Communication (standard)
– Connectivity (name service, yellow pages)
– Scalability (# of agents per JVM)
– Usability (documentation, examples)


Feature Quality: Categories

– Each category comprises several parameters (e.g. assignment, negotiation)
– Each parameter p_k is assigned a value 0..4
– Category value: c_i = Σ_k p_k
– Weighted sum: f_s = Σ_i w_i c_i
– Feature quality: Q = f_s / f_max
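As a minimal sketch, the feature-quality score can be computed as follows; the category names, parameter scores, and weights below are invented for illustration, and `feature_quality` is a hypothetical helper, not part of the benchmark code:

```python
# Illustrative sketch of the feature-quality score Q = f_s / f_max;
# all category names, parameter values, and weights are invented.

def feature_quality(categories, weights):
    """categories: {name: [parameter values p_k, each 0..4]}
       weights:    {name: category weight w_i}"""
    # Category value c_i: sum of its parameter values p_k
    c = {name: sum(params) for name, params in categories.items()}
    # Weighted sum f_s = sum_i w_i * c_i
    f_s = sum(weights[name] * c[name] for name in c)
    # f_max: the weighted sum if every parameter scored the maximum 4
    f_max = sum(weights[name] * 4 * len(params)
                for name, params in categories.items())
    # Feature quality Q = f_s / f_max, a value in [0, 1]
    return f_s / f_max

cats = {"coordination": [3, 2], "communication": [4], "usability": [2, 3]}
w = {"coordination": 2, "communication": 1, "usability": 1}
print(feature_quality(cats, w))  # ~0.679 (= 19/28)
```

Normalizing by f_max keeps Q comparable across toolkits even when the number of parameters per category differs.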


MAS Performance

– User requests information
– Execution time of the MAS is measured
– # of agents and resources in the MAS differ

[Diagram: a trigger causes the user agent to request information retrieval from the resource agent]


Benchmark System Architecture

[Diagram: the benchmark system is split across two JVMs: the benchmark application, agent starter, and facilitator on one side; interface agents 1..n paired with resources 1..m on the other]


Benchmark Computation

B = (w * P) / Q

[Diagram: weighted performance P and feature quality Q are combined into the benchmark result B]
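The formula above can be sketched directly; the weight w, measured performance P, and feature quality Q below are invented values, and `benchmark_result` is a hypothetical helper:

```python
# Minimal sketch of the benchmark formula B = (w * P) / Q;
# the inputs below are invented example values.
def benchmark_result(w, P, Q):
    # Assuming P is execution time, a faster MAS (smaller P) and a
    # higher feature quality Q both lower B, so smaller B is better.
    return (w * P) / Q

print(benchmark_result(1.0, 250.0, 0.8))  # ~312.5
```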


Tested Toolkits (1)

Zeus
– GUI for agent development
– rulebase and actions integrated
– good support
– good documentation
– difficulties with large # of agents or resources
– different timing concept


Tested Toolkits (2)

FIPA-OS
– runtime analysis tools
– implements FIPA standards
– rulebase optional
– good documentation
– concept of actions easy to learn
– poor scalability


Tested Toolkits (3)

JADE
– runtime analysis tools
– good documentation
– difficulties with facilitator requests
– appears to perform very well


Benchmark Results

[Chart: MAS performance. Y axis: time (logarithmic, 1 to 1,000,000); X axis: test runs labelled agents-resources (e.g. 1-1 comprises 1 agent of each type with the interface agent requiring one resource): 1-1, 1-5, 10-1, 10-5, 50-1, 50-5. Series: Jade, Fipa, Zeus]


Information Gathering System

Goal: more relevant information

Idea:
– agents are connected to search engines
– relevance of results is computed
– user provides feedback on relevance

[Diagram: web browser <-> interface agent <-> resource agents, one per search engine (AltaVista, Excite, Google)]


Relevance Computation

Vector space representation:
– stop words (on, and, etc.) removed
– words reduced to stems
– frequency of stems in document set computed
– weights for stems computed using TF-IDF (term frequency - inverse document frequency)
– weights represent document


Relevance: Example (1)

Document 1: A simple example!

Document 2: Another example!

Step 1: Remove stop words
– doc 1: simple example
– doc 2: another example


Relevance: Example (2)

Step 2: Create stems
– doc 1: simpl exampl
– doc 2: another exampl
– list of stems: simpl exampl another

Step 3: Frequency f_ik of stem k in doc i
– f_ik = 1 (here, each stem occurs at most once per document)


Relevance: Example (3)

Step 4: Computing weights
– Inverse document frequency: IDF_k = log(N / d_k)
  – N: # of docs; d_k: # of docs containing stem k
– w_ik = f_ik * IDF_k
– IDF_simpl = log(2/1) = log 2 = IDF_another
– IDF_exampl = log(2/2) = 0


Relevance: Example (4)

Step 5: Create vectors
– list of stems: [simpl, exampl, another]
– doc 1: [log 2, 0, 0]
– doc 2: [0, 0, log 2]
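Steps 3-5 of this worked example can be reproduced in a short sketch; stop-word removal and stemming (steps 1-2) are assumed to have already produced the stem lists:

```python
import math

# Sketch of TF-IDF steps 3-5 for the two example documents,
# starting from the already-stemmed tokens.
docs = [["simpl", "exampl"], ["another", "exampl"]]
stems = ["simpl", "exampl", "another"]   # list of stems, in order
N = len(docs)

# Step 3: frequency f_ik of stem k in document i
f = [[doc.count(s) for s in stems] for doc in docs]

# Step 4: IDF_k = log(N / d_k), with d_k = # of docs containing stem k
d = [sum(1 for doc in docs if s in doc) for s in stems]
idf = [math.log(N / d_k) for d_k in d]

# Step 5: weight vectors w_ik = f_ik * IDF_k
vectors = [[f_ik * idf_k for f_ik, idf_k in zip(row, idf)] for row in f]

print(vectors)  # [[log 2, 0, 0], [0, 0, log 2]]
```

Note how "exampl" ends up with weight 0 in both vectors: a stem occurring in every document carries no discriminating information under TF-IDF.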


Relevance: Proximity

– Distance of vectors indicates relevance
– Prototype computes cosine between vectors
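A minimal sketch of the cosine measure, applied to the example vectors above; `cosine` is an illustrative helper, not the prototype's actual code:

```python
import math

# Sketch of the cosine measure between two weight vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    # Orthogonal vectors (no shared weighted stem) give 0;
    # identical direction gives 1.
    return dot / norm if norm else 0.0

doc1 = [math.log(2), 0, 0]   # vector for doc 1 from the example
doc2 = [0, 0, math.log(2)]   # vector for doc 2
print(cosine(doc1, doc2))    # 0.0 -- the documents share no weighted stem
print(cosine(doc1, doc1))    # ~1.0
```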


Relevance: Feedback

– Query is a vector itself
– Positive feedback: weights of doc added
– Negative feedback: weights of doc subtracted
– IGS saves weights and compares queries each time


Feedback: Example (1)

Stems: [weather, waterloo, station, ontario]
Query: [weather, waterloo]
Weights:
– query: [0.5, 0.5, 0, 0]
– doc 1: [0.3, 0, 0.7, 0]
– doc 2: [0, 0.6, 0, 0.4]

IGS presents document 1


Feedback: Example (2)

User states result is relevant:
– query: [0.8, 0.5, 0.7, 0]
– normalized: [0.4, 0.25, 0.35, 0]

Next time the query [weather, waterloo] is issued, the updated query weights are used.
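The positive-feedback update above can be sketched as follows, assuming the normalization divides by the sum of the weights (inferred from the numbers on the slide); `positive_feedback` is an illustrative helper:

```python
# Sketch of the positive-feedback update: the relevant document's
# weights are added to the query's, then renormalized so the
# weights again sum to 1 (normalization scheme inferred from
# the slide's numbers, not taken from the prototype code).
def positive_feedback(query, doc):
    updated = [q + d for q, d in zip(query, doc)]
    total = sum(updated)
    return [w / total for w in updated]

query = [0.5, 0.5, 0.0, 0.0]  # [weather, waterloo, station, ontario]
doc1 = [0.3, 0.0, 0.7, 0.0]
print(positive_feedback(query, doc1))  # ~[0.4, 0.25, 0.35, 0.0]
```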


Future Work

– Benchmark gathers information on toolkits automatically, using agents
– Resource agents connect to other information resources (databases)
– Additional layer to process meta-information


Conclusions (1)

– Agent building tools support developers significantly
– Zeus is easier to start with; some changes are difficult (protocols)
– The benchmark can reduce evaluation time
– Some problems with a toolkit might not be revealed during the benchmark process


Conclusions (2)

– The information gathering system successfully deals with a changing environment
– Feedback on a single document can result in a tedious learning phase