Evaluation of Agent Building Tools and Implementation of a Prototype for Information Gathering

Leif M. Koch
University of Waterloo
August 2001
Content
• Advantages of Agent Building Toolkits
• Benchmark for ABTs
• Information Gathering System
• Conclusions and Future Work
Multi-Agent Systems
• Dynamic environments
– # of agents can change
– agents can be specialized
– resource type and location can change
• Scalability
• Modularity
MAS Problem Domains
• Application Domain, e.g.:
– Retrieve information
– Compute relevance
• Agent Domain, e.g.:
– Find other agents
– Compose messages
– Process messages
Agent Building Toolkits
• Libraries for the agent-specific domain
• GUI for rapid MAS creation
• Runtime analysis for agencies
• Standardized communication
– KQML (Knowledge Query and Manipulation Language)
– FIPA ACL (Agent Communication Language)
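To make the communication standards concrete, here is a minimal sketch of how a FIPA ACL request can be composed in JADE, one of the toolkits evaluated later. The agent names and message content are illustrative, not taken from the talk:

```java
import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.OneShotBehaviour;
import jade.lang.acl.ACLMessage;

// Hypothetical user agent that sends a FIPA ACL REQUEST to a resource agent.
public class UserAgent extends Agent {
    protected void setup() {
        addBehaviour(new OneShotBehaviour(this) {
            public void action() {
                ACLMessage request = new ACLMessage(ACLMessage.REQUEST);
                // "resource-agent" is a placeholder local name.
                request.addReceiver(new AID("resource-agent", AID.ISLOCALNAME));
                request.setContent("search: waterloo weather");
                myAgent.send(request);
            }
        });
    }
}
```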
Selection of an ABT
• Large # of ABTs (http://www.agentbuilder.com/AgentTools)
• Different concepts
• Different standards
• Different performance
=> Benchmark for ABTs
Benchmark for ABTs
[Diagram: feature quality of the ABT combined with the performance of the constructed MAS yields the benchmark result]
Feature Quality
Comprises several categories:
– Coordination (assignment, negotiation)
– Communication (standards)
– Connectivity (name service, yellow pages)
– Scalability (# of agents per JVM)
– Usability (documentation, examples)
Feature Quality: Categories
• Each category comprises several parameters (e.g. assignment, negotiation)
• Each parameter p_k is assigned a value 0..4
• Category value c_i = Σ_k p_k
• Weighted category sum f_s = Σ_i w_i c_i
• Feature Quality Q = f_s / f_max (f_max: the maximum achievable weighted sum, i.e. every parameter scored 4)
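A minimal sketch of this scoring scheme in Java, using hypothetical parameter scores and weights (the actual categories, weights, and scores belong to the benchmark and are not shown in the talk):

```java
// Sketch of the feature-quality computation: parameters scored 0..4,
// summed per category, weighted, and normalized by the maximum score.
public class FeatureQuality {
    /** paramScores: one row of 0..4 parameter values per category. */
    static double computeQ(int[][] paramScores, double[] weights) {
        double fs = 0.0;   // weighted category sum f_s
        double fmax = 0.0; // weighted sum if every parameter scored 4
        for (int i = 0; i < paramScores.length; i++) {
            int ci = 0; // category value c_i = sum of its parameter scores
            for (int pk : paramScores[i]) ci += pk;
            fs += weights[i] * ci;
            fmax += weights[i] * 4 * paramScores[i].length;
        }
        return fs / fmax; // feature quality Q in [0, 1]
    }

    public static void main(String[] args) {
        // Illustrative values only: two categories with two parameters each.
        int[][] scores = { {3, 4}, {2, 1} };
        double[] weights = { 2.0, 1.0 };
        System.out.println("Q = " + computeQ(scores, weights)); // 17/24 ≈ 0.71
    }
}
```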
MAS Performance
• User requests information
• Execution time of the MAS is measured
• # of agents and resources in the MAS is varied across test runs
[Diagram: a trigger causes the user agent to request information retrieval from a resource agent]
Benchmark System Architecture
[Diagram: benchmark application, agent starter, and facilitator; interface agents 1..n connected to resources 1..m; the system is distributed over two JVMs]
Benchmark Computation
B = (w * P) / Q
Since P is an execution time, a lower B indicates a better toolkit: a faster MAS and a higher feature quality both reduce B.
[Diagram: feature quality Q and weighted performance P combine into the benchmark result B]
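Putting the two halves together, a sketch of the final score with hypothetical numbers (the actual weights and timings are those measured by the benchmark):

```java
// Benchmark result B = (w * P) / Q; since P is an execution time,
// a lower B is better (fast MAS, high feature quality).
public class Benchmark {
    static double score(double w, double performanceMs, double quality) {
        return (w * performanceMs) / quality;
    }

    public static void main(String[] args) {
        // Illustrative values only: weight 1.0, 2000 ms run time, Q = 0.71.
        System.out.println("B = " + score(1.0, 2000.0, 0.71)); // ≈ 2817
    }
}
```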
Tested Toolkits (1)
Zeus
– GUI for agent development
– rulebase and actions integrated
– good support
– good documentation
– difficulties w/ large # of agents or resources
– different timing concept
Tested Toolkits (2)
FIPA-OS
– runtime analysis tools
– implements FIPA standards
– rulebase optional
– good documentation
– concept of actions easy to learn
– poor scalability
Tested Toolkits (3)
JADE
– runtime analysis tools
– good documentation
– difficulties with facilitator requests
– apparently very performant
Benchmark Results
[Chart: MAS performance of JADE, FIPA-OS, and Zeus. X-axis: test run as agents-resources pairs 1-1, 1-5, 10-1, 10-5, 50-1, 50-5 (i.e. 1-1 comprises 1 agent of each type with the interface agent requiring one resource). Y-axis: time on a logarithmic scale from 1 to 1,000,000]
Information Gathering System
Goal: more relevant information
Idea:
– agents are connected to search engines
– relevance of results is computed
– user provides feedback on relevance
[Diagram: web browser connected to an interface agent, which communicates with several resource agents; the resource agents query search engines such as AltaVista and Excite]
Relevance Computation
Vector Space Representation:
– stop words (on, and, etc.) are removed
– words are reduced to stems
– frequency of stems in the document set is computed
– weights for stems are computed using TF-IDF (term frequency / inverse document frequency)
– the weight vector represents the document
Relevance: Example (1)
Document 1: A simple example!
Document 2: Another example!
Step 1: Remove stop words
– doc 1: simple example
– doc 2: another example
Relevance: Example (2)
Step 2: Create stems
– doc 1: simpl exampl
– doc 2: another exampl
– list of stems: simpl, exampl, another
Step 3: Frequency f_ik of stem k in doc i
– here f_ik = 1 for every stem occurring in a document
Relevance: Example (3)
Step 4: Compute weights
– inverse document frequency IDF_k = log (N / d_k)
– N = # of docs, d_k = # of docs containing stem k
– w_ik = f_ik * IDF_k
– IDF_simpl = log(2/1) = log 2 = IDF_another
– IDF_exampl = log(2/2) = 0
Relevance: Example (4)
Step 5: Create vectors
– list of stems: [simpl, exampl, another]
– doc 1: [log 2, 0, 0]
– doc 2: [0, 0, log 2]
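A compact Java sketch of steps 3 to 5, reproducing the example's vectors. The stemmed documents are hard-coded because the prototype's stop-word list and stemmer are not shown; the class name TfIdfExample is hypothetical:

```java
import java.util.*;

// Sketch of TF-IDF weighting: w_ik = f_ik * log(N / d_k) per stem k, doc i.
public class TfIdfExample {
    public static void main(String[] args) {
        List<List<String>> docs = List.of(
            List.of("simpl", "exampl"),   // doc 1: "A simple example!"
            List.of("another", "exampl")  // doc 2: "Another example!"
        );
        List<String> stems = List.of("simpl", "exampl", "another");
        int n = docs.size();

        for (List<String> doc : docs) {
            double[] w = new double[stems.size()];
            for (int k = 0; k < stems.size(); k++) {
                String stem = stems.get(k);
                // f_ik: frequency of stem k in this document
                long f = doc.stream().filter(stem::equals).count();
                // d_k: number of documents containing stem k
                long d = docs.stream().filter(dd -> dd.contains(stem)).count();
                w[k] = f * Math.log((double) n / d); // w_ik = f_ik * IDF_k
            }
            System.out.println(Arrays.toString(w));
        }
        // Prints [0.693.., 0.0, 0.0] and [0.0, 0.0, 0.693..] (log 2 ≈ 0.693)
    }
}
```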
Relevance: Proximity
• Distance between vectors indicates relevance
• Prototype computes the cosine between vectors
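The cosine measure itself, as a minimal sketch (this is the standard formula; the prototype's own code is not shown):

```java
// Cosine between two weight vectors: a value near 1 means the document
// is close to the query in the vector space, i.e. likely relevant.
public class Cosine {
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] doc1 = { Math.log(2), 0, 0 };
        double[] doc2 = { 0, 0, Math.log(2) };
        // 0.0: the two example docs share no weighted stems.
        System.out.println(cosine(doc1, doc2));
    }
}
```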
Relevance: Feedback
• The query is a vector itself
• Positive feedback: the weights of the doc are added to the query
• Negative feedback: the weights of the doc are subtracted from the query
• The IGS saves the weights and compares queries each time
Feedback: Example (1)
• Stems: [weather, waterloo, station, ontario]
• Query: [weather, waterloo]
• Weights:
– query: [0.5, 0.5, 0, 0]
– doc 1: [0.3, 0, 0.7, 0]
– doc 2: [0, 0.6, 0, 0.4]
• IGS presents document 1
Feedback: Example (2)
User states the result is relevant:
– query: [0.8, 0.5, 0.7, 0]
– normalized: [0.4, 0.25, 0.35, 0]
Next time [weather, waterloo] is queried, the updated query weights are used
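A sketch of this update rule, assuming a simplified Rocchio-style step with unit coefficients and L1 normalization, which reproduces the numbers above; the prototype's exact coefficients are not shown, and clamping negative weights to zero is my assumption:

```java
import java.util.Arrays;

// Relevance feedback: add (positive) or subtract (negative) the document's
// weights from the query vector, then normalize so the weights sum to 1.
public class Feedback {
    static double[] update(double[] query, double[] doc, boolean relevant) {
        double[] q = new double[query.length];
        double sum = 0;
        for (int i = 0; i < q.length; i++) {
            q[i] = query[i] + (relevant ? doc[i] : -doc[i]);
            if (q[i] < 0) q[i] = 0; // keep weights non-negative (assumption)
            sum += q[i];
        }
        for (int i = 0; i < q.length; i++) q[i] /= sum; // L1 normalization
        return q;
    }

    public static void main(String[] args) {
        double[] query = { 0.5, 0.5, 0, 0 }; // [weather, waterloo, station, ontario]
        double[] doc1  = { 0.3, 0, 0.7, 0 };
        // User marks doc 1 relevant:
        System.out.println(Arrays.toString(update(query, doc1, true)));
        // Prints [0.4, 0.25, 0.35, 0.0]
    }
}
```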
Future Work
• Benchmark gathers information on toolkits automatically, using agents
• Resource agents connect to other information resources (e.g. databases)
• Additional layer to process meta-information
Conclusions (1)
• Agent Building Tools support developers significantly
• Zeus is easier to start with, but some changes are difficult (protocols)
• The benchmark can reduce evaluation time
• Some problems with a toolkit might not be revealed during the benchmark process
Conclusions (2)
• The information gathering system successfully deals with a changing environment
• Feedback on a single document can result in a tedious learning phase