2004, G.Tecuci, Learning Agents Center
CS 7850 Fall 2004
Learning Agents Center and Computer Science Department
George Mason University
Gheorghe Tecuci tecuci@gmu.edu
http://lac.gmu.edu/
Overview
Artificial Intelligence and intelligent agents
Knowledge acquisition for agents development
Domain for hands-on experience
Overview of the course
Class introduction and course’s objectives
Course Objectives
Present principles and major methods of knowledge acquisition for the development of knowledge-based agents that incorporate the problem solving knowledge of a subject matter expert.
Major topics include: overview of knowledge engineering; analysis and modeling of the reasoning process of a subject matter expert; ontology design and development; rule learning; problem solving and knowledge-base refinement.
The course will emphasize the most recent advances in this area, such as: agent teaching and learning; mixed-initiative knowledge base refinement; knowledge reuse; frontier research problems.
Provide an overview of Knowledge Acquisition and Problem Solving.
Course Objectives (cont.)
Learn about all the phases of building a knowledge-based agent and experience them first-hand by using the Disciple agent development environment to build an intelligent assistant that helps the students to choose a Ph.D. Dissertation Advisor.
Disciple has been developed in the Learning Agents Center of George Mason University and has been successfully used to build knowledge-based agents for a variety of problem areas, including: planning the repair of damaged bridges and roads; critiquing military courses of action; determining strategic centers of gravity in military conflicts; generating test questions for higher-order thinking skills in history and statistics.
Link Knowledge Acquisition and Problem Solving concepts to hands-on applications by building a knowledge-based agent.
Course organization and grading policy
Course organization
The classes will consist of:
- a theoretical recitation part, where the instructor will present and discuss the various methods and phases of building a knowledge-based agent;
- a practical laboratory part, where the students will apply this knowledge to specify, design and develop the Ph.D. selection advisor.
Grading Policy
- Exam, covering the theoretical aspects presented – 50%
- Assignments, consisting of lab participation and the contribution to the development of the Ph.D. selection advisor – 50%
Regular assignments will consist of incremental developments of the Ph.D. selection advisor, which will be presented to the class.
Readings
Lecture notes provided by the instructor (required).
Tecuci G., Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, Academic Press, 1998 (recommended).
Additional papers recommended by the instructor.
Overview
Artificial Intelligence and intelligent agents
Overview of the course
Class introduction and course’s objectives
Knowledge acquisition for agents development
Domain for hands-on experience
Artificial Intelligence and intelligent agents
What is an intelligent agent
Characteristic features of intelligent agents
What is Artificial Intelligence
Sample tasks for intelligent agents
Why are intelligent agents important
What is Artificial Intelligence

Artificial Intelligence is the Science and Engineering that is concerned with the theory and practice of developing systems that exhibit the characteristics we associate with intelligence in human behavior: perception, natural language processing, reasoning, planning and problem solving, learning and adaptation, etc.
Central goals of Artificial Intelligence

Understand the principles that make intelligence possible (in humans, animals, and artificial agents).
Develop intelligent machines or agents (no matter whether they operate as humans do or not).
Formalize knowledge and mechanize reasoning in all areas of human endeavor.
Make working with computers as easy as working with people.
Develop human-machine systems that exploit the complementary strengths of human and automated reasoning.
Artificial Intelligence and intelligent agents
What is an intelligent agent
Characteristic features of intelligent agents
What is Artificial Intelligence
Sample tasks for intelligent agents
Why are intelligent agents important
What is an intelligent agent
[Figure: the intelligent agent receives input from the user/environment through sensors and produces output through effectors.]
An intelligent agent is a system that:
• perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or other complex environment);
• reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and
• acts upon that environment to realize a set of goals or tasks for which it was designed.
Humans, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex, sophisticated control structures, are at the highest end of being an agent.
At the low end of being an agent is a thermostat. It continuously senses the room temperature, starting or stopping the heating system whenever the current temperature is out of a pre-defined range.
The intelligent agents we are concerned with are in between. They are clearly not as capable as humans, but they are significantly more capable than a thermostat.
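The thermostat described above can be sketched as a minimal sense-act loop. This is an illustrative sketch, not course material; the class name and the temperature range are assumptions.

```python
# A minimal sketch of the thermostat, the low end of being an agent:
# sense the room temperature, act to keep it within a pre-defined range.
# Class name and range values are illustrative assumptions.

class Thermostat:
    def __init__(self, low=18.0, high=22.0):
        self.low = low          # lower bound of the pre-defined range
        self.high = high        # upper bound of the pre-defined range
        self.heating = False    # current state of the heating system

    def sense_and_act(self, temperature):
        """One sense-act cycle: start or stop the heating system."""
        if temperature < self.low:
            self.heating = True
        elif temperature > self.high:
            self.heating = False
        return self.heating

t = Thermostat()
t.sense_and_act(15.0)   # too cold: heating turns on
t.sense_and_act(23.0)   # too warm: heating turns off
```

Note that the thermostat has no internal model of its environment beyond one state bit, which is exactly why it sits at the low end of agency.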
What is an intelligent agent (cont.)
What is an intelligent agent (cont.)

An intelligent agent interacts with a human or some other agents via some kind of agent-communication language and may not blindly obey commands, but may have the ability to modify requests, ask clarification questions, or even refuse to satisfy certain requests.
It can accept high-level requests indicating what the user wants and can decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take, and in what sequence.
What an intelligent agent can do

An intelligent agent can:
• collaborate with its user to improve the accomplishment of his or her tasks;
• carry out tasks on the user's behalf, employing some knowledge of the user's goals or desires;
• monitor events or procedures for the user;
• advise the user on how to perform a task;
• train or teach the user;
• help different users collaborate.
Artificial Intelligence and intelligent agents
What is an intelligent agent
Characteristic features of intelligent agents
What is Artificial Intelligence
Sample tasks for intelligent agents
Why are intelligent agents important
Knowledge representation and reasoning

An intelligent agent contains an internal representation of its external application domain, where relevant elements of the application domain (objects, relations, classes, laws, actions) are represented as symbolic expressions.

For example, the law "If an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object" is represented as the rule:

RULE: ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z)

[Figure: the ontology represents the domain situation "CUP1 ON BOOK1, BOOK1 ON TABLE1", where CUP1, BOOK1 and TABLE1 are instances of the classes CUP, BOOK and TABLE, which are subclasses of OBJECT.]

This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model, and transferring the conclusions back into the application domain:

(cup1 on book1) & (book1 on table1) → (cup1 on table1)
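The reasoning step above, applying the ON rule to the cup/book/table facts, can be sketched as a simple forward-chaining loop. The tuple encoding of facts is an illustrative assumption, not Disciple's actual representation.

```python
# A sketch of reasoning in the domain model: apply the rule
# (ON x y) & (ON y z) -> (ON x z) to the facts until nothing new follows.
# Facts are encoded as (x, y) tuples meaning "x is on y" (an assumption).

def transitive_closure(on_facts):
    """Forward-chain the ON transitivity rule over a set of (x, y) facts."""
    facts = set(on_facts)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(facts):
            for (y2, z) in list(facts):
                if y == y2 and (x, z) not in facts:
                    facts.add((x, z))    # derived conclusion: (ON x z)
                    changed = True
    return facts

facts = {("cup1", "book1"), ("book1", "table1")}
derived = transitive_closure(facts)
# ("cup1", "table1") is now in derived, matching the slide's conclusion
```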
Basic agent architecture

[Figure: the intelligent agent receives input from the user/environment through sensors and produces output through effectors. The agent consists of a problem solving engine and a knowledge base; the knowledge base contains an ontology (such as the CUP1/BOOK1/TABLE1 ontology) and rules/cases/…, such as the rule ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z).]

Problem solving engine: implements a general method of interpreting the input problem based on the knowledge from the knowledge base.

Knowledge base: data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.
Transparency and explanations

The knowledge possessed by the agent and its reasoning processes should be understandable to humans. The agent should have the ability to explain its behavior: what decisions it is making, and why.
Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent.
The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities, not to replace human activity.
Ability to communicate

An agent should be able to communicate with its users or other agents.
The communication language should be as natural to the human users as possible. Ideally, it should be free natural language.
The problem of natural language understanding and generation is very difficult due to the ambiguity of words and sentences, and the paraphrases, ellipses and references used in human communication.
Use of huge amounts of knowledge

In order to solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base).
Example of human-agent dialog:
User: The toolbox is locked.
Agent: The key is in the drawer.
In order to understand such sentences and to respond adequately, the agent needs to have a lot of knowledge about the user, including the goals the user might want to achieve.
Use of huge amounts of knowledge (example)

User: The toolbox is locked.
Agent (internal reasoning): Why is he telling me this? I already know that the box is locked. I know he needs to get in. Perhaps he is telling me because he believes I can help. To get in requires a key. He knows it and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.
Agent: The key is in the drawer.
Exploration of huge search spaces

An intelligent agent usually needs to search huge spaces in order to find solutions to problems.
Example: A search agent on the Internet.
Use of heuristics

Intelligent agents generally attack problems for which no algorithm is known or feasible: problems that require heuristic methods.
A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits the search for solutions in large problem spaces.
Heuristics do not guarantee optimal solutions; in fact, they do not guarantee any solution at all. A useful heuristic is one that offers solutions which are good enough most of the time.
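As a small illustration of how a heuristic limits search (a sketch under assumed names, not from the course), greedy best-first search on a grid expands only the node that looks closest to the goal, using the Manhattan distance as its rule of thumb. It usually finds a good path quickly, but it guarantees neither the shortest path nor, in mazes with dead ends, any path at all.

```python
# Greedy best-first search on a small grid, guided by the Manhattan
# distance heuristic. Grid size, walls and names are illustrative.
import heapq

def greedy_best_first(start, goal, walls, size=5):
    """Always expand the frontier node that looks closest to the goal."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start)]
    came_from = {start: None}           # also serves as the visited set
    while frontier:
        _, (x, y) = heapq.heappop(frontier)
        if (x, y) == goal:              # reconstruct the path found
            path = [(x, y)]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in came_from):
                came_from[nxt] = (x, y)
                heapq.heappush(frontier, (h(nxt), nxt))
    return None  # the heuristic does not guarantee a solution

path = greedy_best_first((0, 0), (4, 4), walls={(2, 2)})
```

The heuristic prunes the search to a narrow corridor toward the goal, instead of exhaustively exploring all 25 cells.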
Reasoning with incomplete or conflicting data

The ability to provide some solution even if not all the data relevant to the problem is available at the time a solution is required.
Examples: The reasoning of a physician in an intensive care unit. Planning a military course of action.

The ability to take into account data items that are more or less in contradiction with one another (conflicting data or data corrupted by errors).
Example: The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.
Ability to learn

The ability to improve its competence and performance.
An agent improves its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving.
An agent improves its performance if it learns to solve the problems from its area of competence more efficiently (for instance, by using less time or space resources).
Extended agent architecture

[Figure: the intelligent agent receives input from the user/environment through sensors and produces output through effectors. In addition to the problem solving engine and the knowledge base (ontology and rules/cases/methods), the agent contains a learning engine.]

The learning engine implements methods for extending and refining the knowledge in the knowledge base.
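The division of labor in this architecture can be sketched as follows. This is purely illustrative; the class and method names are assumptions, and a real knowledge base is far richer than a lookup table.

```python
# A sketch of the extended architecture: the problem solving engine
# answers from the knowledge base, while the learning engine extends
# the knowledge base from examples. Names are illustrative assumptions.

class LearningAgent:
    def __init__(self):
        self.kb = {}                      # knowledge base: instance -> label

    def solve(self, instance):
        """Problem solving engine: interpret the input using the KB."""
        return self.kb.get(instance, "unknown")

    def learn(self, instance, label):
        """Learning engine: extend the KB with a new piece of knowledge."""
        self.kb[instance] = label

agent = LearningAgent()
agent.solve(("red", "round"))           # "unknown": not yet in the KB
agent.learn(("red", "round"), "apple")  # the learning engine extends the KB
agent.solve(("red", "round"))           # now answered from the KB: "apple"
```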
Artificial Intelligence and intelligent agents
What is an intelligent agent
Characteristic features of intelligent agents
What is Artificial Intelligence
Sample tasks for intelligent agents
Why are intelligent agents important
Sample tasks for intelligent agents

Planning: Finding a set of actions that achieve a certain goal.
Example: Determine the actions that need to be performed in order to repair a bridge.

Critiquing: Expressing judgments about something according to certain standards.
Example: Critiquing a military course of action (or plan) based on the principles of war and the tenets of Army operations.

Interpretation: Inferring situation descriptions from sensory data.
Example: Interpreting gauge readings in a chemical process plant to infer the status of the process.
Sample tasks for intelligent agents (cont.)

Prediction: Inferring likely consequences of given situations.
Examples: Predicting the damage to crops from some type of insect. Estimating global oil demand from the current geopolitical world situation.

Diagnosis: Inferring system malfunctions from observables.
Examples: Determining the disease of a patient from the observed symptoms. Locating faults in electrical circuits. Finding defective components in the cooling system of nuclear reactors.

Design: Configuring objects under constraints.
Example: Designing integrated circuit layouts.
Sample tasks for intelligent agents (cont.)Sample tasks for intelligent agents (cont.)
Examples: Monitoring instrument readings in a nuclear reactor to detect accident conditions.Assisting patients in an intensive care unit by analyzing data from the monitoring equipment.
Monitoring: Comparing observations to expected outcomes.
Examples: Suggesting how to tune a computer system to reduce a particular type of performance problem.Choosing a repair procedure to fix a known malfunction in a locomotive.
Debugging: Prescribing remedies for malfunctions.
Example: Tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.
Repair: Executing plans to administer prescribed remedies.
Sample tasks for intelligent agents (cont.)

Instruction: Diagnosing, debugging, and repairing student behavior.
Examples: Teaching students a foreign language. Teaching students to troubleshoot electrical circuits. Teaching medical students in the area of antimicrobial therapy selection.

Control: Governing overall system behavior.
Example: Managing the manufacturing and distribution of computer systems.

Any useful task: Information fusion. Information assurance. Travel planning. Email management. Help in choosing a Ph.D. Dissertation Advisor.
Artificial Intelligence and intelligent agents
What is an intelligent agent
Characteristic features of intelligent agents
What is Artificial Intelligence
Sample tasks for intelligent agents
Why are intelligent agents important
Why are intelligent agents important

Humans have limitations that agents may alleviate (e.g. a memory for details that is not affected by stress, fatigue or time constraints).
Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.
Why are intelligent agents important (cont.)
The evolution of information technology makes intelligent agents essential components of our future systems and organizations.
Our future computers and most of the other systems and tools will gradually become intelligent agents.
We have to be able to deal with intelligent agents either as users, or as developers, or as both.
Intelligent agents: Conclusion

Intelligent agents are systems which can perform tasks requiring knowledge and heuristic methods.
Intelligent agents are helpful, enabling us to do our tasks better.
Intelligent agents are necessary to cope with the increasing complexity of the information society.
Overview
Artificial Intelligence and intelligent agents
Overview of the course
Class introduction and course’s objectives
Knowledge acquisition for agents development
Domain for hands-on experience
Problem: Choosing a Ph.D. Dissertation Advisor
Choosing a Ph.D. Dissertation Advisor is a crucial decision for a successful dissertation and for one’s future career.
An informed decision requires a lot of knowledge about the potential advisors.
In this course we will develop an agent that interacts with a student to help select the best Ph.D. advisor for that student.
See the project notes: “1. Problem”
Overview
Artificial Intelligence and intelligent agents
Overview of the course
Class introduction and course’s objectives
Knowledge acquisition for agents development
Domain for hands-on experience
Knowledge Acquisition for agent development
Demo: Agent teaching and learning
Approaches to knowledge acquisition
Research vision on agents development
Disciple approach to agent development
How are agents built: Manual knowledge acquisition
A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems and then encodes the acquired expertise into the agent's knowledge base.
The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.
[Figure: the knowledge engineer, in a dialog with the subject matter expert, programs the intelligent agent's knowledge base, which is used by the problem solving engine; the expert analyzes the agent's results.]
Why it is hard
The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem-solving knowledge. This takes time and effort.
Experts express their knowledge informally, using natural language, visual representations and common sense, often omitting essential details that they consider obvious. This form of knowledge is very different from the formal, precise, and complete form in which knowledge has to be represented in the knowledge base.
This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful and inefficient; it is known as "the knowledge acquisition bottleneck" of the AI systems development process.
Mixed-initiative knowledge acquisition
The expert teaches the agent how to perform various tasks, in a way that resembles how an expert would teach a human apprentice when solving problems in cooperation.
This process is based on mixed-initiative reasoning that integrates the complementary knowledge and reasoning styles of the subject matter expert and the agent, and on a division of responsibility for those elements of knowledge engineering for which they have the most aptitude, such that together they form a complete team for knowledge base development.
[Figure: the subject matter expert interacts directly with the intelligent learning agent in a knowledge dialog; the agent's learning engine builds the knowledge base used by the problem solving engine, and the expert analyzes the agent's results.]
Mixed-initiative knowledge acquisition (cont.)

This is the most promising approach to overcoming the knowledge acquisition bottleneck.
DARPA's Rapid Knowledge Formation Program (2000-2004) emphasized the development of knowledge bases directly by the subject matter experts.
Central objective: Enable distributed teams of experts to enter and modify knowledge directly and easily, without the need for prior knowledge engineering experience. The emphasis was on content and the means of rapidly acquiring this content from individuals who possess it, with the goal of gaining a scientific understanding of how ordinary people can work with formal representations of knowledge.
Program's primary requirement: Development of functionality enabling experts to understand the contents of a knowledge base, enter new theories, augment and edit existing knowledge, test the adequacy of the knowledge base under development, receive explanations of theories contained in the knowledge base, and detect and repair errors in content.
Autonomous knowledge acquisition
The learning engine builds the knowledge base from a database of facts or examples.
In general, the learned knowledge consists of concepts, classification rules, or decision trees. The problem solving engine is a simple one-step inference engine that classifies a new instance as being, or not being, an example of a learned concept.
Defining the database of examples is a significant challenge, and current practical applications are limited to classification tasks.
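Such autonomous learning for a classification task can be sketched with a Find-S-style generalizer: the learning engine induces a conjunctive concept from positive examples, and a one-step inference engine classifies new instances. This is an illustrative stand-in, not Disciple's actual learning method, and the attribute names are assumptions.

```python
# A sketch of autonomous knowledge acquisition for classification:
# generalize positive examples into a conjunctive concept (Find-S style),
# then classify new instances in one inference step.

def learn_concept(positive_examples):
    """Learning engine: most specific conjunction covering the examples."""
    hypothesis = list(positive_examples[0])
    for example in positive_examples[1:]:
        for i, value in enumerate(example):
            if hypothesis[i] != value:
                hypothesis[i] = "?"        # generalize the differing attribute
    return hypothesis

def classify(hypothesis, instance):
    """One-step inference engine: does the instance match the concept?"""
    return all(hv in ("?", av) for hv, av in zip(hypothesis, instance))

# Database of positive examples, with attributes (color, shape, size)
examples = [("red", "round", "small"), ("green", "round", "small")]
h = learn_concept(examples)                 # ["?", "round", "small"]
classify(h, ("yellow", "round", "small"))   # matches the learned concept
classify(h, ("red", "square", "small"))     # does not match
```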
[Figure: in the autonomous learning agent, a database supplies data to the learning engine, which builds the knowledge base used by the problem solving engine to produce results.]
Autonomous knowledge acquisition (cont.)
The knowledge base is built by the learning engine from data provided by a text understanding system able to understand textbooks. In general, the data consists of facts acquired from the books.
This is not yet a practical approach, even for simpler agents.
[Figure: in the autonomous language understanding and learning agent, a text understanding engine extracts data from text and passes it to the learning engine, which builds the knowledge base used by the problem solving engine to produce results.]
Knowledge Acquisition for agent development
Disciple approach to agent development
Approaches to knowledge acquisition
Research vision on agents development
Demo: Agent teaching and learning
Disciple approach to agent development

Research Problem: Elaborate a theory, methodology and family of systems for the development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers.

Approach: Develop a learning agent that can be taught directly by a subject matter expert while solving problems in cooperation. The expert teaches the agent how to perform various tasks in a way that resembles how the expert would teach a person, and the agent learns from the expert, building, verifying and improving its knowledge base.

[Figure: through its interface, the agent supports (1) mixed-initiative problem solving, (2) teaching and learning, and (3) multistrategy learning, with problem solving and learning components operating over a knowledge base of ontology + rules.]
Sample Disciple agents

Disciple-WA (1997-1998): Estimates the best plan of working around damage to a transportation infrastructure, such as a damaged bridge or road.

[Figure: workaround scenario showing Damage 200, a destroyed bridge over a 25-meter bridge/river gap (Site 103: cross-section), with the near (right) approach at Site 108, the far (left) approach at Site 104, the right bank at Site 107, the left bank at Site 105, and the river bed at Site 106.]

Demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.

Disciple-COA (1998-1999): Identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of Army operations.

Demonstrated the generality of its learning methods, which used an object ontology created by another group (TFS/Cycorp). Demonstrated that a knowledge engineer and a subject matter expert can jointly teach Disciple.
Mission: BLUE-BRIGADE2 attacks to penetrate RED-MECH-REGIMENT2 at 130600 Aug in order to enable the completion of seize OBJ-SLAM by BLUE-ARMOR-BRIGADE1.
Close: BLUE-TASK-FORCE1, a balanced task force (MAIN EFFORT) attacks to penetrate RED-MECH-COMPANY4, then clears RED-TANK-COMPANY2 in order to enable the completion of seize OBJ-SLAM by BLUE-ARMOR-BRIGADE1. BLUE-TASK-FORCE2, a balanced task force (SUPPORTING EFFORT 1) attacks to fix RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 in order to prevent RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 from interfering with conducts of the MAIN-EFFORT1, then clears RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 and RED-TANK-COMPANY1. …
Reserve: The reserve, BLUE-MECH-COMPANY8, a mechanized infantry company, follows the Main Effort, and is prepared to reinforce MAIN-EFFORT1.
Security: SUPPORTING-EFFORT1 destroys RED-CSOP1 prior to begin moving across PL-AMBER by MAIN-EFFORT1 in order to prevent RED-MECH-REGIMENT2 from observing MAIN-EFFORT1. …
Deep: Deep operations will destroy RED-TANK-COMPANY1 and RED-TANK-COMPANY2 and RED-TANK-COMPANY3.
Rear: BLUE-MECH-PLT1, a mechanized infantry platoon secures the brigade support area.
Fires: Fires will suppress RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 and RED-MECH-COMPANY4 and RED-MECH-COMPANY5 and RED-MECH-COMPANY6.
End State: At the conclusion of this operation, BLUE-BRIGADE2 will enable accomplishing conducts forward passage of lines through BLUE-BRIGADE2 by BLUE-ARMOR-BRIGADE1. MAIN-EFFORT1 will complete to clear RED-MECH-COMPANY4 and RED-TANK-COMPANY2. SUPPORTING-EFFORT1 will complete to clear RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 and RED-TANK-COMPANY1. SUPPORTING-EFFORT2 will complete to clear RED-MECH-COMPANY5 and RED-MECH-COMPANY6 and RED-TANK-COMPANY3.
A Disciple agent for Center of Gravity determination
The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed.
Carl Von Clausewitz, “On War,” 1832.
If a combatant eliminates or influences the enemy’s strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster.
(Giles and Galvin, USAWC 1996).
Knowledge bases and agent development by subject matter experts, using learning agent technology: experiments in the USAWC courses.

[Figure: Disciple-COG was used in a sequence of two joint warfighting courses. In 319jw Case Studies in Center of Gravity Analysis, the students developed scenarios based on a formalization of the center of gravity determination process; in 589jw Military Applications of Artificial Intelligence, the students developed agents. The initial KB, covering COG identification and testing for leaders (stay informed, communicate, be influential, have support, be protected, be driving force, be irreplaceable), contained 432 concepts and features, 29 tasks and 18 rules for COG identification, plus 37 acquired concepts and features for COG testing. After domain analysis and ontology development (knowledge engineer, KE, with subject matter experts, SME), five teams performed parallel KB development (SME assisted by KE) on the training scenarios Iraq 2003, Arab-Israeli 1973 and War on Terror 2003, learning features, tasks and rules (per team: 5 features, 10 tasks, 10 rules; 14 tasks, 14 rules; 2 features, 19 tasks, 19 rules; 35 tasks, 33 rules; 3 features, 24 tasks, 23 rules). KB merging (KE) unified two features, deleted 4 incomplete rules and refined 11 rules, yielding an integrated KB of 478 concepts and features (+9 features), 134 tasks (+105) and 113 rules (+95). On the testing scenario, North Korea 2003, the agent achieved correctness = 98.15% and completeness = 89.33%, with 2.5 examples/rule and 5.47 hours average training time.]
Sample modeling of the COG identification process (task reduction):

Identify the strategic COG candidates for the Sicily_1943 scenario
  Q: Which is an opposing force in the Sicily_1943 scenario?
  A: Anglo_allies_1943
Identify the strategic COG candidates for Anglo_allies_1943
  Q: Is Anglo_allies_1943 a single-member force or a multi-member force?
  A: Anglo_allies_1943 is a multi-member force
Identify the strategic COG candidates for Anglo_allies_1943, which is a multi-member force
  Q: What type of strategic COG candidates should I consider for a multi-member force?
  A: I consider the candidates corresponding to the multi-member nature of the force
Identify the strategic COG candidates corresponding to the multi-member nature of Anglo_allies_1943
  Q: What type of strategic COG candidates should I consider for the multi-member nature of the force?
  A: I consider the relationships between the members of the force; I consider the type of operations being conducted by the members of the force
Identify the strategic COG candidates with respect to the type of operations being conducted by the members of Anglo_allies_1943
  Q: Which is the primary force element that will conduct the campaign for Anglo_allies_1943?
  A: Allied_forces_operations_Husky
  Q: Is Allied_forces_operations_Husky made up of a true single group or are there subgroups?
  A: Allied_forces_operations_Husky is made up of several subgroups
Identify the strategic COG candidates with respect to the type of operations being conducted by Allied_forces_operations_Husky
Synergistic collaboration and transition to the USAWC: George Mason University - US Army War College
[Diagram: Disciple at the intersection of Artificial Intelligence Research, Military Strategy Research, and Military Education & Practice]
2004, G.Tecuci, Learning Agents Center
Approach to Center of Gravity (COG) determination

Identification of COG candidates: identify potential primary sources of moral or physical strength, power and resistance from: government, military, people, economy, alliances, etc.

Testing of COG candidates: test each identified COG candidate to determine whether it has all the necessary critical capabilities. Which are the critical capabilities? Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate. If yes, do these capabilities have any vulnerability?

• Based on the concepts of critical capabilities, critical requirements and critical vulnerabilities, which have recently been adopted into the joint military doctrine of the USA (Strange, 1996).
• Applied to current war scenarios (e.g., War on Terror 2003, Iraq 2003) with state and non-state actors (e.g., Al Qaeda).
Problem Solving Approach: Task Reduction

[Diagram: the initial task T1, through question Q1 and answers A11…A1n, is reduced to subtasks T11a…T1n; subtask T11b is further reduced, through question Q11b and answers A11b1…A11bm, to subtasks T11b1…T11bm. The solutions S11b1…S11bm of the simplest tasks are composed into solution S11b, and the solutions S11…S1n are composed into the solution S1 of the initial task.]
Test whether President Roosevelt is a viable strategic COG candidate

Which are the critical capabilities that President Roosevelt should have to be a COG candidate? The necessary critical capabilities are: be protected, stay informed, communicate, be influential, be a driving force, have support, and be irreplaceable.

Test whether President Roosevelt has the critical capability to be protected. President Roosevelt has the critical capability to be protected: he is protected by US Service 1943, which has no significant vulnerability.

Test whether President Roosevelt has the critical capability to stay informed. President Roosevelt has the critical capability to stay informed: he receives essential intelligence from intelligence agencies which have no significant vulnerability.

Test whether President Roosevelt has the critical capability to communicate. President Roosevelt has the critical capability to communicate through executive orders, through military orders, and through the Mass Media of US 1943. These communication means have no significant vulnerabilities.

Test whether President Roosevelt has the critical capability to be influential. President Roosevelt has the critical capability to be influential because he is the head of the government of US 1943, the commander in chief of the military of US 1943, and a trusted leader who can use the Mass Media of US 1943. These influence means have no significant vulnerabilities.

Test whether President Roosevelt has the critical capability to have support. President Roosevelt has the critical capability to have support because he is the head of a democratic government with a history of good decisions, a trusted commander in chief of the military, and the people are willing to make sacrifices for the unconditional surrender of European Axis. The means to secure continuous support have no significant vulnerability.

Test whether President Roosevelt has the critical capability to be a driving force. President Roosevelt has the critical capability to be a driving force. The main reason for President Roosevelt to pursue the goal of unconditional surrender of European Axis is "preventing separate peace by the members of the Allied Forces". Also, "the western democratic values" provide President Roosevelt with determination to persevere in this goal. There is no significant vulnerability in the reason and determination.

Test whether President Roosevelt has the critical capability to be irreplaceable. President Roosevelt does not have the critical capability to be irreplaceable. US 1943 would maintain the goal of unconditional surrender of European Axis irrespective of its leader, because "the goal was established and the country was committed to it". There is no significant vulnerability resulting from the replacement of President Roosevelt.

Does President Roosevelt have all the necessary critical capabilities? No. Therefore, President Roosevelt is a strategic COG candidate that can be eliminated.
A complex problem solving task is performed by:
• successively reducing it to simpler tasks;
• finding the solutions of the simplest tasks;
• successively composing these solutions until the solution to the initial task is obtained.
Knowledge Base = Object Ontology + Reduction Rules + Composition Rules
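The three steps above can be sketched as a small recursive solver. The reduction table, elementary solutions, and the simple union used for composition are hypothetical illustrations, not Disciple's actual rules:

```python
# Minimal task-reduction solver sketch: reduce a task to simpler subtasks,
# solve elementary tasks directly, then compose the sub-solutions.
# The reduction table and solutions below are hypothetical examples.

REDUCTIONS = {  # task -> list of simpler subtasks
    "identify COG candidates for Sicily_1943":
        ["identify COG candidates for Anglo_allies_1943"],
    "identify COG candidates for Anglo_allies_1943":
        ["candidates from member relationships", "candidates from member operations"],
}

ELEMENTARY_SOLUTIONS = {  # solutions of the simplest tasks
    "candidates from member relationships": ["Alliance cohesion"],
    "candidates from member operations": ["Allied_forces_operations_Husky"],
}

def solve(task):
    """Recursively reduce a task and compose the subtask solutions."""
    if task in ELEMENTARY_SOLUTIONS:
        return ELEMENTARY_SOLUTIONS[task]
    solutions = []
    for subtask in REDUCTIONS[task]:
        solutions.extend(solve(subtask))  # composition: here, a simple union
    return solutions
```

In Disciple, the reductions are generated by reduction rules matched against the ontology, and composition is likewise rule-driven rather than a fixed union.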
Problem Solving and Learning

EXAMPLE OF REASONING STEP:
We need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Question: Which is a member of Allied_Forces_1943?
Answer: US_1943
Therefore we need to: Identify and test a strategic COG candidate for US_1943

LEARNED RULE (INFORMAL STRUCTURE):
IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1?
Answer: ?O2
THEN: Identify and test a strategic COG candidate for ?O2

LEARNED RULE (FORMAL STRUCTURE):
IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2
Plausible Upper Bound Condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition:
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
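The first step in forming such a rule from the example is replacing the example's specific objects with variables. A hedged sketch of that variabilization step (the names follow the slide; the string-replacement code itself is an illustrative assumption, and generalizing the bound conditions is a separate, ontology-driven step):

```python
# Sketch: forming the rule's IF/THEN pattern by replacing the example's
# specific objects with variables (?O1, ?O2). Illustrative only; Disciple's
# actual rule learning also generalizes the applicability conditions.

def variabilize(example, bindings):
    """Replace each specific object in the example with its variable."""
    pattern = example
    for obj, var in bindings.items():
        pattern = pattern.replace(obj, var)
    return pattern

bindings = {"Allied_Forces_1943": "?O1", "US_1943": "?O2"}
if_task = variabilize(
    "Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943",
    bindings)
then_task = variabilize(
    "Identify and test a strategic COG candidate for US_1943", bindings)
```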
ONTOLOGY FRAGMENT (subclass and instance relations):
force: single member force, multi member force
single member force: single group force, single state force
multi member force: multi group force, multi state force
multi state force: multi state alliance, multi state coalition
multi state alliance: equal partners multi state alliance, dominant partner multi state alliance
Instances: US 1943 is a single state force; Allied Forces 1943 is an equal partners multi state alliance; Allied Forces 1943 has as member US 1943.
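The rule's plausible bound conditions are checked against this ontology: the upper bound is the most general condition under which the rule may apply, the lower bound the most specific one covering the learned example. A hedged sketch (class names follow the slide; the subsumption walk is my simplified assumption about the matching logic):

```python
# Sketch: checking the learned rule's plausible bound conditions against
# the ontology fragment above. The subsumption test walks "is-a" links;
# Disciple's actual matching algorithm is more elaborate.

SUBCLASS_OF = {  # child class -> parent class
    "single_member_force": "force",
    "multi_member_force": "force",
    "single_group_force": "single_member_force",
    "single_state_force": "single_member_force",
    "multi_group_force": "multi_member_force",
    "multi_state_force": "multi_member_force",
    "multi_state_alliance": "multi_state_force",
    "multi_state_coalition": "multi_state_force",
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
}
INSTANCE_OF = {
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "US_1943": "single_state_force",
}
HAS_AS_MEMBER = {"Allied_Forces_1943": ["US_1943"]}

def is_a(instance, concept):
    """True if the instance's class equals or is subsumed by the concept."""
    cls = INSTANCE_OF[instance]
    while cls is not None:
        if cls == concept:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def satisfies_bound(o1, o2, c1, c2):
    """One bound condition: ?O1 is c1, ?O1 has_as_member ?O2, ?O2 is c2."""
    return is_a(o1, c1) and o2 in HAS_AS_MEMBER.get(o1, []) and is_a(o2, c2)

# The learned example (?O1 = Allied_Forces_1943, ?O2 = US_1943)
# satisfies both the upper and the lower bound condition:
upper = satisfies_bound("Allied_Forces_1943", "US_1943", "multi_member_force", "force")
lower = satisfies_bound("Allied_Forces_1943", "US_1943",
                        "equal_partners_multi_state_alliance", "single_state_force")
```

Subsequent examples and counterexamples would move these two bounds toward each other, converging on the rule's exact applicability condition.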
Use of Disciple at the US Army War College
319jw Case Studies in Center of Gravity Analysis

Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis. Disciple then helps the students to perform a center of gravity analysis of an assigned war scenario, using its knowledge base for problem solving.

Global evaluations of Disciple by officers from the Spring 03 course (rated from Strongly Disagree to Strongly Agree):
• The use of Disciple is an assignment that is well suited to the course's learning objectives.
• Disciple should be used in future versions of this course.
• Disciple helped me to learn to perform a strategic COG analysis of a scenario.
Use of Disciple at the US Army War College
589jw Military Applications of Artificial Intelligence course

Students teach Disciple their COG analysis expertise, using sample scenarios (Iraq 2003, War on Terror 2003, Arab-Israeli 1973). Students then test the trained Disciple agent on a new scenario (North Korea 2003).

Global evaluations of Disciple by officers during three experiments (rated from Strongly Disagree to Strongly Agree): "I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer."
• Spring 2001: COG identification
• Spring 2002: COG identification and testing
• Spring 2003: COG testing based on critical capabilities
Knowledge base development process:
1. Domain analysis and ontology development (Knowledge Engineer (KE) + all subject matter experts (SME)). Initial KB: 432 concepts and features, 29 tasks, 18 rules, for COG identification for leaders; 37 acquired concepts and features for COG testing.
2. Parallel KB development (SME assisted by KE): each team extended the KB for testing one critical capability (be protected, stay informed, communicate, be influential, have support, be a driving force, be irreplaceable).
3. KB merging (KE): integrated KB for COG identification and testing (leaders).
Parallel development and merging of knowledge bases (DISCIPLE-COG)

Training scenarios: Iraq 2003, War on Terror 2003, Arab-Israeli 1973.
Features, tasks, and rules learned by each team:
Team 1: 5 features, 10 tasks, 10 rules
Team 2: 14 tasks, 14 rules
Team 3: 2 features, 19 tasks, 19 rules
Team 4: 35 tasks, 33 rules
Team 5: 3 features, 24 tasks, 23 rules
KB merging: unified 2 features, deleted 4 rules, refined 12 rules.
Final KB: +9 features (478 concepts and features), +105 tasks (134 tasks), +95 rules (113 rules).
Testing scenario: North Korea 2003. Correctness = 98.15%.
5h 28min average training time per team; 3.53 average rule learning rate per team.
Knowledge Acquisition for agent development
Approaches to knowledge acquisition
Research vision on agent development
Demo: Agent teaching and learning
Disciple approach to agent development
Demonstration
Disciple Demo
Teaching Disciple how to determine whether a strategic leader has the critical capability to be protected.
Knowledge Acquisition for agent development
Approaches to knowledge acquisition
Research vision on agent development
Demo: Agent teaching and learning
Disciple approach to agent development
Vision on the future of software development

Mainframe Computers: software systems developed and used by computer experts.
Personal Computers: software systems developed by computer experts and used by persons that are not computer experts.
Learning Agents: software systems developed and used by persons that are not computer experts.
[Diagram: DISCIPLE architecture, with Interface, Problem Solving, and Learning modules over a knowledge base of Ontology + Rules]
Overview
Artificial Intelligence and intelligent agents
Overview of the course
Class introduction and course’s objectives
Knowledge acquisition for agent development
Domain for hands-on experience
Overview of the course
Mixed-initiative knowledge acquisition. Overview of the Disciple approach.
Problem solving through task reduction. Modeling the reasoning of subject matter experts.
Overview of knowledge engineering and of the manual knowledge acquisition methods.
Ontology design and development.
Agent teaching and multistrategy learning.
Mixed-initiative problem solving and knowledge base refinement.
Knowledge bases integration.
Discussion of frontier research problems.
Development of an assistant for choosing a Ph.D. Dissertation Advisor.
Scripts development for scenario elicitation.
Additional recommended reading
G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 1-12.