
Chapter 2: Intelligent Agents

Prepared by: Dr. Ziad Kobti

©2013 by Dr.Ziad Kobti - May not be reproduced without permission. 1

Agents and Environments

• Rational agents are central to approaching artificial intelligence.

• AGENT – anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

– Agent function: agent’s behaviour; maps any given percept sequence to an action.
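The agent function can be made concrete as a table-driven program: a lookup from complete percept sequences to actions. The table below is a hypothetical toy vacuum-world example, purely for illustration:

```python
# A minimal, hypothetical sketch of an agent function as a table-driven
# lookup from percept sequences to actions.

def make_table_driven_agent(table):
    """Return an agent program that appends each new percept to the
    sequence seen so far and looks up the corresponding action."""
    percepts = []

    def program(percept):
        percepts.append(percept)
        # The agent function maps the *entire* percept sequence to an action.
        return table.get(tuple(percepts), "NoOp")

    return program

# Illustrative table for a two-location vacuum world (assumed, not from the slides).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept sequence, which is why the later agent programs replace it with more compact mechanisms.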


Agents and Environments

[Figure: agent-environment interaction diagram]

Good behaviour: The concept of Rationality

• Rationality depends on:
– The performance measure that defines the criterion of success
– The agent’s prior knowledge of the environment
– The actions that the agent can perform
– The agent’s percept sequence to date


Rational Agent

• Definition:
– For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.


Omniscience, learning, and autonomy

• Omniscient agent: knows the actual outcome of its actions and can act accordingly; impossible in reality!

• Rationality is not the same as perfection!

• Rationality maximizes expected performance, while perfection maximizes actual performance.

• Information gathering is an important part of rationality, e.g. exploration.

• Learning is also required of rational agents: as the environment changes, the agent can learn from it and adapt.

• Autonomy is an essential concept in rational agents: the agent should rely more on its own percepts than on its designer’s prior knowledge.


Specifying the task environment

• PEAS: Performance measure, Environment, Actuators (control), Sensors.

• E.g. Google driverless car (see video) – Can you identify the PEAS?
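A PEAS description is just a structured checklist, so it can be written down as a simple record. The entries below for the driverless car are illustrative guesses, not taken from the slides:

```python
# A PEAS task-environment description as a plain record.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# Rough, illustrative PEAS entries for a driverless car (assumed values).
car = PEAS(
    performance_measure=["safety", "legality", "trip time", "comfort"],
    environment=["roads", "other traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "signals"],
    sensors=["cameras", "GPS", "lidar", "speedometer"],
)
print(car.actuators[0])  # steering
```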


Specifying the task environment

[Table: PEAS descriptions for example task environments]

Properties of task environments

• Fully observable vs. partially observable
– A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance depends on the performance measure.

• Single agent vs. multiagent
– Competitive
– Cooperative
– Emergence of communication


Properties of task environments

• Deterministic vs. stochastic
– There is no uncertainty in a deterministic environment.
– Most real situations are so complex that it is impossible to keep track of all the unobserved aspects; they are treated as stochastic.
– An environment is uncertain if it is not fully observable or not deterministic.
– Stochastic implies uncertainty quantified by probabilities.
– A nondeterministic environment is one in which actions are characterized by their possible outcomes, but no probabilities are attached to them.


Properties of task environments

• Episodic vs. sequential
– An episodic task environment is one where the agent’s experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. (Simpler!)
– Sequential environment: the current decision could affect all future decisions.


Properties of task environments

• Discrete vs. continuous
– Applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

• Known vs. unknown
– Refers to the agent’s state of knowledge about the ‘laws of physics’ of the environment.
– E.g. the outcome of an action.


Examples of task environments and their characteristics

[Table: examples of task environments and their characteristics]

The Structure of Agents

• Agent behaviour = the action that is performed after any given sequence of percepts

• AI’s role is to design an agent program that implements the agent function; i.e. the mapping from percepts to actions.

Agent = architecture + program

(architecture: some sort of computing device, with physical sensors and actuators, on which the program runs)


Agent Program

• Four basic kinds of agent programs embody the principles underlying almost all intelligent systems:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents


Simple-Reflex-Agent

[Figure: schematic diagram of a simple reflex agent]

Simple-Reflex-Agent

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

The SR agent acts according to a rule whose condition matches the current state, as defined by the percept.
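As a sketch, the pseudocode above can be rendered directly in Python. The percept format and the rule table are assumptions for illustration, not part of the slides:

```python
# A hypothetical Python rendering of SIMPLE-REFLEX-AGENT.
# `rules` maps an interpreted state directly to an action.

def interpret_input(percept):
    # Assumed percept format: (location, status); the state relevant
    # to the rules here is just the status.
    location, status = percept
    return status

def simple_reflex_agent(rules):
    def program(percept):
        state = interpret_input(percept)   # INTERPRET-INPUT
        action = rules[state]              # RULE-MATCH + rule.ACTION
        return action
    return program

rules = {"Dirty": "Suck", "Clean": "Right"}
agent = simple_reflex_agent(rules)
print(agent(("A", "Dirty")))  # Suck
```

Note the agent keeps no state between calls: its choice depends only on the current percept.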


Model-based reflex agent

[Figure: schematic diagram of a model-based reflex agent]

Model-based reflex agent

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition-action rules
              action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

The MBR agent keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the simple reflex agent.
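A minimal Python sketch of the same idea follows; the toy `update_state` model is an assumption for illustration only:

```python
# A hypothetical sketch of MODEL-BASED-REFLEX-AGENT: the agent keeps an
# internal state, updated from the previous action, the new percept, and
# a transition model, then matches rules exactly as the reflex agent does.

def model_based_reflex_agent(rules, update_state):
    state, action = None, None

    def program(percept):
        nonlocal state, action
        state = update_state(state, action, percept)  # UPDATE-STATE with internal model
        action = rules[state]                         # RULE-MATCH + rule.ACTION
        return action
    return program

# Toy model (assumed): the tracked state is simply the latest observed status.
def update_state(state, action, percept):
    _, status = percept
    return status

agent = model_based_reflex_agent({"Dirty": "Suck", "Clean": "Right"}, update_state)
print(agent(("A", "Clean")))  # Right
```

Unlike the simple reflex agent, `state` and `action` persist between calls, so a richer `update_state` could infer unobserved aspects of the world.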


Goal-based agents

[Figure: schematic diagram of a goal-based agent]

Goal-based agents

• The agent needs goal information that describes situations that are desirable, e.g. a destination.

• The search and planning subfields of AI are devoted to finding action sequences that achieve the agent’s goals.
– Discussed in detail in future chapters…
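A bare-bones sketch of the goal-based idea: instead of condition-action rules, the agent predicts the result of each action and picks one that satisfies the goal. The one-step lookahead and the number-line example are assumptions for illustration (real goal-based agents search over action sequences):

```python
# Hypothetical one-step goal-based agent: choose an action whose
# predicted result satisfies the goal test.

def goal_based_agent(goal_test, result, actions):
    def program(state):
        for action in actions:
            if goal_test(result(state, action)):  # predict, then test against goal
                return action
        return "NoOp"  # no single action reaches the goal
    return program

# Toy example: reach position 3 on a number line by moving +1 or -1.
agent = goal_based_agent(
    goal_test=lambda s: s == 3,
    result=lambda s, a: s + a,
    actions=[+1, -1],
)
print(agent(2))  # 1
print(agent(0))  # NoOp
```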


Utility-based agents

[Figure: schematic diagram of a utility-based agent]

Learning agents

[Figure: schematic diagram of a learning agent]

How the components of agent programs work

[Figure: how the components of agent programs work]

Rational Agent: Example SR

• Design a wall-following agent
– Sensors: can sense the single patch directly in front of it
– If the patch is blocked, the agent turns left; otherwise it continues forward in its current direction.
– See the NetLogo example
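The rule above can be sketched as a tiny grid-world simulation (in Python rather than NetLogo; the grid, coordinates, and direction encoding are assumptions for illustration):

```python
# A rough grid-world sketch of the wall-following reflex rule:
# if the patch ahead is blocked, turn left; otherwise move forward.
# '#' cells are walls; a left turn cycles N -> W -> S -> E -> N.

DIRS = {"N": (0, -1), "W": (-1, 0), "S": (0, 1), "E": (1, 0)}
LEFT = {"N": "W", "W": "S", "S": "E", "E": "N"}

def step(grid, x, y, heading):
    dx, dy = DIRS[heading]
    nx, ny = x + dx, y + dy
    blocked = (ny < 0 or ny >= len(grid) or nx < 0 or nx >= len(grid[0])
               or grid[ny][nx] == "#")
    if blocked:
        return x, y, LEFT[heading]   # blocked ahead: turn left in place
    return nx, ny, heading           # clear ahead: continue forward

grid = ["....",
        ".##.",
        "...."]
x, y, heading = 0, 0, "N"  # top-left corner, facing north (off-grid counts as blocked)
x, y, heading = step(grid, x, y, heading)
print(x, y, heading)  # 0 0 W
```

Note the agent is a pure simple reflex agent: each step depends only on what it senses in the single patch ahead.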

