CS 385 Fall 2006, Chapter 7: Knowledge Representation (7.1.1, 7.1.5, 7.2)

Page 1:

CS 385 Fall 2006, Chapter 7

Knowledge Representation

7.1.1, 7.1.5, 7.2

Page 2:

Knowledge Representation

What do we have to represent what we know?
– predicate calculus

– production systems

What other representations do we have?
– mathematical objects (functions, matrices, ...)

– data structures (CS 132)

– use-case diagrams (CS 310)

– ER diagrams (CS 325)

What more do we need?

Distinction between
– the representational scheme (predicate calculus)

– the medium it is implemented in (PROLOG)

Page 3:

Categories of Representational Schemes:

Logical:
– predicate calculus, with PROLOG to implement it
– there are other logics

Procedural:
– production systems
– rule-based systems (expert systems, later)

Network:
– nodes are objects, arcs are relations (a map)

Structured networks:
– extensions to networks where each node is more complex

This chapter: the last two.

Page 4:

Logic versus Association

Logical representations:
– predicate calculus loses important real-world knowledge:
– person(X) → breathes(X) misses a lot of meaning
– OK for checking grammar, but not for interpreting a sentence

Associations (semantic net):
– the meaning of an object needs to be defined in terms of a network of associations with other objects: lungs, oxygen, ...
– this alternative to logic is used by psychologists and linguists to characterize the nature of human understanding
– an object is defined in terms of its associations with other objects
– inheritance hierarchy (canary isa bird; associate properties with each node).
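
In PROLOG, the implementation medium named earlier, such a network of associations can be written down directly; a minimal sketch, assuming the illustrative predicate names isa/2, property/2, and has_property/2 (not from the slides):

    % Hypothetical encoding of a small semantic net in the Collins and Quillian style.
    isa(canary, bird).
    isa(bird, animal).

    property(canary, sings).
    property(bird, flies).
    property(animal, breathes).

    % An object carries its own properties and inherits those of everything it "isa".
    has_property(X, P) :- property(X, P).
    has_property(X, P) :- isa(X, Y), has_property(Y, P).

The query ?- has_property(canary, breathes). succeeds by following canary isa bird isa animal, mirroring the association chains in Figure 6.1.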

Page 5:

Figure 6.1: Semantic network developed by Collins and Quillian in their research on human information storage and response times (Harmon and King 1985).

Page 6:

Figure 6.2: Network representation of properties of snow and ice.

Page 7:

7.1.5 Frames

Networks allow for some inheritance, but not effective aggregation

Networks can get big and messy

Frames: an object has named slots with
– values

– procedures

– links to other frames

Slots
– filled in as information becomes available

– loosely correspond to relations in a conceptual graph

Advantage: it is clearer that the main object is a dog

Easier to indicate hierarchies via inheritance (from animal)

Accessories point to collar and bowl frames

Procedural info can be attached

e.g. how to calculate default values such as 4 legs.

e.g. deductions based on marital status, number of children,...

dog
  superclass: animal
  covering: fur
  legs: 4
  location:
  accessories: collar, bowl
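
A rough PROLOG sketch of this frame and its inheritance from animal; the slot/3 and frame_value/3 names and the animal facts are assumptions for illustration, not part of the slides:

    % Hypothetical slot/3 facts: slot(Frame, SlotName, Value).
    slot(dog, superclass, animal).
    slot(dog, covering, fur).
    slot(dog, legs, 4).
    slot(dog, accessories, [collar, bowl]).   % pointers to the collar and bowl frames
    slot(animal, breathes, yes).              % assumed slot on the animal frame

    % A slot value is either stored locally or inherited from the superclass chain;
    % a locally filled slot overrides the inherited default.
    frame_value(Frame, Slot, Value) :-
        slot(Frame, Slot, Value).
    frame_value(Frame, Slot, Value) :-
        \+ slot(Frame, Slot, _),
        slot(Frame, superclass, Super),
        frame_value(Super, Slot, Value).

?- frame_value(dog, breathes, X). answers X = yes by inheritance from animal, while legs stays 4 from the dog frame itself.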

Page 8:

Figure 6.12: Part of a frame description of a hotel room. “Specialization” indicates a pointer to a superclass.

Page 9:

7.2 Conceptual Graphs

A particular representation for semantic nets:
– finite, connected, bipartite graphs (two kinds of nodes, with edges only running between the two sets)
– nodes are concepts (boxes) or conceptual relations (ellipses)
– each graph represents a single proposition
• a bird flies
• a dog has color brown
• a child has a parent that is its mother and a parent that is its father
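
One plausible way to store such a bipartite graph in PROLOG keeps concept nodes and relation nodes as separate kinds of facts; the concept/2, relation/3, and edge/2 names and the node identifiers c1, r1, ... are invented for illustration:

    :- use_module(library(lists)).   % for member/2

    % "a dog has color brown": two concept boxes joined by a (color) relation ellipse.
    concept(c1, dog).
    concept(c2, brown).
    relation(r1, color, [c1, c2]).

    % "a bird flies": one concept box attached to a monadic relation.
    concept(c3, bird).
    relation(r2, flies, [c3]).

    % Every edge runs between a relation node and a concept node, so the graph is
    % bipartite by construction.
    edge(R, C) :- relation(R, _, Args), member(C, Args).

?- edge(r1, C). enumerates the two concept nodes attached to the color relation.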

Page 10:

Labeling Concept Nodes

Concept node (box): can be labeled with type and referent.

Referent can be nothing, a name, an individual marker, or a variable:

• dog (no referent, generic): a dog barks
• dog:fido (name): fido barks
• dog:#1232 (individual marker): a particular (unnamed) dog barks and bites
• dog:*X (variable): a dog bit its owner

(In the slide's drawings each of these concept boxes is linked to a (barks) or (bites) relation; in the last graph, [dog: *X] is the agent of [bites] with a [person] as its object, and that same person is the agent of [owns] with [dog: *X] as its object.)
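
If these labels were carried into a PROLOG encoding, the four referent forms could be told apart with small wrapper terms; the concept/2 shape and the generic, name, marker, and var wrappers are made-up names, not from the slides:

    % Hypothetical concept-node terms: concept(Type, Referent).
    example(concept(dog, generic)).        % dog        - some unspecified dog
    example(concept(dog, name(fido))).     % dog:fido   - the dog named fido
    example(concept(dog, marker(1232))).   % dog:#1232  - a particular but unnamed dog
    example(concept(dog, var('X'))).       % dog:*X     - a dog referred to elsewhere as *X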

Page 11:

Type Hierarchy

Lattice – a partial ordering on a set of types.

Equivalence relation:
– reflexive: a = a
– symmetric: a = b → b = a
– transitive: a = b and b = c → a = c

Order (total order):
– reflexive, antisymmetric, and transitive
– all elements are comparable

Partial order:
– same properties, but only some elements need be comparable
– how would you relate human, student, parent, math student, cs student, anna, zahra, dog, fido?

Page 12:

(Lattice from the slide: ┬, the universal type, at the top and ┴, the absurd type, at the bottom; human and dog sit below ┬; student and parent below human; cs student below student; the individuals anna, zahra, and fido sit at the bottom level, fido under dog. Moving up the lattice is generalization, moving down is specialization.)
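
A sketch of this lattice as PROLOG facts plus a reflexive, transitive subtype ordering; the subtype/2 and subtype_of/2 names, and the placement of anna and zahra under cs student, are assumptions for illustration:

    % Direct links in the type lattice: subtype(Child, Parent).
    subtype(human, top).          subtype(dog, top).
    subtype(student, human).      subtype(parent, human).
    subtype(cs_student, student).
    subtype(anna, cs_student).    subtype(zahra, cs_student).   % assumed placement
    subtype(fido, dog).

    % The partial order itself: reflexive and transitive closure of the direct links.
    subtype_of(T, T).
    subtype_of(Sub, Super) :- subtype(Sub, Mid), subtype_of(Mid, Super).

?- subtype_of(anna, human). succeeds by walking up the lattice, while ?- subtype_of(anna, dog). fails: anna and fido share no supertype below the universal type, which is what makes the ordering partial rather than total.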

Page 13:

Operations to Create New Graphs

copy: an exact copy

restrict: a concept node is replaced by a node representing its specialization

dog is a specialized animal

Note we lost information about dog

join: combines the two graphs with the substitutions.

May have redundant info

simplify: removes it

Example (drawn on the slide):
– original graphs: [animal]→(eats) and [dog]→(barks)
– after restrict: [dog]→(eats)
– after join and simplify: one graph with [dog] linked to both (barks) and (eats)
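
A toy PROLOG sketch of restrict, join, and simplify over this example, representing a graph as a list of rel(Relation, ConceptType) pairs; this flattened monadic encoding and the predicate names are simplifying assumptions, not the chapter's definitions:

    :- use_module(library(lists)).   % for append/3

    % restrict(+Graph0, +GeneralType, +SpecialType, -Graph):
    % replace a concept type by its specialization, e.g. animal by dog.
    restrict([], _, _, []).
    restrict([rel(R, Gen)|T], Gen, Spec, [rel(R, Spec)|T2]) :-
        !,
        restrict(T, Gen, Spec, T2).
    restrict([E|T], Gen, Spec, [E|T2]) :-
        restrict(T, Gen, Spec, T2).

    % join simply pools the two graphs; simplify removes the redundant copies
    % that the join may introduce (sort/2 drops duplicates).
    join(G1, G2, G) :- append(G1, G2, G).
    simplify(G, Simplified) :- sort(G, Simplified).

?- restrict([rel(eats, animal)], animal, dog, G1), join(G1, [rel(barks, dog), rel(eats, dog)], G2), simplify(G2, G). yields G = [rel(barks, dog), rel(eats, dog)], matching the final graph above.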

Page 14:

• loses the bone

• what if emma is a cat?

• what if the two dogs are not the same?

These are not sound inference rules, but often good enough for plausible, common sense reasoning.

No guarantee the results are true.

Page 15:

7.2.5 Propositional Nodes

Defining relations between propositions:

Example ("mary knows that fido barks"): [person: mary] is the experiencer of [know], whose object is a [proposition] node containing the graph [dog: fido]→(barks).
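
In PROLOG, this nesting could be mimicked by letting the proposition concept carry a whole embedded statement as a term; every name below (k1, m1, p1, experiencer/2, object/2) is an illustrative assumption:

    % "mary knows that fido barks": the object of the know concept is a
    % proposition node whose content is itself a small graph, held here as a term.
    concept(k1, know).
    concept(m1, person(mary)).
    concept(p1, proposition(barks(fido))).

    experiencer(k1, m1).   % mary is the experiencer of the knowing
    object(k1, p1).        % what is known is the nested proposition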

Page 16:

7.2.6 Conceptual Graphs and Logic

Rules:

1. Assign a variable to each generic concept: X ↔ dog

2. Assign a name to each individual concept: emma

3. Each concept node becomes a unary predicate whose name is the concept type, applied to the variable or name from steps 1 and 2: dog(X), dog(emma)

4. Each n-ary conceptual relation (barks, bites) is an n-ary predicate whose name is the same as the relation and arguments correspond to the concept nodes linked to the relation

5. All variables are existentially quantified

[dog]→(barks), [dog]→(bites):
∃X (dog(X) ∧ barks(X) ∧ bites(X))

[dog: emma]→(barks), [dog: emma]→(bites):
dog(emma) ∧ barks(emma) ∧ bites(emma)
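
Carried into PROLOG, the individual-concept version becomes plain facts and the generic version becomes a query; a small sketch:

    % dog(emma) ∧ barks(emma) ∧ bites(emma) as facts:
    dog(emma).
    barks(emma).
    bites(emma).

    % ∃X (dog(X) ∧ barks(X) ∧ bites(X)) corresponds to the query
    %   ?- dog(X), barks(X), bites(X).
    % which succeeds with X = emma.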

Page 17:

7.2.6 Conceptual Graphs and Logic

First graph ("a dog bites a person"): [dog]→(agent)→[bites]→(object)→[person]

Second graph ("a dog bit its owner"): [dog: *X]→(agent1)→[bites]→(object1)→[person], and [person]→(agent2)→[owns]→(object2)→[dog: *X]

X ↔ dog, Y ↔ bites, Z ↔ person

Following the algorithm, the first graph becomes:
∃X, Y, Z (dog(X) ∧ agent(X, Y) ∧ bites(Y) ∧ object(Y, Z) ∧ person(Z))

easier, but not following the algorithm:
∃X, Y (dog(X) ∧ bites(X, Y) ∧ person(Y))

The second graph becomes:
∃X, Y, Z, W (dog(X) ∧ agent1(X, Y) ∧ bites(Y) ∧ object1(Y, Z) ∧ person(Z) ∧ agent2(Z, W) ∧ owns(W) ∧ object2(W, X))

or, more simply:
∃X, Y (dog(X) ∧ person(Y) ∧ owns(Y, X) ∧ bites(X, Y))
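
The simpler, non-algorithmic form maps directly onto PROLOG facts and a conjunctive query; the individual names rex and kim are invented for the example:

    % "a dog bit its owner" in the flattened two-place form:
    dog(rex).
    person(kim).
    owns(kim, rex).
    bites(rex, kim).

    % A dog that bit its owner is then found with
    %   ?- dog(D), person(P), owns(P, D), bites(D, P).
    % which succeeds with D = rex, P = kim.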

Page 18:

7.2.6 Conceptual Graphs and Logic

negation:

Graph 1: a [proposition] node containing [dog: fido]→(barks), wrapped in the (neg) relation

Graph 2: a [proposition] node containing [dog]→(barks), wrapped in (neg)

fido does not bark:
¬(dog(fido) ∧ bark(fido))
equivalently, dog(fido) → ¬bark(fido), and bark(fido) → ¬dog(fido)

dogs do not bark:
∀X ¬(dog(X) ∧ bark(X))
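
A PROLOG implementation would normally approximate this with negation as failure (\+), which is weaker than the classical ¬ above; a minimal sketch under that assumption:

    :- dynamic barks/1.   % declare barks/1 so the predicate exists even with no facts

    dog(fido).

    % Negation as failure: this succeeds because barks(fido) cannot be proved,
    % not because ¬bark(fido) has been established.
    does_not_bark(X) :- dog(X), \+ barks(X).

?- does_not_bark(fido). succeeds only as long as no barks(fido) fact is present; asserting one later silently flips the answer, which is exactly the gap between \+ and the classical negation in the formulas above.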