Inductive Learning of Rules


Upload: erol

Post on 23-Feb-2016


TRANSCRIPT

Page 1: Inductive Learning of Rules

Inductive Learning of Rules

Mushroom Edible?

    Spores  Spots  Color  Edible?
    Y       N      Brown  N
    Y       Y      Grey   Y
    N       Y      Black  Y
    N       N      Brown  N
    Y       N      White  N
    Y       Y      Brown  Y
    Y       N      Brown  ?
    N       N      Red    ?

Don't try this at home...
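One rule consistent with the labeled rows above is "edible iff it has spots"; a minimal check in Python (the rule itself is an inference from the table, not stated on the slide, and the two unlabeled rows are left out):

```python
# The mushroom table from the slide, as (spores, spots, color, edible) tuples.
# The last two rows have no edibility label on the slide and are omitted here.
examples = [
    ("Y", "N", "Brown", "N"),
    ("Y", "Y", "Grey",  "Y"),
    ("N", "Y", "Black", "Y"),
    ("N", "N", "Brown", "N"),
    ("Y", "N", "White", "N"),
    ("Y", "Y", "Brown", "Y"),
]

def rule(spores, spots, color):
    """Candidate rule induced from the data: edible iff it has spots."""
    return "Y" if spots == "Y" else "N"

# The rule agrees with every labeled example.
assert all(rule(sp, st, c) == e for sp, st, c, e in examples)
print("rule 'edible iff spots' fits all", len(examples), "labeled examples")
```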

Page 2: Inductive Learning of Rules

Types of Learning

What is learning?
- Improved performance over time/experience
- Increased knowledge

Speedup learning:
- No change to the set of theoretically inferable facts
- Change to the speed with which the agent can infer them

Inductive learning:
- More facts can be inferred

Page 3: Inductive Learning of Rules

Mature Technology: Many Applications
- Detect fraudulent credit card transactions
- Information filtering systems that learn user preferences
- Autonomous vehicles that drive public highways (ALVINN)
- Decision trees for diagnosing heart attacks
- Speech synthesis (correct pronunciation) (NETtalk)
- Data mining: huge datasets, scaling issues

Page 4: Inductive Learning of Rules

Defining a Learning Problem

Experience:
Task:
Performance Measure:

A program is said to learn from experience E with respect to task T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

Page 5: Inductive Learning of Rules

Example: Checkers

Task T: Playing checkers
Performance Measure P: Percent of games won against opponents
Experience E: Playing practice games against itself

Page 6: Inductive Learning of Rules

Example: Handwriting Recognition

Task T: Recognizing and classifying handwritten words within images
Performance Measure P:
Experience E:

Page 7: Inductive Learning of Rules

Example: Robot Driving

Task T: Driving on a public four-lane highway using vision sensors
Performance Measure P:
Experience E:

Page 8: Inductive Learning of Rules

Example: Speech Recognition

Task T: Identification of a word sequence from audio recorded from arbitrary speakers ... noise
Performance Measure P:
Experience E:

Page 9: Inductive Learning of Rules

Issues
- What feedback (experience) is available?
- What kind of knowledge is being increased?
- How is that knowledge represented?
- What prior information is available?
- What is the right learning algorithm?
- How to avoid overfitting?

Page 10: Inductive Learning of Rules

Choosing the Training Experience

Credit assignment problem:
- Direct training examples: e.g. individual checker boards + the correct move for each
- Indirect training examples: e.g. a complete sequence of moves and the final result

Which examples: random, teacher chooses, learner chooses.

Supervised learning
Reinforcement learning
Unsupervised learning

Page 11: Inductive Learning of Rules

Choosing the Target Function

What type of knowledge will be learned?
How will the knowledge be used by the performance program?

E.g. checkers program:
- Assume it knows the legal moves
- Needs to choose the best move
- So learn the function F: Boards -> Moves (hard to learn)
- Alternative: F: Boards -> R

Page 12: Inductive Learning of Rules

The Ideal Evaluation Function

V(b) = 100 if b is a final, won board
V(b) = -100 if b is a final, lost board
V(b) = 0 if b is a final, drawn board
Otherwise, if b is not final:
V(b) = V(s), where s is the best reachable final board

Nonoperational...
Want an operational approximation V̂ of V.

Page 13: Inductive Learning of Rules

How to Represent the Target Function

x1 = number of black pieces on the board
x2 = number of red pieces on the board
x3 = number of black kings on the board
x4 = number of red kings on the board
x5 = number of black pieces threatened by red
x6 = number of red pieces threatened by black

V(b) = a + b·x1 + c·x2 + d·x3 + e·x4 + f·x5 + g·x6

Now just need to learn 7 numbers!
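A minimal sketch of this linear evaluation function in Python; the weights are illustrative placeholders, not learned values:

```python
# Coefficients (a, b, c, d, e, f, g) from the slide's linear form.
# These particular numbers are made up for illustration, not learned.
WEIGHTS = [0.0, 1.0, -1.0, 1.5, -1.5, -0.5, 0.5]

def v_hat(x, w=WEIGHTS):
    """V(b) = a + b*x1 + c*x2 + d*x3 + e*x4 + f*x5 + g*x6,
    where x = (x1, ..., x6) are the six board-feature counts."""
    a, rest = w[0], w[1:]
    return a + sum(wi * xi for wi, xi in zip(rest, x))

# Opening position: 12 black pieces, 12 red pieces, no kings, no threats.
print(v_hat((12, 12, 0, 0, 0, 0)))  # 1.0*12 - 1.0*12 = 0.0
```

Learning then amounts to adjusting the 7 numbers in `WEIGHTS` from experience.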

Page 14: Inductive Learning of Rules

Target Function

Profound formulation: any type of inductive learning can be expressed as approximating a function.
- E.g., checkers: V: boards -> evaluation
- E.g., handwriting recognition: V: image -> word
- E.g., mushrooms: V: mushroom-attributes -> {E, P}

Inductive bias

Page 15: Inductive Learning of Rules

Theory of Inductive Learning

Error(f) = Σ_{x ∈ D} Pr(x)
Page 16: Inductive Learning of Rules

Theory of Inductive Learning

Suppose our examples are drawn with a probability distribution Pr(x), and that we learned a hypothesis f to describe a concept C.

We can define Error(f) to be:

    Error(f) = Σ_{x ∈ D} Pr(x)

where D is the set of all examples on which f and C disagree.
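A quick sketch of this definition on a toy finite domain, where the sum can be computed exactly (the distribution and the concepts below are made up for illustration):

```python
# Toy 3-element domain with an explicit distribution Pr(x).
pr = {"a": 0.5, "b": 0.3, "c": 0.2}

concept    = {"a": 1, "b": 1, "c": 0}   # the true concept C
hypothesis = {"a": 1, "b": 0, "c": 0}   # the learned hypothesis f

def error(f, c, pr):
    """Error(f) = sum of Pr(x) over the disagreement set D = {x : f(x) != c(x)}."""
    return sum(p for x, p in pr.items() if f[x] != c[x])

print(error(hypothesis, concept, pr))  # f and C disagree only on "b" -> 0.3
```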

Page 17: Inductive Learning of Rules

PAC Learning

We're not perfect (in more than one way). So why should our programs be perfect?

What we want is: Error(f) < ε for some chosen ε.
But sometimes, we're completely clueless (hopefully, with low probability δ).
What we really want is: Prob(Error(f) < ε) > 1 - δ.

As the number of examples grows, ε and δ should decrease.

We call this Probably Approximately Correct.

Page 18: Inductive Learning of Rules

Definition of PAC Learnability

Let C be a class of concepts.
We say that C is PAC learnable by a hypothesis space H if there is a polynomial-time algorithm A and a polynomial function p such that, for every c in C, every probability distribution Pr, and every ε and δ, if A is given at least p(1/ε, 1/δ) examples, then A returns, with probability 1 - δ, a hypothesis whose error is less than ε.

k-DNF and k-CNF are PAC learnable.
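The slide does not give a concrete polynomial p. One standard choice, for a consistent learner over a finite hypothesis space H, is m ≥ (1/ε)(ln|H| + ln(1/δ)); a small sketch (the hypothesis-space size below is an illustrative assumption):

```python
import math

def sample_bound(h_size, eps, delta):
    """Standard sample-complexity bound for a consistent learner over a
    finite hypothesis space H: m >= (1/eps) * (ln|H| + ln(1/delta))
    examples suffice for error < eps with probability >= 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)

# Illustrative: |H| = 3**10 conjunctive concepts over 10 three-valued attributes.
print(sample_bound(3 ** 10, eps=0.1, delta=0.05))  # -> 140
```

Note how the bound grows only polynomially in 1/ε and 1/δ, and logarithmically in |H|, which is what the PAC definition requires.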

Page 19: Inductive Learning of Rules

Version Spaces: A Learning Algorithm

Key idea: maintain the most specific and most general hypotheses at every point, and update them as examples come in.

We describe objects in the space by attributes:
- faculty, staff, student
- 20's, 30's, 40's
- male, female

Concepts are boolean combinations of attribute-values, e.g. faculty, 30's, male, female, 20's.

Page 20: Inductive Learning of Rules

Generalization and Specialization

A concept C1 is more general than C2 if it describes a superset of the objects: C1 = {20's, faculty} is more general than C2 = {20's, faculty, female}. C2 is a specialization of C1.

Immediate specializations (generalizations).

The version space algorithm maintains the most specific and most general boundaries at every point of the learning.
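As a minimal sketch of the specific-boundary update only (the Find-S half of the algorithm), assuming conjunctive concepts over the slide's attributes, with "?" meaning "any value":

```python
# Find-S style update of the most specific hypothesis S on positive examples.
# "?" matches any value; None marks the initial, maximally specific hypothesis.

def generalize(h, example):
    """Minimally generalize hypothesis h so that it covers a positive example."""
    if h is None:                       # first positive example: adopt it as-is
        return list(example)
    return [hi if hi == xi else "?" for hi, xi in zip(h, example)]

# Positive examples as (role, age, sex) tuples over the slide's attributes.
positives = [
    ("faculty", "30's", "male"),
    ("faculty", "30's", "female"),
]

s = None
for x in positives:
    s = generalize(s, x)

print(s)  # ['faculty', "30's", '?']: most specific hypothesis covering both
```

The full algorithm also maintains the general boundary G, specializing it on negative examples; the two boundaries converge as examples arrive.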

Page 21: Inductive Learning of Rules

Example

Generalization lattice over these attributes (most general at the top):

    T
    male | female | faculty | student | 20's | 30's
    male,fac | male,stud | female,fac | female,stud | fac,20's | fac,30's
    male,fac,20 | male,fac,30 | fem,fac,20 | male,stud,30