Chapter 7 Operant Conditioning: Schedules and Theories of Reinforcement

Upload: marybeth-richard

Posted 23-Dec-2015

Page 1: Chapter 7 Operant Conditioning: Schedules and Theories Of Reinforcement

Chapter 7 Operant Conditioning: Schedules and Theories of Reinforcement

Page 2:

Now that we have discussed reinforcement . . . .

It is time to discuss just HOW reinforcers can and should be delivered

In other words, there are other things to consider than just WHAT the reinforcer should be!

Page 3:

Think about this!

If you were going to reinforce your puppy for going to the bathroom outside, how would you do it? Would you give him a Liv-a-Snap every time? Some of the time? Would you keep doing it the same way, or would you change your method as you go along?

Page 4:

What is a schedule of reinforcement?

A schedule of reinforcement is the response requirement that must be met in order to obtain reinforcement. In other words, it is what you have to do to get the goodies!

Page 5:

Continuous vs. Intermittent Reinforcement

Continuous: A continuous reinforcement schedule (CRF) is one in which each specified response is reinforced

Intermittent: An intermittent reinforcement schedule is one in which only some responses are reinforced

Page 6:

Intermittent Schedules

When you want to reinforce based on a certain number of responses occurring (for example, doing a certain number of math problems correctly), you can use a ratio schedule

When you want to reinforce the first response after a certain amount of time has passed (for example, when a teacher gives a midterm test), you can use an interval schedule

Page 7:

Four Types of Intermittent Schedules

Ratio Schedules

Fixed Ratio

Variable Ratio

Interval Schedules

Fixed Interval

Variable Interval

Page 8:

Fixed Ratio Schedule

On a fixed ratio schedule, reinforcement is contingent upon a fixed, predictable number of responses.

Characteristic pattern:
High rate of response
Short pause following each reinforcer

Reading a chapter and then taking a break is an example

A good strategy for “getting started” is to begin with an easy task

Page 9:

Fixed Ratio, continued

Higher ratio requirements result in longer post-reinforcement pauses. Example: the longer the chapter you read, the longer the study break!

Ratio strain: a disruption in responding due to an overly demanding response requirement. Movement from a “dense/rich” to a “lean” schedule should be done gradually

Page 10:

Fixed Ratio: FR

Fixed ratio is abbreviated “FR,” with a number showing how many responses must be made to get the reinforcer. Ex.: FR 5 (5 responses needed to get a reinforcer)
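The FR rule is essentially a counter. As an illustrative sketch (the function names here are our own, not the chapter's), it can be simulated in a few lines of Python:

```python
def fr_schedule(ratio):
    """Fixed ratio (FR): every `ratio`-th response is reinforced."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == ratio:
            count = 0    # counter resets after each reinforcer
            return True  # reinforcer delivered
        return False
    return respond

# FR 5: the 5th response (and the 10th, 15th, ...) earns the reinforcer
respond = fr_schedule(5)
history = [respond() for _ in range(10)]
```

The reset after each reinforcer is the point where the characteristic post-reinforcement pause occurs.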

Page 11:

Variable Ratio Schedule

On a variable ratio schedule, reinforcement is contingent upon a varying, unpredictable number of responses.

Characteristic pattern:
High and steady rate of response
Little or no post-reinforcement pausing

Hunting, fishing, golfing, shooting hoops, and telemarketing are examples of behaviors on this type of schedule

Page 12:

Other Facts about Variable Ratio Schedules

Behaviors on this type of schedule tend to be very persistent

This includes unwanted behaviors like begging, gambling, and being in abusive relationships

“Stretching the ratio” means starting out with a very dense, rich reinforcement schedule and gradually decreasing the amount of reinforcement

The spouse, gambler, or child who is the “victim” must work harder and harder to get the reinforcer

Page 13:

Variable Ratio: VR

Variable ratio is abbreviated “VR,” with a number showing the average number of responses that must be made to get the reinforcer. Ex.: VR 50 (an average of 50 responses needed to get a reinforcer; the actual number varies unpredictably, so it could be the very next response, or it could take 72!)

Gambling is the classic example!
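The VR rule can be sketched the same way as FR, replacing the fixed count with an unpredictable one. This is illustrative code, not from the chapter; as an assumption, the requirement is drawn uniformly so that its long-run average equals the schedule's number:

```python
import random

def vr_schedule(mean_ratio, rng=None):
    """Variable ratio (VR): each reinforcer requires an unpredictable
    number of responses averaging `mean_ratio` in the long run."""
    rng = rng or random.Random()
    draw = lambda: rng.randint(1, 2 * mean_ratio - 1)  # uniform, mean = mean_ratio
    target = draw()
    count = 0
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, draw()  # next requirement is unpredictable
            return True
        return False
    return respond

# VR 50: reinforcement could come on the very next response, or take many more
respond = vr_schedule(50, random.Random(0))
payoffs = sum(respond() for _ in range(100_000))
```

Because the next payoff could always be one response away, there is no safe point to pause, which is why responding on VR schedules is so steady and persistent.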

Page 14:

Fixed Interval Schedules

On a fixed interval schedule, reinforcement is contingent upon the first response after a fixed, predictable period of time.

Characteristic pattern:
A “scallop” pattern produced by a post-reinforcement pause followed by a gradually increasing rate of response as the time interval draws to a close

Glancing at your watch during class provides an example!

Student study behavior provides another!

Page 15:

Fixed Interval: FI

Fixed interval is abbreviated “FI,” with a number showing how much time must pass before the reinforcer is available. Ex.: FI 30-min (reinforcement is available for the first response after 30 minutes have passed)

Ex.: Looking down the tracks for the train if it comes every 30 minutes
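Unlike the ratio rules, the FI rule depends on elapsed time rather than a response count. A minimal sketch (again with names of our own choosing):

```python
def fi_schedule(interval):
    """Fixed interval (FI): the first response made after `interval` time
    units have elapsed since the last reinforcer is reinforced."""
    last_reinforced = 0.0
    def respond(now):
        nonlocal last_reinforced
        if now - last_reinforced >= interval:
            last_reinforced = now  # the interval clock restarts
            return True
        return False
    return respond

# FI 30: responses before the 30-minute mark earn nothing
respond = fi_schedule(30)
early, on_time, just_after = respond(10), respond(31), respond(40)
```

Responding early costs effort and earns nothing, which is why response rates stay low right after a reinforcer and climb as the interval runs out (the “scallop”).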

Page 16:

Variable Interval Schedule

On a variable interval schedule, reinforcement is contingent upon the first response after a varying, unpredictable period of time.

Characteristic pattern:
A moderate, steady rate of response with little or no post-reinforcement pause

Looking down the street for the bus if you are waiting and have no idea how often it comes provides an example!

Page 17:

Variable Interval: VI

Variable interval is abbreviated “VI,” with a number showing the average time interval that must pass before the reinforcer is available. Ex.: VI 30-min (reinforcement is available for the first response after an average of 30 minutes has passed)

Ex.: Hilary’s boyfriend, Michael, gets out of school and turns on his phone some time between 3:00 and 3:30. The “reward” of his answering his phone puts her calling behavior on a VI schedule, so she calls every few minutes until he answers
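A VI schedule is the same clock-based rule as FI, but with an unpredictable delay. In this illustrative sketch (not the chapter's own code), we assume the delay is drawn uniformly so its average matches the schedule's number:

```python
import random

def vi_schedule(mean_interval, rng=None):
    """Variable interval (VI): reinforcement becomes available after an
    unpredictable delay averaging `mean_interval`; the first response
    after that delay collects it and restarts the clock."""
    rng = rng or random.Random()
    draw = lambda: rng.uniform(0, 2 * mean_interval)  # uniform, mean = mean_interval
    last, wait = 0.0, draw()
    def respond(now):
        nonlocal last, wait
        if now - last >= wait:
            last, wait = now, draw()
            return True
        return False
    return respond

# VI 30: checking steadily once a minute yields roughly one reinforcer per 30 minutes
respond = vi_schedule(30, random.Random(1))
payoffs = sum(respond(t) for t in range(1, 30_001))
```

Since the reinforcer could become available at any moment, steady checking (like Hilary's calling every few minutes) is the strategy the schedule rewards.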

Page 18:

Noncontingent Reinforcement

What happens when reinforcement occurs randomly, regardless of a person’s or animal’s behavior?

Weird stuff! Like what?

Page 19:

Superstitious Behavior

Examples include:
Rituals of gamblers, baseball players, etc.
Elevator-button-pushing behavior

Noncontingent reinforcement can sometimes be used for GOOD purposes (not just weird or useless behaviors!)

Page 20:

Good, useful examples

Giving noncontingent attention to children: some bad behaviors, like tantrums, are used to try to get attention from caregivers. These behaviors can be diminished by giving attention noncontingently.

Children need both contingent AND noncontingent attention to grow up healthy and happy!

Page 21:

Theories of Reinforcement

In an effort to answer the question, “What makes reinforcers work?”, theorists have developed some . . . . .

THEORIES!!!!!

Page 22:

So here’s the first one:

If you are hungry and go looking for food and eat some, you will feel more comfortable because the hunger has been reduced.

The desire to have the uncomfortable “hunger drive” reduced motivates you to seek out and eat the food

Page 23:

Drive Reduction Theory

So this is one thing that can make reinforcers work: an event is reinforcing to the extent that it is associated with a reduction in some type of physiological drive.

This type of approach may explain some behaviors (like sex) but not others (like playing video games)

Page 24:

Incentive Motivation

Sometimes, we just do things because they are FUN!

When this happens, we can say that motivation is coming from some property of the reinforcer itself rather than from some kind of internal drive. Examples include playing games and sports, putting spices on food, etc.

Page 25:

We can also think about how we use reinforcers.

We can use a behavior we love (a high-probability behavior) to reinforce a behavior we don’t like to do very much (a low-probability behavior). This is sometimes called “Grandma’s Principle” (formally, the Premack principle): “Bobby, you can read those comic books once you have mowed the grass!”

To use this theory, you have to know the “relative probability” of each behavior

Page 26:

What do you do if you only know the “probability” for one?

You can use the next theory!

Let’s say you know that a person likes to play video games. You can use playing video games as a reinforcer IF you:
Restrict access to playing
Make sure the person is getting to play less frequently than they prefer to

Page 27:

This is the “Response Deprivation Hypothesis”

Any behavior can be used as a reinforcer if you can restrict access to it and keep it below the person’s or animal’s preferred level

Think of some examples!

Page 28:

Behavioral Bliss Point

The Response Deprivation Hypothesis assumes that there is an optimal, or best, level of each behavior that a person or animal tries to maintain.

If you could do ANYTHING at all you wanted to do, how would you distribute your time?

This would tell you your “behavioral bliss point” for each activity or behavior

Page 29:

Behavioral Bliss Point cont’d

An organism that has free access to alternative activities will distribute its behavior in such a way as to maximize overall reinforcement

In other words, if you can do anything you want, you will spend time on each thing you do in a way that will give you the most pleasure

Page 30:

But this is real life!

This means that you can almost never achieve your “behavioral bliss point”

So you have to compromise by coming as close as you can, given your circumstances

No wonder we hate to leave our childhoods behind!