27 terms

Reinforcement Schedules

Schedules of Reinforcement
Rules that state the relationship between a behaviour and its consequences, defined in terms of the amount of time that must elapse or the number of responses required before a reinforcer is presented
Continuous Reinforcement
Every occurrence of an operant response is reinforced
Continuous Reinforcement Factors (3)
Leads to rapid acquisition of a target behaviour and a rapid increase in its rate, but is uncommon in real-life situations
Intermittent Reinforcement
The target behaviour is reinforced only on some occasions
Four types of Intermittent Schedules
Fixed Ratio, Fixed Interval, Variable Ratio, Variable Interval
Fixed Ratio
A reinforcer is delivered after a fixed number of responses
Variable Ratio
A reinforcer is delivered after a number of responses, varying around an average
Fixed Interval
A reinforcer is delivered after a fixed length of time
Variable Interval
A reinforcer is delivered after a length of time, varying around an average
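The four schedule rules above can be sketched as simple response-by-response algorithms. This is a minimal illustration; the class names and parameter choices are mine, not a standard API, and variable schedules are approximated with uniform random draws around the mean.

```python
import random

class FixedRatio:
    """FR: deliver a reinforcer after every n-th response."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

class VariableRatio:
    """VR: reinforce after a number of responses varying around a mean."""
    def __init__(self, mean):
        self.mean = mean
        self.count = 0
        self._set_requirement()

    def _set_requirement(self):
        # Uniform over 1 .. 2*mean - 1, so the average requirement is `mean`
        self.required = random.randint(1, 2 * self.mean - 1)

    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self._set_requirement()
            return True
        return False

class FixedInterval:
    """FI: reinforce the first response after a fixed time has elapsed."""
    def __init__(self, interval):
        self.interval = interval
        self.available_at = interval

    def respond(self, t):
        if t >= self.available_at:
            self.available_at = t + self.interval
            return True
        return False

class VariableInterval:
    """VI: reinforce the first response after a time varying around a mean."""
    def __init__(self, mean):
        self.mean = mean
        self.available_at = random.uniform(0, 2 * mean)

    def respond(self, t):
        if t >= self.available_at:
            self.available_at = t + random.uniform(0, 2 * self.mean)
            return True
        return False
```

Note that on the ratio schedules only responding matters, while on the interval schedules time must pass first and extra responses before the interval elapses earn nothing.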
Three Schedule Effects
Rate of responding, Pattern of responding, Resistance to extinction
Rate of responding
Controlled by how frequently the target behaviour is reinforced: richer schedules produce higher rates, and ratio schedules produce higher rates than interval schedules
Pattern of responding - variable ratio
Steady, rapid rate of responding
Pattern of responding - fixed ratio
Steady, rapid rate of responding with post-reinforcement pause
Pattern of responding - fixed interval
Acceleration of responding following the post-reinforcement pause until the delivery of a reinforcer, producing the FI "scallop" (overall a low rate of responding)
Pattern of responding - variable interval
Steady, moderate rate of responding
Partial Reinforcement Extinction Effect (Who, what)
Humphreys; partial (intermittent) reinforcement schedules are more resistant to extinction than continuous reinforcement schedules
What schedule types are more resistant to extinction? (3)
Ratio, Variable, Lean
Discrimination Hypothesis
In order for behaviour to change, a subject must be able to discriminate the change in reinforcement contingencies, which is harder when reinforcement has been intermittent
Generalisation Decrement Hypothesis
There is decreased responding in a generalisation test when the test stimulus becomes less similar to the training stimulus
What causes post-reinforcement pause? (3 Hypotheses)
Satiation, Fatigue, Remaining-Responses
Satiation Hypothesis
Reinforcers decrease in strength over time; incorrect, as longer pauses are found with larger fixed-ratio requirements
Fatigue Hypothesis
The subject is tired after responding and so pauses to recover after reinforcement; incorrect, as subjects will often work harder, and without pausing, on comparable schedules
Remaining-Responses Hypothesis
After receiving a reinforcer, the subject is at the point furthest from its next reinforcer, so it pauses; supported by research using multiple schedules
Multiple Schedules
A subject is presented with two or more different reinforcement schedules one at a time, each signalled by a different discriminative stimulus
Why do subjects respond faster on VR than VI schedules? (2 Hypotheses)
Molar, Molecular
Molar Theory
There is a linear relationship between response rate and reinforcer rate on VR schedules, but reinforcer rate reaches an asymptote on VI schedules, so higher response rates are not worth the extra effort
Molecular Theory
Longer inter-response times are more likely to be reinforced on VI schedules than on VR schedules, which tend to reinforce shorter inter-response times
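The molecular account above can be sketched numerically. Treating VI as a random-interval schedule, where a reinforcer "sets up" with a fixed probability per unit of time, and VR as a fixed probability per response, the chance that a response is reinforced grows with the preceding inter-response time (IRT) only on the interval schedule. The function names and parameter values here are illustrative:

```python
def p_reinforced_vi(irt, p_setup=0.05):
    # Random-interval approximation of VI: a reinforcer sets up with
    # probability p_setup per unit of time, so the longer the IRT, the
    # more likely a reinforcer is waiting when the response occurs.
    return 1 - (1 - p_setup) ** irt

def p_reinforced_vr(irt, p_per_response=0.1):
    # Random-ratio approximation of VR: each response is reinforced with
    # a constant probability, regardless of the preceding IRT.
    return p_per_response
```

Because waiting raises the payoff per response on VI but not on VR, the interval schedule differentially reinforces long IRTs, which on this account is why VI response rates are lower.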