Chapter 7 Psychology: Learning
Professor Noll, USF
What is learning?
The acquisition of new knowledge, skills, or responses from experience that results in a relatively permanent change in the state of the learner.
What is habituation?
A general process in which repeated or prolonged exposure to a stimulus results in a gradual reduction in responding.
A kind of implicit learning.
ex: A turtle draws its head back into its shell when its shell is touched. After being touched repeatedly, the turtle stops withdrawing its head; the touch no longer signals danger.
What is sensitization?
A simple form of learning that occurs when presentation of a stimulus leads to an increased response to a later stimulus.
What is classical conditioning?
A type of learning that occurs when a neutral stimulus produces a response after being paired with a stimulus that naturally produces a response.
Pavlov showed that dogs learned to salivate to neutral stimuli such as a bell or a tone after that stimulus had been associated with another stimulus that naturally evokes salivation.
When the dogs were initially presented with a plate of food, they began to salivate. Pavlov called the presentation of food an unconditioned stimulus.
He called the dog's salivation an unconditioned response.
Pavlov soon discovered that he could make the dogs salivate to neutral stimuli, things that don't usually make animals salivate, such as the ringing of a bell or the ticking of a metronome. When he paired the presentation of food with one of these sounds, the dogs came to salivate to the sound alone, which thereby became a conditioned stimulus.
When the CS is paired with the US, the animal learns to associate food with the sound; eventually the CS alone is sufficient to produce the response.
What is an unconditioned stimulus?
Something that reliably produces a naturally occurring reaction in an organism.
What is an unconditioned response?
A reflexive reaction that is reliably produced by an unconditioned stimulus.
What is a conditioned stimulus?
A previously neutral stimulus that produces a reliable response in an organism after being paired with a US.
What is a conditioned response?
A reaction that resembles an unconditioned response but is produced by a conditioned stimulus.
What is acquisition?
The phase of classical conditioning when the CS and the US are presented together. During this phase, typically there is a gradual increase in learning. It starts low, rises rapidly, and then slowly tapers off.
What is second-order conditioning?
Conditioning in which a CS is paired with a stimulus that became associated with the US in an earlier procedure; it occurs after initial conditioning is complete.
Ex: Pavlov repeatedly paired a new CS, a black square, with the now reliable tone. After a number of training trials, his dogs produced a salivary response to the black square even though the square itself had never been directly associated with the food.
Second-order conditioning helps explain why some people desire money to the point that they hoard it and value it even more than the objects it purchases.
What is extinction?
The gradual elimination of a learned response that occurs when the CS is repeatedly presented without the US.
ex: In Pavlov's classic research, a dog was conditioned to salivate to the sound of a bell. When the bell was presented repeatedly without the presentation of food, the salivation response eventually became extinct.
What is spontaneous recovery?
The tendency of a learned behavior to recover from extinction after a rest period.
The ability of the CS to elicit the CR was weakened, but it was not eliminated.
ex: A child is taught to go to sleep when the light is turned off. However, for many months the child no longer falls asleep when the light is turned off. Then, the child begins to fall asleep when the light is turned off again.
What is generalization?
The CR is observed even though the CS is slightly different from the CS used during acquisition. The conditioning generalizes to stimuli that are similar to the CS used during the original training. The more the new stimulus changes, the less conditioned responding is observed.
When an organism generalizes to a new stimulus, two things happen. First, by responding to the new stimulus used during generalization testing, the organism demonstrates that it recognizes the similarity between the original CS and the new stimulus. Second, by displaying diminished responding to that new stimulus, it also tells us that it notices a difference between the two stimuli.
ex: In the classic Little Albert experiment, researchers John B. Watson and Rosalie Rayner conditioned a little boy to fear a white rat. The researchers observed that the boy experienced stimulus generalization by showing fear in response to similar stimuli including a dog, a rabbit, a fur coat, a white Santa Claus beard and even Watson's own hair.
What is discrimination?
The capacity to distinguish between similar but distinct stimuli.
ex: In another classic experiment conducted in 1921, researcher Shenger-Krestovnikova paired the taste of meat (the unconditioned stimulus) with the sight of a circle. The dogs then learned to salivate (the conditioned response) when they saw the circle. Researchers also observed that the dogs would begin to salivate when presented with an ellipse, which was similar to but slightly different from the circle. Because the sight of the ellipse was never paired with the taste of meat, the dogs eventually learned to discriminate between the circle and the ellipse.
What happened in the Little Albert experiment?
Watson and Rayner presented Little Albert with a variety of stimuli: a white rat, a dog, etc. Then Watson unexpectedly struck a large steel bar with a hammer, producing a loud noise, and Albert cried.
Watson and Rayner then led Little Albert through the acquisition stage of classical conditioning, until the sight of the rat alone frightened Albert. Little Albert also showed stimulus generalization: the sight of a white rabbit, a seal-fur coat, and a Santa Claus mask produced the same kind of fear reactions in the infant.
The study is now considered unethical, but it became a model for behaviorists.
More about Classical Conditioning?
Classical conditioning occurs when an animal has learned to set up an expectation. The sound of a bell, because of its reliable pairing with food, served to set up this cognitive state for the laboratory dogs; the other sights and sounds of the laboratory, because of their lack of any reliable link with food, did not.
The Rescorla-Wagner model introduced a cognitive component that accounted for a variety of classical conditioning phenomena that were difficult to explain before. For example, the model predicted that conditioning would be easier when the CS was an unfamiliar event than when it was familiar. The reason is that familiar events, being familiar, already have expectations associated with them, making new conditioning difficult. The model describes a process that operates without conscious effort.
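Though the card above doesn't give the equation, the Rescorla-Wagner rule is usually written as ΔV = αβ(λ − V): on each CS-US pairing, associative strength V moves toward the maximum λ that the US can support. A minimal numeric sketch (the learning rate collapses αβ into a single `alpha`, and the values are arbitrary, not from the text):

```python
# A minimal sketch of the Rescorla-Wagner update rule. Only the update
# rule itself is standard; the alpha and lam values here are arbitrary.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strength V after each CS-US pairing."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)   # the prediction error drives learning
        history.append(v)
    return history

curve = rescorla_wagner(10)
# Big gains on early trials, small gains later: the negatively
# accelerated acquisition curve described earlier in this set.
print([round(x, 3) for x in curve])
```

Note that a CS whose V is already close to λ yields only small further updates, which is the model's explanation for why conditioning is harder with familiar, already-predicted events.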
*CLASSICAL CONDITIONING CAN BE USED TO CREATE A PHOBIA, AS MODELED IN LITTLE ALBERT.
ABOUT TASTE AVERSIONS: As you may have already realized, conditioned taste aversions are a great example of some of the fundamental mechanics of classical conditioning. The previously neutral stimulus (the food) is paired with an unconditioned stimulus (an illness), which leads to an unconditioned response (feeling sick). After this one-time pairing, the previously neutral stimulus (the food) is now a conditioned stimulus that elicits a conditioned response (avoiding the food).*
The Neural Elements of Classical Conditioning?
The amygdala, particularly an area known as the central nucleus, is also critical for emotional conditioning.
The Evolutionary Elements of Classical Conditioning?
To have adaptive value, this mechanism should have several properties:
There should be rapid learning that occurs in perhaps one or two trials.
Conditioning should be able to take place over very long intervals, perhaps up to several hours.
The organism should develop the aversion to the smell or taste of the food rather than its ingestion.
Learned aversions should occur more often with novel foods than familiar ones.
What is biological preparedness?
A propensity for learning particular kinds of associations over others.
Behaviorism is the school of psychology MOST associated with:
learning.
What is the study of behaviors that are reactive?
Classical conditioning. Involuntary, reactive behaviors, however, make up only a small portion of our behavioral repertoires.
What is operant conditioning?
Operant conditioning is a type of learning in which the consequences of an organism's behavior determine whether it will be repeated in the future. The study of operant conditioning is the exploration of behaviors that are active.
Who was Edward Thorndike and what did he focus on?
Thorndike was a scientist working before Pavlov who examined active behaviors. He focused on instrumental behaviors, that is, behavior that required an organism to do something, solve a problem, or otherwise manipulate elements of its environment.
A hungry cat placed in a puzzle box would try various behaviors to get out-scratching at the door, meowing loudly, sniffing the inside of the box, putting its paw through the openings-but only one behavior opened the door and led to food: tripping the lever in just the right way. Fairly quickly, the cats became quite skilled at triggering the lever for their release. Notice what's going on. At first, the cat enacts any number of likely (but ultimately ineffective) behaviors, but only one behavior leads to freedom and food. Over time, the ineffective behaviors become less and less frequent, and the one instrumental behavior (going right for the latch) becomes more frequent.
What is the law of effect?
Behaviors that are followed by a "satisfying state of affairs" tend to be repeated and those that produce an "unpleasant state of affairs" are less likely to be repeated.
What is operant behavior, and who coined the term?
the behavior that an organism produces that has some impact on the environment. B.F. Skinner coined the term.
What makes behaviors occur less often?
Punishment: a stimulus or event that decreases the likelihood of the behavior that led to it.
What does a Skinner box, or an operant conditioning chamber, do?
Allows a researcher to study the behavior of small organisms in a controlled environment.
Reinforcement and Punishment?
To keep these possibilities distinct, Skinner used the term positive for situations in which a stimulus was presented and negative for situations in which it was removed. Consequently, there is positive reinforcement (where a rewarding stimulus is presented) and negative reinforcement (where an unpleasant stimulus is removed), as well as positive punishment (where an unpleasant stimulus is administered) and negative punishment (where a rewarding stimulus is removed).
Why is reinforcement more effective than punishment when it comes to learning?
Punishment signals that an unacceptable behavior has occurred, but it doesn't specify what should be done instead. Spanking a young child for starting to run into a busy street certainly stops the behavior, but it doesn't promote any kind of learning about the desired behavior.
What are primary reinforcers?
Reinforcers that help satisfy biological needs.
What are secondary reinforcers?
Reinforcers that acquire their effectiveness through their associations with primary reinforcers; money, for example, helps us acquire food and shelter.
ex: Flashing lights, originally a neutral stimulus, acquire powerful negative associations through their pairing with a speeding ticket and a fine.
What determines the effectiveness of a reinforcer?
A key determinant of the effectiveness of a reinforcer is the amount of time between the occurrence of a behavior and the reinforcer: the more time that elapses, the less effective the reinforcer.
The smoker who desperately wants to quit smoking will be reinforced immediately by the feeling of relaxation that results from lighting up, but may have to wait years to be reinforced with better health that results from quitting; the dieter who sincerely wants to lose weight may easily succumb to the temptation of a chocolate sundae that provides reinforcement now, rather than waiting weeks or months for the reinforcement (looking and feeling better) that would be associated with losing weight.
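One standard way to quantify this delay effect, not stated in the text, is a hyperbolic discounting model, V = A / (1 + kD): a reinforcer of size A delayed by D is worth less the longer the wait. A hypothetical sketch, where `k` is a made-up discounting-rate parameter:

```python
# Hypothetical sketch: hyperbolic discounting, V = A / (1 + k * D).
# The model is a standard one from the learning literature, not from
# this text; k is a made-up discounting-rate parameter.

def discounted_value(amount, delay, k=0.1):
    """Subjective value of a reinforcer of size `amount` after `delay`."""
    return amount / (1 + k * delay)

# A small immediate reward can outweigh a much larger delayed one,
# as in the smoker's and dieter's dilemmas above.
cigarette_now = discounted_value(10, delay=0)      # full value: 10.0
better_health = discounted_value(100, delay=365)   # heavily discounted
print(cigarette_now, round(better_health, 2))
```

Under these assumed numbers, the large delayed reward is worth less subjectively than the small immediate one, which is exactly the pattern the smoker and dieter examples describe.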
What determines the effectiveness of a punisher?
The longer the delay between a behavior and the administration of punishment, the less effective the punishment will be in suppressing the targeted behavior. For example, a parent whose child misbehaves at a shopping mall may be unable to punish the child immediately with a time-out because it is impractical in the mall setting.
Where does learning take place?
What does operant conditioning consist of?
Most behavior is under stimulus control, which develops when a particular response only occurs when an appropriate discriminative stimulus, a stimulus that indicates that a response will be reinforced, is present.
Skinner discussed this process in terms of a "three-term contingency": In the presence of a discriminative stimulus (classmates drinking coffee together in Starbucks), a response (joking comments about a psychology professor's increasing waistline and receding hairline) produces a reinforcer (laughter among classmates). The same response in a different context-the professor's office-would most likely produce a very different outcome.
Stimulus control, perhaps not surprisingly, shows both discrimination and generalization effects similar to those we saw with classical conditioning.
Difference in extinction with classical conditioning and operant conditioning?
In classical conditioning, the US occurs on every trial no matter what the organism does. In operant conditioning, the reinforcements only occur when the proper response has been made, and they don't always occur even then. Not every trip into the forest produces nuts for a squirrel, auto salespeople don't sell to everyone who takes a test drive, and researchers run many experiments that do not work out and never get published. Yet these behaviors don't weaken and gradually extinguish. In fact, they typically become stronger and more resilient.
Key difference between operant and classical conditioning?
In operant conditioning, the pattern with which reinforcements appeared was crucial, whereas in classical conditioning it was the number of learning trials that mattered.
Skinner developed what method?
The schedules of reinforcement. The two most important are interval schedules, based on the time intervals between reinforcements, and ratio schedules, based on the ratio of responses to reinforcements.
What is a fixed-interval schedule?
Reinforcement is presented at fixed time intervals, provided that the appropriate response is made. Organisms on this schedule show little responding right after the reinforcement is delivered, but as the next time interval draws to a close, they show a burst of responding. Many undergraduates behave exactly like this: they do relatively little work until just before the upcoming exam, then engage in a burst of reading and studying.
What is a variable-interval schedule?
A behavior is reinforced based on an average time that has elapsed since the last reinforcement. Variable-interval schedules are not encountered that often in real life, although one example might be radio promotional giveaways, such as tickets to rock concerts. Both fixed-interval and variable-interval schedules tend to produce slow, methodical responding because the reinforcements follow a time scale that is independent of how many responses occur.
What is a fixed-ratio schedule?
Reinforcement is delivered after a specific number of responses have been made. One schedule might present reinforcement after every fourth response, a different schedule might present reinforcement after every 20 responses; the special case of presenting reinforcement after each response is called continuous reinforcement, and it's what drove Skinner to investigate these schedules in the first place.
ex: some credit card companies return to their customers a percentage of the amount charged.
What is a variable-ratio schedule?
the delivery of reinforcement is based on a particular average number of responses. For example, if a laundry worker was following a 10-response variable-ratio schedule instead of a fixed-ratio schedule, she or he would still be paid, on average, for every ten shirts washed and ironed but not for each 10th shirt. Not surprisingly, variable-ratio schedules produce slightly higher rates of responding than fixed-ratio schedules primarily because the organism never knows when the next reinforcement is going to appear.
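The two ratio schedules above can be sketched as a toy simulation (illustrative only; the function and parameter names are mine, not the text's). A fixed-ratio schedule reinforces exactly every Nth response, while a variable-ratio schedule reinforces unpredictably, about 1 response in N:

```python
import random

# Toy sketch of ratio schedules of reinforcement. Each list entry is
# True when that response is reinforced.

def fixed_ratio(n_responses, ratio):
    """Reinforce exactly every `ratio`-th response."""
    return [(i + 1) % ratio == 0 for i in range(n_responses)]

def variable_ratio(n_responses, ratio, seed=0):
    """Reinforce each response with probability 1/ratio, so reinforcement
    arrives after `ratio` responses on average, but unpredictably."""
    rng = random.Random(seed)
    return [rng.random() < 1.0 / ratio for _ in range(n_responses)]

fr = fixed_ratio(100, 10)      # exactly every 10th response reinforced
vr = variable_ratio(1000, 10)  # roughly 1 in 10, timing unpredictable
print(sum(fr), sum(vr))
```

Under the variable-ratio schedule, reinforcement never arrives at a predictable point, which is why, as the card says, the organism never knows when the next reinforcement is going to appear.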
What is intermittent reinforcement?
When only some of the responses made are followed by reinforcement. Such schedules produce behavior that is much more resistant to extinction than a continuous reinforcement schedule. One way to think about this effect is to recognize that the more irregular and intermittent a schedule is, the more difficult it becomes for an organism to detect when it has actually been placed on extinction.
If you've put your dollar into a slot machine that, unbeknownst to you, is broken, do you stop after one or two plays? Almost certainly not. If you're a regular slot player, you're used to going for many plays in a row without winning anything, so it's difficult to tell that anything is out of the ordinary. Under conditions of intermittent reinforcement, all organisms show considerable resistance to extinction and continue for many trials before they stop responding.
What is the intermittent reinforcement effect?
The fact that operant behaviors that are maintained under intermittent reinforcement schedules resist extinction better than those maintained under continuous reinforcement.
What is shaping?
Learning that results from the reinforcement of successive steps to a final desired behavior.
ex: Lisa wants to teach her dog, Rover, to bring her the TV remote control. She places the remote in Rover's mouth and then sits down in her favorite TV-watching chair. Rover doesn't know what to do with the remote, and he just drops it on the floor. So Lisa teaches him by first praising him every time he accidentally walks toward her before dropping the remote. He likes the praise, so he starts to walk toward her with the remote more often. Then she praises him only when he brings the remote close to the chair. When he starts doing this often, she praises him only when he manages to bring the remote right up to her. Pretty soon, he brings her the remote regularly, and she has succeeded in shaping a response.
What is successive approximation?
A behavior that gets incrementally closer to the overall desired behavior.
What is superstitious behavior?
Superstition occurs when one infers a causal relationship between a behavior and an outcome when there was merely an accidental correlation. One simply repeats behaviors that were accidentally reinforced.
ex: Baseball players who enjoy several home runs on a day when they happened not to have showered are likely to continue that tradition, laboring under the belief that the accidental correlation between poor personal hygiene and a good day at bat is somehow causal. This "stench causes home runs" hypothesis is just one of many examples of human superstitions.
Who was Edward Tolman and what did he propose?
Tolman was one of the first researchers to object to Skinner's strictly behaviorist interpretation of learning, and he was the strongest early advocate of a cognitive approach to operant learning. Tolman argued that there was more to learning than just knowing the circumstances in the environment (the properties of the stimulus) and being able to observe a particular outcome (the reinforced response). Instead, Tolman proposed that an animal established a means-ends relationship. That is, the conditioning experience produced knowledge or a belief that, in this particular situation, a specific reward (the end state) will appear if a specific response (the means to that end) is made.
Tolman and his students conducted studies that focused on latent learning and cognitive maps, two phenomena that strongly suggest that simple S-R interpretations of operant learning behavior are inadequate.
What is latent learning?
Something is learned, but it is not manifested as a behavioral change until sometime in the future.
What is a cognitive map?
A mental representation of the physical features of the environment.
What brain structures underlie the reinforcing effects of reward?
Pleasure centers in the brain, specifically parts of the limbic system. The neurons in the medial forebrain bundle, a pathway that meanders from the midbrain through the hypothalamus into the nucleus accumbens, are the most susceptible to stimulation that produces pleasure. These cells are involved in behaviors such as eating, drinking, and sexual activity. Moreover, the neurons all along this pathway, and especially those in the nucleus accumbens itself, are dopaminergic (i.e., they secrete the neurotransmitter dopamine).
What is reward prediction error?
The difference between the actual reward received and the amount of predicted or expected reward.
For example, when an animal presses a lever and receives an unexpected food reward, a positive prediction error occurs (a better than expected outcome) and the animal learns to press the lever again. By contrast, when an animal expects to receive a reward from pressing a lever but does not receive it, a negative prediction error occurs (a worse than expected outcome) and the animal will subsequently be less likely to press the lever again. Reward prediction error can thus serve as a kind of "teaching signal" that helps the animal to learn to behave in a way that maximizes reward.
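The lever-press example can be written as a simple prediction-error update (a simplified, Rescorla-Wagner-style rule; the 0.2 learning rate is a made-up parameter, not from the text):

```python
# A minimal sketch of prediction-error learning. The update rule is a
# simplified standard form; the learning rate value is arbitrary.

def update_expectation(expected, actual, learning_rate=0.2):
    """Move the expected reward toward the reward actually received."""
    prediction_error = actual - expected   # the "teaching signal"
    return expected + learning_rate * prediction_error

v = 0.0
# Unexpected reward: positive error, so the expectation (and the
# tendency to press the lever again) rises.
v = update_expectation(v, actual=1.0)   # v is now 0.2
# Expected reward omitted: negative error, so the expectation falls.
v = update_expectation(v, actual=0.0)   # v is now about 0.16
print(round(v, 2))
```

The sign of `prediction_error` determines whether lever pressing is strengthened or weakened, matching the positive and negative cases in the card above.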
Georgia likes to make tuna sandwiches for lunch. She uses her electric can opener and then usually gives the can with a few bits of tuna left inside to the cat. Georgia recently noticed that the cat comes running into the kitchen every time she uses her can opener to open anything. The cat's behavior is a result of:
classical conditioning.
Cats love to chase the light produced by laser pointers. Some people may believe that this behavior shows how dumb cats are but, in reality, it supports the idea of cats' biological _____ to chase moving objects.
preparedness
Because of _____, a person can differentiate between the men's restroom and the women's restroom, and know which one to use.
stimulus discrimination
Ernest's date smiles at him, and he suddenly knows that she probably would return his kiss. Her smile is functioning as a:
discriminative stimulus.
What is observational learning?
Learning that takes place by watching the actions of others.
Who was Albert Bandura and what did he do?
A researcher who showed preschool children an adult model behaving aggressively toward a Bobo doll. When the children were later allowed to play with the Bobo doll, they imitated the aggressive actions they had observed. When the children saw the adult being punished for the aggressive behavior, their own aggression toward the doll decreased; when they saw the adult being rewarded, it increased.
What is a diffusion chain?
Individuals initially learn a behavior by observing another individual perform that behavior, and then serve as a model from which other individuals learn the behavior. Observational learning is best suited for a diffusion chain situation.
ex: team sports
What is important about observational learning in animals?
Observational learning in animals shows that each species has biological predispositions toward certain behaviors. When a fear or a motive is picked up readily through observation, it strongly suggests that the behavior is embedded in a biological predisposition for that species.
What is the enculturation hypothesis?
Being raised in a human culture has a profound effect on the cognitive abilities of chimpanzees, especially their ability to understand the intentions of others when performing tasks such as using tools, which in turn increases their observational learning capacities.
In which regions of the frontal and parietal lobes do mirror neurons reside?
Area 40 and 44.
What is implicit learning?
Learning that takes place largely independently of awareness of both the process and the products of information acquisition.
ex: When you first learned to drive a car, for example, you probably devoted a lot of attention to the many movements and sequences that needed to be carried out simultaneously ("step lightly on the accelerator while you push the turn indicator and look in the rearview mirror while you turn the steering wheel"). That complex interplay of motions is now probably quite effortless and automatic for you. Explicit learning has become implicit over time.
Ways to test implicit learning?
Artificial grammar tasks and serial reaction time tasks.
What are some differences of explicit and implicit learning?
In implicit learning there is relatively little variation in how individual humans perform, but in explicit learning there are large individual differences. Implicit learning is also unrelated to IQ. Implicit learning abilities change very little over the life span and decline more slowly than explicit learning abilities.
Implicit learning is remarkably resistant to various disorders that are known to affect explicit learning.
The fact that individuals suffering amnesia show INTACT implicit learning strongly suggests that the brain structures that underlie implicit learning are distinct from those that underlie explicit learning.
Which parts of the brain are used in explicit learning?
Prefrontal cortex, parietal cortex, and hippocampus.
Which parts of the brain are used in implicit learning?
Broca's area, a part of the brain that plays a salient role in language production. Implicit learning is also associated with decreased activity in the occipital lobe, which handles visual processing.
Observational learning challenges the _____ explanation of learning because it involves no direct reinforcement.
behaviorist
Lucas used to startle every time the family dog would bark. Now he continues whatever he is doing and appears to not even notice the dog's bark. Lucas has probably become _____ to the dog's bark.
habituated
During a research study, Delphine is asked to view a series of pictures. The experimenter also tells her that she will later be asked to categorize the pictures she has seen. What region of Delphine's brain is LESS likely to be active than another study participant who was NOT told about the grouping task?
the parietal cortex
the prefrontal cortex
Joni's best friend frequently speaks Spanish around her. Joni barely knew Spanish before she met her best friend and one day realizes that she just followed a short and simple exchange between her best friend and their waiter. Joni's ability to pick up some rudimentary Spanish without consciously trying to do so is a good example of:
implicit learning.
When does extinction occur in both classical and operant conditioning?
In classical conditioning, this happens when a conditioned stimulus is no longer paired with an unconditioned stimulus.
In operant conditioning, extinction can occur if the trained behavior is no longer reinforced or if the type of reinforcement used is no longer rewarding.
Stimulus generalization and discrimination in operant conditioning
If Lisa enjoys Rover's antics with the TV remote only in the daytime and not at night when she feels tired, Rover will put the remote under her chair only during the day, because daylight has become a signal that tells Rover his behavior will be reinforced. Daylight has become a discriminative stimulus. A discriminative stimulus is a cue that indicates the kind of consequence that's likely to occur after a response. In operant conditioning, stimulus discrimination is the tendency for a response to happen only when a particular stimulus is present.
Suppose Lisa's dog, Rover, began to put the remote under her chair not only during the day but also whenever a bright light was on at night, thinking she would probably pat him. This is called stimulus generalization. In operant conditioning, stimulus generalization is the tendency to respond to a new stimulus as if it is the original discriminative stimulus.