Psychology Chapter 7/10

Terms in this set (66)

Unconditioned Response

In classical conditioning, an unconditioned response is the unlearned response that occurs naturally in reaction to the unconditioned stimulus.

For example, if the smell of food is the unconditioned stimulus, the feeling of hunger in response to the smell of food is the unconditioned response.

When trying to distinguish between the unconditioned response and the conditioned response, try to keep a few key things in mind:

-The unconditioned response is natural and automatic

-The unconditioned response is innate and requires no prior learning

-The conditioned response will occur only after an association has been made between the UCS and the CS

-The conditioned response is a learned response

Unconditioned Stimulus

In classical conditioning, the unconditioned stimulus (UCS) is one that unconditionally, naturally, and automatically triggers a response.

For example, when you smell one of your favorite foods, you may immediately feel very hungry. In this example, the smell of the food is the unconditioned stimulus.

Conditioned Response

In classical conditioning, the conditioned response is the learned response to the previously neutral stimulus.

For example, let's suppose that the smell of food is an unconditioned stimulus, the feeling of hunger in response to the smell is an unconditioned response, and the sound of a whistle is the conditioned stimulus. The conditioned response would be feeling hungry when you heard the sound of the whistle.

The classical conditioning process is all about pairing a previously neutral stimulus with another stimulus that naturally and automatically produces a response. After pairing the presentation of these two together enough times, an association is formed. The previously neutral stimulus will then evoke the response all on its own. It is at this point that the response becomes known as the conditioned response.

1. Acquisition

Acquisition is the initial stage of learning when a response is first established and gradually strengthened.

During the acquisition phase of classical conditioning, a neutral stimulus is repeatedly paired with an unconditioned stimulus.

An unconditioned stimulus is something that naturally and automatically triggers a response without any learning.

After an association is made, the subject will begin to emit a behavior in response to the previously neutral stimulus, which is now known as a conditioned stimulus. It is at this point that we can say that the response has been acquired.

For example, imagine that you are conditioning a dog to salivate in response to the sound of a bell. You repeatedly pair the presentation of food with the sound of the bell. You can say the response has been acquired as soon as the dog begins to salivate in response to the bell tone.

Once the response has been established, you can gradually reinforce the salivation response to make sure the behavior is well learned.

2. Extinction

Extinction is when the occurrence of a conditioned response decreases or disappears. In classical conditioning, this happens when a conditioned stimulus is no longer paired with an unconditioned stimulus.

For example, if the smell of food (the unconditioned stimulus) had been paired with the sound of a whistle (the conditioned stimulus), it would eventually come to evoke the conditioned response of hunger. However, if the unconditioned stimulus (the smell of food) were no longer paired with the conditioned stimulus (the whistle), eventually the conditioned response (hunger) would disappear.
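The acquisition-then-extinction pattern described above can be sketched numerically. Below is a minimal simulation using the Rescorla-Wagner learning rule, a standard model of classical conditioning that the text itself does not name; the learning rate and other parameter values are illustrative assumptions.

```python
# Sketch of acquisition and extinction using the Rescorla-Wagner rule.
# The model choice and its parameters (learning rate, maximum
# associative strength) are illustrative assumptions, not from the text.

def rescorla_wagner(trials, v0=0.0, alpha=0.3, lam_present=1.0, lam_absent=0.0):
    """Update associative strength V across conditioning trials.

    trials: sequence of booleans -- True if the UCS is paired with
    the CS on that trial, False if the CS is presented alone.
    """
    v = v0
    history = []
    for ucs_present in trials:
        lam = lam_present if ucs_present else lam_absent
        v += alpha * (lam - v)          # delta-V = alpha * (lambda - V)
        history.append(round(v, 3))
    return history

# Acquisition: 10 CS+UCS pairings -- V climbs toward 1.0.
acquisition = rescorla_wagner([True] * 10)

# Extinction: 10 CS-alone trials starting from the acquired strength.
extinction = rescorla_wagner([False] * 10, v0=acquisition[-1])

print(acquisition)   # rises: 0.3, 0.51, 0.657, ...
print(extinction)    # falls back toward 0.0
```

The curves mirror the text: associative strength grows quickly during early pairings, levels off as the response is acquired, and decays when the conditioned stimulus is repeatedly presented without the unconditioned stimulus.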

3. Generalization

Stimulus Generalization is the tendency for stimuli similar to the conditioned stimulus to evoke the conditioned response after the response has been conditioned.

For example, if a dog has been conditioned to salivate at the sound of a bell, the animal may also exhibit the same response to stimuli that are similar to the conditioned stimulus.

In John B. Watson's famous Little Albert Experiment, for example, a small child was conditioned to fear a white rat. The child demonstrated stimulus generalization by also exhibiting fear in response to other fuzzy white objects including stuffed toys and Watson's own hair.

4. Discrimination

Discrimination is the ability to differentiate between a conditioned stimulus and other stimuli that have not been paired with an unconditioned stimulus.

For example, if a bell tone were the conditioned stimulus, discrimination would involve being able to tell the difference between the bell tone and other similar sounds. Because the subject is able to distinguish between these stimuli, he or she will only respond when the conditioned stimulus is presented.

The "Little Albert" experiment was a famous psychology experiment conducted by behaviorist John B. Watson and graduate student Rosalie Rayner.

Previously, Russian physiologist Ivan Pavlov had conducted experiments demonstrating the conditioning process in dogs. Watson was interested in taking Pavlov's research further to show that emotional reactions could be classically conditioned in people.

The participant in the experiment was a child that Watson and Rayner called "Albert B." but is known popularly today as Little Albert.

Around the age of 9 months, Watson and Rayner exposed the child to a series of stimuli including a white rat, a rabbit, a monkey, masks, and burning newspapers and observed the boy's reactions. The boy initially showed no fear of any of the objects he was shown.

The next time Albert was exposed to the rat, Watson made a loud noise by hitting a metal pipe with a hammer. Naturally, the child began to cry after hearing the loud noise. After repeatedly pairing the white rat with the loud noise, Albert began to cry simply after seeing the rat.

The Little Albert experiment presents an example of how classical conditioning can be used to condition an emotional response.

In addition to demonstrating that emotional responses could be conditioned in humans, Watson and Rayner also observed that stimulus generalization had occurred.

After conditioning, Albert feared not just the white rat, but a wide variety of similar white objects as well. His fear extended to other furry objects, including Rayner's fur coat and a Santa Claus beard worn by Watson.

Biological preparedness is the idea that people and animals are inherently inclined to form associations between certain stimuli and responses.

This concept plays an important role in learning, particularly in understanding the classical conditioning process.

Some associations form easily because we are predisposed to form such connections, while other associations are much more difficult to form because we are not naturally predisposed to form them.

For example, it has been suggested that biological preparedness explains why certain types of phobias tend to form more easily.

We tend to develop a fear of things that may pose a threat to our survival, such as heights, spiders, and snakes. Those who learned to fear such dangers more readily were more likely to survive and reproduce.

People (and animals) are innately predisposed to form associations between tastes and illness. Why?

It is most likely due to the evolution of survival mechanisms. Species that readily form such associations between food and illness are more likely to avoid those foods again in the future, thus ensuring their chances for survival and the likelihood that they will reproduce.


Biological preparedness makes it so that people tend to form fear associations with these threatening options. Because of that fear, people tend to avoid those possible dangers, making it more likely that they will survive. Since these people are more likely to survive, they are also more likely to have children and pass down the genes that contribute to such fear responses.

Primary reinforcers are biological. Food, drink, and pleasure are the principal examples of primary reinforcers. But most human reinforcers are secondary, or conditioned.

Examples include money, grades in schools, and tokens.

Secondary reinforcers acquire their power via a history of association with primary reinforcers or other secondary reinforcers.

For example, if I told you that dollars were no longer going to be used as money, then dollars would lose their power as a secondary reinforcer.

Here's an example of how a secondary reinforcer is established.

Let's train a dog to sit. First we would introduce the discriminative stimulus, the word "sit." We could just say "sit" and when the dog sits, we would give it some food. The food would be the primary reinforcer. Immediately after we gave it the food we would say, "good dog." "Good dog" is our secondary reinforcer of praise. We would then repeat the above process many times. Gradually, we would give the food less often, but the dog would continue to sit when we told it to. The words "good dog" gradually became a secondary reinforcer.

Another example would be in a token economy. Many therapeutic settings use the concept of the token economy. Remember, a token is just an object that symbolizes some other thing. For example, poker chips are tokens for money. In New York City, subway tokens used to be pieces of metal that could be inserted into the turnstiles of the subway. Small debts were often paid off using tokens in New York because of the token's value of one subway ride. However, attempting to pay off debts elsewhere using NYC subway tokens would not be acceptable.

In a token economy, people earn tokens for making certain responses; then those tokens can be cashed in for privileges, food, or drinks. For example, residents of an adolescent halfway house may earn tokens by making their beds, being on time to meals, not fighting, and so on. Then, being able to go to the movies on the weekend may require a certain number of tokens.

Poker chips are also tokens. Can you see why?

In shaping, behaviors are broken down into many small, achievable steps.

To test this method, B. F. Skinner performed shaping experiments on rats, which he placed in an apparatus (known as a Skinner box) that monitored their behaviors.

The target behavior for the rat was to press a lever that would release food. Initially, rewards are given for even crude approximations of the target behavior—in other words, even taking a step in the right direction.

Then, the trainer rewards a behavior that is one step closer, or one successive approximation nearer, to the target behavior.

For example, Skinner would reward the rat for taking a step toward the lever, for standing on its hind legs, and for touching the lever—all of which were successive approximations toward the target behavior of pressing the lever.

As the subject moves through each behavior trial, rewards for old, less approximate behaviors are discontinued in order to encourage progress toward the desired behavior.

For example, once the rat had touched the lever, Skinner might stop rewarding it for simply taking a step toward the lever.

In Skinner's experiment, each reward led the rat closer to the target behavior, finally culminating in the rat pressing the lever and receiving food.

In this way, shaping uses operant-conditioning principles to train a subject by rewarding proper behavior and discouraging improper behavior.

In summary, the process of shaping includes the following steps:

-Reinforce any response that resembles the target behavior.

-Then reinforce the response that more closely resembles the target behavior.

-You will no longer reinforce the previously reinforced response.

-Next, begin to reinforce the response that even more closely resembles the target behavior.

-Continue to reinforce closer and closer approximations of the target behavior.

-Finally, only reinforce the target behavior.
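The steps above can be sketched as a loop that progressively tightens the criterion for reinforcement. The numeric "closeness" scores and criterion increments below are invented for illustration; they simply encode the idea of successive approximations.

```python
# Sketch of shaping by successive approximation. The behavior
# "closeness" scores and criterion values are illustrative assumptions.

# Successive approximations toward pressing the lever, scored by
# how close each one is to the target behavior (1.0 = target).
APPROXIMATIONS = {
    "steps toward lever": 0.25,
    "stands on hind legs": 0.50,
    "touches lever":      0.75,
    "presses lever":      1.00,
}

def shape(observed_behaviors):
    """Reinforce only behaviors at or above a rising criterion."""
    criterion = 0.25          # start by rewarding crude approximations
    log = []
    for behavior in observed_behaviors:
        closeness = APPROXIMATIONS[behavior]
        if closeness >= criterion:
            log.append((behavior, "reinforced"))
            # Tighten the criterion: old, less approximate behaviors
            # will no longer be rewarded.
            criterion = min(1.0, closeness + 0.25)
        else:
            log.append((behavior, "ignored"))
    return log

session = shape([
    "steps toward lever",   # reinforced (meets the starting criterion)
    "steps toward lever",   # ignored -- the criterion has moved on
    "stands on hind legs",  # reinforced
    "touches lever",        # reinforced
    "presses lever",        # reinforced -- target behavior reached
])
print(session)
```

Note how the second "steps toward lever" goes unrewarded: once a closer approximation has been reinforced, earlier, cruder ones are ignored, exactly as in Skinner's procedure.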

Reinforcement schedules determine how and when a behavior will be followed by a reinforcer.

A schedule of reinforcement is a tactic used in operant conditioning that influences how an operant response is learned and maintained.

Each type of schedule imposes a rule or program that attempts to determine how and when a desired behavior occurs.

Behaviors are encouraged through the use of reinforcers, discouraged through the use of punishments, and rendered extinct by withholding reinforcement entirely.

Schedules vary from simple ratio- and interval-based schedules to more complicated compound schedules that combine one or more simple strategies to manipulate behavior.

-A reinforcement schedule is a tool in operant conditioning that allows the trainer to control the timing and frequency of reinforcement in order to elicit a target behavior.

-Continuous schedules reward a behavior after every performance of the desired behavior; intermittent (or partial) schedules only reward the behavior after certain ratios or intervals of responses.

-Intermittent schedules can be either fixed (where reinforcement occurs after a set amount of time or responses) or variable (where reinforcement occurs after a varied and unpredictable amount of time or responses).

-Intermittent schedules are also described as either interval (based on the time between reinforcements) or ratio (based on the number of responses).

-Different schedules (fixed-interval, variable-interval, fixed-ratio, and variable-ratio) have different advantages and respond differently to extinction.

-Compound reinforcement schedules combine two or more simple schedules, using the same reinforcer and focusing on the same target behavior.

Fixed vs. Variable, Ratio vs. Interval

Fixed refers to when the number of responses between reinforcements, or the amount of time between reinforcements, is set and unchanging.

Variable refers to when the number of responses or amount of time between reinforcements varies or changes.

Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is based on the number of responses between reinforcements.

Simple intermittent schedules are a combination of these terms, creating the following four types of schedules:

1. With a FIXED RATIO schedule, there are a set number of responses that must occur before the behavior is rewarded.

This can be seen in payment for work such as fruit picking: pickers are paid a certain amount (reinforcement) based on the amount they pick (behavior), which encourages them to pick faster in order to make more money.

In another example, Carla earns a commission for every pair of glasses she sells at an eyeglass store. The quality of what Carla sells does not matter because her commission is not based on quality; it's only based on the number of pairs sold.

This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation: fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval can lead to a higher quality of output.

2. In a VARIABLE-RATIO schedule, the number of responses needed for a reward varies.

This is the most powerful type of intermittent reinforcement schedule.

In humans, this type of schedule is used by casinos to attract gamblers: a slot machine pays out an average win ratio—say five to one—but does not guarantee that every fifth bet (behavior) will be rewarded (reinforcement) with a win.

3. A FIXED-INTERVAL schedule is when behavior is rewarded after a set amount of time.

This type of schedule exists in payment systems when someone is paid hourly: no matter how much work that person does in one hour (behavior), they will be paid the same amount (reinforcement).

4. With a VARIABLE-INTERVAL schedule, the subject gets the reinforcement based on varying and unpredictable amounts of time.

People who like to fish experience this type of reinforcement schedule: on average, in the same location, you are likely to catch about the same number of fish in a given time period.

However, you do not know exactly when those catches will occur (reinforcement) within the time period spent fishing (behavior).

SUMMARY

All of these schedules have different advantages.

In general, ratio schedules consistently elicit higher response rates than interval schedules, because reinforcement depends on how many responses the subject makes rather than on how much time has passed.

For example, if you are a factory worker who gets paid per item that you manufacture, you will be motivated to manufacture these items quickly and consistently.

Variable schedules are categorically less predictable, so they tend to resist extinction and encourage continued behavior.

Both gamblers and fishermen alike can understand the feeling that one more pull on the slot-machine lever, or one more hour on the lake, will change their luck and elicit their respective rewards.

Thus, they continue to gamble and fish, regardless of previously unsuccessful feedback.

https://www.boundless.com/psychology/textbooks/boundless-psychology-textbook/learning-7/operant-conditioning-47/schedules-of-reinforcement-200-12735/

In continuous reinforcement, the desired behavior is reinforced every single time it occurs.

This schedule is best used during the initial stages of learning in order to create a strong association between the behavior and the response.

For example, imagine that you are trying to teach a dog to shake your hand. During the initial stages of learning, you would probably stick to a continuous reinforcement schedule in order to teach and establish the behavior. You might start by grabbing the animal's paw, performing the shaking motion, saying "Shake," and then offering a reward each and every time you perform this sequence of steps. Eventually, the dog will start to perform the action on his own, and you might opt to continue reinforcing every single correct response until the behavior is well established.

Once the response is firmly established, reinforcement is usually switched to a partial reinforcement schedule.

VS

Partial Reinforcement Schedules

In partial or intermittent reinforcement, the response is reinforced only part of the time.

Learned behaviors are acquired more slowly with partial reinforcement, but the response is more resistant to extinction.

For example, think of our earlier example where you were training a dog to shake. While you initially used a continuous schedule, reinforcing every single instance of the behavior may not always be realistic. Eventually, you might decide to switch to a partial schedule where you provide reinforcement after so many responses occur or after so much time has elapsed.


There are four schedules of partial reinforcement:

Fixed-ratio schedules are those where a response is reinforced only after a specified number of responses. This schedule produces a high, steady rate of responding with only a brief pause after the delivery of the reinforcer.

An example of a fixed-ratio schedule would be delivering a food pellet to a rat after it presses a bar five times.

Variable-ratio schedules occur when a response is reinforced after an unpredictable number of responses. This schedule creates a high steady rate of responding. Gambling and lottery games are good examples of a reward based on a variable ratio schedule. In a lab setting, this might involve delivering food pellets to a rat after one bar press, again after four bar presses, and a third pellet after two bar presses.

Fixed-interval schedules are those where the first response is rewarded only after a specified amount of time has elapsed. This schedule causes high amounts of responding near the end of the interval, but much slower responding immediately after the delivery of the reinforcer. An example of this in a lab setting would be reinforcing a rat with a lab pellet for the first bar press after a 30-second interval has elapsed.

Variable-interval schedules occur when a response is rewarded after an unpredictable amount of time has passed. This schedule produces a slow, steady rate of response. An example of this would be delivering a food pellet to a rat after the first bar press following a one-minute interval, another pellet for the first response following a five-minute interval, and a third food pellet for the first response following a three-minute interval.
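The four schedules can be sketched as small decision rules that answer, after each response, "reinforce now?" This is an illustrative toy model, not lab software; the class names and parameter choices are my own.

```python
# Sketch of the four partial reinforcement schedules as "should we
# reinforce this response?" rules. All parameter values are illustrative.
import random

class FixedRatio:
    """Reinforce exactly every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses (mean ~ mean_n)."""
    def __init__(self, mean_n):
        self.mean_n = mean_n
        self.remaining = random.randint(1, 2 * mean_n - 1)
    def respond(self):
        self.remaining -= 1
        if self.remaining == 0:
            self.remaining = random.randint(1, 2 * self.mean_n - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response after a set interval has elapsed."""
    def __init__(self, seconds):
        self.seconds, self.last = seconds, 0.0
    def respond(self, now):
        if now - self.last >= self.seconds:
            self.last = now
            return True
        return False

class VariableInterval:
    """Reinforce the first response after an unpredictable interval."""
    def __init__(self, mean_seconds):
        self.mean, self.last = mean_seconds, 0.0
        self.wait = random.uniform(0, 2 * mean_seconds)
    def respond(self, now):
        if now - self.last >= self.wait:
            self.last = now
            self.wait = random.uniform(0, 2 * self.mean)
            return True
        return False

# A fixed-ratio-5 schedule, like the rat's five bar presses per pellet:
fr5 = FixedRatio(5)
presses = [fr5.respond() for _ in range(10)]
print(presses)   # True only on the 5th and 10th press
```

The ratio classes count responses while the interval classes watch the clock, which is exactly the fixed/variable and ratio/interval distinction the text draws.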

In psychology, latent learning refers to knowledge that only becomes clear when a person has an incentive to display it.

For example, a child might learn how to complete a math problem in class, but this learning is not immediately obvious.

Only when the child is offered some form of reinforcement for completing the problem does this learning reveal itself.

Latent learning is important because the information we have learned is often not recognizable until the moment that we need to display it.

While you might have learned how to cook a roast by watching your parents prepare dinner, this learning may not be apparent until you find yourself having to actually cook a meal on your own.

How Latent Learning Works

When we think about the learning process, we often focus only on learning that is immediately obvious.

We teach a rat to run through a maze by offering rewards for correct responses. We train a student to raise his hand in class by offering praise for the appropriate behaviors.

But not all learning is immediately apparent. Sometimes learning only becomes evident when we need to utilize it.

According to psychologists, this "hidden" learning that only manifests itself when reinforcement is offered is known as latent learning.

How Was Latent Learning Discovered?

The term latent learning was coined by psychologist Edward Tolman during his research with rats, although the first observations of this phenomenon were made earlier by researcher Hugh Blodgett.

In experiments that involved having groups of rats run a maze, rats that initially received no reward still learned the course during the nonreward trials. Once rewards were introduced, the rats were able to draw upon their "cognitive map" of the course.

These observations demonstrated that learning can take place even when an organism does not display it right away.

Observational learning describes the process of learning through watching others, retaining the information, and then later replicating the behaviors that were observed.

What is Observational Learning?

There are a number of learning theories, such as classical conditioning and operant conditioning, that emphasize how direct experience, reinforcement, or punishment lead to learning. However, a great deal of learning happens indirectly.

For example, think of how a child watches his parents wave at one another and then imitates these actions himself. A tremendous amount of learning happens through this process of watching and imitating others. In psychology, this is known as observational learning.

Observational learning is sometimes also referred to as modeling or vicarious reinforcement. While it can take place at any point in life, it tends to be most common during childhood as children learn from the authority figures and peers in their lives.

It also plays an important role in the socialization process, as children learn how to behave and respond to others by observing how their parents and other caregivers interact with each other and with other people.


Psychologist Albert Bandura is the researcher perhaps most often identified with learning through observation. He and other researchers have demonstrated that we are naturally inclined to engage in observational learning.

In fact, children as young as 21 days old have been shown to imitate facial expressions and mouth movements.

If you've ever made faces at an infant and watched them try to mimic your funny expressions, then you certainly understand how observational learning can be such a powerful force even from a very young age.

Bandura's social learning theory stresses the importance of observational learning.

In his famous Bobo doll experiment, Bandura demonstrated that young children would imitate the violent and aggressive actions of an adult model. In the experiment, children observed a film in which an adult repeatedly hit a large, inflatable balloon doll. After viewing the film clip, children were allowed to play in a room with a real Bobo doll just like the one they saw in the film.

What Bandura found was that children were more likely to imitate the adult's violent actions when the adult either received no consequences or when the adult was actually rewarded for their violent actions. Children who saw film clips in which the adult was punished for this aggressive behavior were less likely to repeat the behaviors later on.

The Information Processing Model is a framework used by cognitive psychologists to explain and describe mental processes. The model likens the thinking process to how a computer works.

Just like a computer, the human mind takes in information, organizes it, and stores it to be retrieved at a later time. Just as the computer has an input device, a processing unit, a storage unit, and an output device, so does the human mind have equivalent structures.

In a computer, information is entered by means of input devices like a keyboard or scanner. In the human mind, the input device is called the Sensory Register, composed of sensory organs like the eyes and the ears through which we receive information about our surroundings.

As information is received by a computer, it is processed in the Central Processing Unit, which is equivalent to the Working Memory or Short-Term Memory. In the human mind, this is where information is temporarily held so that it may be used, discarded, or transferred into long-term memory.

In a computer, information is stored in a hard disk, which is equivalent to the long-term memory. This is where we keep information that is not currently being used. Information stored in the Long-Term Memory may be kept for an indefinite period of time.

When a computer processes information, it displays the results by means of an output device like a computer screen or a printout. In humans, the result of information processing is exhibited through behavior or actions - a facial expression, a reply to a question, or body movement.

The Information Processing Model is often used by educators and trainers to guide their teaching methodologies.
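The computer analogy above can be sketched as a tiny pipeline. This is an illustrative toy model; the class design and the seven-item working-memory capacity (a common textbook figure, not stated in this text) are assumptions of mine.

```python
# Minimal sketch of the Information Processing Model's three stores.
# Class names and the working-memory capacity limit are illustrative.

class Mind:
    WORKING_MEMORY_CAPACITY = 7   # classic "about seven items" figure

    def __init__(self):
        self.sensory_register = []   # input: raw sights, sounds, etc.
        self.working_memory = []     # short-term processing
        self.long_term_memory = {}   # indefinite storage

    def sense(self, stimulus):
        """Input device: the sensory register receives a stimulus."""
        self.sensory_register.append(stimulus)

    def attend(self):
        """Move attended information into working memory; the rest decays."""
        while self.sensory_register and \
                len(self.working_memory) < self.WORKING_MEMORY_CAPACITY:
            self.working_memory.append(self.sensory_register.pop(0))
        self.sensory_register.clear()   # unattended input is lost

    def store(self, key):
        """Transfer a matching item from working memory into long-term memory."""
        for item in self.working_memory:
            if key in item:
                self.long_term_memory[key] = item
                return True
        return False

    def retrieve(self, key):
        """Output: recall stored information later, cued by a key."""
        return self.long_term_memory.get(key)

mind = Mind()
mind.sense("grandmother's words: 'keep believing in yourself'")
mind.attend()
mind.store("believing")
print(mind.retrieve("believing"))
```

Each method maps onto one stage of the computer analogy: `sense` is the input device, `attend` and `store` are the processing and storage units, and `retrieve` produces the output.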

Jessica is 16 years old. She goes to visit her grandmother today, and they talk once again about her goals to become a doctor.

Her face lights up as her grandmother tells her she is going to be a wonderful doctor and help so many people. She says, 'Remember, Jessica, you can do anything you want if you keep believing in yourself.'

Years go by, and Jessica never forgets her grandmother's words of encouragement.

When she is a senior in college, she becomes very discouraged by her difficult classes and worries about getting into med school.

But every time she wonders if she can achieve her goal, she reminds herself of her grandmother's words.

In fact, she will remember those words even once she becomes a doctor. Jessica went through all the stages of information processing in her time with her grandmother and thereafter.

The 1st stage she went through was ATTENDING. In this stage, she was listening and paying close attention to her grandmother's words that she could do whatever she wanted if she believed in herself. When we attend or focus on an event or a conversation, we are preparing ourselves to receive it.

The 2nd stage Jessica went through was ENCODING. This is what happened when she was taking in her grandmother's words. If she had not been paying attention to them or placing any importance on them, she would not have encoded them.

The 3rd stage was STORING. In this stage, her grandmother's words were entering her memory bank, ready to be called upon at some other time.

The final stage was RETRIEVING. This happened when Jessica went through a tough time in college and looked back on her grandmother's words, bringing them up to her conscious awareness. She retrieved this information in order to use it.

Summary

The Primacy/Recency Effect is the observation that information presented at the beginning (Primacy) and end (Recency) of a learning episode tends to be retained better than information presented in the middle.

When we talk about the Primacy Effect and the Recency Effect, we are talking about two ends of the same observation: there is the beginning, a long middle that blurs together, and the end. The Primacy Effect is the beginning - you remember items at the start of a list because they occurred first. The Recency Effect is the finish - you remember the end best because it came last.

To understand Primacy and Recency, let's look at an example from the business world.

A new product or service is released to the market. The first step in the promotion process is to contact those who may be able to feature the product in a news story, interview, or other introductory venue.

Next, the product is advertised. Adverts place the most important, attention-getting information at the beginning, follow with a brief explanation in the middle, and end with a memorable statement designed to persuade the potential customer to buy. The goal is to have you remember the end of the advertisement and thus buy the product.

The Recency Effect is strongest in repeated persuasion messages when there is a delay between the messages. Advertisers are aware of this when they schedule commercial messages.

Recency and Learning

One cannot define and discuss the Recency Effect in learning without understanding the Primacy Effect. Primacy Effect means that we remember best what we see or hear first - this becomes primary. In learning, this means that we remember best what we learn first.

The research supporting the idea that we remember best what was learned at the beginning of a lesson - when the Primacy Effect is at work - also tells us that we remember least what occurs in the middle of a learning session.

Sometimes we miss the reasoning and facts behind and supporting our learning. We are susceptible to the information we get at the end of the lesson as a result of the Recency Effect, whether such information is accurate or not. Promoters of new products recognize this: they aim to persuade us to buy their products by using attention-getting, memorable closing comments.

Psychologists often make distinctions among different types of memory. There are three main distinctions:

1. Implicit vs. Explicit memory

Sometimes information that unconsciously enters the memory affects thoughts and behavior, even though the event and the memory of the event remain unknown. Such unconscious retention of information is called implicit memory.

Example: Tina once visited Hotel California with her parents when she was ten years old. She may not remember ever having been there, but when she makes a trip there later, she knows exactly how to get to the swimming pool.

Explicit memory is conscious, intentional remembering of information. Remembering a social security number involves explicit memory.

2. Declarative vs. Procedural memory

Declarative memory is recall of factual information such as dates, words, faces, events, and concepts. Remembering the capital of France, the rules for playing football, and what happened in the last game of the World Series involves declarative memory. Declarative memory is usually considered to be explicit because it involves conscious, intentional remembering.

Procedural memory is recall of how to do things such as swimming or driving a car. Procedural memory is usually considered implicit because people don't have to consciously remember how to perform actions or skills.

3. Semantic vs. episodic memory

Declarative memory is of two types: semantic and episodic.

Semantic memory is recall of general facts, while episodic memory is recall of personal facts.

Remembering the capital of France and the rules for playing football uses semantic memory. Remembering what happened in the last game of the World Series uses episodic memory.
Sensory memory is really many sensory memory systems, one associated with each sense.

For example, there is a sensory memory for vision, called iconic memory, and one for audition (hearing), called echoic memory. Here are some characteristics of these two sensory memory systems:

1. Iconic Memory (vision)
Capacity: Essentially that of the visual system (Sperling)
Duration: About 0.5 to 1.0 seconds (Sperling)
Processing: None additional beyond raw perceptual processing

2. Echoic Memory (hearing)
Capacity: Not well established
Duration: About 4 to 5 seconds
Processing: None additional beyond raw perceptual processing

Iconic memory (also known as visual persistence) refers to the short-term visual memories people store after seeing something very briefly.

They create pictures in the mind. Unlike long-term memories, which can be stored for a lifetime, these iconic mental images last only milliseconds and fade quickly.

"Here's an iconic memory I have from 40 YEARS ago: I am standing on a snow-covered mountain peak in Vail Colorado. The cloudless sky is intensely blue because the air at that altitude is so dry, and it contrasts richly with the white snow. There are ski tracks crossing and criss-crossing leading down from my summit. My skiing partner is wearing a white stocking cap, a red ski jacket and dark blue ski pants which are too big for her. She has on those "mirror" sunglasses and I can see my reflection when she looks at me.

I think this is an iconic memory and it's all visual, no sound. As I relate it, I can SEE it very clearly in my mind."

Echoic memory is a component of sensory memory (SM) that is specific to retaining auditory information. It is the sensory memory for sounds that people have just perceived.

Unlike visual stimuli, which our eyes can scan over and over, auditory stimuli cannot be rescanned. Overall, echoic memories are stored for slightly longer periods of time than iconic memories.

"Now here's an echoic memory from 60 years ago: I am back on our farm in high school and about to do my nightly chores. I am late milking the family cow (not sure why I got that job - maybe because I'm the youngest of 3 brothers.). As I start to walk toward the barn, I have an echoic memory in my mind of the cow bellowing. I can HEAR it really. It sounds something like this: MMMMNNN-EERROOOOOOOOOOOH!

Loosely translated into English it means "Somebody get their ass out here and milk me! I'm so full of milk it hurts like Hell and my teats are starting to drip milk!" (They really do when you don't milk the cow for a day.)"
Once a memory is created, it must be stored (no matter how briefly). Many experts think there are three ways we store memories:

First in the sensory stage; then in short-term memory; and ultimately, for some memories, in long-term memory. Because there is no need for us to maintain everything in our brain, the different stages of human memory function as a sort of filter that helps to protect us from the flood of information that we're confronted with on a daily basis.

The creation of a memory begins with its perception:

The registration of information during perception occurs in the brief sensory stage that usually lasts only a fraction of a second. It's your sensory memory that allows a perception such as a visual pattern, a sound, or a touch to linger for a brief moment after the stimulation is over.

Short-Term Memory

This second stage is the first stop for incoming information. It holds only a certain amount of information for a brief amount of time, unless there is further processing into long-term memory. It is also referred to as one's working memory, as it serves any number of functions like remembering phone numbers, plans for the day, etc.

Jessica made plans earlier in the week to meet with her grandmother and didn't use a planner, but the date and time remained in her short-term memory.

Long-Term Memory

In this stage, the information we've received becomes implanted in our minds. There is no limit to the amount and types of information we can retain in this storehouse. We are not aware of every memory we have stored, but they are still there, simply not triggered.

Jessica may not spend any time thinking of her grandmother's words during her career as a doctor. Until, that is, the memory is triggered by, let's say, people telling her they won't be able to do this or that with their future.
Short-term memory (STM) is the second stage of the multi-store memory model proposed by Atkinson and Shiffrin. The duration of STM seems to be between 15 and 30 seconds, and its capacity about 7 items.

Short term memory has three key aspects:

1. limited capacity (only about 7 items can be stored at a time)

2. limited duration (storage is very fragile and information can be lost with distraction or passage of time)

3. encoding (primarily acoustic, even translating visual information into sounds).

There are two ways in which capacity is tested, one being span, the other being recency effect.

The magic number 7 (plus or minus two) provides evidence for the capacity of short term memory. Most adults can store between 5 and 9 items in their short-term memory. This idea was put forward by Miller (1956), who called it the magic number 7. He thought that short term memory could hold 7 (plus or minus 2) items because it only had a certain number of "slots" in which items could be stored.

However, Miller didn't specify the amount of information that can be held in each slot. Indeed, if we can "chunk" information together we can store a lot more information in our short term memory.
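The chunking idea above can be illustrated with a short sketch (a toy illustration, not a cognitive model; the digit string and chunk size are invented): grouping the same digits into meaningful chunks reduces how many "slots" they occupy.

```python
# Toy illustration of Miller's chunking: the same digit string occupies
# fewer short-term-memory "slots" when grouped into meaningful chunks.
def count_slots(items, chunk_size=1):
    """Group a sequence into fixed-size chunks and count the slots used."""
    chunks = [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
    return chunks, len(chunks)

digits = "1945177618122001"  # 16 digits: well beyond 7 +/- 2 as single items

_, unchunked = count_slots(digits, chunk_size=1)        # 16 slots
chunks, chunked = count_slots(digits, chunk_size=4)     # 4 slots

print(unchunked, chunked, chunks)  # the chunks read as memorable years
```

Treating each four-digit group as a familiar year turns 16 items into 4 chunks, comfortably inside the 7-plus-or-minus-2 range.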

Miller's theory is supported by evidence from various studies, such as Jacobs (1887). He used the digit span test with every letter in the alphabet and numbers apart from "w" and "7" because they had two syllables. He found out that people find it easier to recall numbers rather than letters. The average span for letters was 7.3 and for numbers it was 9.3.

The duration of short term memory seems to be between 15 and 30 seconds, according to Atkinson and Shiffrin (1971). Items can be kept in short term memory by repeating them verbally (acoustic encoding), a process known as rehearsal.

The Brown-Peterson technique prevents rehearsal by having participants count backwards in threes during the retention interval.

Peterson and Peterson (1959) showed that the longer the delay, the less information is recalled. The rapid loss of information from memory when rehearsal is prevented is taken as an indication of short term memory having a limited duration.

Baddeley and Hitch (1974) have developed an alternative model of short-term memory which they call working memory.
Memory consolidation is the process where our brains convert short-term memories into long-term ones. We only store short-term memories for about 30 seconds, so if we're ever going to remember anything, all that information has to be moved into long-term memory.

Memory Consolidation and Synapses

In order to understand how memory consolidation works, it's helpful to understand how synapses work in the brain.

Think of it like an electrical system conducting a current: the synapses pass the signals from neuron to neuron, with the help of neurotransmitters.

The more frequently signals are passed, the stronger the synapses become. This process, called potentiation, is believed to play a major role in the learning and memory processes. When two neurons fire at the same time repeatedly, they become more likely to fire together in the future. Eventually, these two neurons will become sensitized to one another.
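The "fire together, wire together" idea above can be sketched in a few lines (a deliberately simplified toy, not a biological model; the weight values and learning rate are invented):

```python
# Minimal Hebbian-style sketch of potentiation: a synapse's weight grows
# each time the two neurons it connects are active together, so frequently
# co-active pairs end up with stronger connections.
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the synapse only when both neurons fire together."""
    if pre_active and post_active:
        weight += rate
    return weight

w = 0.0
for _ in range(10):                 # ten co-activations
    w = hebbian_update(w, True, True)

print(w)  # repeated co-firing has potentiated the synapse

# When only one neuron fires, the weight is left unchanged:
print(hebbian_update(0.5, True, False))
```

The point of the sketch is simply that strengthening depends on *co-occurrence* of activity, which is why repeated paired firing sensitizes the two neurons to each other.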

As we acquire new experiences, information, and memories, our brains create more and more of these connections. Essentially, the brain can rearrange itself, establishing new connections while weeding out old ones.

How Memory Consolidation Works

By rehearsing or recalling information over and over again, these neural networks become strengthened. For example, if you study the same material regularly over a long period, the pathways involved in remembering that information become stronger.

The repeated firing of the same neurons makes it more likely that those same neurons will be able to repeat that firing again in the future.

As a result, you will be able to remember the information later with greater ease and accuracy.

Another way to think of these synaptic pathways: They're similar to a path in the woods.

The more often you walk the path, the more familiar it becomes and the easier it is to traverse.
The levels of processing model (Craik and Lockhart, 1972) focuses on the depth of processing involved in memory, and predicts that the deeper information is processed, the longer a memory trace will last.

Craik defined depth as:

"the meaningfulness extracted from the stimulus rather than in terms of the number of analyses performed upon it." (1973, p. 48)

Unlike the multi-store model it is a non-structured approach. The basic idea is that memory is really just what happens as a result of processing information. Memory is just a by-product of the depth of processing of information, and there is no clear distinction between short term and long term memory.

Therefore, instead of concentrating on the stores/structures involved (i.e. short term memory & long term memory), this theory concentrates on the processes involved in memory.


We can process information in 3 ways:

Shallow Processing

- This takes two forms
1. Structural processing (appearance) which is when we encode only the physical qualities of something. E.g. the typeface of a word or how the letters look.

2. Phonemic processing - which is when we encode its sound.

Shallow processing only involves maintenance rehearsal (repetition to help us hold something in the STM) and leads to fairly short-term retention of information.

This is the only type of rehearsal to take place within the multi-store model.

Deep Processing

This involves:

3. Semantic processing, which happens when we encode the meaning of a word and relate it to similar words with similar meaning.

Deep processing involves elaboration rehearsal which involves a more meaningful analysis (e.g. images, thinking, associations etc.) of information and leads to better recall.

For example, giving words a meaning or linking them with previous knowledge.

Summary

Levels of processing: The idea that the way information is encoded affects how well it is remembered. The deeper the level of processing, the easier the information is to recall.
First discovered by Terje Lømo in 1966, long-term potentiation (LTP) is a long-lasting strengthening of synapses between nerve cells.

Psychologists use LTP to explain long-term memories. That is, long-term memories are thought to be biologically based on LTP: unless connections between nerve cells remain sufficiently strong for an extended period of time, the cells cannot communicate with each other and memories cannot be retained for the long term.

LTP is also related to learning: without LTP, learning some skills might be difficult or impossible. In experimental psychology, researchers have induced LTP in mammals by repeatedly stimulating the synapses of nerve cells. Research on LTP has also focused on its relation to neurodegenerative diseases, especially Alzheimer's.

LTP has been most thoroughly studied in the mammalian hippocampus, an area of the brain that is especially important in the formation and/or retrieval of some forms of memory (see Chapter 31).

In humans, functional imaging shows that the human hippocampus is activated during certain kinds of memory tasks, and that damage to the hippocampus results in an inability to form certain types of new memories.

In rodents, hippocampal neurons fire action potentials only when an animal is in certain locations. Such "place cells" appear to encode spatial memories, an interpretation supported by the fact that hippocampal damage prevents rats from developing proficiency in spatial learning tasks (Figure 25.4).

Although many other brain areas are involved in the complex process of memory formation, storage, and retrieval, these observations have led many investigators to study this particular form of synaptic plasticity in the hippocampus.
Motivated forgetting is a theorized psychological behavior in which people may forget unwanted memories, either consciously or unconsciously.

Although it might get confusing for some, it is completely different from a defense mechanism. Motivated forgetting is also defined as a form of conscious coping strategy.

For instance, a person might direct his/her mind towards unrelated topics when something reminds them of unpleasant events. Repeated over time, this can lead to forgetting of the memory without any deliberate intention to forget; because the forgetting nonetheless serves a goal, it is called motivated forgetting.

Psychological Repression, an unconscious act

The concept of psychological repression was first developed in 1915. It was based on Sigmund Freud's psychoanalytic model, which suggested that people subconsciously push unpleasant thoughts and feelings into the unconscious. However, repressed memories have been known to influence behavior, dreams, decision making, emotional responses and so on. For instance, a child abused by a parent may repress the memory and yet have trouble forming relationships later in life. Psychoanalysis was the treatment method offered by Freud for repressed memories, with the goal of bringing the fears and emotions back to the conscious level.

Thought Suppression, a conscious act

The deliberate or conscious attempt to suppress memories is referred to as thought suppression. This phenomenon involves conscious strategies and intentional context shifts, so it is goal directed. For instance, if a person is faced with reminders of unpleasant memories, he/she might deliberately try to push the memory into the unconscious by thinking about something else. But thought suppression can be time consuming and quite difficult. Also, the memories can easily resurface with minimal prompting, which is why it is closely associated with obsessive-compulsive disorder.

https://www.psychestudy.com/cognitive/memory/motivated-forgetting
Prospective memory refers to remembering to perform intended actions in the future, or simply, remembering to remember.

Examples of prospective memory include:

-remembering to take medicine at night before going to bed

-remembering to deliver a message to a friend

-remembering to pick up flowers for a significant other on an anniversary.

Because a great deal of each day is spent forming intentions and acting on those intentions, it is no surprise that at least half of everyday forgetting is due to prospective memory failures (Crovitz & Daniel, 1984).

It is important to understand prospective memory not only because of the ubiquity of prospective memory demands, but also because prospective memory failures can be devastating.

For example, aircraft pilots must remember to perform several actions sequentially prior to take-off and landing and failure to remember to perform any of these actions may result in injury or death.

Although aircraft crew prospective memory failures rarely occur or lead to injury, Dismukes (2006) noted that almost one fifth of major airline accidents can be attributed to them.

Moreover, people who must remember to take medication depend upon their prospective memory for maintaining their health. In a recent Australian survey (Nelson, Reid, Ryan, Willson, & Yelland, 2006), individuals who reported forgetting to take their blood pressure medication at least once were significantly more likely to have a heart attack or die than individuals who did remember to take their medication.

Because unintentional forgetting has the potential to be devastating, it is important to learn more about the strategies that improve prospective memory. To do so, a greater understanding of prospective memory must be obtained, with careful focus on how memories are retrieved. By understanding how intentions can be successfully retrieved, strategies can be formulated which will promote efficiency and functionality.
Some of the strongest evidence for the multi-store model (Atkinson & Shiffrin, 1968) comes from serial position effect studies and studies of brain damaged patients.

Experiments show that when participants are presented with a list of words, they tend to remember the first few and last few words and are more likely to forget those in the middle of the list.

This is known as the serial position effect. The tendency to recall earlier words is called the primacy effect; the tendency to recall the later words is called the recency effect.

Murdock (1962)

Procedure

Murdock asked participants to learn a list of words that varied in length from 10 to 40 words and free recall them. Each word was presented for one to two seconds.


Results

He found that the probability of recalling any word depended on its position in the list (its serial position). Words presented either early in the list or at the end were more often recalled, but the ones in the middle were more often forgotten. This is known as serial position effect.


The improved recall of words at the beginning of the list is called the primacy effect; that at the end of the list, the recency effect. This recency effect exists even when the list is lengthened to 40 words.


Conclusion

Murdock suggested that words early in the list were put into long term memory (primacy effect) because the person has time to rehearse each word acoustically. Words from the end of the list went into short term memory (recency effect) which can typically hold about 7 items.

Words in the middle of the list had been there too long to be held in short term memory (STM) (due to displacement) and not long enough to be put into long term memory (LTM). This dip in recall is referred to as the asymptote.

In a nutshell, when participants remember primacy and recency information, it is thought that they are recalling information from two separate stores (LTM and STM, respectively).
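Murdock's conclusion can be caricatured in code (all parameters are invented for illustration; this is not a fitted model): early items survive via rehearsal into LTM, the last few survive in STM, and the middle is lost.

```python
# Toy simulation of free recall under the multi-store account: early items
# get rehearsed into LTM (primacy), the last few remain in STM (recency),
# and middle items are displaced before either store holds them.
def recalled(position, list_len, rehearsed_into_ltm=3, stm_span=4):
    """Return True if the item at this 0-based position is recalled."""
    in_ltm = position < rehearsed_into_ltm        # primacy effect
    in_stm = position >= list_len - stm_span      # recency effect
    return in_ltm or in_stm

list_len = 12
pattern = [recalled(p, list_len) for p in range(list_len)]
print(pattern)  # recalled: first 3 and last 4; the middle is forgotten
```

Plotting recall probability against position for such a rule yields the characteristic U-shaped serial position curve, with the flat forgotten middle as the asymptote.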

https://www.simplypsychology.org/primacy-recency.html
Retrograde Amnesia

Retrograde amnesia occurs when a person is unable to access memories of events that happened in the past, prior to the precipitating injury or disease that caused the loss.

Those who are impacted are generally able to remember meanings and other factual information, but are not able to recall specific events or situations.

The severity of the condition is often indicated by what memories are retained, as under a medical principle known as Ribot's Law, more recent memories are lost first, with more ingrained memories tending to be less likely to be dislodged.

Recent studies have indicated that the extensiveness of the memory loss is a reflection of whether damage to the brain is limited to the hippocampus or also includes the temporal cortex. (Journal of Neuroscience)

Anterograde Amnesia

When a person is unable to store and retain new information but is able to recall data and events that happened previously, it's known as anterograde amnesia.

Movie enthusiasts will recognize this form of amnesia from the popular film 50 First Dates. Though the movie was fiction, it reflects events that can, and do, happen in real life.

Shortly after the movie was released, the news was filled with similar medical cases, including a young woman whose combination of epilepsy and Functional Neurological Disorder prevented her from forming new memories, and a dental patient who had a negative reaction to anesthesia and suffered a similar loss of ability. (Medical Daily)

Anterograde amnesia is often a permanent condition generally thought to be caused by damage to the hippocampus section of the brain.

This damage can be caused by an accident, surgery, alcohol abuse, or an acute deficiency of thiamine that produces Korsakoff's syndrome.

Whatever the cause of the trauma, the person who is affected is unable to convert their short term experiences into long-term memory. (Simply Psychology) Studies have also shown that patients who have been prescribed benzodiazepines, familiarly known as tranquilizers such as Valium or Xanax, can suffer from anterograde amnesia.
Spreading activation is a model of memory retrieval that seeks to explain how the mind processes related ideas, especially semantic or verbal concepts.

The spreading activation model is one way cognitive psychologists explain the priming effect, which is the observable phenomenon that a person is able to more quickly recall information about a subject once a related concept has been introduced.

According to this model, semantic long-term memory consists of a vast, interrelated network of concepts. When a person is presented with any concept, the concepts most closely connected to it are activated in that person's mind, preparing or "priming" him or her to recall information related to any of them.

According to the theory of spreading activation, each semantic concept has a node in the neural network that is activated at the same time as the nodes for related concepts.

If a person is presented with the concept "dog," nodes for concepts like "bark," "beagle" and "pet" might be activated, priming him or her to think about these related words.

Depending on which concept relating to "dog" is presented next, the person is able to recall any information that might be relevant to the task at hand.

One such task might be to evaluate the accuracy of semantic statements. The person could, for instance, more quickly verify the statement "A beagle is a dog" if he or she already knows that the topic at hand is "dog."

The stronger the connection between the ideas, the more quickly the person is able to recall relevant information.

A person can probably verify very quickly the statement "A bird is an animal," because birds are very common, typical examples of the category "animal."

On the other hand, the same person would likely take significantly longer to process and verify the statement "A chinchilla is an animal," because a chinchilla is an atypical member of the category.

The model of spreading activation would account for this difficulty by saying that the node for "chinchilla" would not necessarily be activated by the category "animal."

Of course, the associations between semantic concepts vary greatly from person to person.

Someone who has a pet chinchilla, for example, will have far greater connections between "animal" and "chinchilla" than the general population.

In this way, the semantic categories described by spreading activation are a product both of the actual content and of the individual's experience.

For this reason, the spreading activation model is very useful for describing how the mind has responded to a semantic task, but not necessarily useful for predicting how a person will respond to any given task.
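The retrieval behaviour described above can be sketched as a tiny network (node names and link strengths are made up for illustration): presenting "dog" strongly activates typical associates, while "chinchilla" receives only weak activation through the intermediate "pet" node.

```python
# Toy spreading-activation sketch: activation starts at a presented concept
# and spreads to neighbours, decaying by link strength at each hop, so
# closely linked concepts end up "primed" more strongly.
network = {
    "dog": {"bark": 0.9, "beagle": 0.8, "pet": 0.7},
    "pet": {"chinchilla": 0.2},          # weak, atypical link
    "bark": {}, "beagle": {}, "chinchilla": {},
}

def spread(start, initial=1.0, threshold=0.05):
    """Propagate activation outward from a starting node until it fades."""
    activation = {start: initial}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour, strength in network[node].items():
            a = activation[node] * strength
            if a > threshold and a > activation.get(neighbour, 0.0):
                activation[neighbour] = a
                frontier.append(neighbour)
    return activation

primed = spread("dog")
# "beagle" is strongly primed; "chinchilla" only weakly, via "pet".
print(primed)
```

The activation levels stand in for verification speed: a strongly primed node ("beagle") would be verified quickly, while a weakly primed one ("chinchilla") would take longer, matching the pattern described above.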
Information that you have to consciously work to remember is known as explicit memory, while information that you remember unconsciously and effortlessly is known as implicit memory.

While most of the information you find about memory tends to focus specifically on explicit memory, researchers are becoming increasingly interested in how implicit memory works and how it influences our knowledge and behavior.

Explicit Memory

When you are trying to intentionally remember something (like a formula for your statistics class or a list of dates for your history class), this information is stored in your explicit memory. We use these memories every day, from remembering information for a test to recalling the date and time of a doctor's appointment. This type of memory is also known as declarative memory, since you can consciously recall and explain the information.

Some tasks that require the use of explicit memory include remembering what you learned in your psychology class, recalling your phone number, identifying who the current president is, writing a research paper, and remembering what time you are meeting a friend to go to a movie.

There are two major types of explicit memory:

1. Episodic memory: These are your long-term memories of specific events, such as what you did yesterday or your high school graduation.

2. Semantic memory: These are memories of facts, concepts, names, and other general knowledge information.

Implicit Memory

Things that we don't purposely try to remember are stored in implicit memory. This kind of memory is both unconscious and unintentional. Implicit memory is also sometimes referred to as nondeclarative memory since you are not able to consciously bring it into awareness.

Procedural memories, such as how to perform a specific task like swinging a baseball bat or making toast, are one type of implicit memory since you don't have to consciously recall how to perform these tasks.

While implicit memories are not consciously recalled, they still influence how you behave as well as your knowledge of different tasks.

Some examples of implicit memory include singing a familiar song, typing on your computer keyboard, daily habits, and driving a car. Riding a bicycle is another great example. Even after going years without riding one, most people are able to hop on a bike and ride it effortlessly.

Here's a quick demonstration that you can try to show how implicit memory works.

Type the following sentence without looking down at your hands: "Every red pepper is tantalizing." Now, without looking, try naming the ten letters that appear in the top row of your keyboard.

Since most students are good typists, you probably found it quite easy to type the above sentence without having to consciously think about where each letter appears on the keyboard. That task requires implicit memory.

Having to recall which letters appear in the top row of your keyboard, however, is something that would require explicit memory.

Since you have probably never sat down and intentionally committed the order of those keys to memory, it is not something that you are able to easily recall.
Interference theory refers to the occurrence of interaction between newly learned material and past behavior, memories or thoughts that cause disturbance in retrieval of the memory.

Based on the disturbance caused in attempts to retrieve past or recent memories, interference has been classified into two different kinds:

1. Retroactive Interference

2. Proactive Interference

Retroactive interference is when more recent information gets in the way of trying to recall older information.

An example would be calling your ex-boyfriend/girlfriend by your new boyfriend/girlfriend's name. The new name retroactively interferes with the old one, which is clearly problematic for recall.

Proactive interference works in the reverse direction to retroactive interference.

This is when old information prevents the recall of newer information.

This could, for example, occur with telephone numbers. When trying to recall a new phone number, the old phone number you have previously had for years could proactively interfere with the recall, to the point when it is very difficult to remember the new number.

Strengths of the theory

Research evidence: There is research to support this theory, such as the study by Baddeley and Hitch (1977).

Intuitively correct: Most people can think of times when interference in both directions has occurred. This means that the theory makes sense, and there are plenty of everyday examples of it occurring.

Weaknesses of the theory

1. Limited scope: This theory can only explain lack of recall when information in a similar format prevents recall. This means that there are many types of forgetting that are not explained by this theory.

2. Poor ecological validity: Like much of memory research there is a problem with the validity of the research that supports the theory. It is predominantly laboratory based and therefore does not test everyday recall.