
psych 105 exam 2 vest

Terms in this set (26)

LEARNING: happens before we are even born; a baby in the womb can single out the sound of its mother's voice.
Defined as a relatively enduring change in behavior or thinking that results from our experiences. Although learning leads to changes in the brain, including alterations to individual neurons as well as their networks, modifications of thinking and behavior are not always permanent.
One of the most basic forms of learning occurs during the process of HABITUATION, which is evident when an organism does not respond as strongly or as often to an event following multiple exposures to that event: the organism learns about a stimulus but begins to ignore it as it is repeated. Learning is very much about creating associations. Learning can occur in predictable or unexpected ways, it allows us to grow and change, and it is key to achieving goals.

CLASSICAL CONDITIONING/Pavlovian conditioning: associating two different stimuli. Ivan Pavlov (1849-1936) discovered how associations develop through the process of learning, which he referred to as CONDITIONING. An early experiment measured dogs' salivation in response to food. Initially, the dogs salivated as expected, but as the experiment progressed, they began salivating to other stimuli as well. After repeated trials in which an assistant gave a dog its food and then measured the dog's saliva output, Pavlov noticed that instead of salivating the moment it received food, the dog began to salivate at the mere sound or sight of the lab assistant arriving to feed it. The assistant's footsteps were the trigger (stimulus) for the dog to start salivating (response). The dog was associating the sound of footsteps with the arrival of food; it had been CONDITIONED to associate those sights and sounds with eating. The dog had learned to salivate when exposed to these stimuli, much like you might respond when seeing a sign for a favorite restaurant or a dessert you love.

Pavlov was studying reflexive or involuntary behavior. The connection between food and salivating is natural/unlearned, whereas the link between the sound of footsteps and salivating is learned. Learning has occurred when a new, non-universal link between a stimulus (footsteps) and a response (salivation) is established.

Because Pavlov was interested in exploring the link between a stimulus and the dog's response, he had to pick a stimulus that was MORE CONTROLLED than the sound of someone walking into a room. Pavlov used a variety of stimuli, such as sounds from a metronome, a buzzer, and the tone of a bell, all of which have nothing to do with food. In other words, they are NEUTRAL STIMULI in relation to food and responses to food.

Before the experiment began, the sound of the tone was a NEUTRAL STIMULUS: something in the environment that DOES NOT normally cause a relevant AUTOMATIC or reflexive response. In the current example, salivation is the relevant automatic response associated with food; dogs do not normally respond to the sound of a tone by salivating. But through experience, the dog learned to link this neutral stimulus (the tone) with another stimulus (food) that prompts an automatic, unlearned response (salivation). This type of learning is called CLASSICAL CONDITIONING, which occurs when an originally neutral stimulus is conditioned to induce an involuntary response, such as salivation, eye blinks, and other types of REFLEX actions.

At the start of Pavlov's experiment, before the dogs were conditioned or had learned anything about the neutral stimulus, they salivated when they smelled or were given food. The FOOD is called an UNCONDITIONED STIMULUS (US) because it automatically triggers a response without any learning needed. SALIVATION by the dogs when exposed to food is an UNCONDITIONED RESPONSE (UR) because it doesn't require any conditioning (learning); the dog just does it automatically. The salivation (the UR) is an automatic response caused by the smell or taste of food (the US). After conditioning has occurred, the dog responds to the sound of the tone almost as if it were food. The tone, previously a NEUTRAL STIMULUS, has now become a CONDITIONED STIMULUS (CS) because it triggers the dog's salivation. When the salivation occurs in response to the sound of the tone, it is called a CONDITIONED RESPONSE (CR); the salivation is a learned response.
We learn to associate a neutral stimulus with an unconditioned stimulus that produces an automatic, natural response.
Classical conditioning prompts learning that occurs naturally, without studying or other voluntary effort, and it happens every day. Do you feel hungry when you see a pizza box or the McDonald's "M"? Just like Pavlov's dogs, we learn through repeated pairings to associate these neutral stimuli with food, and the sight of a cardboard box or a yellow "M" can be enough to get our stomachs rumbling.

The pairings of the neutral stimulus (the tone) with the unconditioned stimulus (the meat powder) occur during the ACQUISITION or initial learning phase:
--the meat powder is always an UNCONDITIONED STIMULUS (the dog never has to learn how to respond to it).
--the dog's salivating is initially an UNCONDITIONED RESPONSE to the meat powder, but eventually becomes a CONDITIONED RESPONSE as it occurs in response to the sound of the tone (without the SIGHT OR SMELL of meat powder).
--the UNCONDITIONED STIMULUS is always different from the CONDITIONED STIMULUS; the UNCONDITIONED STIMULUS automatically triggers the response, but with the CONDITIONED STIMULUS, the response has been LEARNED by the organism.

What would happen if a dog in one of Pavlov's experiments heard a slightly higher frequency tone? Would the dog still salivate? Pavlov asked this same question and found that a stimulus similar to the CONDITIONED STIMULUS caused the dogs to salivate as well. This is an example of STIMULUS GENERALIZATION. Once an association is forged between the CONDITIONED STIMULUS and a CONDITIONED RESPONSE, the learner often responds to similar stimuli as if they were the original CONDITIONED STIMULUS. EX: when Pavlov's dogs learned to salivate in response to a metronome ticking at 90 beats per minute, they also salivated when the metronome ticked a little more quickly (100 beats per minute) or slowly (80 beats per minute). Their response was GENERALIZED to metronome speeds ranging from 80 to 100 beats per minute. Many of you may have been classically conditioned to salivate at the sight of a tall glass of lemonade. STIMULUS GENERALIZATION predicts you would salivate when seeing a shorter glass of lemonade, or even a mug if you knew it contained your favorite lemonade. Someone who has been bitten by a small dog may subsequently react with fear to all dogs, big and small. This would suggest she has generalized her fear, responding in this way to dogs of all sizes, even though the original CONDITIONED STIMULUS was a small dog.

What happens when Pavlov's dogs are presented with two stimuli that differ significantly? If you present the meat powder with a high-pitched sound, the dogs will associate that pitch with the meat powder. They will then salivate only in response to that pitch, and not to low-pitched sounds. The dogs in the experiment are displaying STIMULUS DISCRIMINATION, the ability to differentiate between a particular CONDITIONED STIMULUS and other stimuli SUFFICIENTLY DIFFERENT from it. Someone who has been stung by a bee might become afraid only at the sight of bees, and not flies, because he has learned to discriminate among the stimuli represented by the wide variety of flying insects. He has only been conditioned to experience fear in response to bees.

Once the dogs associate the sound of the tone with meat powder, can they ever listen to its sound without salivating? The answer is yes-if they are repeatedly exposed to the sound of the tone WITHOUT the meat powder. If the CONDITIONED STIMULUS is presented time and time again without being accompanied by the UNCONDITIONED STIMULUS, the association may fade. The CONDITIONED RESPONSE decreases and eventually disappears in a process called EXTINCTION.

Even with extinction, the connection is not necessarily gone forever. Two hours after the process of extinction, Pavlov presented the tone again and the dog salivated again because of a process called SPONTANEOUS RECOVERY: with the presentation of a CONDITIONED STIMULUS after a period of rest, the CONDITIONED RESPONSE reappears. The link between the sound of the tone and the food was simmering beneath the surface. The dog had not "forgotten" the association when the pairing was extinguished. Rather, the CONDITIONED RESPONSE was "suppressed" during extinction; while the dog was not being exposed to the UNCONDITIONED STIMULUS, the association was not lost. Back to the glass of lemonade, a summer drink you do not get to enjoy for 9 months out of the year. It is possible that your CONDITIONED RESPONSE (salivating) will be suppressed through the process of extinction (from Sept. to the end of May), but when June rolls around, SPONTANEOUS RECOVERY may find you once again salivating at the sight of that tangy sweet drink.
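The acquisition-extinction-spontaneous recovery cycle can be pictured as a toy simulation. This is only an illustrative sketch: the class name, the update rule, and all the numbers (learning rates, threshold) are assumptions chosen to mirror the text's claim that extinction "suppresses" the association rather than erasing it.

```python
# Toy model: extinction suppresses the learned CS-US association rather
# than erasing it, so a rest period can let the response reappear.
# All names and numbers are illustrative assumptions, not from the text.

class ConditionedResponse:
    def __init__(self):
        self.association = 0.0   # CS-US link built during acquisition
        self.suppression = 0.0   # built up when the CS appears without the US

    def pair_cs_with_us(self):   # acquisition trial (tone + food)
        self.association += 0.2 * (1.0 - self.association)

    def cs_alone(self):          # extinction trial (tone, no food)
        self.suppression += 0.3 * (1.0 - self.suppression)

    def rest(self):              # time passes; suppression fades
        self.suppression *= 0.3

    def salivates(self):         # CR shows if net strength clears a threshold
        return self.association * (1.0 - self.suppression) > 0.25

dog = ConditionedResponse()
for _ in range(10):
    dog.pair_cs_with_us()
print(dog.salivates())           # True: acquisition, CR present

for _ in range(10):
    dog.cs_alone()
print(dog.salivates())           # False: extinction, CR suppressed

dog.rest()
print(dog.salivates())           # True: spontaneous recovery
```

Note the design choice: because extinction only raises `suppression` while `association` stays intact, a single `rest()` call is enough for the response to reappear, matching the "suppressed, not lost" description above.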

After ACQUISITION has occurred, can another layer be added to the conditioning process? Suppose the sound of the tone has become a CONDITIONED STIMULUS for the dog, such that every time the tone sounds, the dog has learned to salivate. Once this conditioning is established, the researcher can add a new neutral stimulus, such as a light flashing every time the dog hears the sound of the tone. After pairing the sound and the light together (without the meat powder anywhere in sight or smell), the light will become associated with the sound and the dog will begin to salivate in response to seeing the light alone. This is called HIGHER ORDER CONDITIONING. With repeated pairings of the CONDITIONED STIMULUS (the tone) and a new NEUTRAL STIMULUS (the light), the second neutral stimulus becomes a conditioned stimulus as well. When all is said and done, both stimuli (the sound and the light) have gone from being neutral stimuli to conditioned stimuli, and either of them can elicit the CONDITIONED RESPONSE (salivation). But in HIGHER ORDER CONDITIONING, the second neutral stimulus is paired with a CONDITIONED STIMULUS instead of being paired with the original UNCONDITIONED STIMULUS. In the example, the light is associated with the sound, not the food directly.

Next time you are really hungry and begin to open the wrapper of your favorite granola bar or energy snack, notice what's going on inside your mouth. You have come to associate the wrapper (CONDITIONED STIMULUS) with the food (UNCONDITIONED STIMULUS), leading you to salivate (CONDITIONED RESPONSE) at the unwrapping of the bar. Or perhaps your mouth begins to water when you prepare food, or when you are exposed to stimuli associated with making food. Let's say your nightly routine includes making dinner during TV commercial breaks. Initially, the commercials are neutral stimuli, but you may have gotten into the habit of preparing food only during those breaks. Making your food (CONDITIONED STIMULUS) causes you to salivate (CONDITIONED RESPONSE), and because dinner preparation always happens during commercials, those commercials will eventually have the power to make you salivate (CONDITIONED RESPONSE) as well, even in the absence of food. This is an example of HIGHER ORDER CONDITIONING, in which additional stimuli come to cause the CONDITIONED RESPONSE.

After falling ill from something you ate, whether it was sushi, uncooked chicken, or tainted peanut butter, you probably steered clear of that food for a while. This is an example of CONDITIONED TASTE AVERSION, when one learns to associate the taste of a particular food or drink with illness.

A conditioned taste aversion is a form of classical conditioning that occurs when an organism learns to associate the taste of a particular food or drink with illness. Avoiding foods that induce sickness has adaptive value, increasing the odds an organism will survive and reproduce, passing its genes along to the next generation. Animals and people show biological preparedness; they are predisposed, or inclined, to learn in such situations.

TIMING: Conditioning is most effective when the CONDITIONED STIMULUS is presented immediately before the UNCONDITIONED STIMULUS.

OPERANT CONDITIONING: a type of learning in which people or animals come to associate their voluntary actions with their consequences. Whether pleasant or unpleasant, the effects of a behavior influence future actions.

THORNDIKE'S LAW OF EFFECT. Edward Thorndike (1874-1949). Research on cats. Thorndike put a cat in a latched cage called a puzzle box and planted enticing pieces of fish outside the door. When first placed inside the box, the cat would scratch and paw around randomly, but after a while, just by chance, it would pop the latch, causing the door to release. The cat would then escape to devour the fish. The next time the cat was put in the box, it would repeat this random activity, scratching and pawing with no particular direction. And again, the cat would pop the latch that released the door and freed it to eat the fish. Each time the cat was returned to the box, the number of random activities decreased until eventually it was able to break free almost immediately. The cats' naturally occurring behaviors (scratching and exploring) allowed them to discover the solution to the puzzle box. The cats initially obtained the fish treat by accident. The amount of time it took the cats to break free was the measure of learning. The cats' behavior can be explained by the LAW OF EFFECT, which says that a behavior (opening the latch) is more likely to happen again when followed by a pleasurable outcome (the fish). Behaviors that lead to pleasurable outcomes will be repeated, while behaviors that don't lead to pleasurable outcomes (or are followed by something unpleasant) will not be repeated. The fish are REINFORCERS, because the fish increased the likelihood that the preceding behavior (escaping the cage) would happen again. REINFORCERS are consequences that follow behaviors, and are a key component of operant conditioning. Through the process of REINFORCEMENT, an organism's behaviors become more frequent. A child praised for sharing a toy is more likely to share in the future.

Some of the earliest and most influential research on operant conditioning came out of the lab of BF SKINNER (1904-1990). Skinner was devoted to BEHAVIORISM, the scientific study of observable behavior, which proposed that ALL of an organism's behaviors (all acts, thoughts, and feelings) are shaped by factors in the external environment.
Building on THORNDIKE'S LAW OF EFFECT and WATSON'S approach to RESEARCH, BF SKINNER* demonstrated, among other things, that rats can learn to push levers and pigeons can learn to bowl. Since animals can't be expected to immediately perform such complex behaviors, Skinner employed SHAPING, the use of REINFORCERS to change behaviors through small steps toward a desired behavior. Skinner used shaping to teach a pigeon to bowl. Skinner devised a plan that every time the pigeons did something that brought them a step closer to completing the desired behavior, they would get a reinforcer (food). Since each incremental change in behavior brings the pigeons closer to accomplishing the larger goal of bowling, this method is called SHAPING by SUCCESSIVE APPROXIMATIONS. It can be used with humans, who are unable to change problematic behaviors overnight. SUCCESSIVE APPROXIMATIONS have been used to change truancy behavior in adolescents.

Both OPERANT and CLASSICAL CONDITIONING are forms of learning, and they share many common principles. As with CLASSICAL CONDITIONING, behaviors learned through OPERANT CONDITIONING go through an ACQUISITION phase. When Jeremy Lin was a sophomore in high school, he learned how to dunk a basketball. Similarly, the cats in THORNDIKE'S puzzle boxes learned the skills they needed to break free. In both cases, the ACQUISITION stage occurred through the gradual process of SHAPING. Using SUCCESSIVE APPROXIMATIONS, the learner acquires the new behavior over time.
Behaviors learned through OPERANT CONDITIONING are also subject to EXTINCTION; that is, they may fade in the absence of REINFORCERS. A rat in a Skinner box eventually gives up pushing on a lever if there is no longer a tasty reinforcer awaiting it. But that same lever-pushing behavior can make a sudden comeback through SPONTANEOUS RECOVERY. After a rest period, the rat returns to its box and reverts to its old lever-pushing ways.

With OPERANT CONDITIONING, STIMULUS GENERALIZATION occurs when a previously learned response to one stimulus occurs in the presence of a similar stimulus. A rat is conditioned to push a particular type of lever, but it may push a variety of other lever types similar in shape, size, and/or color. Horses also show STIMULUS GENERALIZATION. With SUCCESSIVE APPROXIMATIONS using a tasty oat-molasses grain REINFORCER, a small sample of horses learned to push on a "rat lever" with their lips in response to the appearance of a solid black circle with a 2.5-inch diameter. After conditioning, the horses were presented with a variety of black circles of different diameters, and indeed STIMULUS GENERALIZATION was evident; they pressed the lever most often when shown circles closer in size to the original.
STIMULUS DISCRIMINATION is also at work in OPERANT CONDITIONING, as organisms can learn to discriminate between behaviors that do and do not result in REINFORCEMENT. A turtle was rewarded with morsels of meat for choosing the black paddle as opposed to the white and gray wooden paddles, for example, and then subsequently chose the black paddle over other colored paddles. STIMULUS DISCRIMINATION also occurs in basketball: Lin has learned to discriminate between teammates and opponents. Lin would not likely get reinforcement from the crowd if he mistook an opponent for a teammate and passed the ball to the other team.

CONDITIONING BASICS
THE LINK:
--CLASSICAL CONDITIONING: links different stimuli, often through repeated pairings.
--OPERANT CONDITIONING: links behavior to its consequence often through repeated pairings.

RESPONSE:
--CLASSICAL CONDITIONING: involuntary behavior.
--OPERANT CONDITIONING: voluntary behavior

ACQUISITION:
--CLASSICAL CONDITIONING: the initial learning phase.
--OPERANT CONDITIONING: the initial learning phase.

EXTINCTION:
--CLASSICAL CONDITIONING: process by which the conditioned response decreases after repeated exposure to the conditioned stimulus in the absence of the unconditioned stimulus.
--OPERANT CONDITIONING: the disappearance of a learned behavior through the removal of its reinforcer.

SPONTANEOUS RECOVERY:
--CLASSICAL CONDITIONING: following extinction, with the presentation of the conditioned stimulus after a rest period, the conditioned response reappears.
--OPERANT CONDITIONING: following extinction due to the absence of reinforcers, the behavior reemerges in a similar setting.

THE DIFFERENCE BETWEEN CLASSICAL AND OPERANT CONDITIONING:
-both forms of conditioning involve forming associations.
-in CLASSICAL CONDITIONING, the learner links different STIMULI; in OPERANT CONDITIONING, the learner connects her behavior to its CONSEQUENCES (REINFORCEMENT and PUNISHMENT).
-another key similarity is that the principles of acquisition, stimulus discrimination, stimulus generalization, extinction, and spontaneous recovery apply to both types of conditioning.
--In CLASSICAL CONDITIONING, the learned behaviors are involuntary, or reflexive. Ivonne cannot control her heart rate any more than Pavlov's dogs can decide when to salivate. OPERANT CONDITIONING, on the other hand, concerns voluntary behavior. Jeremy Lin had power over his decision to practice his shot, just as Skinner's pigeons had control over swatting bowling balls with their beaks.
In short, CLASSICAL CONDITIONING is an involuntary form of learning, whereas OPERANT CONDITIONING requires active effort.
Another important distinction is the way in which behaviors are strengthened. In CLASSICAL CONDITIONING, behaviors become more frequent with repeated pairings of stimuli. The more often Ivonne smells chlorine before swim practice, the tighter the association she makes between chlorine and swimming. OPERANT CONDITIONING is also strengthened by repeated pairings, but in this case, the connection is between a behavior and its consequences. REINFORCERS strengthen the behavior; punishment weakens it. The more benefits Jeremy Lin gains from succeeding in basketball, the more likely he is to keep practicing.

OBSERVATIONAL LEARNING: learning that occurs as a result of watching the behaviors of others. Speaking English, eating with utensils, driving a car: all skills acquired from watching and mimicking others. OBSERVATIONAL LEARNING does not always require sight; you can feel a demonstration and then imitate the movement you observed with your sense of touch. Just as OBSERVATIONAL LEARNING can lead to positive outcomes like sharper basketball and swimming skills, it can also breed undesirable behaviors. EX: the Bobo doll experiment, which shows how quickly children can adopt aggressive ways they see modeled by adults, as well as exhibit their own novel aggressive responses. ALBERT BANDURA*.

Watching TV can promote bad and aggressive behaviors, whether through too much television exposure or the parental approach to viewing. On the flip side, children also have a gift for mimicking positive behaviors. When children watch shows similar to Sesame Street, they receive messages that encourage PROSOCIAL BEHAVIORS, meaning they foster kindness, generosity, and forms of behavior that benefit others.

LATENT LEARNING: Edward Tolman. A type of learning that occurs without awareness and regardless of reinforcement, and that remains hidden until there is a need to use it. EX: as Ivonne runs, she hears sounds from all directions, the breathing of other runners, their feet hitting the ground, chatter from the sidelines, and uses these auditory cues to produce a mental map of her surroundings. Learning for the sake of learning.
REINFORCEMENT: the process through which an organism learns to associate voluntary behaviors with their consequences. Any stimulus that increases a behavior is a reinforcer. Reinforcers can be added or taken away. Either category of reinforcement, positive or negative, has the effect of increasing a desired behavior.

POSITIVE REINFORCEMENT: in the process of positive reinforcement, reinforcers are added or presented following the targeted behavior, and reinforcers in this case are generally pleasant, increasing the chances that the target behavior will occur again in the future. If a behavior doesn't increase as a result of an added stimulus, that stimulus should not be considered a reinforcer. The fish treats that Thorndike's cats received immediately after escaping the puzzle box and the morsels of bird feed that Skinner's pigeons got for bowling are examples of positive reinforcement. In both cases, the reinforcers were ADDED following the desired behavior and were pleasurable to the cats and pigeons.
Not all positive reinforcers are positive in the sense that we think of them as pleasant; when we refer to POSITIVE REINFORCEMENT, we mean that something has been ADDED. For example, if a child is starved for attention, any kind of attention would be experienced as a positive reinforcer; the ATTENTION is ADDED.

NEGATIVE REINFORCEMENT: behaviors can increase in response to negative reinforcement, through the process of TAKING AWAY/subtracting something unpleasant. Skinner used negative reinforcement to shape the behavior of his rats. The rats were placed in Skinner boxes with floors that delivered a continuous mild electric shock, except when the animals pushed on a lever. The animals would begin running around the floors to escape the shock, but every once in a while they would accidentally hit the lever and turn off the electrical current. Eventually, they learned to associate pushing the lever with the removal of the unpleasant stimulus (the mild electric shock). After several trials, the rats would push the lever immediately to reduce the amount of time being shocked. An example of negative reinforcement in your life: driving your car without your seatbelt eventually triggers an annoying beeping sound. The automakers have found a way to use negative reinforcement to INCREASE the use of seat belts. The beeping provides an annoyance (an unpleasant stimulus) that will influence most people to quickly put on their seat belts (the desired behavior increases) to make the beeping stop, and thus remove the unpleasant stimulus.
The target behaviors increase in order to remove an unwanted condition. EX: a basketball coach starts practice by whining and complaining (a very annoying stimulus) about how slow the players are. But as soon as their level of activity increases, he stops his annoying behavior. The players then learn to avoid the coach's whining and complaining simply by running faster and working harder at every practice. The removal of the annoying stimulus (whining and complaining) increases the desired behavior (running faster).

PRIMARY REINFORCER: The food that Skinner rewarded his rats and pigeons with is considered a PRIMARY REINFORCER (an innate reinforcer) because it satisfies a biological need. Food, water, and physical contact are considered primary reinforcers (for both animals and people) because they meet essential requirements.

SECONDARY REINFORCERS: Many of the reinforcers shaping human behavior are SECONDARY REINFORCERS, which means they do not satisfy biological needs but often derive their power from their connection with primary reinforcers. Although money is not a primary reinforcer, we know from experience that it gives us access to primary reinforcers, such as food, a safe place to live, and perhaps even the ability to attract desirable mates. Therefore, money is a secondary reinforcer. Listening to music, washing dishes, taking a ride in your car-these are all considered secondary reinforcers for people who enjoy doing them. A REINFORCER IS ONLY A REINFORCER IF THE PERSON RECEIVING IT FINDS IT TO BE REINFORCING. In other words, the existence of a reinforcer depends on its ability to increase a target behavior.

CONTINUOUS REINFORCEMENT: "Every time they would try to get me to run further, they'd say, 'We'll have hot chocolate afterwards!'" The hot chocolate was given on a schedule of CONTINUOUS REINFORCEMENT, because the reinforcer was presented every time Ivonne ran a little further. EX: a child getting a sticker every time she practices the piano; a spouse getting praise every time she does the dishes; a dog getting a treat every time it comes when called. Reinforcement every time the behavior is produced.

PARTIAL REINFORCEMENT: delivering reinforcers intermittently/every once in a while works better for maintaining behaviors. A child gets a sticker every other time she practices, a spouse gets praise almost all of the time she does the dishes, a dog gets a treat every third time it comes when called. The reinforcer is not given every time the behavior is observed, but only some of the time.

PARTIAL REINFORCEMENT EFFECT: behaviors take longer to disappear (through the process of extinction) when they have been acquired or maintained through partial or intermittent, rather than continuous reinforcement.

**FIXED RATIO (# of times) SCHEDULE: with this arrangement, the subject must exhibit a predetermined number of desired responses or behaviors before a reinforcer is given. A pigeon in a Skinner box may have to peck a spot 5 times in order to score a delicious pellet (5:1). A third grade teacher might give students prizes when they pass three multiplication tests (3:1). The students quickly learn this predictable pattern. Generally, the fixed-ratio schedule produces a high response rate, but with a characteristic dip immediately following the reinforcement. Pigeons peck away at the target with only a brief rest, and students study for multiplication tests following the same pattern.


**VARIABLE RATIO (# of times) SCHEDULE: unpredictable reinforcement. In a VARIABLE RATIO SCHEDULE, the number of desired responses or behaviors that must occur before a reinforcer is given changes across trials and is based on an average number of responses to be reinforced. If the goal is to train a pigeon to peck a spot on a target, a variable ratio schedule can be used: Trial 1, the pigeon gets a pellet after pecking the spot twice; Trial 2, the pigeon gets a pellet after pecking the spot once; Trial 3, the pigeon gets a pellet after pecking the spot 3 times; and so on. In the classroom, the teacher might not tell the students in advance how many tests they will need to pass to get a prize. She may give prizes after two tests, then the next time after seven tests. This variable ratio schedule tends to produce a high response rate and behaviors that are difficult to extinguish because of the unpredictability of the reinforcement schedule.

**FIXED INTERVAL (time) SCHEDULE: focuses on the interval of time between reinforcers, as opposed to the number of desired responses. In a FIXED INTERVAL SCHEDULE, the reinforcer comes after a preestablished interval of time; the target behavior is only reinforced after the given time period is over. A reinforcer is given for the first target behavior that occurs AFTER the time interval has ended. If a pigeon is on a fixed interval schedule of 30 seconds, it can peck the target as often as possible once the interval starts, but it will only get a reinforcer following its first response after the 30 seconds has ended. With a 1-week interval, the teacher gives prizes only on Fridays for children who do well on their math quiz that day, regardless of how they performed during the week on other math quizzes. With this schedule, the target behavior tends to increase as each time interval comes to an end.

**VARIABLE INTERVAL (time) SCHEDULE: in a VARIABLE INTERVAL SCHEDULE, the length of time between reinforcements is unpredictable. In this schedule, the reinforcer comes after an interval of time goes by, but the length of the interval changes from trial to trial (within a predetermined range based on an average interval length). As with the fixed-interval schedule, reinforcement follows the first target behavior that occurs after the time interval has elapsed. Training a pigeon to peck a spot on a target using a variable interval schedule might include: Trial 1, the pigeon gets a pellet after 41 seconds; Trial 2, the pigeon gets a pellet after 43 seconds; Trial 3, the pigeon gets a pellet after 40 seconds; and so on. In each trial, the pigeon must respond after the interval of time has passed (which varies from trial to trial). The third grade teacher might think that an average of 4 days should go by between quizzes. So, instead of giving reinforcers every 7 days (that is, always on Friday), she gives quizzes separated by a variable interval. The first quiz might come after a 2-day interval, the next after a 3-day interval, and the students do not know when to expect them. The variable interval schedule tends to encourage steady patterns of behavior. The pigeon tries its luck pecking a target once every 30 seconds or so, and the students come to school prepared to take a quiz every day (their amount of study holding steady).
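The four schedules above boil down to simple decision rules about when a response earns a reinforcer. The sketch below is illustrative only: the function names, parameters, and trial counts are assumptions chosen to mirror the pigeon examples, not anything from the text.

```python
# Toy decision rules for the four partial-reinforcement schedules.
# Each function answers: "does this response earn a reinforcer?"
# All names and numbers are illustrative assumptions.

def fixed_ratio(response_count, n=5):
    # Reinforce every n-th response (e.g., every 5th peck).
    return response_count % n == 0

def variable_ratio(responses_since_reward, target):
    # Reinforce when the trial's target count is reached; the target
    # is drawn anew each trial around some average.
    return responses_since_reward >= target

def fixed_interval(seconds_since_reward, interval=30.0):
    # The first response AFTER the fixed interval elapses is reinforced.
    return seconds_since_reward >= interval

def variable_interval(seconds_since_reward, interval):
    # Same rule as fixed interval, but the interval varies per trial.
    return seconds_since_reward >= interval

# Example: a pigeon on a fixed-ratio 5 schedule
pecks = 0
pellets = 0
for _ in range(20):
    pecks += 1
    if fixed_ratio(pecks, n=5):
        pellets += 1
print(pellets)  # 4 pellets for 20 pecks
```

The ratio schedules count responses while the interval schedules count elapsed time; making the count or the clock variable is what produces the unpredictability that resists extinction.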
SKINNER BOX:

Skinner placed the animals in chambers, or SKINNER BOXES, which were outfitted with food dispensers the animals could activate (by pecking a target or pushing on a lever, for instance) and recording equipment to monitor these behaviors. These boxes allowed Skinner to conduct carefully controlled experiments, measuring behaviors precisely and advancing the scientific and systematic study of behavior.

Skinner devised a plan that every time the pigeons did something that brought them a step closer to completing the desired behavior, they would get a reinforcer (food). Since each incremental change in behavior brings the pigeons closer to accomplishing the larger goal of bowling, this method is called SHAPING by SUCCESSIVE APPROXIMATIONS. It can also be used with humans, who are often unable to change problematic behaviors overnight. SUCCESSIVE APPROXIMATIONS have been used to change truancy behavior in adolescents.
LATENT LEARNING: a type of learning that occurs without awareness and regardless of reinforcement, and that remains hidden until there is a need to use it.

Edward Tolman* and CH Honzik demonstrated LATENT LEARNING in rats in their 1930 maze experiment. The researchers took three groups of rats and let them run free in mazes for several days. One group received food for reaching the goal boxes in their mazes; a second group received no reinforcement; and a third received nothing until the 11th day of the experiment, when they too received food when finding the goal box.

As you might expect, the first group solved the mazes more and more quickly as the days wore on. Meanwhile, the unrewarded rats wandered through the twists and turns, showing only minor improvements from one day to the next. But on day 11, when the researchers started to give treats to the third group, the behavior of the rats changed. After just one round of treats, the rats were scurrying through the mazes and scooping up the food as if they had been rewarded throughout the whole experiment.

They had apparently been learning the whole time, even when there was no reinforcement for doing so; in other words, learning just for the sake of learning.

We all do this as we acquire cognitive maps of our environments. Without realizing it, we remember locations, objects, and details of our surroundings, and all of this information is brought together in a mental layout. Research suggests that visually impaired people forge cognitive maps without the use of visual information. Instead, they use "compensatory sensorial channels" (hearing and sense of touch, for example) as they gather information about their environments.

This line of research highlights the importance of cognitive processes underlying behaviors and suggests that learning can occur in the absence of reinforcement. Because of the focus on cognition, this research approach conflicts with the views of Skinner who adhered to a strict form of behaviorism.
ENCODING:
process through which information enters our memory system. Think about what happens when you pay attention to an event unfolding before you; stimuli associated with that event (sights, sounds, smells) are taken in by your senses and then converted to neural activity that travels to the brain. Once in the brain, the neural activity continues, at which point the information takes one of two paths: either it enters our memory system (it is encoded to be stored for a longer period of time) or it slips away.

STORAGE:
The next step for info that is successfully encoded is storage. Storage is exactly what it sounds like: preserving information for possible recollection in the future. Before Clive Wearing fell ill, his memory was excellent. His brain was able to encode and store a variety of events and learned abilities. Following his bout with encephalitis, however, his ability for long term storage of new memories was destroyed- he could no longer retain new information for more than seconds at a time.

RETRIEVAL:
After info is stored, how do we access it? Perhaps you still have a memory of your first-grade teacher's face, but can you remember his or her name? This process of coming up with the name is RETRIEVAL. Sometimes info is encoded and stored in memory, but it cannot be accessed or retrieved. Have you ever felt that a person's name or a certain vocabulary word was just sitting on the tip of your tongue? Chances are you were suffering from a retrieval failure.

**SENSORY MEMORY: can hold vast amounts of sensory stimuli for a sliver of time. It is a primary component of PERCEPTION, the process by which sensory data are organized to provide meaningful information.

At any moment you are surrounded by many sensory stimuli: you may be focusing on the sentence being read while also collecting data through your peripheral vision, hearing noises (the hum of a fan), smelling odors (food cooking in the kitchen), tasting foods (if you are snacking), and feeling things (your back pressed against a chair). Most of these stimuli never catch your attention, but some are being registered in your SENSORY MEMORY, the first stage of the information processing model.

Not all information processed in sensory memory ends up in short term memory.

More is seen than remembered, even though these memories last less than 1 second. Sperling's research suggests that the visual impressions in our sensory memory, also known as ICONIC MEMORY, are photograph-like in their accuracy but dissolve in less than a second.

On your way to class, you notice a dog barking at a passing car. You see the car, smell its exhaust, hear the dog. All this info coming through your sensory system registers in your SENSORY MEMORY, the first in a series of stages in the information processing model of memory.

LEVELS OF PROCESSING IN SENSORY MEMORY:
SHALLOW: notice some physical features
INTERMEDIATE: notice patterns and a little more detail
DEEP: think about meaning.

ECHOIC MEMORY: "I hear the dog barking."
-auditory impressions/sounds
-duration 1-10 seconds
-very accurate

ICONIC MEMORY: "I see the dog."
-visual impressions/images
-duration less than 1 second
-very accurate

"I smell the car's exhaust."

EIDETIC IMAGERY: "photographic memory". The details people with eidetic imagery report aren't always accurate, and thus their memories are not quite photographic. EIDETIC IMAGERY is rare and usually only occurs in young children.

Although brief, sensory memory is critical to the creation of memories. Without it, how would information enter the memory system in the first place? Iconic and Echoic memory register sights and sounds, but memories can also be rich in smells, tastes, and touch. Data received from all the senses are held momentarily in sensory memory.

**SHORT TERM MEMORY: can temporarily maintain and process limited information for a longer stretch. Second stage in the information processing model. The amount of time info is maintained and processed in short term memory depends on whether you are distracted by other cognitive activities, but the duration can be about 30 seconds.


In a classic study of the duration of short term memory, an experimenter recited a three-letter combination, followed by a number. Participants were then asked to begin counting backward by 3 from the number given (if the experimenter said CHG 300, then participants would respond 300, 297, 294...) until they saw a red light flash, which signaled them to repeat the three-letter combo. After 3 seconds of counting backward, participants could only recall the correct letter combos approximately 50% of the time. Most of the participants were unable to recall the letter combination beyond 18 seconds. Think about what you normally do if you're trying to remember something; you probably say it over and over in your head (CHG, CHG, CHG). But the participants were not able to do this because they had to count backward by 3s, which interfered with their natural inclination to mentally repeat the letter combos.

Short term memory has a limited capacity. At any given moment, you can only concentrate on a tiny percentage of the data flooding your sensory memory. Items that capture your ATTENTION can move into your short term memory, but most everything else disappears faster than you can say the word "memory". If your goal is to remember what you should be concentrating on, you need to give it your full attention.
CHUNKING: EX: a friend gives you a phone number in a stressful situation; you can either try to remember the whole number 5095579498 or break it into more manageable pieces to remember it better: 509-557-9498.
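The phone-number example can be sketched in code. A minimal illustration (the function name is made up; the 3-3-4 chunk sizes mirror the phone-number grouping in the example):

```python
def chunk_digits(digits, sizes=(3, 3, 4)):
    """Break one long digit string into smaller, more memorable chunks."""
    pieces, start = [], 0
    for size in sizes:
        # Slice off the next chunk of the requested size
        pieces.append(digits[start:start + size])
        start += size
    return "-".join(pieces)

print(chunk_digits("5095579498"))  # -> 509-557-9498
```

Three short chunks fit within short term memory's limited capacity far more comfortably than one ten-digit string.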

WORKING MEMORY*
Short term memory is a stage in the original information processing model, as well as the location where information is temporarily held.
Working memory refers to the activities and processing occurring within it; its purpose is to actively maintain information, aiding the mind that is busily performing complex cognitive tasks.

Phonological Loop: Part of working memory/short term memory that is responsible for working with verbal information for brief periods of time; when exposed to verbal stimuli, we "hear" an immediate corollary in our mind. We use this when reading, trying to solve problems, or learning new vocabulary.

LONG TERM MEMORY: has essentially unlimited capacity and stores enduring information about facts and experiences.

Explicit and implicit memories.

Items that enter short term memory have two fates: either they fade or they move into long term memory. Example items in long term memory: funny jokes, passwords, images of faces, multiplication tables, and vocabulary words (perhaps 30,000 or more). Long term memory has no known limits. Memories here may even last a lifetime, and retrieval is often quick and requires little effort.

Procedural Memory: How to perform different skills, operations, and actions.

Episodic Memory: The record of memorable experiences or "episodes" including when and where an experience occurred; a type of explicit memory.

Autobiographical Memory: memory of life events

Semantic Memory: The memory of information theoretically available to anyone, which pertains to general facts about the world; a type of explicit memory.
MEMORY:
refers to information the brain collects, stores, and may retrieve for later use. Because memories depend on communication among neurons, they are subject to modification over time; they may be somewhat different every time you access them. The brain has seemingly unlimited storage capabilities and the ability to process many types of information simultaneously, both consciously and unconsciously.

Memory is a complex system involving multiple structures and regions of the brain. Memory is formed, processed, and stored throughout the brain, and different types of memory have different paths.

BRAIN AREAS RESPONSIBLE FOR MEMORY FORMATION

The process of memory formation that moves a memory from the hippocampus to other areas of the brain is called MEMORY CONSOLIDATION. The consolidation that begins in the hippocampus allows for the long term storage of memories. As for retrieval, the HIPPOCAMPUS appears to be in charge of accessing young memories, but then passes on that responsibility to other brain regions as memories grow older.

HIPPOCAMPUS:
EXPLICIT memory formation. plays a vital role in the creation of new memories. The hippocampus is primarily responsible for processing and making new memories, but is NOT where memories are permanently stored. It is also one of the brain areas where neurogenesis occurs, that is, where new neurons are generated. Described as a pair of seahorse-shaped structures buried deep within the temporal lobes.

INFANTILE AMNESIA: the inability to remember events from our earliest years.

AMYGDALA:
IMPLICIT memory formation, emotional memory formation.

CEREBELLUM:
IMPLICIT memory formation.
PROACTIVE INTERFERENCE:
forgetting can stem from problems in encoding and storage. And the tip-of-the-tongue phenomenon tells us that it can also result from glitches in retrieval. Let's take a closer look at some other retrieval problems. Studies show that retrieval is influenced, or in some cases blocked, by information we learn before and after a memory is made, which we refer to as INTERFERENCE. If you have studied more than one foreign language, you have probably experienced interference. Suppose you take Spanish in middle school, and then begin studying Italian in college. As you try to learn Italian, you may find Spanish words creeping into your mind and confusing you; this is an example of PROACTIVE INTERFERENCE, the tendency for information learned in the past to interfere with the retrieval of new material. People who learn to play a second instrument have the same problem; the fingering of the old instrument interferes with the retrieval of the new fingering.

RETROACTIVE INTERFERENCE:
Now let's say you're going to Mexico and need to use the Spanish you learned back in middle school. As you approach a vendor in an outdoor market in Costa Maya, you may become frustrated when the only words that come to mind are ciao bello and buongiorno (Italian for hello handsome and good day), when you are really searching for phrases with the same meaning in Spanish. Here, recently learned information interferes with the retrieval of things learned in the past. We call this RETROACTIVE INTERFERENCE. This type of interference can also impact the musician; when she switches back to her original instrument, the fingering techniques she uses to play the new instrument interfere with her old techniques. Thus, proactive interference results from knowledge acquired in the past, and retroactive interference is caused by information learned recently.
LOFTUS EXPERIMENT AND THE MISINFORMATION EFFECT:
after showing participants a short film clip of a multiple-car accident, Loftus and Palmer asked them what they had seen. Some were asked "About how fast were the cars going when they smashed into each other?" Replacing the word "smashed" with "hit", they asked others "About how fast were the cars going when they hit each other?" Can you guess which version resulted in the highest estimates of speed? If you guessed "smashed" you are correct. Participants who had heard the word "smashed" apparently incorporated a faster speed in their memories, and were more likely to report having seen broken glass even though there wasn't any glass. Participants who had not heard the word "smashed" seemed to have a more accurate memory of the car crash.
^concluded that memories can be changed in response to new information. In this case, the participants' recollections of the car accident were altered by the wording of a questionnaire. This research suggests eyewitness accounts of accidents, crimes, and other important events might be altered by a variety of factors that come into play AFTER the event occurs. The wording of questions can change the way events are recalled.
The MISINFORMATION EFFECT is the tendency for new and misleading information to distort one's memory of an incident. ("Was there broken glass?")

Recall is a construction built and rebuilt from various sources

We often fit memories into existing beliefs or schemas

EBBINGHAUS FORGETTING CURVE:
relearning (EX: you learn material much more quickly the second time around)

Ebbinghaus would memorize nonsense syllables (DAZ, MIB, CHQ, etc.). Once Ebbinghaus had successfully memorized a list, meaning he could recite it smoothly and confidently, he would put it aside. Later, he would memorize it again and calculate how much time he had saved in round 2; he called this the SAVINGS SCORE.

Meaningless strings of numbers, letters tend to fade within a day.

In addition to demonstrating the effects of relearning, Ebbinghaus was the first to demonstrate just how rapidly memories vanish. Through his experiments with nonsense syllables, Ebbinghaus found that the bulk of forgetting occurs immediately after learning.
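The relearning measure described above boils down to simple arithmetic. A sketch of the standard savings formula (the example times are hypothetical, not Ebbinghaus's actual data):

```python
def savings_score(original_minutes, relearning_minutes):
    """Percentage of learning time saved when relearning a list:
    savings = 100 * (original - relearning) / original."""
    return 100 * (original_minutes - relearning_minutes) / original_minutes

# Hypothetical: a list took 20 minutes to learn, but only 8 to relearn
print(savings_score(20, 8))  # -> 60.0
```

A high savings score means much of the original learning survived, even if the list could not be recalled outright.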

What causes us to forget? ENCODING FAILURE

How quickly we forget the material depends on how well the material was encoded. Also, how meaningful the material was, and how often it was rehearsed.
TESTS OF LONG TERM MEMORY

**RETRIEVAL CUES:
are stimuli that help you retrieve stored information that is difficult to access. For example, let's say you were trying to remember the name of the researcher who created the working memory model introduced earlier. If we gave you the first letter of his last name, B, would that help you retrieve the information? If your mind jumped to Baddeley, then B served as your retrieval cue.

PRIMING: is the process of awakening memories with the help of retrieval cues.

**RECALL:
the process of retrieving information held in long term memory without the help of explicit retrieval cues. Recall is what you depend on when you answer fill in the blank or short answer essay questions on exams. Say you are given the following prompt: "Using a computer metaphor, what are the three processes involved in memory?" In this situation, you must come up with the answer from scratch: "The three processes are encoding, storage, and retrieval."

**RECOGNITION:
let's say you are faced with a multiple choice question: "One proven way of retaining information is: a) distributed practice, b) massed practice, or c) eidetic imagery." Answering this question relies on RECOGNITION, the process of matching incoming data to information stored in long term memory. Recognition is generally a lot easier than recall because the information is right before your eyes; you just have to identify it (Hey, I've seen that before). Recall, on the other hand, requires you to come up with information on your own. Most of us find it easier to recognize the correct answer from a list of possible answers in a multiple choice question than to recall the same correct answer for a fill in the blank question.
SCHEMA: one of the basic units of cognition.

a collection of ideas or notions representing a basic unit of understanding. Young children form schemas based on functional relationships. The schema "toy", for example, might include any object that can be played with (such as dolls, trucks, and balls). As children mature, so do their schemas, which begin to organize and structure their thinking around more abstract categories, such as "love" (romantic love, love for one's country, and so on). As they grow, children expand their schemas in response to life experiences and interactions with the environment.

cognitive equilibrium: a feeling of mental or cognitive balance.

Suppose a kindergartener's schema of airplane pilots only includes men, but then he sees a female pilot on tv. This experience shakes up his notion of who can be a pilot, causing an uncomfortable sense of disequilibrium that motivates him to restore cognitive balance. There are two ways he might accomplish this. He could use ASSIMILATION, an attempt to understand new info using his already existing knowledge base, or schema. For example, the young boy might think about the female bus drivers he has seen and connect that to the female pilot idea (women drive buses, so maybe they can fly planes too). However, if the new information is so disconcerting that it cannot be assimilated, he might use ACCOMMODATION, a restructuring of old notions to make a place for new information. With accommodation, we remodel old schemas or create new ones. If the child had never seen a female driving anything other than a car, he might become confused at the sight of a female pilot. To eliminate that confusion, he would create a new schema. This is how we make great strides in cognitive growth. We ASSIMILATE information to fit new experiences into our old ways of thinking, and we ACCOMMODATE our old way of thinking to understand new information.
TRIAL AND ERROR:
process of finding a solution through a series of attempts. Mistakes will be made along the way, and whatever doesn't work gets eliminated. It is only useful in certain circumstances; it should not be used if the stakes are extremely high, especially in situations where a potentially wrong selection would be harmful or life threatening. Nor is this approach suggested for problems with too many possible solutions. If your keychain has 1,000 keys, you probably wouldn't want to spend your time randomly selecting keys until one fits. Trial and error is sort of a gamble; there is no guarantee it will lead to a solution.

ALGORITHMS:
use formulas or sets of rules that provide solutions to problems. Unlike trial and error, algorithms ENSURE a solution as long as you follow all of the steps. As reliable as algorithms can be, they are not always practical. If you don't know the algorithm's formula, you obviously cannot use it, and sometimes the steps may require too much time.

HEURISTICS:
problem-solving approach that applies a "rule of thumb" or broad application of a strategy. Although heuristics aren't always reliable in their results, they do help us identify and evaluate possible solutions to our problems. Let's say you are cooking rice, but the instructions are unavailable. One good rule of thumb is to use two cups of water for every cup of rice. But unlike algorithms, which use formulas and sets of rules, there is no guarantee a heuristic will yield a correct solution. The advantage of heuristics is that they allow you to shrink the pool of possible solutions to a manageable size (given all the varieties of rice, there are many different ways to cook it). Heuristics provide shortcuts, allowing you to ignore the solutions you know will not work and move on to solutions more likely to be successful. But you might need to use trial and error to choose the best solution from that smaller pool of possibilities.
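The contrast between an algorithm and a heuristic can be sketched in code, using the key-ring and rice examples from the text (the function names are illustrative, not standard terminology):

```python
def find_key(keys, fits):
    """Algorithm: exhaustively try every key in order. Slow for a
    1,000-key ring, but GUARANTEED to find the right key if it's there."""
    for key in keys:
        if fits(key):
            return key
    return None  # not on the ring

def water_for_rice(cups_of_rice):
    """Heuristic: the 'two cups of water per cup of rice' rule of thumb.
    Fast, but not guaranteed to be right for every variety of rice."""
    return 2 * cups_of_rice

right_key = find_key(range(1000), lambda k: k == 542)  # -> 542
water = water_for_rice(1.5)                            # -> 3.0
```

The algorithm trades time for certainty; the heuristic trades certainty for speed, which is exactly the distinction the text draws.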

Creating subgoals or subproblems. When writing your last term paper, did you break it into shorter, more achievable parts?

Means-ends analysis. With this heuristic, you try to figure out how to decrease the distance between your goal and your current point in the process. You determine HOW to reach your goal (the means), which allows you to solve the problem (the end). Using means-ends analysis might involve breaking a problem into subproblems that can be solved independently.
CONFIRMATION BIAS:
when we unintentionally look for evidence that upholds our beliefs. People tend to overlook or discount evidence that runs counter to their original beliefs or positions. For example, you decide to go on a date with someone you are really interested in, even though you don't know him very well. You google him and look on his Facebook page to see what his friends write on his wall, and you immediately connect with one of the likes he has. With this information in hand, you stop your search, convinced you now have evidence to support your decision to go out with him. We tend to focus on information that supports favorable outcomes. The confirmation bias is not a conscious act; we do not deliberately set out looking for information to support what we already think. The danger with the confirmation bias is that we miss or ignore relevant (and possibly contradictory) information without conscious intent.

OVERCONFIDENCE EFFECT:
The Overconfidence Effect is a phenomenon in which an individual has excessive confidence in their ability to overcome challenges or dangers. It often results from overestimating one's ability, or from a lack of knowledge or complete information on how to succeed at a task. A simple example can be seen by watching a child try to do things they have seen grownups do, like cooking dinner, without the knowledge or skills necessary to do it successfully.

BELIEF PERSEVERANCE:
Social psychologists Ross, Lepper, and Hubbard found that some people have a tendency or unwillingness to admit that their foundational premises are incorrect even when shown convincing evidence to the contrary. Belief perseverance is this tendency to reject convincing proof; beliefs become even more tenaciously held when they have been publicly announced to others.


REACTANCE THEORY/FORMATION:
Reactance theory describes the pattern of behaviors that occurs in an individual when they feel their freedoms are being taken away or restricted. First introduced by Brehm (1968), this theory posits that individuals believe they have certain freedoms and choices, and that negative reactions occur if these are threatened. When behaviors that are perceived as free are threatened or taken away, individuals can become motivated to retain and recapture these freedoms.
INTELLIGENCE:
one's innate ability to solve problems, adapt to the environment, and learn from experiences. Includes a broad array of psychological factors, including memory, learning, perception, and language, and how it is defined often depends on what particular variable is being measured. Intelligence is a cultural construct: people in the US associate intelligence with school smarts, whereas children in Kenya would score higher on tests of practical knowledge than on tests assessing vocabulary. Intelligence does not always go hand in hand with intelligent behavior; people can score high on intelligence tests but exhibit a low level of judgment.
General intelligence (g factor) refers to a singular underlying aptitude or intellectual ability. This g factor, according to SPEARMAN, drives capabilities in many areas, including verbal, spatial, and reasoning competencies. The g factor is the common link.

Tests of intelligence generally aim to measure APTITUDE, or person's potential for learning. On the other hand, measures of ACHIEVEMENT are designed to assess acquired knowledge (what a person has learned).
Alfred BINET (1857-1911) sought to create a way to identify students who might have trouble learning in regular classroom settings.
Binet worked with one of his students to construct an assessment of intelligence. He came up with 30 items on the assessment he was going to test these children on. These items were designed to be of increasing difficulty, starting with a simple test to see if the child could follow a lit match that the tester moved in front of her. The items became more difficult as the test progressed (explaining how paper and cardboard are different; making rhymes with words). Binet and Simon assumed that children generally follow the same path of intellectual development. Their primary goal in creating their assessment was to compare the mental ability of an individual child with the mental abilities of other children of the same age. They would determine the MENTAL AGE (MA) of an individual child by comparing his performance to that of other children in the same age category. For example, a 10 year old boy with average intellectual abilities would score similarly to other 10 year old children and thus would have a mental age of 10. An intelligent 10 year old boy would score better than other 10 year old children and thus have a higher mental age (for example, a mental age of 12) compared to his chronological age. Similarly, a child who was intellectually slower would have a lower mental age than his chronological age (a mental age of 8 even though he was 10). One of the problems with relying on mental age as an index is that it cannot be used to compare intelligence levels across age groups. For example, you can't use mental age to compare the intelligence levels of an 8 year old girl and a 12 year old girl.

3 kinds of intelligence:
1. Analytical: solve problem
2. Practical: Adjust to different environments
3. Creative: handle new situations


In 1912, William Stern solved this problem by devising the INTELLIGENCE QUOTIENT (IQ), providing a way to compare intelligence across ages. To calculate IQ, a child's mental age is divided by her chronological age and multiplied by 100. A 10 year old girl with a mental age of 8 would have an IQ score of (8/10) x 100 = 80. If her mental age and chronological age were the same, her IQ would be 100. The IQ score can be used to compare the level of intelligence of this 10 year old girl with children of other ages. This method doesn't apply to adults. Modern intelligence tests still assign a numerical score, although they no longer use the actual quotient score.
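Stern's quotient from the paragraph above is simple arithmetic. A minimal sketch (the function name and the choice to round to a whole number are assumptions for illustration):

```python
def iq_score(mental_age, chronological_age):
    """Stern's intelligence quotient: IQ = (MA / CA) * 100."""
    return round(mental_age / chronological_age * 100)

print(iq_score(8, 10))   # -> 80  (mental age below chronological age)
print(iq_score(10, 10))  # -> 100 (mental age equals chronological age)
print(iq_score(12, 10))  # -> 120 (mental age above chronological age)
```

Because the quotient normalizes mental age by chronological age, the same score means the same relative standing across different ages, which is what mental age alone could not provide.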

Stanford-Binet test includes the assessment of verbal and nonverbal activities (defining words, tracing paths in a maze). This test yields an overall score for general intelligence, as well as scores relating to more specific abilities, such as knowledge, reasoning, visual processing, and working memory.

In the late 1930s, David WECHSLER began creating intelligence tests for adults. WECHSLER noted that the Stanford-Binet was designed exclusively for children. And although many had been using the Stanford-Binet with adults, it was not an ideal measure, given that adults might not react positively to questions geared to the daily experiences of school-age children. The Wechsler Adult Intelligence Scale (WAIS) was published in 1955 and has since been revised numerous times, with the most recent revision in 2008. Wechsler also developed tests for younger and older children.
Wechsler assessments of intelligence consist of a variety of subtests designed to measure different aspects of intellectual ability. The 10 subtests target four domains of intellectual ability: verbal comprehension, perceptual reasoning, working memory, and processing speed. Results from the WAIS-IV include an overall IQ score, as well as scores on the four domains. Look for consistency among the domain scores and subtest scores.

US MILITARY:
-The US Army needed to develop mass testing; the APA wanted to be accepted and to contribute.
-Given to 1.75 million recruits
-2 versions:
ARMY ALPHA: test administered in writing
ARMY BETA: test administered using pictures and demonstrations to recruits and draftees who couldn't read.

May have led to discrimination
Ignored confounding factors (wealth, length of time in the US, education)
VALIDITY:
the degree to which the assessment measures what it intends to measure. We can determine the validity of a measure by seeing if it can predict what it is designed to measure, or its predictive validity. Thus, to determine if an intelligence test is valid, we would check to see if the scores it produces are consistent with those of other intelligence tests. A valid intelligence test should also be able to predict future performance on tasks related to intellectual ability.

RELIABILITY:
the ability of a test to provide consistent, reproducible results. It should produce the same types of scores, and we expect an individual's scores to remain consistent across time. We can also determine the reliability of an assessment by splitting the test in half and then determining whether the findings of the first and second halves of the test agree with each other. It is possible to have a reliable test that is not valid. For this reason, we always have to determine both reliability and validity.

STANDARDIZATION:
occurs when test developers administer a test to a large sample of people, and then publish the average scores, or norms, for specified groups (EX: ACT, SAT, IQ test). The test developers provide these norms using a sample that is representative of the population of interest. It is important that the sample include a variety of individuals who are similar to the population taking the test; this allows you to compare your own score with people of the same age, gender, socioeconomic status, or region. With test norms, you are able to make judgments about the relative performance (often provided as percentiles) of an individual compared to others with similar characteristics. It is also important that assessments are given and scored using standard procedures; intelligence scores are subject to tight control.
MOTIVATION: stimulus or force that directs behavior, thinking, and persistence.

INSTINCT THEORY:
instincts are FIXED/unlearned and species specific. Evolutionary forces influence human behavior-adaption. Air, Water, Food, Sex, Need to belong (social aspect)
For example, emotional responses such as fear of snakes, heights, and spiders may have evolved to protect us from danger. But these fears are not instincts, because they are not universal: not everyone is afraid of snakes and spiders; some of us have learned to fear (or not fear) them through experience. When trying to pin down the motivation for behavior, it is difficult to determine the relative contributions of learning (nurture) and innate factors (nature) such as instinct.

DRIVE-REDUCTION THEORY:
suggests that maintaining HOMEOSTASIS motivates us to meet biological needs. HOMEOSTASIS: tendency for bodies to maintain constant states through internal controls.
If a need is not fulfilled, a DRIVE, or state of tension, is created that pushes or motivates us to engage in behaviors that meet the need. The urges to eat, drink, sleep, seek comfort, or have sex are associated with physiological needs. Once a need is satisfied, the drive is reduced, at least temporarily; this is an ongoing process, as the need inevitably returns. For example, when the behavior (eating) stops, the deprivation of something (food) causes the need to increase again.

Reward Pathway:
-ADDICTION: centers on the ventral tegmental area (VTA)
-The VTA, nucleus accumbens, and prefrontal cortex are all part of the reward pathway
-Orgasm 10,00x more rewarding when high on drugs
-The reward pathway is the part of the brain that gets hijacked in addiction.
-Addiction hijacks the part of the brain responsible for choice; the reward pathway has a lot to do with it. It is no longer a choice; that motivation is now your life.
-Allostasis: the homeostasis set point resets. EX: drinking a lot to avoid the "withdrawal" point/hangover feeling, etc.

Arousal Theory:
Impulsivity forces: boredom drives curiosity and activity-seeking behavior.
-High arousal feels like anxiety to some, but sensation seekers are comfortable there.
-Both extremes can get people into trouble (can lead to drugs and alcohol)
-Impulse control is going to be very important for life outcomes.
-Not all motivation stems from physical needs. Arousal, or engagement in the world, can be a product of anxiety, surprise, excitement, interest, fear, and many other emotions. OPTIMAL arousal is a personal or subjective matter that is not the same for everyone. Some people are sensation seekers, who seek activities that increase arousal. Popularly known as "adrenaline junkies," these individuals relish activities like cliff jumping, racing motorcycles, and watching horror movies. High sensation seeking isn't necessarily a bad thing, as it may be associated with a higher tolerance for stressful events.

Self Determination Theory:
AUTONOMY: we like to be independent
COMPETENCE: we like to feel capable
RELATEDNESS: social bonds; we like to feel connected to others
EXTRINSIC: grades, money, things like that
INTRINSIC: desire to do something because you want to be a better person; self-fulfillment.
Extrinsic motivators are dependent on a reinforcer.

Competence and Achievement motivation
Leaders and Institutions we aspire to

"Golden Circle"
(from inner circle to outer circle):
why? how? what?
"why?" is how leaders communicate. Apple company promotes it's products from inside-out, rather than outside-in; like most other companies.

Most companies/people know what they do, how they do it, but WHY they do it is the question.

People don't buy what you do; they are inspired by and buy how and why you do it. WHAT: Every single company and organization on the planet knows WHAT they do. This is true no matter how big or small, no matter what industry. Everyone is easily able to describe the products or services a company sells or the job function they have within the system. WHATs are easy to identify.
HOW: Some companies and people know HOW they do WHAT they do. Whether you call them a "differentiating value proposition" or "unique selling proposition," HOWs are often given to explain how something is different or better. They are not as obvious as WHATs, and many think these are the differentiating or motivating factors in a decision. It would be false to assume that's all that is required. There is one missing detail.
WHY: Very few people or companies can clearly articulate WHY they do WHAT they do. This isn't about making money - that's a result. WHY is all about your purpose, cause, or belief. WHY does your company exist? WHY do you get out of bed in the morning? And WHY should anyone care?