Learning, acquiring knowledge or developing the ability to perform new behaviors. It is common to think of learning as something that takes place in school, but much of human learning occurs outside the classroom, and people continue to learn throughout their lives.
Even before they enter school, young children learn to walk, to talk, and to use their hands to manipulate toys, food, and other objects. They use all of their senses to learn about the sights, sounds, tastes, and smells in their environments. They learn how to interact with their parents, siblings, friends, and other people important to their world. When they enter school, children learn basic academic subjects such as reading, writing, and mathematics. They also continue to learn a great deal outside the classroom. They learn which behaviors are likely to be rewarded and which are likely to be punished. They learn social skills for interacting with other children. After they finish school, people must learn to adapt to the many major changes that affect their lives, such as getting married, raising children, and finding and keeping a job.
Because learning continues throughout our lives and affects almost everything we do, the study of learning is important in many different fields. Teachers need to understand the best ways to educate children. Psychologists, social workers, criminologists, and other human-service workers need to understand how certain experiences change people’s behaviors. Employers, politicians, and advertisers make use of the principles of learning to influence the behavior of workers, voters, and consumers.
Learning is closely related to memory, which is the storage of information in the brain. Psychologists who study memory are interested in how the brain stores knowledge, where this storage takes place, and how the brain later retrieves knowledge when we need it. In contrast, psychologists who study learning are more interested in behavior and how behavior changes as a result of a person’s experiences.
There are many forms of learning, ranging from simple to complex. Simple forms of learning involve a single stimulus. A stimulus is anything perceptible to the senses, such as a sight, sound, smell, touch, or taste. In a form of learning known as classical conditioning, people learn to associate two stimuli that occur in sequence, such as lightning followed by thunder. In operant conditioning, people learn by forming an association between a behavior and its consequences (reward or punishment). People and animals can also learn by observation—that is, by watching others perform behaviors. More complex forms of learning include learning languages, concepts, and motor skills.
This article discusses general principles of learning. For information about the application of learning principles to formal education, see Educational Psychology.
II SIMPLE FORMS OF LEARNING
Habituation, one of the simplest types of learning, is the tendency to become familiar with a stimulus after repeated exposure to it. A common example of habituation occurs in the orienting response, in which a person’s attention is captured by a loud or sudden stimulus. For example, a person who moves to a house on a busy street may initially be distracted (an orienting response) every time a loud vehicle drives by. After living in the house for some time, however, the person will no longer be distracted by the street noise—the person becomes habituated to it and the orienting response disappears.
Despite its simplicity, habituation is a very useful type of learning. Because our environments are full of sights and sounds, we would waste a tremendous amount of time and energy if we paid attention to every stimulus each time we encountered it. Habituation allows us to ignore repetitive, unimportant stimuli. Habituation occurs in nearly all organisms, from human beings to animals with very simple nervous systems. Even some one-celled organisms will habituate to a light, sound, or chemical stimulus that is presented repeatedly.
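The fading of the orienting response can be pictured with a toy model in which each repeated exposure weakens the response by a constant fraction. This is only an illustrative sketch; the decay rate of 0.5 is an assumption for demonstration, not a measured value.

```python
# A minimal sketch of habituation: the strength of the orienting
# response declines with each repeated presentation of the same
# stimulus. The decay rate (0.5) is an illustrative assumption,
# not an empirically measured value.

def habituate(presentations, initial_response=1.0, decay=0.5):
    """Return the response strength at each stimulus presentation."""
    responses = []
    strength = initial_response
    for _ in range(presentations):
        responses.append(strength)
        strength *= decay  # each exposure weakens the response
    return responses

history = habituate(5)
print(history)  # [1.0, 0.5, 0.25, 0.125, 0.0625]: the response fades
```

After a few presentations the response is a small fraction of its original strength, mirroring how street noise stops capturing a new resident's attention.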
Sensitization, another simple form of learning, is the increase that occurs in an organism’s responsiveness to stimuli following an especially intense or irritating stimulus. For example, a sea snail that receives a strong electric shock will afterward withdraw its gill more strongly than usual in response to a simple touch. Depending on the intensity and duration of the original stimulus, the period of increased responsiveness can last from several seconds to several days.
III CLASSICAL CONDITIONING
Another form of learning is classical conditioning, in which a reflexive or automatic response transfers from one stimulus to another. For instance, a person who has had painful experiences at the dentist’s office may become fearful at just the sight of the dentist’s office building. Fear, a natural response to a painful stimulus, has transferred to a different stimulus, the sight of a building. Most psychologists believe that classical conditioning occurs when a person forms a mental association between two stimuli, so that encountering one stimulus makes the person think of the other. People tend to form these mental associations between events or stimuli that occur closely together in space or time.
A Pavlov’s Experiments
Classical conditioning was discovered by accident in the early 1900s by Russian physiologist Ivan Pavlov. Pavlov was studying how saliva aids the digestive process. He would give a dog some food and measure the amount of saliva the dog produced while it ate the meal. After the dog had gone through this procedure a few times, however, it would begin to salivate before receiving any food. Pavlov reasoned that some new stimulus, such as the experimenter in his white coat, had become associated with the food and produced the response of salivation in the dog. Pavlov spent the rest of his life studying this basic type of associative learning, which is now called classical conditioning or Pavlovian conditioning.
The conditioning process usually follows the same general procedure. Suppose a psychologist wants to condition a dog to salivate at the sound of a bell. Before conditioning, an unconditioned stimulus (food in the mouth) automatically produces an unconditioned response (salivation) in the dog. The term unconditioned indicates that there is an unlearned, or inborn, connection between the stimulus and the response. During conditioning, the experimenter rings a bell and then gives food to the dog. The bell is called the neutral stimulus because it does not initially produce any salivation response in the dog. As the experimenter repeats the bell-food association over and over again, however, the bell alone eventually causes the dog to salivate. The dog has learned to associate the bell with the food. The bell has become a conditioned stimulus, and the dog’s salivation to the sound of the bell is called a conditioned response.
B Principles of Classical Conditioning
Following his initial discovery, Pavlov spent more than three decades studying the processes underlying classical conditioning. He and his associates identified four main processes: acquisition, extinction, generalization, and discrimination.
The acquisition phase is the initial learning of the conditioned response—for example, the dog learning to salivate at the sound of the bell. Several factors can affect the speed of conditioning during the acquisition phase. The most important factors are the order and timing of the stimuli. Conditioning occurs most quickly when the conditioned stimulus (the bell) precedes the unconditioned stimulus (the food) by about half a second. Conditioning takes longer and the response is weaker when there is a long delay between the presentation of the conditioned stimulus and the unconditioned stimulus. If the conditioned stimulus follows the unconditioned stimulus—for example, if the dog receives the food before the bell is rung—conditioning seldom occurs.
Once learned, a conditioned response is not necessarily permanent. The term extinction is used to describe the elimination of the conditioned response by repeatedly presenting the conditioned stimulus without the unconditioned stimulus. If a dog has learned to salivate at the sound of a bell, an experimenter can gradually extinguish the dog’s response by repeatedly ringing the bell without presenting food afterward. Extinction does not mean, however, that the dog has simply unlearned or forgotten the association between the bell and the food. After extinction, if the experimenter lets a few hours pass and then rings the bell again, the dog will usually salivate at the sound of the bell once again. The reappearance of an extinguished response after some time has passed is called spontaneous recovery.
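The acquisition and extinction phases can be sketched with a simple error-correction rule in the spirit of the Rescorla-Wagner model of conditioning: on each trial, the associative strength of the bell moves a fraction of the way toward a target value, the maximum during bell-food pairings and zero when the bell is presented alone. The learning rate of 0.3 is an illustrative assumption, and this simple sketch does not capture spontaneous recovery.

```python
# A hedged sketch of acquisition and extinction using a simple
# error-correction update (in the spirit of the Rescorla-Wagner
# model). The learning rate of 0.3 is an illustrative assumption,
# and the model does not capture spontaneous recovery.

def condition(trials, v=0.0, rate=0.3, target=1.0):
    """Move associative strength v toward `target` over repeated trials."""
    for _ in range(trials):
        v += rate * (target - v)  # update proportional to the surprise
    return v

v_learned = condition(10)                           # bell + food pairings
v_extinct = condition(10, v=v_learned, target=0.0)  # bell alone, no food
print(v_learned)  # near 1.0: strong conditioned response
print(v_extinct)  # near 0.0: the response has been extinguished
```

The same update rule produces both curves: strength rises quickly during acquisition and declines gradually during extinction, matching the general shapes Pavlov observed.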
After an animal has learned a conditioned response to one stimulus, it may also respond to similar stimuli without further training. If a child is bitten by a large black dog, the child may fear not only that dog, but other large dogs. This phenomenon is called generalization. Less similar stimuli will usually produce less generalization. For example, the child may show little fear of smaller dogs.
The opposite of generalization is discrimination, in which an individual learns to produce a conditioned response to one stimulus but not to another stimulus that is similar. For example, a child may show a fear response to freely roaming dogs, but may show no fear when a dog is on a leash or confined to a pen.
C Applications of Classical Conditioning
After studying classical conditioning in dogs and other animals, psychologists became interested in how this type of learning might apply to human behavior. In an infamous 1920 experiment, American psychologist John B. Watson and his research assistant Rosalie Rayner conditioned a baby named Albert to fear a small white rat by pairing the sight of the rat with a loud noise. Although their experiment was ethically questionable, it showed for the first time that humans can learn to fear seemingly unimportant stimuli when the stimuli are associated with unpleasant experiences. The experiment also suggested that classical conditioning accounts for some cases of phobias, which are irrational or excessive fears of specific objects or situations. Psychologists now know that classical conditioning explains many emotional responses—such as happiness, excitement, anger, and anxiety—that people have to specific stimuli. For example, a child who experiences excitement on a roller coaster may learn to feel excited just at the sight of a roller coaster. For an adult who finds a letter from a close friend in the mailbox, the mere sight of the return address on the envelope may elicit feelings of joy and warmth.

Psychologists use classical conditioning procedures to treat phobias and other unwanted behaviors, such as alcoholism and addictions. To treat phobias of specific objects, the therapist gradually and repeatedly presents the feared object to the patient while the patient relaxes. Through extinction, the patient loses his or her fear of the object. In one treatment for alcoholism, patients drink an alcoholic beverage and then ingest a drug that produces nausea. Eventually they feel nauseous at the sight or smell of alcohol and stop drinking it. The effectiveness of these therapies varies depending on the individual and on the problem behavior. See Psychotherapy: Behavioral Therapies.
D Contemporary Theories
Modern theories of classical conditioning depart from Pavlov’s theory in several ways. Whereas Pavlov’s theory stated that the conditioned and unconditioned stimuli should elicit the same type of response, modern theories acknowledge that the conditioned and unconditioned responses frequently differ. In some cases, especially when the unconditioned stimulus is a drug, the conditioned stimulus elicits the opposite response. Modern research has also shown that conditioning does not always require a close pairing of the two stimuli. In taste-aversion learning, people can develop disgust for a specific food if they become sick after eating it, even if the illness begins several hours after eating.
Psychologists today also recognize that classical conditioning does not automatically occur whenever two stimuli are repeatedly paired. For instance, suppose that an experimenter conditions a dog to salivate to a light by repeatedly pairing the light with food. Next, the experimenter repeatedly pairs both the light and a tone with food. When the experimenter presents the tone by itself, the dog will show little or no conditioned response (salivation), because the tone provides no new information. The light already allows the dog to predict that food will be coming. This phenomenon, discovered by American psychologist Leon Kamin in 1968, is called blocking because prior conditioning blocks new conditioning.
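Blocking falls naturally out of the same error-correction idea: when two stimuli are presented together, they share a single prediction error, so a stimulus that already predicts the food leaves almost no error for a new stimulus to absorb. The sketch below illustrates this under assumed values (a learning rate of 0.3 and 20 trials per phase).

```python
# A sketch of Kamin's blocking effect using a Rescorla-Wagner-style
# rule, under which stimuli presented together share one prediction
# error. The learning rate and trial counts are illustrative.

def train(strengths, keys, trials, rate=0.3, target=1.0):
    """Update the associative strengths of the stimuli in `keys`."""
    for _ in range(trials):
        # the error is computed from the compound's combined prediction
        error = target - sum(strengths[k] for k in keys)
        for k in keys:
            strengths[k] += rate * error
    return strengths

v = {"light": 0.0, "tone": 0.0}
train(v, ["light"], 20)           # phase 1: light alone predicts food
train(v, ["light", "tone"], 20)   # phase 2: light + tone precede food
print(v["tone"])  # stays near zero: prior learning blocks the tone
```

Because the light already predicts the food almost perfectly after phase 1, the shared error in phase 2 is nearly zero and the tone acquires almost no strength, just as Kamin observed.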
IV OPERANT CONDITIONING
One of the most widespread and important types of learning is operant conditioning, which involves increasing a behavior by following it with a reward, or decreasing a behavior by following it with punishment. For example, if a mother starts giving a boy his favorite snack every day that he cleans up his room, before long the boy may spend some time each day cleaning his room in anticipation of the snack. In this example, the boy’s room-cleaning behavior increases because it is followed by a reward or reinforcer.
Unlike classical conditioning, in which the conditioned and unconditioned stimuli are presented regardless of what the learner does, operant conditioning requires action on the part of the learner. The boy in the above example will not get his snack unless he first cleans up his room. The term operant conditioning refers to the fact that the learner must operate, or perform a certain behavior, before receiving a reward or punishment.
A Thorndike’s Law of Effect
Some of the earliest scientific research on operant conditioning was conducted by American psychologist Edward L. Thorndike at the end of the 19th century. Thorndike’s research subjects included cats, dogs, and chickens. To see how animals learn new behaviors, Thorndike used a small chamber that he called a puzzle box. He would place an animal in the puzzle box, and if it performed the correct response (such as pulling a rope, pressing a lever, or stepping on a platform), the door would swing open and the animal would be rewarded with some food located just outside the cage. The first time an animal entered the puzzle box, it usually took a long time to make the response required to open the door. Eventually, however, it would make the appropriate response by accident and receive its reward: escape and food. As Thorndike placed the same animal in the puzzle box again and again, it would make the correct response more and more quickly. Soon it would take the animal just a few seconds to earn its reward.
Based on these experiments, Thorndike developed a principle he called the law of effect. This law states that behaviors that are followed by pleasant consequences will be strengthened, and will be more likely to occur in the future. Conversely, behaviors that are followed by unpleasant consequences will be weakened, and will be less likely to be repeated in the future. Thorndike’s law of effect is another way of describing what modern psychologists now call operant conditioning.
B B. F. Skinner’s Research
American psychologist B. F. Skinner became one of the most famous psychologists in history for his pioneering research on operant conditioning. In fact, he coined the term operant conditioning. Beginning in the 1930s, Skinner spent several decades studying the behavior of animals—usually rats or pigeons—in chambers that became known as Skinner boxes. Like Thorndike’s puzzle box, the Skinner box was a barren chamber in which an animal could earn food by making simple responses, such as pressing a lever or a circular response key. A device attached to the box recorded the animal’s responses. The Skinner box differed from the puzzle box in three main ways: (1) upon making the desired response, the animal received food but did not escape from the chamber; (2) the box delivered only a small amount of food for each response, so that many reinforcers could be delivered in a single test session; and (3) the operant response required very little effort, so an animal could make hundreds or thousands of responses per hour. Because of these changes, Skinner could collect much more data, and he could observe how changing the pattern of food delivery affected the speed and pattern of an animal’s behavior.
Skinner became famous not just for his research with animals, but also for his controversial claim that the principles of learning he discovered using the Skinner box also applied to the behavior of people in everyday life. Skinner acknowledged that many factors influence human behavior, including heredity, basic types of learning such as classical conditioning, and complex learned behaviors such as language. However, he maintained that rewards and punishments control the great majority of human behaviors, and that the principles of operant conditioning can explain these behaviors.
C Principles of Operant Conditioning
In a career spanning more than 60 years, Skinner identified a number of basic principles of operant conditioning that explain how people learn new behaviors or change existing behaviors. The main principles are reinforcement, punishment, shaping, extinction, discrimination, and generalization.
C1 Reinforcement
In operant conditioning, reinforcement refers to any process that strengthens a particular behavior—that is, increases the chances that the behavior will occur again. There are two general categories of reinforcement, positive and negative. The experiments of Thorndike and Skinner illustrate positive reinforcement, a method of strengthening behavior by following it with a pleasant stimulus. Positive reinforcement is a powerful method for controlling the behavior of both animals and people. For people, positive reinforcers include basic items such as food, drink, sex, and physical comfort. Other positive reinforcers include material possessions, money, friendship, love, praise, attention, and success in one’s career.
Depending on the circumstances, positive reinforcement can strengthen either desirable or undesirable behaviors. Children may work hard at home or at school because of the praise they receive from parents and teachers for good performance. However, they may also disrupt a class, try dangerous stunts, or start smoking because these behaviors lead to attention and approval from their peers. One of the most common reinforcers of human behavior is money. Most adults spend many hours each week working at their jobs because of the paychecks they receive in return. For certain individuals, money can also reinforce undesirable behaviors, such as burglary, selling illegal drugs, and cheating on one’s taxes.
Negative reinforcement is a method of strengthening a behavior by following it with the removal or omission of an unpleasant stimulus. There are two types of negative reinforcement: escape and avoidance. In escape, performing a particular behavior leads to the removal of an unpleasant stimulus. For example, if a person with a headache tries a new pain reliever and the headache quickly disappears, this person will probably use the medication again the next time a headache occurs. In avoidance, people perform a behavior to avoid unpleasant consequences. For example, drivers may take side streets to avoid congested intersections, citizens may pay their taxes to avoid fines and penalties, and students may do their homework to avoid detention.
C2 Reinforcement Schedules
A reinforcement schedule is a rule that specifies the timing and frequency of reinforcers. In his early experiments on operant conditioning, Skinner rewarded animals with food every time they made the desired response—a schedule known as continuous reinforcement. Skinner soon tried rewarding only some instances of the desired response and not others—a schedule known as partial reinforcement. To his surprise, he found that animals showed entirely different behavior patterns.
Skinner and other psychologists found that partial reinforcement schedules are often more effective at strengthening behavior than continuous reinforcement schedules, for two reasons. First, they usually produce more responding, at a faster rate. Second, a behavior learned through a partial reinforcement schedule has greater resistance to extinction—if the rewards for the behavior are discontinued, the behavior will persist for a longer period of time before stopping. One reason extinction is slower after partial reinforcement is that the learner has become accustomed to making responses without receiving a reinforcer each time. There are four main types of partial reinforcement schedules: fixed-ratio, variable-ratio, fixed-interval, and variable-interval. Each produces a distinctly different pattern of behavior.
On a fixed-ratio schedule, individuals receive a reinforcer each time they make a fixed number of responses. For example, a factory worker may earn a certain amount of money for every 100 items assembled. This type of schedule usually produces a stop-and-go pattern of responding: The individual works steadily until receiving one reinforcer, then takes a break, then works steadily until receiving another reinforcer, and so on.
On a variable-ratio schedule, individuals must also make a number of responses before receiving a reinforcer, but the number is variable and unpredictable. Slot machines, roulette wheels, and other forms of gambling are examples of variable-ratio schedules. Behaviors reinforced on these schedules tend to occur at a rapid, steady rate, with few pauses. Thus, many people will drop coins into a slot machine over and over again on the chance of winning the jackpot, which serves as the reinforcer.
On a fixed-interval schedule, individuals receive reinforcement for their response only after a fixed amount of time elapses. For example, in a laboratory experiment with a fixed-interval one-minute schedule, at least one minute must elapse between the deliveries of the reinforcer. Any responses that occur before one minute has passed have no effect. On these schedules, animals usually do not respond at the beginning of the interval, but they respond faster and faster as the time for reinforcement approaches. Fixed-interval schedules rarely occur outside the laboratory, but one close approximation is the clock-watching behavior of students during a class. Students watch the clock only occasionally at the start of a class period, but they watch more and more as the end of the period gets nearer.
Variable-interval schedules also require the passage of time before providing reinforcement, but the amount of time is variable and unpredictable. Behavior on these schedules tends to be steady, but slower than on ratio schedules. For example, a person trying to call someone whose phone line is busy may redial every few minutes until the call gets through.
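The contrast between ratio schedules can be made concrete with a short sketch. On a fixed-ratio schedule the reinforced responses are perfectly predictable; on a variable-ratio schedule only the average ratio is fixed. The ratio of 5 and the random seed below are illustrative assumptions.

```python
import random

# A sketch contrasting two partial reinforcement schedules. On a
# fixed-ratio-5 schedule every 5th response earns a reinforcer; on a
# variable-ratio-5 schedule each response has a 1-in-5 chance, so the
# required count is unpredictable but averages 5. The ratio and the
# seed are illustrative assumptions.

def fixed_ratio(responses, ratio=5):
    """Return the response numbers (1-based) that earn a reinforcer."""
    return [n for n in range(1, responses + 1) if n % ratio == 0]

def variable_ratio(responses, ratio=5, seed=42):
    """Same, but each response is reinforced with probability 1/ratio."""
    rng = random.Random(seed)
    return [n for n in range(1, responses + 1) if rng.random() < 1 / ratio]

print(fixed_ratio(20))     # predictable: [5, 10, 15, 20]
print(variable_ratio(20))  # irregular spacing, same average rate
```

The predictable spacing of the fixed-ratio schedule is what permits the stop-and-go pattern described above, while the slot-machine-like unpredictability of the variable-ratio schedule encourages rapid, steady responding.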
C3 Punishment
Whereas reinforcement strengthens behavior, punishment weakens it, reducing the chances that the behavior will occur again. As with reinforcement, there are two kinds of punishment, positive and negative. Positive punishment involves reducing a behavior by delivering an unpleasant stimulus if the behavior occurs. Parents use positive punishment when they spank, scold, or shout at children for bad behavior. Societies use positive punishment when they fine or imprison people who break the law. Negative punishment, also called omission, involves reducing a behavior by removing a pleasant stimulus if the behavior occurs. Parents’ tactics of grounding teenagers or taking away various privileges because of bad behavior are examples of negative punishment.
Considerable controversy exists about whether punishment is an effective way of reducing or eliminating unwanted behaviors. Careful laboratory experiments have shown that, when used properly, punishment can be a powerful and effective method for reducing behavior. Nevertheless, it has several disadvantages. When people are severely punished, they may become angry or aggressive, or they may have other negative emotional reactions. They may try to hide the evidence of their misbehavior or escape from the situation, as when a punished child runs away from home. In addition, punishment may eliminate desirable behaviors along with undesirable ones. For example, a child who is scolded for making an error in the classroom may not raise his or her hand again. For these and other reasons, many psychologists recommend that punishment be used to control behavior only when there is no realistic alternative.
C4 Shaping
Shaping is a reinforcement technique that is used to teach animals or people behaviors that they have never performed before. In this method, the teacher begins by reinforcing a response the learner can perform easily, and then gradually requires more and more difficult responses. For example, to teach a rat to press a lever that is over its head, the trainer can first reward any upward head movement, then an upward movement of at least one inch, then two inches, and so on, until the rat reaches the lever. Psychologists have used shaping to teach children with severe mental retardation to speak by first rewarding any sounds they make, and then gradually requiring sounds that more and more closely resemble the words of the teacher. Animal trainers at circuses and theme parks use shaping to teach elephants to stand on one leg, tigers to balance on a ball, dogs to do backward flips, and killer whales and dolphins to jump through hoops.
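The lever-press example amounts to a schedule of successive approximations: the criterion for reward is raised in small steps until only the full target behavior is reinforced. The lever height of six inches and the one-inch step below are illustrative assumptions.

```python
# A sketch of shaping by successive approximation, following the
# lever-press example: the trainer raises the height a movement must
# reach to earn a reward, one inch at a time, until the rat reaches
# the lever. The lever height and step size are illustrative.

def shape(lever_height=6, step=1):
    """Return the sequence of criteria the trainer reinforces in turn."""
    criteria = []
    target = step
    while target < lever_height:
        criteria.append(target)    # reward any movement reaching `target`
        target += step             # then require a closer approximation
    criteria.append(lever_height)  # finally, reward only the full press
    return criteria

print(shape())  # [1, 2, 3, 4, 5, 6]: each criterion a step nearer the goal
```

Each intermediate criterion is easy relative to the learner's current behavior, which is why shaping can produce responses the learner has never performed before.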
C5 Extinction
As in classical conditioning, responses learned in operant conditioning are not always permanent. In operant conditioning, extinction is the elimination of a learned behavior by discontinuing the reinforcer of that behavior. If a rat has learned to press a lever because it receives food for doing so, its lever-pressing will decrease and eventually disappear if food is no longer delivered. With people, withholding the reinforcer may eliminate some unwanted behaviors. For instance, parents often reinforce temper tantrums in young children by giving them attention. If parents simply ignore the child’s tantrums rather than reward them with attention, the number of tantrums should gradually decrease.
C6 Generalization and Discrimination
Generalization and discrimination occur in operant conditioning in much the same way that they do in classical conditioning. In generalization, people perform a behavior learned in one situation in other, similar situations. For example, a man who is rewarded with laughter when he tells certain jokes at a bar may tell the same jokes at restaurants, parties, or wedding receptions. Discrimination is learning that a behavior will be reinforced in one situation but not in another. The man may learn that telling his jokes in church or at a serious business meeting will not make people laugh. Discriminative stimuli signal that a behavior is likely to be reinforced. The man may learn to tell jokes only when he is at a loud, festive occasion (the discriminative stimulus). Learning when a behavior will and will not be reinforced is an important part of operant conditioning.
D Applications of Operant Conditioning
Operant conditioning techniques have practical applications in many areas of human life. Parents who understand the basic principles of operant conditioning can reinforce their children’s appropriate behaviors and punish inappropriate ones, and they can use generalization and discrimination techniques to teach which behaviors are appropriate in particular situations. In the classroom, many teachers reinforce good academic performance with small rewards or privileges. Companies have used lotteries to improve attendance, productivity, and job safety among their employees.
Psychologists known as behavior therapists use the learning principles of operant conditioning to treat children or adults with behavior problems or psychological disorders. Behavior therapists use shaping techniques to teach basic job skills to adults with mental retardation. Therapists use reinforcement techniques to teach self-care skills to people with severe mental illnesses, such as schizophrenia, and use punishment and extinction to reduce aggressive and antisocial behaviors by these individuals. Psychologists also use operant conditioning techniques to treat stuttering, sexual disorders, marital problems, drug addictions, impulsive spending, eating disorders, and many other behavioral problems. See Behavior Modification.
V LEARNING BY OBSERVATION
Although classical and operant conditioning are important types of learning, people learn a large portion of what they know through observation. Learning by observation differs from classical and operant conditioning because it does not require direct personal experience with stimuli, reinforcers, or punishers. Learning by observation involves simply watching the behavior of another person, called a model, and later imitating the model’s behavior.
Both children and adults learn a great deal through observation and imitation. Young children learn language, social skills, habits, fears, and many other everyday behaviors by observing their parents and older children. Many people learn academic, athletic, and musical skills by observing and then imitating a teacher. According to Canadian-American psychologist Albert Bandura, a pioneer in the study of observational learning, this type of learning plays an important role in a child’s personality development. Bandura found evidence that children learn traits such as industriousness, honesty, self-control, aggressiveness, and impulsiveness in part by imitating parents, other family members, and friends.
Psychologists once thought that only human beings could learn by observation. They now know that many kinds of animals—including birds, cats, dogs, rodents, and primates—can learn by observing other members of their species. Young animals can learn food preferences, fears, and survival skills by observing their parents. Adult animals can learn new behaviors or solutions to simple problems by observing other animals.
A Bandura’s Experiments
In the early 1960s Bandura and other researchers conducted a classic set of experiments that demonstrated the power of observational learning. In one experiment, a preschool child worked on a drawing while a television set showed an adult behaving aggressively toward a large inflated Bobo doll (a clown doll that bounces back up when knocked down). The adult pummeled the doll with a mallet, kicked it, flung it in the air, sat on it, and beat it in the face, while yelling such remarks as ‘Sock him in the nose … Kick him … Pow!’ The child was then left in another room filled with interesting toys, including a Bobo doll. The experimenters observed the child through one-way glass. Compared with children who witnessed a nonviolent adult model and those not exposed to any model, children who witnessed the aggressive display were much more likely to show aggressive behaviors toward the Bobo doll, and they often imitated the model’s exact behaviors and hostile words.
In a variant of the original experiment, Bandura and colleagues examined the effect of observed consequences on learning. They showed four-year-old children one of three films of an adult acting violently toward a Bobo doll. In one version of the film, the adult was praised for his or her aggressive behavior and given soda and candies. In another version, the adult was scolded, spanked, and warned not to behave that way again. In a third version, the adult was neither rewarded nor punished. After viewing the film, each child was left alone in a room that contained a Bobo doll and other toys. Many children imitated the adult’s violent behaviors, but children who saw the adult punished imitated the behaviors less often than children who saw the other films. However, when the researchers promised the children a reward if they could copy the adult’s behavior, all three groups of children showed large and equal amounts of violent behavior toward the Bobo doll.
Bandura concluded that even those children who did not see the adult model receive a reward had learned through observation, but these children (especially those who saw the model being punished) would not display what they had learned until they expected a reward for doing so. The term latent learning describes cases in which an individual learns a new behavior but does not perform this behavior until there is the possibility of obtaining a reward.
B Bandura’s Theory of Imitation
According to Bandura’s influential theory of imitation, also called social learning theory, four factors are necessary for a person to learn through observation and then imitate a behavior: attention, retention, reproduction, and motivation. First, the learner must pay attention to the crucial details of the model’s behavior. A young girl watching her father bake a cake will not be able to imitate this behavior successfully unless she pays attention to many important details—ingredients, quantities, oven temperature, baking time, and so on. The second factor is retention—the learner must be able to retain all of this information in memory until it is time to use it. If the person forgets important details, he or she will not be able to successfully imitate the behavior. Third, the learner must have the physical skills and coordination needed for reproduction of the behavior. The young girl must have enough strength and dexterity to mix the ingredients, pour the batter, and so on, in order to bake a cake on her own. Finally, the learner must have the motivation to imitate the model. That is, learners are more likely to imitate a behavior if they expect it to lead to some type of reward or reinforcement. If learners expect that imitating the behavior will not lead to reward or might lead to punishment, they are less likely to imitate the behavior.
C Theory of Generalized Imitation
An alternative to Bandura’s theory is the theory of generalized imitation. This theory states that people will imitate the behaviors of others if the situation is similar to cases in which their imitation was reinforced in the past. For example, when a young child imitates the behavior of a parent or an older sibling, this imitation is often reinforced with smiles, praise, or other forms of approval. Similarly, when children imitate the behaviors of friends, sports stars, or celebrities, this imitation may be reinforced—by the approval of their peers, if not their parents. Through the process of generalization, the child will start to imitate these models in other situations. Whereas Bandura’s theory emphasizes the imitator’s thought processes and motivation, the theory of generalized imitation relies on two basic principles of operant conditioning—reinforcement and generalization.
D Factors Affecting Imitation
Many factors determine whether or not a person will imitate a model. As already shown, children are more likely to imitate a model when the model’s behavior has been reinforced than when it has been punished. More important, however, are the expected consequences to the learner. A person will imitate a punished behavior if he or she thinks that imitation will produce some type of reinforcement.
The characteristics of the model also influence the likelihood of imitation. Studies have shown that children are more likely to imitate adults who are pleasant and attentive to them than those who are not. In addition, children more often imitate adults who have substantial influence over their lives, such as parents and teachers, and those who seem admired and successful, such as celebrities and athletes. Both children and adults are more likely to imitate models who are similar to them in sex, age, and background. For this reason, when behavior therapists use modeling to teach new behaviors or skills, they try to use models who are similar to the learners.
E Influence of Television
In modern society, television provides many powerful models for children and abundant opportunities for observational learning. Many parents are concerned about the behaviors their children can observe on TV. Many television programs include depictions of sex, violence, drug and alcohol use, and vulgar language—behaviors that most parents do not want their children to imitate. Studies have found that by early adolescence, the average American child has watched thousands of dramatized murders and countless other acts of violence on television.
For many years, psychologists have debated whether watching violence on television has detrimental effects on children. A number of experiments, both inside and outside the laboratory, have found evidence that viewing television violence is related to increased aggression in children. Some psychologists have criticized this research, maintaining that the evidence is inconclusive. Most psychologists now believe, however, that watching violence on television can sometimes lead to increased aggressiveness in children.
The effects of television on children’s behaviors are not all negative. Educational programs such as “Sesame Street” give children the opportunity to learn letters of the alphabet, words, numbers, and social skills. Such programs also show people who solve problems and resolve differences through cooperation and discussion rather than through aggression and hostility.
VI OTHER FORMS OF LEARNING
Although psychologists who study learning have focused the most attention on classical conditioning, operant conditioning, and observational learning, they have also studied other types of learning, including language learning, learning by listening and reading, concept formation, and the learning of motor skills. These types of learning still involve the principles of conditioning and observational learning, but they are worth considering separately because of their importance in everyday life.
A Language Learning
Learning to speak and understand a language is one of the most complex types of learning, yet all normal children master this skill in the first few years of their lives. The familiar principles of shaping, reinforcement, generalization, discrimination, and observational learning all play a role in a child’s language learning. However, in the 1950s American linguist Noam Chomsky proposed that these basic principles of learning cannot explain how children learn to speak so well and so rapidly. Chomsky theorized that humans have a unique and inborn capacity to extract word meanings, sentence structure, and grammatical rules from the complex stream of sounds they hear. Although Chomsky’s theory is controversial, it has received some support from scientific evidence that specific parts of the human brain are essential for language. When these areas of the brain are damaged, a person loses the ability to speak or comprehend language.
B Learning by Listening and Reading
Because people communicate through language, they can learn vast amounts of information by listening to others and by reading. Learning through the spoken or written word is similar to observational learning, because it allows people to learn not simply from their own experiences, but also from the experiences of others. For example, by listening to a parent or instructor, children can learn to avoid busy streets and to cross the street at crosswalks without first experiencing any positive or negative consequences. By listening to and observing others, children can learn skills such as tying a shoelace, swinging a baseball bat, or paddling a canoe. Listening to the teacher and reading are essential parts of most classroom learning.
Much of what we read and hear is quickly forgotten. Learning new information requires that we retain the information in memory and later be able to retrieve it. The process of forming long-term memories is complex, depending on the nature of the original information and on how much a person rehearses or reviews the information. See Memory.
C Concept Formation
Concept formation occurs when people learn to classify different objects as members of a single category. For example, a child may know that a mouse, a dog, and a whale are all animals, despite their great differences in size and appearance. Concept formation is important because it helps us identify stimuli we have never encountered before. Thus, a child who sees an antelope for the first time will probably know that it is an animal. Even young children learn a large number of such concepts, including food, games, flowers, cars, and houses. Although language plays an important role in how people learn concepts, the ability to speak is not essential for concept formation. Experiments with birds and chimpanzees have shown that these animals can form concepts.
D Learning Motor Skills
A motor skill is the ability to perform a coordinated set of physical movements. Examples of motor skills include handwriting, typing, playing a musical instrument, driving a car, and most sports skills. Learning a motor skill is usually a gradual process that requires practice and feedback. Learners need feedback from a teacher or coach to tell them which movements they are performing well and which need improvement. While learning a new motor skill, the learner should direct full attention to the task. Some motor skills, if learned well, can be performed automatically. For example, a skilled typist can type quickly and accurately without thinking about every keystroke.
VII THEORIES OF LEARNING
Early in the 20th century, some psychologists believed that it might be possible to develop a single, general theory that could explain all instances of learning. For instance, the so-called one-factor theory proposed that reinforcement was the single factor that controlled whether learning would or would not occur. However, latent learning and similar phenomena contradicted this theory by showing that learning could occur without reinforcement.
In recent years, psychologists have abandoned attempts to develop a single, all-purpose theory of learning. Instead, they have developed smaller and more specialized theories. Some theories focus on classical conditioning, some on operant conditioning, some on observational learning, and some on other specific forms of learning. The major debates in learning theory concern which theories best describe these more specific areas of learning.
In studying learning, psychologists follow two main theoretical approaches: the behavioral approach and the cognitive approach. Recall that learning is acquiring knowledge or developing the ability to perform new behaviors. Behavioral psychologists focus on the change that takes place in an individual’s behavior. Cognitive psychologists prefer to study the change in an individual’s knowledge, emphasizing mental processes such as thinking, memory, and problem solving. Many psychologists combine elements of both approaches to explain learning.
A The Behavioral Approach
The term behaviorism was first used in the early 1910s by American psychologist John B. Watson; American psychologist B. F. Skinner later expanded and popularized the behavioral approach. The essential characteristic of the behavioral approach to learning is that a person’s behavior is explained by events in the environment, not by thoughts, feelings, or other events that take place inside the person. Strict behaviorists believe that it is dangerous and unscientific to treat thoughts and feelings as the causes of a person’s behavior, because no one can see another person’s thoughts or feelings. Behaviorists maintain that human learning can be explained by examining the stimuli, reinforcers, and punishments that a person experiences. According to behaviorists, reinforcement and punishment, along with other basic principles such as generalization and discrimination, can explain even the most advanced types of human learning, such as learning to read or to solve complex problems.
B The Cognitive Approach
Unlike behaviorists, cognitive psychologists believe that it is essential to study an individual’s thoughts and expectations in order to understand the learning process. In 1930 American psychologist Edward C. Tolman investigated cognitive processes in learning by studying how rats learn their way through a maze. He found evidence that rats formed a “cognitive map” (a mental map) of the maze early in the experiment, but did not display their learning until they received reinforcement for completing the maze—a phenomenon he termed latent learning. Tolman’s experiment suggested that learning is more than just the strengthening of responses through reinforcement.
Modern cognitive psychologists believe that learning involves complex mental processes, including memory, attention, language, concept formation, and problem solving. They study how people process information and form mental representations of people, objects, and events.
C Evaluation of the Two Approaches
During the first half of the 20th century, behaviorism was the dominant theoretical approach in the field of learning. Since the 1950s, however, cognitive psychology has steadily gained in popularity, and now more psychologists favor a cognitive approach than a strict behavioral approach. Cognitive psychologists and behaviorists will continue to debate the merits of their different positions, but in many ways these two approaches have different strengths that complement each other. With its emphasis on memory and complex thought processes, the cognitive approach appears well suited for investigating the most sophisticated types of human learning, such as reasoning, problem solving, and creativity. The behavioral approach, which emphasizes basic principles of conditioning, reinforcement, and punishment, can provide explanations of why people behave the way they do and how they choose between different possible courses of action.
VIII FACTORS THAT INFLUENCE LEARNING ABILITY
A variety of factors determine an individual’s ability to learn and the speed of learning. Four important factors are the individual’s age, motivation, prior experience, and intelligence. In addition, certain developmental and learning disorders can impair a person’s ability to learn.
A Age
Animals and people of all ages are capable of the most common types of learning—habituation, classical conditioning, and operant conditioning. As children grow, they become capable of learning more and more sophisticated types of information. Swiss developmental psychologist Jean Piaget theorized that children go through four different stages of cognitive development. In the sensorimotor stage (from birth to about 2 years of age), infants use their senses to learn about their bodies and about objects in their immediate environments. In the preoperational stage (about 2 to 7 years of age), children can think about objects and events that are not present, but their thinking is primitive and self-centered, and they have difficulty seeing the world from another person’s point of view. In the concrete operational stage (about 7 to 11 years of age), children learn general rules about the physical world, such as the fact that the amount of water remains the same if it is poured between containers of different shapes. Finally, in the formal operational stage (ages 11 and up), children become capable of logical and abstract thinking. See also Child Development.
Adults continue to learn new knowledge and skills throughout their lives. For example, most adults can successfully learn a foreign language, although children usually can achieve fluency more easily. If older adults remain healthy, their learning ability generally does not decline with age. Age-related illnesses that involve a deterioration of mental functioning, such as Alzheimer’s disease, can severely reduce a person’s ability to learn.
B Motivation
Learning is usually most efficient and rapid when the learner is motivated and attentive. Behavioral studies with both animals and people have shown that one effective way to maintain the learner’s motivation is to deliver strong and immediate reinforcers for correct responses. However, other research has indicated that very high levels of motivation are not ideal. Psychologists believe an intermediate level of motivation is best for many learning tasks. If a person’s level of motivation is too low, he or she may give up quickly. At the other extreme, a very high level of motivation may cause such stress and distraction that the learner cannot focus on the task. See Motivation.
C Prior Experience
How well a person learns a new task may depend heavily on the person’s previous experience with similar tasks. Just as a response can transfer from one stimulus to another through the process of generalization, people can learn new behaviors more quickly if the behaviors are similar to those they can already perform. This phenomenon is called positive transfer. Someone who has learned to drive one car, for example, will be able to drive other cars, even though the feel and handling of the cars will differ. In cases of negative transfer, however, a person’s prior experience can interfere with learning something new. For instance, after memorizing one shopping list, it may be more difficult to memorize a different shopping list.
D Intelligence
Psychologists have long known that people differ individually in their level of intelligence, and thus in their ability to learn and understand. Scientists have engaged in heated debates about the definition and nature of intelligence. In the 1980s American psychologist Howard Gardner proposed that there are many different forms of intelligence, including linguistic, logical-mathematical, musical, and interpersonal intelligence. A person may easily learn skills in some categories but have difficulty learning in others. See Intelligence.
E Learning and Developmental Disorders
A variety of disorders can interfere with a person’s ability to learn new skills and behaviors. Learning and developmental disorders usually first appear in childhood and often persist into adulthood. Children with attention-deficit hyperactivity disorder (ADHD) may not be able to sit still long enough to focus on specific tasks. Children with autism typically have difficulty speaking, understanding language, and interacting with people. People with mental retardation, characterized primarily by very low intelligence, may have trouble mastering basic living tasks and academic skills. Children with learning or developmental disorders often receive special education tailored to their individual needs and abilities.
See also Psychology; Animal Behavior.
James E. Mazur