Ormrod, Jeanne Ellis. Human Learning, Global Edition (7th ed.). Pearson Education UK, 2016. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/hkied-ebooks/detail.action?docID=5138531. Copyright © 2016 Pearson Education UK. All rights reserved.



LEARNING OUTCOMES

3.1. Identify major assumptions that underlie behaviorist theories of learning.

3.2. Explain how classical conditioning occurs, recognize examples of classical conditioning and related phenomena in action, and describe at least two different approaches a teacher or therapist might take to reduce or eliminate unproductive classically conditioned responses.

3.3. Describe what operant conditioning is and when it is most likely to occur; also, recognize examples of operant conditioning and related phenomena in real-life situations.

3.4. Identify various examples of positive reinforcement, negative reinforcement, and punishment; explain key ways in which negative reinforcement and punishment are different.

3.5. Explain how cognitive and motivational factors sometimes play roles in contemporary behaviorist theories.


Basic Assumptions in Behaviorism
Classical Conditioning
    Classical Conditioning in Human Learning
    Common Phenomena in Classical Conditioning
    Eliminating Unproductive Classically Conditioned Responses
Operant Conditioning
    Important Conditions for Operant Conditioning to Occur
    Contrasting Operant Conditioning with Classical Conditioning
    Forms That Reinforcement Might Take
    Common Phenomena in Operant Conditioning
    Effects of Antecedent Stimuli and Responses in Operant Conditioning
    Avoidance Learning
Punishment
    Potentially Effective Forms of Punishment
    Ineffective Forms of Punishment
Cognition and Motivation in Behaviorist Theories
Summary

PART TWO Behaviorist Views of Learning

CHAPTER 3

Behaviorism

    When my children were small, they often behaved in ways they knew would improve their circumstances. For example, when Alex needed money to buy something he desperately wanted, he engaged in behaviors he never did otherwise, such as mowing the lawn or scrubbing the bathtub. Jeff had less interest in money than his older brother, but he would readily clean up the disaster area he called his bedroom if doing so enabled him to have a friend spend the night.

My children also learned not to engage in behaviors that led to unpleasant consequences. For instance, soon after my daughter Tina reached adolescence, she discovered the thrill of sneaking out late at night to join friends at a local park or convenience store. Somehow she thought she would never be missed if she placed a doll’s head on her pillow and stuffed a blanket under her bedspread to form the shape of a body. I can’t say for sure how many times Tina tried this, but on two occasions I found pseudo-Tina in bed and promptly locked the window that real-Tina had planned to use on her return. Both times the chilly night air left Tina no option but to ring the doorbell to gain entry into the house. After being grounded for two weeks for each infraction—and probably also after realizing that her mother wasn’t totally clueless about her devious schemes—Tina started taking her curfew more seriously.

The idea that consequences affect behavior has influenced psychologists’ thinking for more than a century and has been especially prominent in behaviorist learning theories. In this chapter we’ll consider several basic assumptions underlying behaviorist approaches and then turn to two prominent behaviorist theories. One of them, classical conditioning, explains how learning might come about through the simultaneous presentation of two stimuli in one’s environment. The other, operant conditioning, focuses on how rewarding (reinforcing) consequences can enhance learning and performance. Near the end of the chapter we’ll examine the effects of aversive consequences on learners’ behaviors.

Basic Assumptions in Behaviorism

As mentioned in Chapter 1, early research on learning relied heavily on introspection, a method in which people were asked to “look” inside their heads and describe what they were thinking. But in the early 1900s, some psychologists argued that such self-reflections were highly subjective and not necessarily accurate—a contention substantiated by later research (e.g., Nisbett & Wilson, 1977; Zuriff, 1985). Beginning with the efforts of the Russian physiologist Ivan Pavlov and the American psychologist Edward Thorndike (both to be described shortly), a more objective approach to the study of learning emerged. These researchers looked primarily at behavior—something they could easily see and objectively describe and measure—and so the behaviorist movement was born.

    Behaviorists haven’t always agreed on the specific processes that account for learning. Yet many of them have historically shared certain basic assumptions.

◆ Principles of learning should apply equally to different behaviors and to a variety of animal species. Many behaviorists work on the assumption that human beings and other animals learn in similar ways—an assumption known as equipotentiality. Based on this assumption, behaviorists often apply to human learning the principles they have derived primarily from research with such animals as rats and pigeons. In their discussions of learning, they typically use the term organism to refer generically to a member of any animal species, human and nonhuman alike.

◆ Learning processes can be studied most objectively when the focus of study is on stimuli and responses. Behaviorists believe that psychologists must study learning through objective scientific inquiry, in much the same way that chemists and physicists study phenomena in the physical world. By focusing on two things they can observe and measure—more specifically, by focusing on stimuli in the environment and responses that organisms make to those stimuli—psychologists can maintain this objectivity. Behaviorist principles of learning often describe a relationship between a stimulus (S) and a response (R); hence, behaviorism is sometimes called S–R psychology.

◆ Internal processes tend to be excluded or minimized in theoretical explanations. Many behaviorists believe that because we can’t directly observe and measure internal mental processes (e.g., thoughts and motives), we should exclude these processes from research investigations, and in some instances also from explanations of how learning occurs (e.g., Kimble, 2000; J. B. Watson, 1925). These behaviorists describe an organism as a black box, with stimuli impinging on the box and responses emerging from it but with the things going on inside it remaining a mystery.1

    Not all behaviorists take a strict black-box perspective. Some insist that factors within the organism (O), such as motivation and the strength of stimulus–response associations, are also important in understanding learning and behavior (e.g., Hull, 1943, 1952). These neobehaviorist theorists are sometimes called S–O–R (stimulus–organism–response) theorists rather than S–R theorists. Especially in recent decades, some behaviorists have acknowledged that they can fully understand both human and animal behavior only when they consider cognitive processes as well as environmental events (e.g., DeGrandpre, 2000; De Houwer, 2011; Rachlin, 1991).

    ◆ Learning involves a behavior change. In Chapter 1, I defined learning as involving a long-term change in mental representations or associations. In contrast, behaviorists have traditionally defined learning as a change in behavior. And in fact, we can determine that learning has occurred only when we see it reflected in someone’s actions.

As behaviorists have increasingly brought cognitive factors into the picture, many have backed off from this behavior-based definition of learning. Instead, they treat learning and behavior as separate, albeit related, entities. A number of psychologists have suggested that many behaviorist laws are more appropriately applied to an understanding of what influences the performance of learned behaviors, rather than what influences learning itself (e.g., W. K. Estes, 1969; Herrnstein, 1977; B. Schwartz & Reisberg, 1991).

◆ Organisms are born as blank slates. Historically, many behaviorists have argued that, aside from certain species-specific instincts (e.g., nest-building in birds) and biologically based disabilities (e.g., mental illness in human beings), organisms aren’t born with predispositions to behave in particular ways. Instead, they enter the world as a “blank slate” (or, in Latin, tabula rasa) on which environmental experiences gradually “write.” Because each organism has a unique set of environmental experiences, so, too, will it acquire its own unique set of behaviors.

    ◆ Learning is largely the result of environmental events. Rather than use the term learning, behaviorists often speak of conditioning: An organism is conditioned by environmental events. The passive form of this verb connotes many behaviorists’ belief that because learning is the result of one’s experiences, learning happens to an organism in a way that is often beyond the organism’s control.

Some early behaviorists, such as B. F. Skinner, were determinists: They proposed that if we were to have complete knowledge of an organism’s inherited behaviors, past experiences, and present environmental circumstances, we would be able to predict the organism’s next response with 100% accuracy. But more recently, most behaviorists have rejected the idea of complete determinism: In their view, any organism’s behavior reflects a certain degree of variability that genetic heritage and stimulus–response associations alone can’t explain (e.g., R. Epstein, 1991; Rachlin, 1991). Looking at how organisms have learned previously to respond to different stimuli can certainly help us understand why people and other animals currently behave as they do, but we’ll never be able to predict their future actions with total certainty.

    1This idea that the study of human behavior and learning should focus exclusively on stimuli and responses is sometimes called radical behaviorism.


◆ The most useful theories tend to be parsimonious ones. According to behaviorists, we should explain the learning of all behaviors, from the most simple to the most complex, by as few learning principles as possible. This assumption reflects a preference for parsimony (conciseness) in explaining learning and behavior. We’ll see an example of such parsimony in the first behaviorist theory we explore: classical conditioning.

Classical Conditioning

In the early 1900s, Russian physiologist Ivan Pavlov conducted a series of experiments with dogs to determine how salivation contributes to the digestion of food. His approach typically involved making a surgical incision in a dog’s mouth (allowing collection of the dog’s saliva in a small cup), strapping the dog into an immobile position, giving it some powdered meat, and then measuring the amount of saliva the dog produced. Pavlov noticed that after a few of these experiences, a dog would begin to salivate as soon as the lab assistant came into the room, even though it hadn’t yet had an opportunity to see or smell any meat. Apparently, the dog had learned that the lab assistant’s entrance meant that food was on the way, and it responded accordingly. Pavlov devoted a good part of his later years to a systematic study of this learning process on which he had so inadvertently stumbled, and he eventually summarized his research in his book Conditioned Reflexes (Pavlov, 1927).

    Pavlov’s early studies went something like this:

    1. He first observed whether a dog salivated in response to a particular stimulus—perhaps to a flash of light, the sound of a tuning fork, or the ringing of a bell. For simplicity’s sake, we’ll continue with our discussion using a bell as the stimulus in question. As you might imagine, the dog didn’t find the ringing of a bell especially appetizing and thus didn’t salivate to it.

2. Pavlov rang the bell again, this time immediately following it with some powdered meat. Naturally, the dog salivated. Pavlov rang the bell several more times, always presenting meat immediately afterward. The dog salivated on each occasion.

    3. Pavlov then rang the bell without presenting any meat, and the dog salivated. Thus, the dog was now responding to a stimulus to which (in Step 1) it had previously been unresponsive. There had been a change in behavior as a result of experience; from the behaviorist perspective, then, learning had taken place.

    The phenomenon Pavlov observed is now commonly known as classical conditioning.2 Let’s analyze the three steps in Pavlov’s experiment in much the same way that Pavlov did:

    1. A neutral stimulus (NS) is identified—a stimulus to which the organism doesn’t respond in any noticeable way. In the case of Pavlov’s dogs, the bell was originally a neutral stimulus that didn’t elicit a salivation response.

2. The neutral stimulus is presented just before another stimulus—one that does lead to a response. This second stimulus is called an unconditioned stimulus (UCS), and the response to it is called an unconditioned response (UCR), because the organism responds to the stimulus unconditionally, without having had to learn to do so. For Pavlov’s dogs, meat powder was an unconditioned stimulus to which they responded with an unconditioned salivation response.

2Some psychologists instead use the term respondent conditioning, a label coined by B. F. Skinner to reflect its involuntary-response-to-a-stimulus nature.

3. After being paired with an unconditioned stimulus, the previously neutral stimulus now elicits a response and thus is no longer “neutral.” The NS has become a conditioned stimulus (CS) to which the organism has learned a conditioned response (CR).3 In Pavlov’s experiment, after several pairings with meat, the bell became a conditioned stimulus that, on its own, elicited the conditioned response of salivation. The diagram in Figure 3.1 shows graphically what happened from a classical conditioning perspective.
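The three steps above can be expressed as a toy simulation. This is purely an illustrative sketch, not anything from Pavlov or this chapter: the `Dog` class, the 0.5 response threshold, and the 0.2 increment per pairing are all hypothetical choices made for the example.

```python
# Toy model of Pavlov's three steps. All names and numbers here are
# hypothetical illustrations, not taken from the chapter.

class Dog:
    def __init__(self):
        # Strength of the learned bell->food association (0 = none).
        self.association = 0.0

    def responds_to_bell(self):
        # The dog salivates to the bell only once a sufficiently strong
        # conditioned association has formed (threshold is arbitrary).
        return self.association > 0.5

    def pair_bell_with_meat(self):
        # Each bell + meat pairing strengthens the association a little.
        self.association += 0.2

dog = Dog()

# Step 1: the bell is a neutral stimulus (NS) -- no salivation.
print("before pairing:", dog.responds_to_bell())   # prints "before pairing: False"

# Step 2: ring the bell, then immediately present meat, several times.
for _ in range(5):
    dog.pair_bell_with_meat()

# Step 3: the bell alone now elicits salivation -- the NS has become a
# conditioned stimulus (CS) eliciting a conditioned response (CR).
print("after pairing:", dog.responds_to_bell())    # prints "after pairing: True"
```

Note that, in keeping with the behaviorist stance described earlier, the model infers learning only from the observable change in behavior (whether the dog salivates), not from the hidden association value itself.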

    Since Pavlov’s time, other behaviorists have observed classical conditioning in many species—for instance, in newborn human infants (Boiler, 1997; Lipsitt & Kaye, 1964; Reese & Lipsitt, 1970), human fetuses still in the womb (Macfarlane, 1978), laboratory rats (Cain, Blouin, & Barad, 2003), rainbow trout (Nordgreen, Janczak, Hovland, Ranheim, & Horsberg, 2010), and snails (Samarova et al., 2005). The applicability of classical conditioning clearly extends widely across the animal kingdom.

As Pavlov’s experiments illustrated, classical conditioning typically occurs when two stimuli are presented at approximately the same time. One of these stimuli is an unconditioned stimulus: It has previously been shown to elicit an unconditioned response. The second stimulus, through its association with the unconditioned stimulus, begins to elicit a response as well: It becomes a conditioned stimulus that brings about a conditioned response. In many cases, conditioning occurs fairly quickly; it isn’t unusual for an organism to show a conditioned response after the two stimuli have been presented together only five or six times, and sometimes after only one pairing (Rescorla, 1988).

Classical conditioning is most likely to occur when the conditioned stimulus is presented just before the unconditioned stimulus—perhaps half a second earlier. For this reason, some psychologists describe classical conditioning as a form of signal learning. By being presented first, the conditioned stimulus serves as a signal that the unconditioned stimulus is coming, much as Pavlov’s dogs might have learned that the sound of a bell indicated that tasty meat powder was on its way.

Figure 3.1 A classical conditioning analysis of how Pavlov’s dogs learned.
Step 1: NS (bell) → (no response)
Step 2: NS (bell) + UCS (meat) → UCR (salivate)
Step 3: CS (bell) → CR (salivate)

3Pavlov’s original terms were actually unconditional stimulus, unconditional response, conditional stimulus, and conditional response, but the mistranslations to “unconditioned” remain in most discussions of classical conditioning.

    Classical conditioning usually involves the learning of involuntary responses—responses over which the learner has no control. When we say that a stimulus elicits a response, we mean that the stimulus brings about a response automatically, without the learner having much choice in the matter. In most cases, the conditioned response is similar to the unconditioned response, with the two responses differing primarily in terms of which stimulus initially elicits the response and sometimes in terms of the strength of the response. Occasionally, however, the CR is quite different from—perhaps even opposite to—the UCR (I’ll give you an example shortly). But in one way or another, the conditioned response allows the organism to anticipate and prepare for the unconditioned stimulus that will soon follow.

    Classical Conditioning in Human Learning

We can use classical conditioning to help us understand how people learn a variety of involuntary responses, especially responses associated with physiological functioning or emotion. For example, people can develop aversions to particular foods as a result of associating those foods with an upset stomach (Garb & Stunkard, 1974; Logue, 1979). To illustrate, after associating the taste of creamy cucumber salad dressing (CS) with the nausea I experienced during pregnancy (UCS), I developed an aversion (CR) to cucumber dressing that lasted for several years.

Classical conditioning also provides a useful perspective for explaining some of the fears and phobias people develop (Mineka & Zinbarg, 2006). For example, whenever a bee flies near me, I scream, wave my arms frantically, and run around like a wild woman. Yes, yes, I know, I would be better off if I remained perfectly still, but somehow I just can’t control myself. My overreaction to bees is probably a result of several bee stings I received as a small child. In classical conditioning terminology, bees (CS) were previously associated with painful stings (UCS), such that I became increasingly fearful (CR) of the nasty critters. In a similar way, people who are bitten by a particular breed of dog sometimes become afraid of that breed, or even of all dogs.

[Cartoon; speech bubble reads “COME AND GET IT!!”] The conditioned stimulus may serve as a signal that the unconditioned stimulus is coming.


Probably the best-known example of a classically conditioned fear is the case of “Little Albert,” an infant who learned to fear white rats through a procedure used by John Watson and Rosalie Rayner (1920). Albert was an even-tempered, 11-month-old child who rarely cried or displayed fearful reactions.4 One day, Albert was shown a white rat. As he reached out and touched the rat, a large steel bar behind him was struck, producing a loud, unpleasant noise. Albert jumped, obviously upset by the startling noise. Nevertheless, he reached to touch the rat with his other hand, and the steel bar was struck once again. After five more pairings of the rat (CS) and the loud noise (UCS), Albert was truly rat-phobic: Whenever he saw the rat he cried hysterically and crawled away as quickly as he could. Watson and Rayner reported that Albert responded in a similarly fearful manner to a rabbit, a dog, a sealskin coat, cotton wool, and a Santa Claus mask with a fuzzy beard, although none of these had ever been paired with the startling noise. (Watson and Rayner never “undid” their conditioning of poor Albert. Fortunately, the ethical standards of the American Psychological Association now prohibit such negligence. In fact, they would probably prohibit the study altogether.)

Attitudes, too, can be partly the result of classical conditioning (e.g., Walther, Weil, & Düsing, 2011). In one study (M. A. Olson & Fazio, 2001), college students sat at a computer terminal to watch various unfamiliar cartoon characters from the Pokemon video game series. One character was consistently presented in conjunction with words and images that evoked pleasant feelings (e.g., “excellent,” “awesome,” pictures of puppies and a hot fudge sundae). A second character was consistently presented along with words and images that evoked unpleasant feelings (e.g., “terrible,” “awful,” pictures of a cockroach and a man with a knife). Other characters were paired with more neutral words and images. Afterward, when the students were asked to rate some of the cartoon characters and other images they had seen on a scale of –4 (unpleasant) to +4 (pleasant), they rated the character associated with pleasant stimuli far more favorably than the character associated with unpleasant stimuli.

Think about the many television commercials you’ve seen that have evoked an emotional, “Awww, how sweet!” response in you, perhaps by showing puppies, adorable little children, or a tender moment between a loving couple. If such commercials clearly display a particular product at the same time—say, a certain beverage, junk food, or medicine—they can capitalize on classical conditioning to engender positive attitudes toward the product and thereby increase sales (Aiken, 2002; J. Kim, Lim, & Bhargava, 1998).

Earlier I mentioned that a conditioned response is sometimes quite different from an unconditioned response. We find an example in the effects of certain drugs on physiological functioning. When organisms receive certain drugs (e.g., morphine or insulin), the drugs, as unconditioned stimuli, naturally lead to certain physiological responses (e.g., reduced pain sensitivity or hypoglycemia). However, stimuli presented prior to these drugs—perhaps a light, a tone, or the environmental context more generally—begin to elicit an opposite response (e.g., increased pain sensitivity or hyperglycemia), presumably to prepare for—in this case, to counteract—the pharmaceutical stimuli that will soon follow (Flaherty et al., 1980; S. Siegel, 1975, 1979). Such physiological responses are one likely explanation for people’s addictions to nicotine, alcohol, and street drugs. When habitual smokers and abusers of other substances return to environments in which they’ve previously used an addictive substance, their bodies respond in counteractive ways that make the substance even more desirable or seemingly necessary. Meanwhile, they develop greater tolerance to their substance of choice and need increasing quantities to gain the same “high” or other sought-after physiological state (C. A. Conklin, 2006; McDonald & Siegel, 2004; S. Siegel, 2005; S. Siegel, Baptista, Kim, McDonald, & Weise-Kelly, 2000).

4Evidence has recently come to light that Albert was not an average infant but probably had significant neurological impairments (e.g., see Fridlund, Beck, Goldie, & Irons, 2012).

    Common Phenomena in Classical Conditioning

Pavlov and other behaviorists have described a number of phenomena related to classical conditioning. Here I’ll describe several phenomena that have particular relevance to human learning.

Associative Bias

Associations between certain kinds of stimuli are more likely to be made than are associations between others—a phenomenon known as associative bias (J. Garcia & Koelling, 1966; Hollis, 1997; B. Schwartz & Reisberg, 1991). For example, food is more likely to become a conditioned stimulus associated with nausea (a UCS) than, say, a flash of light or the sound of a tuning fork. Quite possibly, evolution has been at work here: Our ancestors could better adapt to their environments when they were predisposed to make associations that might have reflected true cause-and-effect relationships, such as associating nausea with a new food that induced it (Öhman & Mineka, 2003; Timberlake & Lucas, 1989).

Importance of Contingency

Pavlov proposed that classical conditioning occurs when the unconditioned stimulus and would-be conditioned stimulus are presented at approximately the same time; that is, there must be contiguity between the two stimuli. But contiguity alone seems to be insufficient. As noted earlier, classical conditioning is most likely to occur when the conditioned stimulus is presented just before the unconditioned stimulus. It’s less likely to occur when the CS and UCS are presented at exactly the same time, and it rarely occurs when the CS is presented after the UCS (e.g., R. R. Miller & Barnet, 1993). In some cases, temporal proximity isn’t even necessary; for instance, people develop an aversion to certain foods when the delay between the conditioned stimulus (food) and the unconditioned stimulus (nausea) is as much as 24 hours (Logue, 1979).

    More recently, theorists have suggested that contingency is the essential condition: The potential conditioned stimulus must occur only when the unconditioned stimulus is likely to follow—in other words, when the CS serves as a signal that the UCS is probably on its way (recall my earlier reference to “signal learning”). When two stimuli that are usually encountered separately occur together a few times merely by coincidence, classical conditioning is unlikely to occur (e.g., Gallistel & Gibbon, 2001; Rescorla, 1988; Vansteenwegen, Crombez, Baeyens, Hermans, & Eelen, 2000).
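The text stops short of a formal model, but contingency-sensitive conditioning of this sort is often captured with a Rescorla–Wagner-style error-correction rule. The sketch below is illustrative only (the learning rate, trial counts, and co-occurrence probability are my assumptions, not values from the research cited): a CS that reliably signals the UCS ends up with far more associative strength than one that merely coincides with it now and then.

```python
import random

def associative_strength(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner-style update: on each trial where the CS is present,
    strength v moves toward lam if the UCS follows, and toward 0 if it doesn't."""
    v = 0.0
    for cs_present, ucs_follows in trials:
        if cs_present:
            v += alpha * ((lam if ucs_follows else 0.0) - v)
    return v

random.seed(0)
# Contingent condition: the CS reliably signals that the UCS is on its way.
signal = [(True, True)] * 40
# Coincidental condition: CS and UCS co-occur only occasionally, by chance.
coincidence = [(True, random.random() < 0.2) for _ in range(40)]

v_signal = associative_strength(signal)
v_chance = associative_strength(coincidence)
assert v_signal > v_chance  # only the reliable signal supports strong conditioning
```

The point of the sketch is simply that an error-correction rule is sensitive to contingency, not just contiguity: occasional chance pairings leave associative strength near zero.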

Extinction

Recall that Pavlov’s dogs learned to salivate at the sound of a bell alone after the bell had consistently been rung in conjunction with meat powder. But what would happen if the bell continued to ring over and over without the meat powder’s ever again being presented along with it? Pavlov discovered that repeated presentations of the conditioned stimulus without the unconditioned stimulus led to successively weaker and weaker conditioned responses. Eventually, the dogs no longer salivated to the bell; in other words, the conditioned response disappeared. Pavlov called this phenomenon extinction.

Ormrod, Jeanne Ellis. Human Learning, Global Edition, Pearson Education UK, 2016. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/hkied-ebooks/detail.action?docID=5138531. Created from hkied-ebooks on 2019-09-18 07:54:40. Copyright © 2016 Pearson Education UK. All rights reserved.

    Sometimes conditioned responses extinguish, and sometimes they don’t.5 One reason they often don’t easily extinguish is that humans and nonhumans alike tend to avoid a stimulus they’ve learned to fear, thus reducing the chances that they might eventually encounter the conditioned stimulus in the absence of the unconditioned stimulus (more on this point in an upcoming section on avoidance learning). A second reason is the phenomenon to be described next— spontaneous recovery.
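When extinction does run its course, the trial-by-trial weakening Pavlov observed can be mimicked with a toy error-correction update (my illustration, not a model presented in the text; the learning rate and trial count are arbitrary): on CS-alone trials the UCS never arrives, so the target level of associative strength is zero.

```python
def extinction_curve(v_start=1.0, alpha=0.3, trials=15):
    """Each CS-alone presentation nudges associative strength toward 0,
    since the UCS never follows; returns the strength after each trial."""
    v, history = v_start, []
    for _ in range(trials):
        v += alpha * (0.0 - v)  # error correction with an absent UCS
        history.append(v)
    return history

curve = extinction_curve()
# Successive conditioned responses grow weaker and weaker across CS-alone trials.
assert all(later < earlier for earlier, later in zip(curve, curve[1:]))
```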

Spontaneous Recovery

Even though Pavlov quickly extinguished his dogs’ conditioned salivation responses by repeatedly presenting the bell without any meat powder, when he entered his lab the following day he discovered that the bell once again elicited salivation, almost as if extinction had never taken place. This reappearance of the salivation response after it had previously been extinguished is something Pavlov called spontaneous recovery.

    In more general terms, spontaneous recovery is a recurrence of a conditioned response when a period of extinction is followed by a rest period. The conditioned response is especially likely to pop up again if its extinction has occurred in only one context; the response may reappear in contexts where extinction hasn’t taken place (Bouton, 1994; R. R. Miller & Laborda, 2011). For example, let’s return to my bee phobia. If a bee flies near me while I’m sitting outside with friends, I eventually settle down and regain my composure. But later, in a different setting with different friends, my first response is to fly off the handle once again.

Pavlov found that when a conditioned response appears in spontaneous recovery, it’s typically weaker than the original conditioned response and extinguishes more quickly. In situations in which several spontaneous recoveries are observed (each one occurring after a period of rest), the reappearing CR becomes progressively weaker and disappears increasingly rapidly.

Generalization

You may recall that after Little Albert was conditioned to fear a white rat, he also became afraid of a rabbit, a dog, a white fur coat, cotton wool, and a fuzzy-bearded Santa Claus mask. When learners respond to other stimuli in the same way they respond to a conditioned stimulus, generalization is occurring. The more similar a stimulus is to the conditioned stimulus, the greater the probability of generalization. Albert exhibited fear of all objects that were white and fuzzy like the rat, but he wasn’t afraid of his nonwhite, nonfuzzy toy blocks. In a similar way, a child who fears an abusive relative may generalize that fear to other people of the same gender.

    Generalization of conditioned responses to new stimuli is a common phenomenon (Bouton, 1994; N. C. Huff & LaBar, 2010; McAllister & McAllister, 1965). In some cases, generalization of conditioned fear responses can increase over time, such that a learner becomes fearful of an ever-expanding number of stimuli. Thus, dysfunctional conditioned responses that don’t quickly extinguish may sometimes become more troublesome as the years go by.

5Findings in one recent research study indicate that extinction can be quite effective if systematic extinction procedures are conducted within six hours after the original conditioning event. Apparently a rapid follow-up extinction process interferes with the brain’s consolidation of the stimulus–response association (Schiller et al., 2010).


Stimulus Discrimination

Pavlov observed that when he conditioned a dog to salivate in response to a high-pitched tone, the dog would generalize that conditioned response to a low-pitched tone. To teach the dog the difference between the two tones, Pavlov repeatedly presented the high tone in conjunction with meat powder and presented the low tone without meat. After several such presentations of the two tones, the dog eventually learned to salivate only at the high tone. In Pavlov’s terminology, differentiation between the two tones had taken place. Psychologists today more frequently use the term stimulus discrimination for this phenomenon.

    Stimulus discrimination occurs when one stimulus (the CS+) is presented in conjunction with an unconditioned stimulus, and another stimulus (the CS–) is presented in the absence of the UCS. The individual acquires a conditioned response to the CS+ but either doesn’t initially generalize the response to the CS– or, through repeated experiences, learns that the CS– doesn’t warrant the same response (N. C. Huff & LaBar, 2010). For example, if a child who is abused by a relative simultaneously has positive interactions with other people of the same gender, she isn’t as likely to generalize her fear to those other individuals.

Higher-Order Conditioning

Conditioned stimulus–response associations sometimes piggyback on one another. For example, if Pavlov conditioned a dog to salivate at the sound of a bell and then later presented the bell in conjunction with another neutral stimulus—say, a flash of light that had never been associated with meat—that neutral stimulus would also begin to elicit a salivation response. This phenomenon is known as second-order conditioning or, more generally, higher-order conditioning.

Higher-order conditioning works like this: First, a neutral stimulus (NS1) becomes a conditioned stimulus (CS1) by being paired with an unconditioned stimulus (UCS), so that it soon elicits a conditioned response (CR). Next, a second neutral stimulus (NS2) is paired with CS1, and it, too, begins to elicit a conditioned response; that second stimulus has also become a conditioned stimulus (CS2). This process is diagrammed in Figure 3.2. Steps 1 and 2 depict the original conditioning; Steps 3 and 4 depict higher-order conditioning.

Figure 3.2  An example of higher-order conditioning.

Step 1: NS1 (bell) + UCS (meat) → UCR (salivate)
Step 2: CS1 (bell) → CR (salivate)
Step 3: NS2 (light) + CS1 (bell) → CR (salivate)
Step 4: CS2 (light) → CR (salivate)
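The four steps in Figure 3.2 can be sketched as a two-stage associative chain. This is a toy model of my own, not one from the text (the learning rate and trial counts are arbitrary assumptions); note that in it the light (CS2) can approach, but never exceed, the strength of the bell (CS1) it piggybacks on.

```python
def pair(v, support, alpha=0.3, trials=20):
    """Repeated pairings move associative strength v toward the level of
    support provided by whatever stimulus follows."""
    for _ in range(trials):
        v += alpha * (support - v)
    return v

v_bell = pair(0.0, support=1.0)      # Steps 1-2: bell + meat, so bell elicits a CR
v_light = pair(0.0, support=v_bell)  # Steps 3-4: light + bell, so light elicits a CR
# The light becomes a CS2 without ever being paired with meat itself,
# but in this sketch its strength is capped by the bell's own strength.
assert 0.0 < v_light < v_bell
```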


Some seemingly irrational fears may have their roots in higher-order conditioning (e.g., Klein, 1987; Wessa & Flor, 2007).6 For example, imagine that on several occasions a learner associates failure at school tasks (initially an NS) with painful physical punishment (a UCS that elicits a UCR of fear), to the point that failure (now a CS) begins to elicit considerable anxiety. Then one or more other stimuli—perhaps difficult tests, required oral presentations, or the general school environment—are frequently associated with failure. In this way, a student might develop test anxiety, fear of public speaking, or even school phobia—fear of school itself. Please note, however, that many people come to fear failure even when they have never been physically punished for it (details to follow in Chapter 14).

Higher-order conditioning might also explain certain attitudes we acquire toward particular people, objects, or situations (Aiken, 2002). As an example, let’s return to the experiment involving Pokémon characters described earlier (M. A. Olson & Fazio, 2001). People aren’t born with particular feelings about the words awesome or awful, nor do they necessarily have built-in reactions to pictures of hot fudge sundaes or cockroaches. Rather, people probably acquire particular feelings about the words and images through their experiences over time, to the point where these stimuli can provide a starting point for further classical conditioning.

    Eliminating Unproductive Classically Conditioned Responses

Some classically conditioned responses—for instance, some irrational fear responses—can seriously impair everyday functioning. Yet as we’ve seen, a simple extinction process doesn’t always work to eliminate them. The phenomenon of spontaneous recovery suggests that a learned CS–CR connection doesn’t get unlearned—the association is still lingering somewhere, waiting to pop up at potentially inopportune moments (R. R. Miller & Laborda, 2011).

To successfully eliminate an unproductive conditioned response, then, the existing CS–CR association needs to be overpowered by a different, stronger CS–CR association. A strategy called counterconditioning can often make this happen. As an example, let’s consider Mary Cover Jones’s (1924) classic work with “Little Peter,” a 2-year-old boy who had somehow acquired a fear of rabbits. To rid Peter of his fear, Jones placed him in a high chair and gave him some candy. As he ate, she brought a rabbit into the far side of the same room. Under different circumstances the rabbit might have elicited anxiety; however, the pleasure Peter felt as he ate the candy was a stronger response and essentially overpowered any anxiety he might have felt about the rabbit’s presence. Jones repeated the same procedure every day over a two-month period, each time putting Peter in a high chair with candy and bringing the rabbit slightly closer than she had on the preceding occasion, and Peter’s anxiety about rabbits eventually disappeared. More recently, researchers have used a similar procedure in helping an 8-year-old boy lose his fear of electronically animated toys and holiday decorations (Ricciardi, Luiselli, & Camare, 2006).

    In general, counterconditioning involves the following steps:

1. A new response that is incompatible with the existing conditioned response is chosen. Two responses are incompatible with each other when they cannot be performed at the same time. Because classically conditioned responses are often emotional in nature, an incompatible response is often some sort of opposite emotional reaction. For example, in the case of Little Peter, happiness was used as an incompatible response for fear. An alternative would be any response involving relaxation, because fear and anxiety involve bodily tension.

6See Wessa and Flor (2007) for a good discussion of how posttraumatic stress disorder (PTSD) might be extremely difficult to extinguish because of higher-order conditioning.

2. A stimulus that elicits the incompatible response must be identified; for example, candy elicited a “happy” response for Peter. If we want to help someone develop a happy response to a stimulus that has previously elicited displeasure, we need to find a stimulus that already elicits pleasure—perhaps a friend, a party, or a favorite food. If we want someone instead to acquire a relaxation response, we might ask that person to imagine lying in a cool, fragrant meadow or on a lounge chair by a swimming pool.

3. The stimulus that elicits the new response is presented to the individual, and the conditioned stimulus eliciting the undesirable conditioned response is gradually introduced into the situation. In treating Peter’s fear of rabbits, Jones first gave Peter some candy; she then presented the rabbit at some distance from Peter, only gradually bringing it closer and closer in successive sessions. The trick in counterconditioning is to ensure that the stimulus eliciting the desirable response always has a stronger effect than the stimulus eliciting the undesirable response; otherwise, the latter response might prevail.

A technique I have recommended to many graduate students who dread their required statistics courses is essentially a self-administered version of counterconditioning. In particular, I’ve suggested that math-phobic students find a math textbook that begins well below their own skill level—at the level of basic number facts, if necessary—so that the problems aren’t anxiety arousing. As they work through the text, the students begin to associate mathematics with success rather than failure. Programmed instruction (described in Chapter 4) is another technique that can be useful in reducing anxiety about a given subject matter, because it allows a student to progress through potentially difficult material in small, easy steps.

    A therapeutic technique known as systematic desensitization uses counterconditioning to treat many conditioned anxiety responses. Typically, people who are excessively anxious in the presence of certain stimuli are asked to relax while imagining themselves in increasingly stressful situations involving those stimuli; in doing so, they gradually replace anxiety with a relaxation response (Head & Gross, 2009; Wolpe, 1969; Wolpe & Plaud, 1997). Alternatively, people might virtually “experience” a series of stressful situations through the use of goggles and computer-generated images, all the while making a concerted effort to relax (P. Anderson, Rothbaum, & Hodges, 2003; Garcia-Palacios, Hoffman, Carlin, Furness, & Botella, 2002). In general, learning new responses to images of certain stimuli does seem to generalize to the stimuli themselves.

Systematic desensitization has been widely used as a means of treating such problems as test anxiety and fear of public speaking (Head & Gross, 2009; Hopf & Ayres, 1992; W. K. Silverman & Kearney, 1991). I should point out, however, that treating test anxiety alone, without also remediating possible academic sources of a student’s poor test performance, may reduce test anxiety without yielding any improvement in test scores (Cassady, 2010b; Naveh-Benjamin, 1991; Tryon, 1980).

    In our examination of phenomena related to classical conditioning, we’ve been focusing on involuntary responses. In contrast, operant conditioning—our next topic—involves responses that are presumably under a learner’s control.


Operant Conditioning

In research for his doctoral dissertation at Columbia University in the late 1890s, Edward Thorndike placed a cat in a contraption he called a “puzzle box,” which had a door that opened when a certain device (e.g., a wire loop) was appropriately manipulated. Thorndike observed the cat engaging in numerous, apparently random behaviors in an effort to get out of the box; eventually, by chance, the cat triggered the mechanism that opened the door and allowed escape. When Thorndike put the cat in the box a second time, it again engaged in trial-and-error behaviors but managed to escape in less time than it had before. With subsequent returns to the box, the cat found its way out of the box within shorter and shorter time periods. From his observations of cats in the puzzle box, Thorndike concluded that learning consists of trial-and-error behavior and a gradual “stamping in” of some behaviors and “stamping out” of others as a result of the consequences that various behaviors bring (Thorndike, 1898, 1911). We can sum up Thorndike’s law of effect as follows:7

    Responses to a situation that are followed by satisfaction are strengthened; responses that are followed by discomfort are weakened.

In other words, Thorndike proposed, rewarded responses increase, and punished responses decrease and possibly disappear.

Thorndike’s later research (1932a, 1932b) indicated that punishment may not be terribly effective in weakening responses. In one experiment, college students completed a multiple-choice Spanish vocabulary test in which they were to choose the correct English meanings of a variety of Spanish words. Each time a student chose the correct English meaning out of five alternatives, the experimenter said “Right!” (presumably rewarding the response); each time a student chose an incorrect alternative, the experimenter said “Wrong!” (presumably punishing the response). In responding to the same multiple-choice questions over a series of trials, the students increased the responses for which they had been rewarded but didn’t necessarily decrease those for which they had been punished. In his revised law of effect, Thorndike (1935) continued to maintain that rewards strengthen the behaviors they follow, but he deemphasized the role of punishment. Instead, he proposed that punishment has an indirect effect on learning: As a result of experiencing an annoying state of affairs, a learner may engage in certain other behaviors (e.g., crying or running away) that interfere with performance of the punished response.

In the 1930s, another American psychologist, B. F. Skinner, began to build on Thorndike’s work (e.g., 1938, 1953, 1958, 1971, 1989). Like Thorndike, Skinner proposed that organisms acquire behaviors that are followed by certain consequences. In order to study the effects of consequences both objectively and precisely, Skinner developed a piece of equipment, now known as a Skinner box, that has gained widespread popularity in animal learning research. As shown in Figure 3.3, the Skinner box used in studying rat behavior includes a metal bar that, when pushed down, causes a food tray to swing into reach long enough for the rat to grab a food pellet. In the pigeon version of the box, instead of a metal bar, a lighted plastic disk is located on one wall; when the pigeon pecks the disk, the food tray swings into reach for a short time.

7Thorndike’s theory is sometimes referred to as connectionism, but it shouldn’t be confused with a more contemporary perspective known as either connectionism or parallel distributed processing, discussed in Chapter 9.


    Skinner found that rats will learn to press metal bars and pigeons will learn to peck on plastic disks in order to get pellets of food. From his observations of rats and pigeons in their respective Skinner boxes under varying conditions, Skinner (1938) formulated a basic principle he called operant conditioning, which can be paraphrased as follows:

    A response that is followed by a reinforcer is strengthened and therefore more likely to occur again.

    In other words, responses that are reinforced tend to increase in frequency, and this increase—a change in behavior—means that learning is taking place.
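As a rough illustration of this principle (my own sketch, with arbitrary numbers, not a model from the text), imagine an organism whose probability of emitting a response inches upward every time the response happens to be followed by a reinforcer:

```python
import random

def session(reinforced, trials=200, step=0.05, seed=1):
    """Toy operant model: each emitted-and-reinforced response nudges the
    probability of emitting that response upward; no reinforcer, no change."""
    random.seed(seed)
    p = 0.1  # initially the response is emitted only occasionally
    for _ in range(trials):
        if random.random() < p and reinforced:
            p += step * (1.0 - p)  # reinforcement strengthens the response
    return p

p_with = session(reinforced=True)
p_without = session(reinforced=False)
assert p_with > p_without  # only the reinforced response grows more frequent
```

The design choice here mirrors the definition in the text: nothing about the consequence is labeled “pleasant”; it is a reinforcer simply because it increases the frequency of the response it follows.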

Skinner intentionally used the term reinforcer instead of reward to describe a consequence that increases the frequency of a behavior. The word reward implies that the stimulus or event following a behavior is somehow both pleasant and desirable, an implication Skinner wanted to avoid for two reasons. First, some individuals will work for what others believe to be unpleasant consequences; for example, as a child, my daughter Tina occasionally did something she knew would irritate me because she enjoyed watching me blow my stack. Second, like many behaviorists, Skinner preferred that psychological principles be restricted to the domain of objectively observable events. A reinforcer is defined not by allusion to “pleasantness” or “desirability”—both of which involve subjective judgments—but instead by its effect on behavior:

    A reinforcer is a stimulus or event that increases the frequency of a response it follows. (The act of following a response with a reinforcer is called reinforcement.)

    Now that I’ve given you definitions of both operant conditioning and a reinforcer, I need to point out a problem: Taken together, the two definitions constitute circular reasoning. I’ve said that operant conditioning is an increase in a behavior when it’s followed by a reinforcer, but I can’t seem to define a reinforcer in any other way except to say that it increases behavior. Thus, I’m using reinforcement to explain a behavior increase and a behavior increase to explain reinforcement! Fortunately, an article by Meehl (1950) has enabled learning theorists to get out of this circular mess by pointing out the transituational generality of a reinforcer: Any single reinforcer—whether it be food, money, a sleepover with a friend, or something else altogether—is likely to increase many different behaviors in many different situations.

Figure 3.3  A prototypical Skinner box: The food tray swings into reach to provide reinforcement.


Skinner’s principle of operant conditioning has proven to be a very useful and powerful explanation of why human beings often act as they do, and its applications to instructional and therapeutic situations are almost limitless. Virtually any behavior—academic, social, psychomotor—can be learned or modified through operant conditioning. Unfortunately, undesirable behaviors can be reinforced just as easily as desirable ones. Aggression and criminal activity often lead to successful outcomes: Crime usually does pay. And in school settings, disruptive behaviors can often get teachers’ and classmates’ attention when more productive behaviors don’t (McGinnis, Houchins-Juárez, McDaniel, & Kennedy, 2010; M. M. Mueller, Nkosi, & Hine, 2011).

    As a teacher for many years, I kept reminding myself of what student behaviors I wanted to encourage and tried to follow those behaviors with reinforcing consequences. For example, when typically quiet students raised their hands to answer a question or make a comment, I called on them and gave them whatever positive feedback I could. I also tried to make my classes not only informative but also lively, interesting, and humorous, so that students were reinforced for coming to class in the first place. Meanwhile, I tried not to reinforce behaviors that weren’t in students’ long-term best interests. For instance, when a student came to me at semester’s end pleading for a chance to complete an extra-credit project in order to improve a failing grade, I invariably turned the student down. I did so for a simple reason: I wanted good grades to result from good study habits and high achievement throughout the semester, not from desperate begging behavior. Teachers must be extremely careful about what they reinforce and what they do not.

    Important Conditions for Operant Conditioning to Occur

    The following three key conditions influence the likelihood that operant conditioning will take place.

    ◆ The reinforcer must follow the response. “Reinforcers” that precede a response rarely have an effect on the response. For example, many years ago, a couple of instructors at my university were concerned that the practice of assigning course grades made students so anxious that they couldn’t learn effectively. Thus, the instructors announced on the first day of class that all class members would receive a final course grade of A. Many students never attended class after that first day, so there was little learning with which any grade might interfere.

◆ Ideally, the reinforcer should follow immediately. A reinforcer tends to reinforce the response that immediately preceded it. As an example, consider Ethel, a pigeon that my lab partner and I worked with when we were undergraduate psychology majors. Our task was to teach Ethel to peck a plastic disk in a Skinner box, and she was making some progress in learning this behavior. But on one occasion we waited too long after her peck before reinforcing her, and in the meantime she had begun to turn around. After eating her food pellet, Ethel began to spin frantically in counterclockwise circles, and it was several minutes before we could get her back to the disk-pecking response.

Immediate reinforcement is especially important when working with young children and animals (e.g., Critchfield & Kollins, 2001; Green, Fry, & Myerson, 1994). Even many adolescents behave in ways that bring them immediate pleasure (e.g., partying on school nights) despite potentially adverse consequences of their behavior down the road (V. F. Reyna & Farley, 2006; Steinberg et al., 2009). Yet schools are notorious for delayed reinforcement—for instance, in the form of end-of-semester grades rather than immediate feedback for a job well done.


    ◆ The reinforcer must be contingent on the response. Ideally, the reinforcer should be presented only when the desired response has occurred—that is, when the reinforcer is contingent on the response. For example, teachers often specify certain conditions that children must meet before going on a field trip: They must bring signed permission slips, complete previous assignments, and so on. When these teachers feel badly for children who haven’t met some of the conditions and let them go on the field trip anyway, the reinforcement isn’t contingent on the response, and the children aren’t learning acceptable behavior. If anything, they’re learning that rules can be broken.

    Contrasting Operant Conditioning with Classical Conditioning

    In both classical conditioning and operant conditioning, an organism shows an increase in a par-ticular response. But operant conditioning differs from classical conditioning in three important ways (see Figure 3.4). As you learned earlier, classical conditioning results from the pairing of two stimuli: an unconditioned stimulus (UCS) and an initially neutral stimulus that becomes a condi-tioned stimulus (CS). The organism learns to make a new, conditioned response (CR) to the CS, thus acquiring a CS→CR association. The CR is automatic and involuntary, such that the organism has virtually no control over what it is doing. Behaviorists typically say that the CS elicits the CR.

In contrast, operant conditioning results when a response is followed by a reinforcing stimulus (we’ll use the symbol SRf).8 Rather than acquiring an S→R association (as in classical conditioning), the organism comes to associate a response with a particular consequence, thus acquiring an R→SRf association. The learned response is a voluntary one emitted by the organism, with the organism having complete control over whether the response occurs. Skinner coined the term operant to reflect the fact that the organism voluntarily operates on—and, in doing so, has an effect on—the environment.

Figure 3.4  Differences between classical and operant conditioning.

                        Classical Conditioning                 Operant Conditioning
Occurs when:            Two stimuli (UCS and CS) are paired    A response (R) is followed by a reinforcing stimulus (SRf)
Nature of response:     Involuntary: elicited by a stimulus    Voluntary: emitted by the organism
Association acquired:   CS→CR                                  R→SRf

    8Some behaviorists use the term reinforcement when discussing classical conditioning, but they mean it in a very different sense than Skinner did. More specifically, they are referring to the pairing of a would-be conditioned stimulus (CS) with an unconditioned stimulus (UCS) that elicits a particular response (UCR). Persistent pairing of the two stimuli is said to “reinforce” a new conditioned-stimulus→conditioned-response (CS→CR) connection. Personally, I wish that behaviorists wouldn’t use the term in this way, because it can create considerable confusion for some people who read their work.


    Ormrod, Jeanne Ellis. Human Learning, Global Edition, Pearson Education UK, 2016. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/hkied-ebooks/detail.action?docID=5138531.Created from hkied-ebooks on 2019-09-18 07:54:40.

Copyright © 2016. Pearson Education UK. All rights reserved.


    Some theorists have suggested that both classical and operant conditioning are based on the same underlying learning processes (G. H. Bower & Hilgard, 1981; Donahoe & Vegas, 2004). In most situations, however, the classical and operant conditioning models are differentially useful in explaining different learning phenomena, so many psychologists continue to treat them as distinct forms of learning.

    Forms That Reinforcement Might Take

Behaviorists have identified a variety of stimuli and events that can reinforce and thereby increase learners' behaviors. They distinguish between two general categories of reinforcers: primary versus secondary. They also suggest that reinforcement can take either of two forms: positive or negative.

Primary versus Secondary Reinforcers

A primary reinforcer satisfies a built-in, perhaps biology-based, need or desire. Some primary reinforcers are essential for physiological well-being; examples are food, water, and warmth. Others enhance social cohesiveness and so indirectly enhance one's chances of survival; examples are physical affection and other people's smiles (Harlow & Zimmerman, 1959; Vollmer & Hackenberg, 2001). To some degree, there are individual differences regarding the extent to which certain consequences serve as primary reinforcers; for instance, sex is reinforcing for some individuals but not for others, and a particular drug might be a primary reinforcer for a drug addict but not necessarily for a nonaddicted individual (e.g., Lejuez, Schaal, & O'Donnell, 1998).

A secondary reinforcer, also known as a conditioned reinforcer, is a previously neutral stimulus that has become reinforcing to a learner through repeated association with another reinforcer. Examples of secondary reinforcers, which don't satisfy any built-in biological or social needs, are praise, good grades, and money.9

How do secondary reinforcers take on reinforcing value? One early view involves classical conditioning: A previously neutral stimulus is paired with an existing reinforcer (UCS) that elicits a feeling of satisfaction (UCR); through this pairing, the neutral stimulus begins to elicit that same sense of satisfaction (CR) (Bersh, 1951; D'Amato, 1955). An alternative perspective is that secondary reinforcers provide information that a primary reinforcer might subsequently be coming (G. H. Bower, McLean, & Meachem, 1966; Fantino, Preston, & Dunn, 1993; Green & Rachlin, 1977; Mazur, 1993). The second explanation has a decidedly cognitive flavor to it: The learner is seeking information about the environment rather than simply responding to that environment in a "thoughtless" manner.

    The relative influences of primary and secondary reinforcers in our lives probably depend a great deal on economic circumstances. When such biological necessities as food and warmth are scarce, these primary reinforcers—as well as secondary reinforcers closely associated with them (e.g., money)—can be major factors in reinforcing behavior. But in times of economic well-being,

9One could argue that praise enhances social relationships and so might be a primary reinforcer. However, praise involves language—a learned behavior—and not all individuals find it reinforcing, hence its categorization as a secondary reinforcer (e.g., see Vollmer & Hackenberg, 2001). Furthermore, some individuals with cognitive disabilities must be systematically trained to respond favorably to praise (e.g., see Dozier, Iwata, Thomason-Sassi, Worsdell, & Wilson, 2012).



    68 PART TWO Behaviorist Views of Learning

    when cupboards are full and homes are well-heated, such secondary reinforcers as praise and good grades are more likely to play significant roles in the learning process.

Positive Reinforcement

When people think about stimuli and events that are reinforcing for them, they typically think about positive reinforcement, which involves the presentation of a stimulus after the response. Positive reinforcement can take a variety of forms: Some are extrinsic reinforcers, in that they're provided by the outside environment, whereas others are intrinsic reinforcers that come from within the learner.

    Material reinforcers A material reinforcer (also known as a tangible reinforcer) is an actual object, such as food or a toy. Material reinforcers can be highly effective in changing behavior, especially for animals and young children. However, most psychologists recommend that at school, teachers use material reinforcers only as a last resort, when absolutely no other reinforcer works. Desired objects have a tendency to distract students from things they should be doing in class and thus may be counterproductive over the long run.

    Social reinforcers A social reinforcer is a gesture or sign (e.g., a smile, attention, or praise) that one person gives another, usually to communicate positive regard. In classroom settings, teacher attention, approval, and praise can be powerful reinforcers (P. Burnett, 2001; McKerchar & Thompson, 2004; N. M. Rodriguez, Thompson, & Baynham, 2010).10 Attention or approval from peers can be highly reinforcing as well (Beaulieu, Hanley, & Roberson, 2013; Flood, Wilder, Flood, & Masuda, 2002; Grauvogel-MacAleese & Wallace, 2010).

    Activity reinforcers Speaking nonbehavioristically, an activity reinforcer is an opportunity to engage in a favorite activity. (Quick quiz: Which word is the nonbehaviorist part of my definition, and why?) David Premack (1959, 1963) discovered that people will often perform one activity if doing so enables them to perform another. His Premack principle for activity reinforcers is this:

    When an opportunity to make a normally high-frequency response is contingent on first making a normally low-frequency response, the high-frequency response will increase the frequency of the low-frequency response.

    A high-frequency response is, in essence, a response that a learner enjoys doing, whereas a low-frequency response is one that the learner doesn’t enjoy. Another way of stating the Premack principle, then, is that learners will perform less-preferred tasks so that they can subsequently engage in more-preferred tasks.

To illustrate, I rarely do housework; I much prefer retreating to a quiet room to either read or write about human learning and behavior. I've found that I'm more likely to do household chores if I make a higher-frequency behavior, such as reading a mystery novel or hosting a party, contingent on doing the housework. In a similar way, appropriate classroom behavior can be improved through the Premack principle. For instance, young children can quickly be taught to sit quietly and pay attention if they're allowed to engage in high-frequency, high-activity behaviors

    10We should note here that praise has potential downsides, depending on the specific message it conveys and the context in which it is given; we’ll examine its possible downsides in Chapter 14 and Chapter 15.



    (e.g., interacting with classmates) only after they’ve been quiet and attentive for a certain period of time (Azrin, Vinas, & Ehle, 2007; Homme, deBaca, Devine, Steinhorst, & Rickert, 1963).
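For readers who like concrete models, the Premack contingency can be caricatured in a few lines of Python. This is only a toy sketch under invented assumptions (the starting probability, the learning increment, and the number of trials are all hypothetical, not values from the text): the low-frequency response is simply assumed to become somewhat more probable each time it is followed by access to the preferred activity.

```python
import random

def premack_simulation(trials=300, p_start=0.05, increment=0.02,
                       contingent=True, seed=0):
    """Toy sketch of the Premack principle (hypothetical parameters).

    Each trial the learner may emit a low-frequency response (e.g., a
    chore). When access to a preferred activity is contingent on that
    response, we assume its probability grows by `increment` each time
    it is emitted and "reinforced."
    """
    rng = random.Random(seed)
    p, emitted = p_start, 0
    for _ in range(trials):
        if rng.random() < p:          # learner performs the chore
            emitted += 1
            if contingent:            # preferred activity follows
                p = min(1.0, p + increment)
    return emitted
```

With the same random seed, the contingent condition ends up emitting the low-frequency response far more often than the noncontingent one, mirroring the principle's prediction that a high-frequency activity can strengthen a low-frequency one.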

    Token reinforcers A token reinforcer is a small, insignificant item (e.g., poker chip, specially marked piece of colored paper, or sticker on a record-keeping chart) that a learner can accumulate and eventually use to “purchase” desired objects or privileges. For example, a classroom teacher might give students a certain number of “points” for completed assignments or appropriate classroom behaviors. At the end of the week, students can use their points to “buy” small trinkets, free time in a reading center, or a prime position in the lunch line. Tokens are often used in token economies, to be described in Chapter 4.
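The bookkeeping behind such a point system is simple enough to sketch in code. The class below is purely illustrative (the reward names and token prices would be invented by a teacher, and real token economies involve far more than record keeping):

```python
class TokenEconomy:
    """Minimal record keeping for a classroom token economy (illustrative)."""

    def __init__(self, rewards):
        self.rewards = dict(rewards)   # backup reinforcer -> token cost
        self.balances = {}             # student -> tokens earned so far

    def award(self, student, tokens):
        """Give tokens contingent on a desired response."""
        self.balances[student] = self.balances.get(student, 0) + tokens

    def redeem(self, student, reward):
        """Spend tokens on a backup reinforcer; fails if the balance is short."""
        cost = self.rewards[reward]
        if self.balances.get(student, 0) < cost:
            return False
        self.balances[student] -= cost
        return True
```

A student who has accumulated enough points for, say, free time in a reading center can then redeem them, while a student who hasn't yet met the price must keep earning—which is exactly what makes the tokens reinforcing.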

    Positive feedback In some instances, material and social reinforcers improve classroom behavior and lead to better learning of academic skills because they communicate a message that learners are performing well or making significant progress. Such positive feedback is clearly effective in bringing about desired behavior changes (Hattie & Gan, 2011; S. Ryan, Ormond, Imwold, & Rotunda, 2002; Shute, 2008).

I once spent a half hour each day for several weeks working with Michael, a 9-year-old boy with a learning disability who was having trouble learning how to read and write cursive letters. In our first few sessions together, neither Michael nor I could see any improvement, and we were both becoming frustrated. To give ourselves more concrete feedback, I constructed a chart on a sheet of graph paper and explained to Michael how we would track his progress by marking off the number of cursive letters he could remember each day. I also told him that as soon as he had reached the dotted line at the 26-letter mark for three days in a row, he could have a special treat; his choice was a purple felt-tip pen. Michael's daily performance began to improve dramatically. Not only was he making noticeable progress but he also looked forward to charting his performance and seeing a higher number of mastered letters each day. Within two weeks, Michael had met the criterion for his felt-tip pen: He had written all 26 cursive letters for three days straight. As it turns out, the pen probably wasn't the key ingredient in our success: Michael didn't seem bothered by the fact that he lost it within 24 hours of earning it. Instead, I suspect that the concrete positive feedback about his own improvement was the true reinforcer that helped Michael learn.

Feedback is especially likely to be effective when it communicates what students have and haven't learned and when it gives them guidance about how they might improve their performance (Hattie & Gan, 2011; Shute, 2008). Under such circumstances, even negative feedback can lead to enhanced performance. It's difficult to interpret this fact within a strictly behaviorist framework; it appears that students must be thinking about the information they receive and using it to modify their behavior and gain more favorable feedback later on.

    Intrinsic reinforcers Oftentimes learners engage in certain behaviors not because of any external consequences but because of the internal good feelings—the intrinsic reinforcers—that such responses bring. Feeling successful after solving a difficult puzzle, feeling proud after returning a valuable item to its rightful owner, and feeling a sense of relief after successfully completing a challenging assignment are all examples of intrinsic reinforcers. People who continue to engage in certain behaviors for a long time without any obvious external reinforcement are probably working for intrinsic sources of satisfaction.

    The concept of intrinsic reinforcers doesn’t fit comfortably within traditional behaviorism, which, as you should recall, focuses on external, observable events. Yet, for many students, the



    true reinforcers for learning are probably the internal ones—the feelings of success, mastery, and pride—that their accomplishments bring. For such students, other reinforcers are more helpful if they provide feedback that academic tasks have been performed well. Class grades may be reinforcing for the same reason: Good grades reflect high achievement—a reason to feel proud.

Positive feedback and the intrinsic reinforcement that such feedback brings are probably the most productive forms of reinforcement in the classroom. Yet teachers must remember that what is reinforcing for one student may not be reinforcing for another; reinforcement, like beauty, is in the eyes of the beholder. In fact, consistent positive feedback and resulting feelings of success and mastery can occur only when instruction has been carefully tailored to individual skill levels and abilities and only when students have learned to value academic achievement. When, for whatever reasons, students aren't initially interested in achieving academic success, then other reinforcers, such as social and activity reinforcers—even material ones if necessary—can be useful in helping students acquire the knowledge and skills that will serve them well in the outside world.

Negative Reinforcement

In contrast to positive reinforcement, negative reinforcement increases a response through the removal of a stimulus, usually an aversive or unpleasant one. Don't let the word negative lead you astray here. It's not a value judgment, nor does it mean that an undesirable behavior is involved; it simply refers to the fact that something is being taken away from the situation. For instance, imagine a rat in a Skinner box that often gives the rat an unpleasant electric shock. When the rat discovers that pressing a bar terminates the shock, its bar-pressing behavior increases considerably. As another example, perhaps you have a car that makes an annoying sound if the key is still in the ignition when you open the door. By removing the key from the ignition, you can stop the irritating noise, which might lead you to be more diligent about removing the key on future occasions.

Removal of guilt or anxiety can be an extremely powerful negative reinforcer for human beings. A child may confess to a crime committed days or weeks earlier because she feels guilty about the transgression and wants to get it off her chest. Anxiety may drive one student to complete a term paper early, thereby removing an item from his to-do list. Another student confronted with the same term paper might procrastinate until the last minute, thereby removing anxiety—if only temporarily—about the more difficult aspects of researching for and writing the paper.

Negative reinforcement probably explains many of the escape behaviors that humans and nonhumans learn. For instance, rats that are given a series of electric shocks will quickly learn to turn a wheel that enables them to escape to a different, shock-free environment (N. E. Miller, 1948). Likewise, children and adolescents acquire various ways of escaping unpleasant tasks and situations in the classroom and elsewhere. Making excuses ("My dog ate my homework!") and engaging in inappropriate classroom behaviors provide the means of escaping tedious or frustrating academic assignments (A. W. Gardner, Wacker, & Boelter, 2009; McKerchar & Thompson, 2004; Romaniuk et al., 2002). Lying about one's own behaviors ("I didn't do it—he did!") is potentially a way of escaping the playground supervisor's evil eye. Complaints about nonexistent stomachaches, chronic truancy, and dropping out of school are ways of escaping the school environment altogether. Some escape responses are productive ones, of course; for instance, many teenagers acquire tactful ways of rebuffing unwanted sexual advances or leaving a party where alcohol or street drugs are in abundance.



    Keep in mind that negative reinforcement can affect teachers’ behaviors as well as those of students. Sometimes teachers act in ways that immediately get rid of aversive stimuli but aren’t effective over the long run. In particular, some student misbehaviors—although they’re responses from a student’s perspective—can be unpleasant stimuli for a teacher, in which case the teacher wants them to stop. Imagine, for example, that Ms. Taylor yells at Marvin for talking too much. Marvin may temporarily stop talking, which negatively reinforces Ms. Taylor’s yelling behavior, but if Marvin likes getting Ms. Taylor’s attention—that is, if it’s a positive reinforcer for him—he’ll be chattering again before very long.

    Common Phenomena in Operant Conditioning

    Researchers have observed a variety of phenomena related to reinforcement’s effects. We’ll now look at several that are often seen in human learning and behavior.

Superstitious Behavior

Operant conditioning can sometimes occur even when events happen randomly and aren't contingent on something the learner has done. For example, Skinner once left eight pigeons in their cages overnight with the food tray mechanism adjusted to give reinforcement at regular intervals, regardless of what responses the pigeons were making at the time. By morning, six of the pigeons were acting bizarrely. One repeatedly thrust its head into an upper corner of the cage, and two others were swinging their heads and bodies in rhythmic pendulum movements (B. F. Skinner, 1948).

    Randomly administered reinforcement tends to reinforce whatever response has occurred immediately beforehand, and a learner will increase that response, thus displaying what Skinner called superstitious behavior. A nonbehaviorist way of describing the learning of a superstitious behavior is that the learner thinks that the response and reinforcement are related when in fact they aren’t. For example, a student might wear a “lucky sweater” on exam days, and a star athlete might perform a certain ritual before every game.11
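Skinner's noncontingent-reinforcement setup is easy to mimic in a toy simulation. The sketch below uses invented numbers (five candidate responses, "food" every 15 time steps, and a simple rich-get-richer strengthening rule); it isn't Skinner's actual procedure, but it illustrates how randomly timed reinforcement can make one arbitrary response come to dominate:

```python
import random

def superstition_sim(steps=2000, n_responses=5, interval=15, seed=1):
    """Toy model of noncontingent reinforcement (hypothetical parameters).

    "Food" arrives every `interval` steps regardless of behavior, and
    whichever response happened to occur just beforehand is strengthened,
    so responses that are accidentally reinforced early tend to snowball.
    """
    rng = random.Random(seed)
    strength = [1.0] * n_responses
    for t in range(1, steps + 1):
        # emit a response in proportion to current response strengths
        r = rng.choices(range(n_responses), weights=strength)[0]
        if t % interval == 0:
            strength[r] += 1.0   # accidental "reinforcement"
    return strength
```

Running this typically leaves one response with far more strength than the rest even though reinforcement was never contingent on anything, much like the head thrusting and pendulum swinging Skinner observed in his pigeons.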

    In the classroom, superstitious behavior can occur when students don’t know which of their many responses are responsible for bringing about reinforcement. To minimize it, teachers

[Cartoon: Common escape responses. Speech bubbles read "I can't — I have to wash my hair tonight."; "I didn't do it — he did!"; "I don't feel so good."; "I think I hear my mother calling."]

    11Such behaviors can actually improve performance by enhancing people’s self-confidence regarding the activity at hand and thereby also increasing persistence in the activity (Damisch, Stoberock, & Mussweiler, 2010). This self-confidence is an example of self-efficacy—a cognitivist concept central to social cognitive theory (Chapter 5).

