Respondent and Operant Conditioning
Program Transcript
[MUSIC PLAYING]
STEVEN LITTLE: I'm Dr. Steven Little, and this week we are going to talk about
respondent and operant conditioning. We've pretty much gone through the philosophical
aspects of this course, the areas that deal with philosophy. Some of this will also be
redundant with what I'll talk about in other classes. The fact is, though, that this is the
foundation of behavior analysis, particularly operant conditioning. And it's important that
you have it multiple times, that you're exposed to things so that you never forget them.
Respondent conditioning is also called classical conditioning, so if I use either of those
terms, we're talking about the exact same thing. I know behavior analysts for some
reason prefer the term respondent. I tend to prefer the term classical, but that's totally a
preference on my part; it's the way that I was taught many years ago. So, I tend to say
classical conditioning more than respondent, but they're the exact same thing.
So, we're talking about respondent conditioning. I'll do an overview. I'll talk about the
components of classical conditioning. I'll talk about some related principles that go along
with classical conditioning. And then I'm going to give you some examples.
One that you're familiar with, which I alluded to before and specifically mentioned, is
the Little Albert study by John Watson and Rosalie Rayner, which conditioned fear of a
white rat in a young child. And taste aversion: I'll talk about taste aversion and how it
can come about via classical conditioning.
Now classical conditioning is important for us to know. But it is not what we as behavior
analysts focus on. We focus on operant conditioning: the things that come out of
Skinner, his radical behaviorism, and behavioral theory overall. When we talked about
behavioral philosophy, we went into aspects of operant conditioning. So, it's not going
to be new for you. But this is going to be the focus of your entire curriculum.
I'm going to do a very brief overview, so this won't be the longest lecture that you
have; it may very well be one of the shorter ones. I'm going to talk about that, and
I'll talk about the ABCs of behavior, which I've already defined for you, but we'll go over
it again. And just the ways we administer contingent consequences. So, we'll get to that
at the end. But let's start by talking about respondent or classical conditioning.
Now, you have been exposed to this. I can pretty much guarantee you have all been
exposed to this. Because when we look at classical conditioning, or respondent
conditioning, it was a principle that was really discovered by accident.
Ivan Pavlov was a Russian physiologist. He was investigating how a dog's stomach
prepared itself to digest food. So, when something is placed in the dog's mouth, he was
looking at the processes, the salivation and the physiological preparation of the body to
receive food and to digest food. And what he noticed was that the mere sight or smell of
food was enough to get the dog salivating. That was the first thing he noticed. He didn't
actually have to put the food in the dog's mouth to get salivation to start; it
happened before that.
Now you've all experienced that. Go by a bakery, a donut shop, or even something like
Subway when they're baking that bread. That does smell good, whether you like
Subway or not. Making their bread does smell good. And you notice yourself salivating.
So, you've all experienced that yourself.
But he realized, OK, something's going on here that's not purely physiological; let me
investigate a little bit how this works. And what he decided to do was create a little
experiment. He chose a tuning fork as a neutral stimulus. If you're a
musician of any kind, you're probably familiar with a tuning fork. I don't have one to
demonstrate here because I am not a musician, and I do not have a tuning fork.
So, he used the tuning fork. But it could be a bell, it could be any type of alarm, it could
be any type of noise. In his case, his first studies used the tuning fork. A lot
of people say he was using a bell, but it was really a tuning fork. That's not that
important, though. It is a sound that is neutral. The noise of a tuning fork, or the noise of
a bell, by itself does not elicit any type of response, in this case, salivation.
You know, it's the same thing with you. If you hear a tuning fork go off, or a bell ring, you're
not going to automatically start salivating. Because you're not in the presence of the
stimulus that would more naturally cause the salivation.
So, what he found was that by pairing this neutral stimulus, this tuning fork noise, with
the food (in his case he used a food powder that he'd sprinkle on the dog's
tongue), the tone alone was able to elicit salivation. Just the tone from the tuning fork,
or a bell if you're using a bell. A neutral stimulus, when paired with a stimulus that more
automatically produces the response, came to elicit the response by itself.
Now when we talk about classical conditioning, respondent conditioning, we're generally
talking about responses that the individual has little control over, such as those
things that go on in the autonomic nervous system. Salivation. You go by a bakery that
has very nice smells coming out of it, and you don't say, oh, nice smells, I think I'll
salivate. No, it's not a conscious response. It's an automatic response.
Infants: you put something up to their mouth and what do they do? They start to suck.
They're not thinking, that's a thing that I can suck, I think I may suck on it, and then
sucking. No. The stimulation around their lips causes them to automatically suck.
Heart rate, blood pressure, all of these things are autonomic nervous system. You don't
sit there and go, oh, my blood pressure's getting a little high, I think I'll reduce it, OK,
done. You can't do that. I mean, there are some relaxation techniques and such that
can help, but that's another discussion. So basically, in the elements of
classical conditioning, what Pavlov is doing is an experiment to see what can elicit
salivation other than the actual stimulus that would naturally, automatically, initiate it.
So there are certain components of classical conditioning that you have to be familiar
with. The first is the unconditioned stimulus. This is abbreviated UCS for unconditioned
stimulus, or just US, also unconditioned stimulus. Depending on the author who's
writing a book or an article, it will be abbreviated either UCS or US. I was taught UCS,
so that's what I default to. But they're the same thing.
Unconditioned stimulus. An unconditioned stimulus is an environmental factor, such as
food, that naturally brings about a specific behavior or a specific response.
Salivation: food put into the mouth will create salivation. You don't learn that. It is a
naturally occurring response.
If you are just sitting there in a quiet environment and all of a sudden there's a loud
noise, you startle. That loud noise would be an example of an unconditioned stimulus.
And it brings about a particular response, in that case a startle response. So that's
an unconditioned stimulus.
Now in my example of an unconditioned stimulus, I also talked about an unconditioned
response. I hadn't labeled it yet, but that's what we're talking about: the response. The
unconditioned response is abbreviated either UCR or UR, again depending on
who wrote what you're reading. I was initially trained calling it a UCR for unconditioned
response, but other people abbreviate it UR for unconditioned response.
The unconditioned response is the unlearned, automatically occurring reaction brought
about by the unconditioned stimulus. So, in the case of Pavlov, the unconditioned
stimulus would be the food. What response naturally occurs to food? Salivation. That
would be the unconditioned response.
And in the example I gave you with a loud noise, the loud noise is the unconditioned
stimulus. The startle response is the unconditioned response. You did not do that
voluntarily; it happened automatically. Though a startle response does involve some
muscles that are part of the striated, or voluntary, muscles, the sensations that you
get, the heightened autonomic nervous system arousal that occurs as a result of that
loud noise, cause the totality of that startle response.
I'll give some other examples in a minute. So, we have the unconditioned stimulus, which
is an environmental stimulus that naturally brings about a particular response. No learning
is required. And what it brings on is an unconditioned response. Unlearned.
Now we also have a conditioned stimulus, which is abbreviated
CS, and I have never seen it abbreviated any other way. I learned it as CS; I've
always seen it written as CS. So, a conditioned stimulus is a CS. That's the neutral
stimulus that becomes capable of eliciting a particular response through being paired
with a UCS, an unconditioned stimulus.
In the case of Pavlov and his dog, you pair the tone and give the dog food. Tone, give the
dog food. Tone, give the dog food. Tone, dog salivates without the food. So, the
conditioned stimulus is the previously neutral stimulus that has come to bring out that
same response. And when it is the previously neutral stimulus that elicits the
response, you get a conditioned response, a CR.
Conditioned response. That's the response that's aroused by some stimulus other than
the one that automatically produces it, other than the unconditioned stimulus. So, food
is the unconditioned stimulus that results in salivation. When food produces salivation,
it's an unconditioned response. If you then condition the individual using a tuning fork or
a bell or something else, a light, that previously neutral stimulus, through being paired
with the unconditioned stimulus, elicits the exact same behavior, the salivation. When it
is the conditioned stimulus that elicits the behavior, it is now a conditioned response.
So even though the response may be exactly the same, salivation in this case, if it's
elicited by food, it's an unconditioned response. If it's elicited by the tone, it's a
conditioned response. Conditioned is just another way to say learned.
So, if we say unconditioned stimulus, that means an unlearned stimulus that leads to an
unlearned response. You don't have to learn to salivate. But with the tone being paired
with it, you learn that connection. And that's why it becomes a conditioned, or learned,
stimulus and a learned response. You learn that connection.
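To make the pairing concrete, here is a toy simulation in Python. This is my own illustration, not anything from Pavlov's data or from this course; the learning rule, a simple proportional update toward a maximum, is a common textbook simplification.

```python
# Toy simulation of respondent (classical) conditioning acquisition.
# Illustrative sketch only: the update rule below is a common
# textbook simplification, not a model specified in this lecture.

def pairing_trial(strength, rate=0.3, maximum=1.0):
    """One tone-food pairing: associative strength moves toward maximum."""
    return strength + rate * (maximum - strength)

strength = 0.0  # the tone starts out neutral: it elicits no salivation
for trial in range(1, 9):
    strength = pairing_trial(strength)
    print(f"pairing {trial}: tone->salivation strength = {strength:.2f}")

# After a handful of pairings, the tone alone (now a CS)
# is strong enough to elicit salivation (now a CR).
```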
Those are the key things that you have to remember as far as classical conditioning.
Now there are some other principles that go along with this that you should be aware of.
The first one is extinction. Now we're going to be talking about extinction in classical
conditioning and in operant conditioning.
In classical conditioning, what Pavlov discovered is that if he stopped presenting food
after sounding the tuning fork, the sound gradually lost its effectiveness to elicit
salivation. Its effect on the dog was lessened; it no longer caused the dog to salivate.
So, over time, without additional pairings with food, which is the unconditioned
stimulus, you lessen the strength of the association between the previously neutral
stimulus and the response, the salivation. And that's called extinction.
So, when we classically extinguish a response, we basically just remove the
pairings so that the individual no longer makes that connection with what was
previously a neutral stimulus. Later on, you'll see that classical extinction plays a big role
in the treatment of phobias, for example. I'll go into that in a different class; I don't want to
get too complicated right now. This is, after all, your first class. So, we don't want to go
too much into that.
So, to summarize extinction: you just continue to present the individual with
the conditioned stimulus, and they no longer show the response, because that
connection between the conditioned stimulus and the unconditioned stimulus becomes
weak.
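Continuing the same toy model (again my illustration, not the lecture's): in extinction the conditioned stimulus is presented repeatedly without the unconditioned stimulus, and the associative strength decays.

```python
# Toy simulation of respondent extinction: present the conditioned
# stimulus repeatedly WITHOUT the unconditioned stimulus, and the
# conditioned response weakens. The same kind of simplified update
# rule as the acquisition sketch above; illustrative only.

def extinction_trial(strength, rate=0.3):
    """One tone-alone presentation: strength decays toward zero."""
    return strength - rate * strength

strength = 0.95  # a well-conditioned tone->salivation association
for trial in range(1, 9):
    strength = extinction_trial(strength)
    print(f"tone alone {trial}: strength = {strength:.2f}")

# The tone gradually loses its power to elicit salivation.
```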
The next thing I want you to remember here with regard to classical conditioning is
spontaneous recovery. What Pavlov also found is that if he gave a dog a rest after
extinction (in other words, he sounded the tuning fork multiple times without pairing it
with food, and the dog ceased to salivate), the behavior could return. He took a period
of time off. Say a week later, he came back, and he sounded the tuning fork for the dog
just one time, and the dog salivated.
That's called spontaneous recovery. What's happening is that after a period of time
following extinction, the behavior can come back if there is another presentation of that
previously neutral stimulus, the conditioned stimulus. If the pairing is not made with an
unconditioned stimulus again, the second extinction will happen much quicker. And any
successive spontaneous recoveries will also extinguish much quicker.
So, the behavior can come back. Say you were a behavioral psychologist working
with someone who had a phobia of dogs, and you get them to be able to function
around dogs. Then they don't have any exposure to dogs, with any outcome, for a
while, and they see a dog and they get scared again. You have to get them to expect
that, because it's spontaneous recovery. But it doesn't last long if there is not a pairing
with the unconditioned stimulus.
A third principle that goes along with this is stimulus generalization. We talked about
the principle of generalization when we were talking about the philosophy of
behaviorism, and how we want things to generalize. Well, stimulus generalization just
means that the stimulus becomes generalized.
So, it's not just the specific conditioned stimulus under which the response was first
learned that can elicit the conditioned response; a range of similar stimuli can also
come to elicit the same response. In other words, the conditioned stimulus becomes
generalized. And you may get a stimulus class: things that are similar but not the exact
same stimulus can also come to elicit the response.
Now Pavlov found this with more than the tuning fork. If you've ever hit the edge of a
wine glass, it does give a little tone. Or if you run your finger across the top of a wine
glass, it will sort of give off a tone. That's not the same thing as the tuning fork, but
Pavlov found that hitting the edge of a glass could also come to elicit salivation. That is
stimulus generalization.
Two more principles, and then we'll move on to some examples. Stimulus
discrimination. That's kind of the opposite of stimulus generalization. Stimulus
discrimination means that you can respond differently to two or more stimuli that may
be similar, yet are distinct.
So, a tone from a tuning fork makes a particular noise, and a bell makes a noise too. It
could be at a similar volume, but it's distinctly different. So, the individual can tell a
tuning fork from a bell. Or it could even be different tuning forks that give different tones.
They can respond specifically to just one and not the other. When the individual does
that, it's called stimulus discrimination. They are able to discriminate between different
stimuli that may be similar but are yet different in some way.
The last thing for now that I want you to remember on classical conditioning is higher
order conditioning. In this case, what happens is that there's a secondary conditioned
stimulus. It's paired with the primary conditioned stimulus a number of times, and it
eventually takes on the function of the primary conditioned stimulus.
So, for example, with Pavlov, the primary conditioned stimulus was the tone from a tuning
fork. Now let's just say you also have a light. The light comes on, then the tuning
fork is sounded, then the dog gets food, and then it salivates. You make this
pairing of light, tuning fork, food, salivation. You eventually get to the point where
the tuning fork can cause salivation without the food; it becomes a conditioned stimulus.
But you can also get that secondary conditioned stimulus, the light coming on, to alone
elicit a conditioned response, the salivation.
So in higher order conditioning, you have a secondary conditioned stimulus, and it could
even be a tertiary one, a third stimulus. You get a pairing of stimuli leading back to the
learning trials with the unconditioned stimulus.
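A sketch of the same sort for higher order conditioning (again entirely my own illustration): first condition the tone with food, then pair the light with the tone, with no food involved at all in the second stage.

```python
# Toy sketch of higher order conditioning. Stage 1: tone-food
# pairings make the tone a conditioned stimulus. Stage 2: light-tone
# pairings (no food at all) make the light a secondary conditioned
# stimulus. Illustrative only, using the same simplified update rule.

def toward(value, target, rate=0.3):
    """Move an associative strength part of the way toward a target."""
    return value + rate * (target - value)

tone = 0.0
for _ in range(8):                  # stage 1: tone paired with food
    tone = toward(tone, 1.0)

light = 0.0
for _ in range(8):                  # stage 2: light paired with tone only
    light = toward(light, tone)     # the light can only approach the
                                    # strength the tone already carries

print(f"tone  strength: {tone:.2f}")   # first-order CS
print(f"light strength: {light:.2f}")  # secondary CS: light alone elicits the CR
```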
You understand all that? OK, I hope so. If you've had Intro Psych, you've had all this.
And granted, you could have forgotten all of it, or maybe not learned it all well the first
time. But you'll have other opportunities to learn. Don't worry. If there's any confusion,
don't worry, there'll be plenty of opportunities.
Let me give you a couple of examples that may make things a little clearer. I'm going to
use the classic example, a study by Watson and Rayner, John Watson and Rosalie
Rayner, from 1920. So, it was a long time ago, more than 100 years ago.
I talked a little bit about John Watson before. Rosalie Rayner was his lab assistant,
whom he had an affair with and eventually married. Rosalie actually died rather young,
of tuberculosis; she was only in her early 30s when she died. Watson went on to become
a rich, successful advertising executive in New York City. I told you all that before, but
this is the classic stuff. And it was published in 1920.
The study is the demonstration of classical conditioning of emotional responses in
humans. When I say emotional response, fear is the main emotional response we're
talking about here.
So, you had Little Albert. Little Albert was a nine-month-old boy, the son of a
woman who worked at Johns Hopkins University. I believe she was a cleaning woman.
She had a young son and volunteered him to participate in the study.
So, at the beginning of the experiment, they had a white rat. I mean, you might say, ew,
rat, ew, but you've been conditioned to think that way. If you look at a rat, they're kind of
cute. They're furry, and this was not a wild white rat; this was a tame white rat that
would like to get petted. It may have had pink beady eyes, but it was still a cute little
white rat.
They gave Little Albert the white rat to pet, and Albert showed no fear at all of this
white rat. So, one day when Albert is sort of sitting there, minding his own business,
petting the white rat, showing no fear of it, somebody came up behind
him with two pieces of metal and made a very loud noise by banging the two pieces of
metal together.
Now normally, for a young child, a loud noise is going to produce a startle and a fear
response. The kid's going to start crying and be distressed. So what you have, in this
case, is the loud noise paired with the rat, and after a while (and actually, in this case, it
was basically one-trial learning) the rat alone was able to elicit fear, this conditioned
emotional response, in Little Albert.
So what is the unconditioned stimulus in this case? I think you all got that one right.
You might say the fear, or the startle response, the distress, but actually, I'm getting
ahead of myself; sorry. The loud noise is the unconditioned stimulus. And I just answered
what was going to be the next question: what was the unconditioned response? The
unconditioned response was the fear.
If you are a nine-month-old little boy and you're exposed to a loud noise, you're going to
get scared by somebody sneaking up behind you and making that noise. Chances
are, however old you are, if somebody snuck up behind you and banged two pieces of
metal together to make a loud noise, you too would show a startle and a fear response.
But what we have is loud noise naturally producing this fear response. So, we have the
unconditioned stimulus, unconditioned response. So, what is the conditioned stimulus in
this case? What was the previously neutral stimulus, a stimulus that did not elicit fear?
The white rat, of course. So, the rat is the conditioned stimulus.
What is the conditioned response? OK, it's the same thing as the unconditioned
response, only it's now being elicited by the presence of the rat alone, not the
unconditioned stimulus. So, it's the fear of the rat.
I'll talk about this in more detail in another class: how this works in the establishment
of phobias. It also brings in some operant conditioning. When you see something you're
afraid of, you're going to escape, and that reduces your anxiety. That is reinforcing to
you; that's negative reinforcement. So, we'll get to that later on.
Let me just apply a couple of the principles we talked about. What would it be if Albert
also became afraid of cats? That would be stimulus generalization. The cat is different
from the rat, but they both have a common feature, which is fur, and maybe some other
features, whiskers, whatever. If he did (and I'm not saying he did), that would be
stimulus generalization.
Now what would it be if he was not afraid of, say, gerbils? That would be stimulus
discrimination. Very good. He was able to discriminate between the rat and the gerbil.
He was not showing fear of a gerbil; he was showing fear of the rat. So, as I said, this is
one way that phobias can develop, and we'll talk about that in a different class.
Another example from your own life where you've probably had some classical
conditioning involves taste aversion. Let's say you go to a restaurant and you try
escargot, snails, for the first time. You like them; you may even like them a lot. But
you've never had them before. Everything else in that meal you've had multiple times.
You've just never had escargot before.
And then you go home, and you get sick to your stomach and you start throwing up. Will
you eat snails again? Probably not.
Now let's say you're not a vegetarian, not a vegan; you do eat meat. And let's just say
that you ate steak, baked potato, and snails. The aversion was probably just to the
snails. Why? Because you have a learning history with steak and baked potatoes.
You've eaten them many times and not gotten sick. The one time you eat the snails,
you make that connection between the snails and getting sick. And you're going to
avoid eating snails, because the snails are the novel stimulus.
So, taste aversions can occur via classical conditioning. And the aversion is perpetuated
because you don't want to take the chance anymore, so you avoid snails. You may never
have a snail again as long as you live. Why? Because it made you sick that one time. It
may have had nothing to do with the snails.
I can tell you one example from my own life: cauliflower. I cannot eat cauliflower. I will
not eat cauliflower. I had cauliflower with a meal the night before I got a stomach flu.
It had nothing to do with what I ate; it wasn't food poisoning. It wasn't the cauliflower.
But I have not had cauliflower to this day, and that was 40 years ago.
I don't want to. It makes me go, you know, yuck, just seeing it. But I ate it before that.
Not often, but I would eat it.
OK. That's all I want to do on classical conditioning. I want you to get the basic
principles of classical conditioning. Now I want to quickly do an overview of operant
conditioning. We talked about it in week one, and it will be the focus of most of your
classes; they will cover all sorts of aspects of operant conditioning. So, let's just do a
quick overview now, and we can wrap this lecture up. How's that sound?
OK. The definition of operant conditioning: it's a type of learning in which the
consequences of the behavior influence whether the organism will act in the same way
in the future. The animal, in our case a human, learns the relationship between his or
her own behavior and its reinforcing or punishing consequences. And this is what
Skinner is associated with. This is what Skinner devoted his life to.
Reinforcement is defined as an environmental stimulus, which is contingent upon a
response and increases the probability of the response. I like chocolate, you may like
chocolate, chocolate may make you want to do whatever the behavior was that got you
the chocolate.
But if you don't like chocolate, it's not going to be a reinforcer for you. I know you may
be saying who doesn't like chocolate? Most people do. That's why it's used as a
reinforcer a lot. But if you don't like it, it's not going to be reinforcing. So, it is defined by
the consequence. It will increase the probability of the response. And I'll go through the
different types in a minute.
Punishment, overall, is an environmental stimulus which is contingent on a response
and decreases the probability of the response. Decreases. Again, it's defined by the
consequence, by what happens to the behavior as a result of the consequence. If it
decreases the frequency of the behavior, it's a punishment.
You may like people saying, hey, you look good today. Other people may not. So, for
you, hearing hey, you look good today may lead you to do more of the behaviors that
led to the way you looked that day. If somebody doesn't like attention drawn to them,
hey, you look good today may make them stop doing whatever it was that they think led
to that person telling them they look good. So, it depends on the individual. It's defined
by the actual outcome.
Now there are four ways in which we can administer contingent consequences,
reinforcers or punishers. When we use the word positive, it means presentation.
So, if we present a stimulus, a stimulus that will lead to an increase in the behavior, it's
positive reinforcement. Positive reinforcement means we present a stimulus, and it
increases the probability of a response. If I give you candy for sitting there and listening
to me lecture, and you continue listening, the candy would be positive reinforcement.
I'm giving you something.
Which leads us to the second way contingencies can be administered, and that is
negative reinforcement, probably the most misused term in all of psychology. Negative
reinforcement is not punishment; it says reinforcement. It increases the frequency of the
behavior. An aversive stimulus, something we don't like, is removed, and the removal
of that stimulus leads to an increase in the behavior that led to the removal.
I can guarantee you that I will write a multiple-choice question in this module that will
have you give a definition of negative reinforcement, clearly showing that negative
reinforcement is not punishment. OK?
So in negative reinforcement, something is removed, which increases the probability of
the behavior occurring again. Let me give you an example. A child wants a cookie. He
cries until his mother gives him a cookie. He cries, I want a cookie, I want a cookie. The
mother gives him a cookie. The child's actual tantrumming, or whining, or screaming
behavior is actually being positively reinforced, because the child is getting a cookie.
So where does the negative reinforcement come in?
The negative reinforcement is with the mother. The mother's behavior of giving the child
the cookie has removed an aversive stimulus from her environment: the child whining,
the child screaming, the child tantrumming. So, the mother is more likely to give the
child a cookie the next time the child screams or cries or tantrums, because she
escapes the crying; giving the cookie stops the aversive stimulus.
But, again, the child was positively reinforced. When I talk about the development of
problem behaviors in another class, you'll clearly see how this process of positive
reinforcement and negative reinforcement works in the development of problem
behaviors in children.
So, again, always remember: reinforcement increases the probability of behavior.
Positive reinforcement, presentation. Negative reinforcement, something taken away.
And we can use those same terms, positive and negative, when the behavior is
decreased. So, we have positive punishment; we know punishment is defined by the
fact that it decreases the probability of the behavior. Positive punishment means
something was presented, because positive means presentation. Now, when I'm talking
to parents, I will not use the term positive punishment, because they will probably
misconstrue the word positive, and I will use the term presentation punishment instead.
But as a behavior analyst, you know positive means presentation.
So, spanking would be an example of positive punishment. People get confused,
because spanking doesn't sound positive. But it's positive because something is being
presented to the individual. If it decreases the probability of a behavior, and it's been
presented, it is positive punishment.
Negative punishment, on the other hand: again, negative just means removal. So
negative punishment, sometimes called response cost, is when you take away something
that the individual likes. You take away their TV privileges. You take away their
desserts. In a larger sense, you ground a teenager. That would be negative
punishment.
If you're going to use a punishment procedure, this is the type you'd probably prefer,
but it is negative, meaning that something is taken away. I prefer the term response
cost when talking to parents or teachers, because using positive and negative in terms
of punishment gets confusing for them, just as it may for you right now. But always
remember: positive means presentation, negative means removal.
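One way to keep the four contingencies straight is to reduce them to the lecture's two questions: was a stimulus presented or removed, and did the behavior increase or decrease? Here is a minimal mnemonic sketch:

```python
# The four ways of administering contingent consequences, reduced to
# two questions. A mnemonic sketch of the definitions in this lecture:
# positive = presented, negative = removed;
# reinforcement = behavior increases, punishment = behavior decreases.

def classify(stimulus_presented: bool, behavior_increased: bool) -> str:
    sign = "positive" if stimulus_presented else "negative"
    effect = "reinforcement" if behavior_increased else "punishment"
    return f"{sign} {effect}"

print(classify(True, True))    # candy for listening      -> positive reinforcement
print(classify(False, True))   # whining stops            -> negative reinforcement
print(classify(True, False))   # spanking                 -> positive punishment
print(classify(False, False))  # TV privileges taken away -> negative punishment
```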
We also have the ABCs of behavior. I've mentioned this before, so I'll go real quick.
ABC: antecedent, behavior, consequence. The antecedent is what happens before the
behavior, before the response. It is the stimuli occurring in the individual's environment
beforehand. It could be as simple as giving someone instructions; that would be an
antecedent. It could be someone calling you a name; that would be an antecedent.
Just recognize that antecedents are what happens before.
Behavior: this is simply the act itself. In our case, we're usually talking about target
behaviors. So, it's the individual's response. And the consequence is what happens
after the behavior: the reinforcement or the punishment.
ABCs, an example. The teacher asks the student to answer a question: antecedent.
The student answers the question: behavior. The teacher tells the student they did a
good job: consequence, the reinforcement. ABC.
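If you eventually record ABC data in practice, an observation is just those three fields. A hypothetical sketch (the field names are my own, not a standard data-collection form):

```python
# A minimal ABC (antecedent-behavior-consequence) record as a data
# structure. The field names are my own illustration, not a standard
# data-collection form.
from dataclasses import dataclass

@dataclass
class ABCRecord:
    antecedent: str   # what happened right before the behavior
    behavior: str     # the observable act itself (the target behavior)
    consequence: str  # what happened right after the behavior

observation = ABCRecord(
    antecedent="Teacher asks the student to answer a question",
    behavior="Student answers the question",
    consequence="Teacher says 'good job' (reinforcement)",
)
print(observation)
```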
And that's basically it for this week. So, what did we talk about? We talked about
respondent, or classical, conditioning. I did an overview. I talked about the components
of classical conditioning. I talked about some related principles like generalization,
discrimination, spontaneous recovery, extinction, those types of things. And I gave
examples with Little Albert (Watson and Rayner, 1920) and with taste aversion, which
you've probably experienced sometime in your life. And then I did, just now, a very
quick overview of operant conditioning.
OK. That's about it for today. I hope you enjoyed this lecture. I hope you're enjoying
them all. Until we get together again next week, you have a good week. And as always,
good behavior. Bye everybody.
[MUSIC PLAYING]
Respondent and Operant Conditioning
Content Attribution
SC_Light&Bright06_T32
Studio Cutz
SC_Business01_T41
Studio Cutz
[Photograph: B. F. Skinner. (Courtesy of R. W. Rieber)]
[Photograph: An example of a Skinner box. (Courtesy of R. W. Rieber)]
http://dx.doi.org/10.1037/10276-011
Psychology: Theoretical-Historical Perspectives (2nd ed.), edited by R. W. Rieber and K. Salzinger
Copyright © 1998 American Psychological Association. All rights reserved.
11
The Experimental Analysis of Operant Behavior: A History
B. F. SKINNER
I was drawn to psychology and particularly to behaviorism by some papers that
Bertrand Russell published in the Dial in the 1920s and that led me to his book
Philosophy (1927, called in England An Outline of Philosophy), the first section of
which contains a much more sophisticated discussion of several epistemological
issues raised by behaviorism than anything of John B. Watson’s. Naturally I turned
to Watson (1924) himself, but at the time only to his popular Behaviorism. I bought
Pavlov’s Conditioned Reflexes (1927) shortly after it appeared, and when I came to
Harvard for graduate study in psychology, I took a course that covered not only
conditioned reflexes but also the postural and locomotor reflexes of Magnus and the
spinal reflexes reported in Sherrington’s Integrative Action of the Nervous System
(1906). The course was taught by Hudson Hoagland in the Department of General
Physiology, the head of which, W. J. Crozier, had worked with Jacques Loeb and
was studying tropisms. I continued to prefer the reflex to the tropism, but I accepted
Loeb’s and Crozier’s dedication to the organism as a whole and the latter’s contempt
for medical school “organ physiology.” Nevertheless, in the Department of Physiology at Harvard Medical School, I later worked with Hallowell Davis and with
Alexander Forbes, who had been in England with Adrian and was using Sherrington’s torsion-wire myograph to study the reflex control of movement.
By the end of my first year at Harvard, I was analyzing the behavior of an “organism as a whole” under soundproofed conditions like those described by Pavlov.
In one experiment, I quietly released a rat into a small dark tunnel from which it
could emerge into a well-lighted space, and, with moving pen on a moving strip of
paper, I recorded its exploratory progress as well as its retreat into the tunnel when
I made a slight noise. Some of my rats had babies, and in their early squirmings, I
thought I saw some of the postural reflexes stereoscopically illustrated in Magnus’s
Körperstellung (1924), and I began to study them. I mounted a light platform on
tight wires and amplified its forward-and-backward movement with an arm writing
on a smoked drum. I could put a small rat on the platform and record the tremor
of its leg muscles when I pulled it gently by the tail, as well as the sudden forward
leap with which it often reacted to this stimulation.
I decided to do something of the sort with an adult rat. I built a very light runway
about 8 feet long, the lengthwise vibration of which I could also amplify and record
on a smoked drum, and I induced a rat to run along it by giving it food at the end.
When it was halfway along, I would make a slight noise and record the way in
which it came to a sudden stop by the effect on the runway. I planned to watch
changes as the rat adapted to the noise; possibly I could condition another stimulus
to elicit the same response. My records looked a little like those made by a torsion-wire myograph, but they reported the behavior of the organism as a whole.
This was all pretty much in the tradition of reflex physiology, but quite by accident
something happened that dramatically changed the direction of my research. In my
apparatus, the rat went down a back alley to the other end of the apparatus before
making its recorded run, and I noticed that it did not immediately start to do so
after being fed. I began to time the delays and found that they changed in an orderly
way. Here was a process, something like the processes of conditioning and extinction
in Pavlov’s work, in which the details of the act of running, like those of salivation,
were not the most important thing.
I have described elsewhere (Skinner, 1956) the series of steps through which I
simplified my apparatus until the rat simply pushed open the door of a small bin
to get a piece of food. Under controlled conditions and with pellets of food that
took some time to chew, I found that the rate of eating was a function of the quantity
of food already eaten. The title of my first experimental paper, “On the Conditions
of Elicitation of Certain Eating Reflexes” (Skinner, 1930), shows that I was still
applying the concept of the reflex to the behavior of the organism as a whole.
Pushing open a door was conditioned behavior, but to study the process of
conditioning, I needed a more clearly defined act. I chose pushing down a horizontal
bar mounted as a lever. When the rat pressed the lever, a pellet of food was released
into a tray. The arrangement was, of course, close to that with which Thorndike
had demonstrated his Law of Effect, and in my first paper, I called my apparatus a
“problem box,” but the results were quite different. Thorndike’s cat learned by dropping out unsuccessful bits of behavior until little or nothing remained but the successful response. Nothing of the sort happened in my experiment. Pavlov’s emphasis
on the control of conditions had led me to take certain steps to avoid disturbing my
rat. I gave it plenty of time to recover from being put into the apparatus by enclosing
it first in a special compartment from which I later quietly released it. I left it in
the apparatus a long time so that it could become thoroughly accustomed to being
there, and I repeatedly operated the food dispenser until the rat was no longer
disturbed by the noise and ate as soon as food appeared. All this was done when
the lever was resting in its lowest position and hence before pressing it could be
conditioned. The effect was to remove all the unsuccessful behavior that had composed the learning process in Thorndike’s experiment. Many of my rats began to
respond at a high rate as soon as they had depressed the lever and obtained only
one piece of food.
Conditioning was certainly not the mere survival of a successful response; it was
an increase in rate of responding, or in what I called reflex strength. Thorndike had
said that the cat’s successful behavior was “stamped in,” but his evidence was an
increasing priority over other behavior that was being “stamped out.” The difference
in interpretation became clearer when I disconnected the food dispenser and found
that the behavior underwent extinction. As R. S. Woodworth (1951) later pointed
out, Thorndike never investigated the extinction of problem-box behavior.
Though rate of responding was not one of Sherrington’s measures of reflex
strength, it emerged as the most important one in my experiment. Its significance
was clarified by the fact that I recorded the rat’s behavior in a cumulative curve;
one could read the rate directly as the slope of the curve and see at a glance how
it changed over a considerable period of time. Rate proved to be a particularly useful
measure when I turned from the acquisition of behavior to its maintenance, in the
study of schedules of intermittent reinforcement. Theoretically, it was important
because it was relevant to the central question: What is the probability that an
organism will engage in a particular form of behavior at a particular time?
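Skinner's point that rate can be read directly as the slope of the cumulative record is easy to check numerically. A small editorial sketch, not part of the original text; the response times are invented for illustration:

```python
# Rate of responding read as the slope of a cumulative record.
# The cumulative count rises by one at each response, so the slope
# over any window is responses-in-window / elapsed-time.
# The response times below are invented for illustration.

response_times = [2.0, 3.5, 4.6, 5.5, 6.3, 20.0, 33.0]  # seconds

def rate(times, start, end):
    """Responses per second over the window [start, end]."""
    count = sum(1 for t in times if start <= t <= end)
    return count / (end - start)

print(f"early rate: {rate(response_times, 0.0, 7.0):.2f} responses/s")
print(f"late rate : {rate(response_times, 7.0, 35.0):.2f} responses/s")
# A steep early segment of the cumulative curve (high rate)
# flattens later (low rate).
```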
I was nevertheless slow in appreciating the importance of the concept of strength
of response. For example, I did not immediately shift from condition to reinforce,
although the latter term emphasizes the strengthening of behavior. I did not use
reinforce at all in my report of the arrangement of lever and food dispenser, and my
first designation for intermittent reinforcement was periodic reconditioning.
Strength or probability of response fitted comfortably into the formulation of a
science of behavior proposed in my thesis. Russell was again responsible for a central
point. Somewhere he had said that "reflex" in psychology had the same status as
"force" in physics. I knew what that meant because I had read Ernst Mach's (1893)
Science of Mechanics, the works of Henri Poincaré on scientific method, and Bridgman's (1928) The Logic of Modern Physics. My thesis was an operational analysis of
the reflex. I insisted that the word should be defined simply as an observed correlation of stimulus and response. Sherrington’s synapse was a mere inference that
could not be used to explain the facts from which it was inferred. Thus, a stimulus
might grow less and less effective as a response was repeatedly elicited, but it did
not explain anything to attribute this to "reflex fatigue." Eventually, the physiologist
would discover a change in the nervous system, but so far as the behavioral facts
were concerned, the only identifiable explanation was the repeated elicitation. In
my thesis (Skinner, 1931), I asserted that in the intact organism “conditioning, ‘emotion,’ and ‘drive’ so far as they concern behavior were essentially to be regarded as
changes in reflex strength,” and I offered my experiments on “drive” and conditioning as examples.
One needed to refer not only to a stimulus and a response but also to conditions
that changed the relation between them. I called these conditions third variables
and represented matters with a simple equation:

R = f(S, A)

where A represented any condition affecting reflex strength, such as the deprivation
with which I identified "drive" in the experimental part of my thesis.
The summer after I got my degree, Edward C. Tolman was teaching at Harvard,
and I saw a great deal of him. I expounded my operational position at length and
the relevance of third variables in determining reflex strength. Tolman's book Purposive Behavior in Animals and Men (1932) was then in press, and in it, he speaks
of "independent variables" but only as such things as genetic endowment or an
initiating physiological state. Three years later he published a paper (Tolman, 1935)
containing the equation:

B = f(S, H, T, P)
in which B stood for behavior, as my R stood for response, S for “the environmental
stimulus setup” (my S), H for heredity, T for “specific past training” (my “conditioning”), and P for “a releasing internal condition of appetite or aversion” (my
“drive”). Woodworth later pointed out that these equations were similar. There was,
however, an important difference: What I had called a third variable, Tolman called
intervening. For me the observable operations in conditioning, drive, and emotion
lay outside the organism, but Tolman put them inside, as replacements for, if not
simply redefinitions of, mental processes, and that is where they still are in cognitive
psychology today. Ironically, the arrangement is much closer than mine to the traditional reflex arc.
Although rate of responding, in the absence of identifiable stimulation, had no
parallel in Sherrington or Pavlov, I continued to talk about reflexes. I assumed that
some features of the lever were functioning as stimuli that elicited the response of
pressing the lever. But I was unhappy about this, and I began to look more closely
at the role of the stimulus. I reinforced pressing the lever when a light was on but
not when it was off and found that in the dark the behavior underwent extinction.
Turning on the light then appeared to elicit the response, but the history behind
that effect could not be ignored. The light was not eliciting the behavior; it was
functioning as a variable affecting its rate, and it derived its power to do so from
the differential reinforcement with which it had been correlated.
In the summer of 1934, I submitted two papers for publication in separate efforts
to revise the concept of the reflex. In “The Generic Nature of the Concepts of
Stimulus and Response” (Skinner, 1935a), I argued that neither a stimulus nor a
response could be isolated by surgical or other means and that the best clue to a
useful unit was the orderliness of the changes in its strength as a function of third
variables. In “Two Types of Conditioned Reflex and a Pseudo-Type” (Skinner,
1935b), I distinguished between Pavlovian and what I would later call operant conditioning. Quite apart from any internal process, a clear difference could be pointed
out in the contingent relations among stimuli, responses, and reinforcement.
I was forced to look more closely at the role of the stimulus when Konorski and
Miller (1937) replied to the latter paper by describing an experiment they had
performed in the late 1920s that they felt anticipated my own. They had shocked
the paw of a dog and given it food when it flexed its leg. Eventually the leg flexed
even though the paw was not shocked. I replied that true reflexes seldom have the
kinds of consequences that lead to operant conditioning. Shock may be one way of
inducing a hungry dog to flex its leg so that the response can be reinforced with
food, but it is an unusual one, and an eliciting stimulus can in fact seldom be
identified. (As to priority, Thorndike was, of course, ahead of us all by more than a
quarter of a century.)
In my reply (Skinner, 1937), I used the term operant for the first time and applied
respondent to the Pavlovian case. It would have been the right time to abandon
reflex, but I was still strongly under the control of Sherrington, Magnus, and Pavlov,
and I continued to hold to the term doggedly when I wrote The Behavior of Organisms (1938). It took me several years to break free of my own stimulus control in
the field of operant behavior. From this point on, however, I was clearly no longer
a stimulus-response psychologist.
The lack of an identifiable eliciting stimulus in operant behavior raises a practical
problem: We must wait for behavior to appear before we can reinforce it. We thus
start with much less control than in respondent conditioning. Moreover, there is a
great deal of complex behavior for which we shall certainly wait in vain, because it
will never occur spontaneously. In human behavior, there are many ways of “priming” an operant response (that is, evoking it for the first time to reinforce it), and
one of them is also available in lower organisms: Complex behavior can be “shaped”
through a series of successive approximations. To reinforce pressing a lever with
great force, for example, we cannot simply wait for a very forceful response, but we
can differentially reinforce the more forceful of the responses that do occur, with
the result that the mean force increases.
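Shaping by differential reinforcement of the more forceful presses can be mimicked with a crude simulation. This is entirely an editorial sketch under invented assumptions; the numbers model nothing real:

```python
# Crude simulation of shaping response force: only presses above a
# rising criterion are reinforced, and each reinforced press pulls
# the mean force of the behavior upward. Entirely an illustrative
# sketch; the numbers model nothing real.
import random

random.seed(1)
mean_force = 10.0   # average force of the animal's lever presses
criterion = 10.0    # only presses above this force are reinforced

for _ in range(200):
    press = random.gauss(mean_force, 3.0)      # force varies trial to trial
    if press > criterion:
        # Reinforcement shifts the distribution toward the more
        # forceful responses that actually got reinforced.
        mean_force += 0.1 * (press - mean_force)
        # Raise the bar gradually: a successive approximation.
        criterion = max(criterion, press - 2.0)

print(f"final mean force: {mean_force:.1f}")  # typically well above the starting 10.0
```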
I used a similar programming of contingencies of reinforcement to shape complex
topography in a demonstration (reported in The Behavior of Organisms) in which a
rat pulled a chain to release a marble, picked up the marble, carried it across the
cage, and dropped it into a tube. The terminal behavior was shaped by a succession
of slight changes in the apparatus. Later my colleagues and I discovered that we
could avoid the time-consuming process of altering the apparatus by constructing
programmed contingencies while reinforcing directly by hand.
I soon tried the procedure on a human subject - our 9-month-old daughter. I was
holding her on my lap one evening when I turned on a table lamp beside the chair.
She looked up and smiled, and I decided to see whether I could use the light as a
reinforcer. I waited for a slight movement of her left hand and turned on the light
for a moment. Almost immediately, she moved her hand again, and again I reinforced. I began to wait for bigger movements, and within a short time, she was
lifting her arm in a wide arc, "to turn on the light."
I was writing Walden Two (Skinner, 1948) at the time, and the book is often cited
as an essay in behavioral engineering, but I believe it contains no example of the
explicit use of a contrived reinforcer. The community functions through positive
reinforcement, but the contingencies are in the natural and social environments.
They have been carefully designed, but there is no continuing intervention by a
reinforcing agent. The only contrived contingencies are Pavlovian: Children are
“desensitized” to frustration and other destructive emotions by being exposed to
situations of carefully graded intensity.
I began to analyze the contingencies of reinforcement to be found in existing
cultures in an undergraduate course at Harvard in the spring of 1949. Science and
Human Behavior (Skinner, 1953) was written as a text for that course, and in it, I
considered practices in such fields as government, religion, economics, education,
psychotherapy, self-control, and social behavior - and all from an operant point of
view.
Practical demonstrations soon followed. A graduate student at Indiana, Paul Fuller, had reinforced arm raising in a 20-year-old human organism which had never
before “shown any sign of intelligence,” and in 1953, I set up a small laboratory to
study operant behavior in a few backward patients in a mental hospital. Ogden R.
Lindsley took over that project and found that psychotics could be brought under
the control of contingencies of reinforcement if the contingencies were clearcut and
carefully programmed. Ayllon, Azrin, and many others subsequently used operant
conditioning in both management and therapy to improve the lives of psychotic and
retarded people.
At the University of Pittsburgh in the spring of 1954, I gave a paper called “The
Science of Learning and the Art of Teaching” (Skinner, 1954) and demonstrated a
machine designed to teach arithmetic, using an instructional program. A year or
two later, I designed the teaching machines that were used in my undergraduate
course at Harvard, and my colleague James G. Holland and I wrote the programmed
materials eventually published as The Analysis of Behavior (1961). The subsequent
history of programmed instruction and, on a broader scale, of what has come to be
called applied behavior analysis or behavior modification is too well known to need
further review here.
Meanwhile, the experimental analysis of operant behavior was expanding rapidly
as many new laboratories were set up. Charles B. Ferster and I enjoyed a very
profitable 5-year collaboration. Many of our experiments were designed to discover
whether the performance characteristic of a schedule could be explained by the
conditions prevailing at the moment of reinforcement, including the recent history
of responding, but administrative exigencies drew our collaboration to a close before
we had reached a sound formulation, and we settled for the publication of a kind
of atlas showing characteristic performances under a wide range of schedules (Schedules of Reinforcement; Ferster & Skinner, 1957). The subsequent development of
the field can be traced in the Journal of the Experimental Analysis of Behavior, which
was founded in 1958.
Several special themes have threaded their way through this history, and some of
them call for comment.
Verbal Behavior
I began to explore the subject in the mid-1930s. The greater part of a manuscript
was written with the help of a Guggenheim Fellowship in 1944-1945, from which
the William James Lectures at Harvard in 1947 were taken. A sabbatical term in
the spring of 1955 enabled me to finish most of a book, which appeared in 1957
as Verbal Behavior. It will, I believe, prove to be my most important book. It has
not been understood by linguists or psycholinguists, in part because it requires a
technical understanding of an operant analysis, but in part because linguists and
psycholinguists are primarily concerned with the listener - with what words mean
to those who hear them and with what kinds of sentences are judged grammatical
or ungrammatical. The very concept of communication, whether of ideas, meanings, or information, emphasizes transmission to a listener. So far as I am concerned, however, very little of the behavior of the listener is worth distinguishing as
verbal.
In Verbal Behavior, verbal operants are classified by reference to the contingencies
of reinforcement maintained by a verbal community. The classification is an alternative to the “moods” of the grammarian and the “intentions” of the cognitive
psychologist. When these verbal operants came together under multiple causation,
the effect may be productive if it contributes, say, to style and wit, but destructive
if it leads to distortion and fragmentation. Speakers manipulate their own verbal
behavior to control or qualify the responses of listeners, and grammar and syntax
are "autoclitic" techniques of this sort, as are many other practices in sustained
composition. A technology of verbal self-management emerges that is useful both
in “discovering what one has to say” and in restricting the range of controlling
variables - emphasizing, for example, the kinds of variable (characteristic of logic
and science) most likely to lead to effective practical action or the kinds found to
be more productive of poetry or fiction.
The Nervous System
My thesis was a sort of declaration of independence from the nervous system, and
I restated the position in The Behavior of Organisms. It is not, I think, antiphysiological. Various physiological states and processes intervene between the operations
performed on an organism and the resulting behavior. They can be studied with
appropriate techniques, and there is no question of their importance. A science of
behavior has its own facts, however, and they are too often obscured when they are
converted into hasty inferences about the nervous system. I would still say, as I said
in The Behavior of Organisms, that no physiological fact has told us anything about
behavior that we did not already know, though we have been told a great deal about
the relations between the two fields. The helpful relation is the other way around:
A behavioral analysis defines the task of the physiologist. Operant theory and practice
now have an important place in the physiological laboratory.
Psychopharmacology
At Minnesota, W. T. Heron and I studied the effects of a few familiar drugs on
operant behavior, and in the early 1950s, Peter Dews of the Department of Pharmacology at the Harvard Medical School became associated with my laboratory and
coworkers. At about the same time, many of the ethical drug companies set up
operant laboratories, some of which contributed to the present armamentarium of
behavior-modifying drugs. Operant techniques are now widely used in the field, as
well as in the study of drug addiction and related medical problems.
Ethology
Ethologists often assert that their work is neglected by behaviorists, but Watson’s first
experiments were ethological, and so were mine. The process of operant conditioning itself is part of the genetic equipment of the organism, and I have argued that
reinforcers are effective, not because they reduce current drives (a widely held view),
but because susceptibilities to reinforcement have had survival value. Species-specific behavior may disrupt operant behavior, but the reverse is also true.
In Science and Human Behavior, I pointed out that contingencies of survival in
natural selection resembled contingencies of reinforcement in operant conditioning.
Both involve selection by consequences, a process which, in a work in progress, I
argue to be particularly relevant to the question of whether human behavior can
indeed take the future into account. Phylogenic contingencies that could have
shaped and maintained, say, imitative behavior resemble the contingencies of reinforcement that shape similar behavior in the individual, but one repertoire does
not evolve from the other. An experiment on imprinting has shown how an operant
analysis may clarify field observations and correct conclusions drawn from them:
The young duckling does not inherit the behavior of following its mother or an
imprinted object; it acquires the behavior because of an innate susceptibility to
reinforcement from being close.
A Theory of Knowledge
I came to behaviorism, as I have said, because of its bearing on epistemology, and
I have not been disappointed. I am, of course, a radical rather than a methodological
behaviorist. I do not believe that there is a world of mentation or subjective experience that is being, or must be, ignored. One feels various states and processes
within one’s body, but these are collateral products of one’s genetic and personal
histories. No creative or initiating function is to be assigned to them. Introspection
does not permit us to make any substantial contribution to physiology, because “we
do not have nerves going to the right places.” Cognitive psychologists make the
mistake of internalizing environmental contingencies, as in speaking of the storage
of sensory contacts with the environment in the form of memories that are retrieved
and responded to again at some later date. There is a sense in which one knows the
world, but one does not possess knowledge; one behaves because of one’s exposure
to a complex and subtle genetic and environmental history. As I argued in a final
chapter in Verbal Behavior (Skinner, 1957), thinking is simply behaving and may
be analyzed as such. In About Behaviorism (Skinner, 1974), I attempted to make a
comprehensive statement of the behaviorist’s position as I understood it 46 years
after I first entered the field.
Designing a Culture
Walden Two (Skinner, 1948) was an early essay in the design of a culture. It was
fiction, but I described a supporting science and technology in Science and Human
Behavior (Skinner, 1953). I was made aware of a basic issue when Walden Two was
immediately attacked as a threat to freedom. Its protagonist was said to have manipulated the lives of people and to have made an unwarranted use of his own value
system. I discussed the issue in a paper called “Freedom and the Control of Men”
in 1955 (Skinner, 1955-1956) and in a debate with Carl Rogers in 1956 (Rogers
& Skinner, 1956). The control of behavior became especially critical with the rise
of an applied behavioral analysis in the 1960s, and I returned to the issue in Beyond
Freedom and Dignity in 1971. Unfortunately, that title led many people to believe
that I was opposed to freedom and dignity. I did, indeed, argue that people are not
in any scientific sense free or responsible for their achievements, but I was concerned
with identifying and promoting the conditions under which they feel free and worthy. I had no quarrel with the historical struggle to free people from aversive control
or from punitive restrictions on the pursuit of happiness, and I proposed that that
struggle be continued by shifting to practices that used positive reinforcement, but
I argued that certain aspects of the traditional concepts stood in the way. For example, to make sure that individuals receive credit for their actions, certain punitive
practices have actually been perpetuated. I believe that a scientific formulation of
human behavior can help us maximize feelings of freedom and dignity.
There is a further goal: What lies beyond freedom and dignity is the survival of
the species, and the issues I first discussed in Walden Two have become much more
pressing as the threat of a catastrophic future becomes clearer. Unfortunately, we
move only slowly toward effective action. A question commonly asked is this: When
shall we have the behavioral science we need to solve our problems? I believe that
the real question is this: When shall we be able to use the behavioral science we
already have? More and better science would be helpful, but far more effective
decisions would be made in every field of human affairs if those who made them
were aware of what we already know.
REFERENCES
Bridgman, P. W. (1928). The logic of modern physics. New York: Macmillan.
Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.
Holland, J. G., & Skinner, B. F. (1961). The analysis of behavior. New York: McGraw-Hill.
Konorski, J., & Miller, S. (1937). On two types of conditioned reflex. Journal of General Psychology, 16, 264-272.
Mach, E. (1893). The science of mechanics. Chicago: Open Court.
Magnus, R. (1924). Körperstellung. Berlin: Springer.
Pavlov, I. P. (1927). Conditioned reflexes. London: Oxford University Press.
Rogers, C. R., & Skinner, B. F. (1956). Some issues concerning the control of human behavior: A symposium. Science, 124, 1057-1066.
Russell, B. (1927). Philosophy. New York: Norton.
Sherrington, C. S. (1906). Integrative action of the nervous system. New Haven, CT: Yale University Press.
Skinner, B. F. (1930). On the conditions of elicitation of certain eating reflexes. Proceedings of the National Academy of Sciences, 16, 433-438.
Skinner, B. F. (1931). The concept of the reflex in the description of behavior. Unpublished thesis, Harvard University Library, Cambridge, MA.
Skinner, B. F. (1935a). The generic nature of the concepts of stimulus and response. Journal of General Psychology, 12, 40-65.
Skinner, B. F. (1935b). Two types of conditioned reflex and a pseudo-type. Journal of General Psychology, 12, 66-77.
Skinner, B. F. (1937). Two types of conditioned reflex: A reply to Konorski and Miller. Journal of General Psychology, 16, 272-279.
Skinner, B. F. (1938). The behavior of organisms. New York: Appleton-Century.
Skinner, B. F. (1948). Walden two. New York: Macmillan.
Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24, 86-97.
Skinner, B. F. (1955-1956, Winter). Freedom and the control of men. American Scholar, 25, 47-65.
Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11, 221-233.
Skinner, B. F. (1957). Verbal behavior. New York: Appleton-Century-Crofts.
Skinner, B. F. (1971). Beyond freedom and dignity. New York: Alfred A. Knopf.
Skinner, B. F. (1974). About behaviorism. New York: Alfred A. Knopf.
Tolman, E. C. (1932). Purposive behavior in animals and men. New York: Century.
Tolman, E. C. (1935). Philosophy versus immediate experience. Philosophy of Science, 2, 356-380.
Watson, J. B. (1924). Behaviorism. New York: Norton.
Woodworth, R. S. (1951). Contemporary schools of psychology. New York: Ronald Press.
RECOMMENDED READINGS
Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
Skinner, B. F. (1957). Verbal behavior. New York: Appleton-Century-Crofts.
Skinner, B. F. (1971). Beyond freedom and dignity. New York: Appleton-Century-Crofts.
Skinner, B. F. (1972). Cumulative record (3rd ed.). New York: Appleton-Century-Crofts.
Skinner, B. F. (1974). About behaviorism. New York: Alfred A. Knopf.
Skinner, B. F. (1976). Particulars of my life. New York: Alfred A. Knopf.
Skinner, B. F. (1979). The shaping of a behaviorist. New York: Alfred A. Knopf.
12
WHATEVER HAPPENED TO
LITTLE ALBERT?
BENJAMIN HARRIS
Almost 60 years after it was first reported, Watson and Rayner’s
(1920) attempted conditioning of the infant Albert B. is one of the most
widely cited experiments in textbook psychology. Undergraduate textbooks
of general, developmental, and abnormal psychology use Albert’s conditioning to illustrate the applicability of classical conditioning to the development and modification of human emotional behavior. More specialized books focusing on psychopathology and behavior therapy (e.g.,
Eysenck, 1960) cite Albert’s conditioning as an experimental model of
psychopathology (i.e., a rat phobia) and often use Albert to introduce a
discussion of systematic desensitization as a treatment of phobic anxiety.
Unfortunately, most accounts of Watson and Rayner’s research with
Albert feature as much fabrication and distortion as they do fact. From
information about Albert himself to the basic experimental methods and
results, no detail of the original study has escaped misrepresentation in the
telling and retelling of this bit of social science folklore.

Author note: Preparation of this article was aided by the textbook and literature searches of Nancy Kinsey, the helpful comments of Mike Wessels, and the bibliographic assistance of Cedric Larson. The author also thanks Bill Woodward and Ernest Hilgard for their comments on earlier versions of this work. Reprinted from American Psychologist, 34, 151-160 (1979). Copyright © 1979 by the American Psychological Association. Reprinted with permission of the author. http://dx.doi.org/10.1037/10421-012. From Evolving Perspectives on the History of Psychology, edited by W. E. Pickren and D. A. Dewsbury. Copyright © 2002 American Psychological Association.
There has recently been a revival of interest in Watson’s conditioning
research and theorizing (e.g., MacKenzie, 1972; Seligman, 1971; Weimer
& Palermo, 1973; Samelson, Note 1), and in the mythology of little Albert
(Cornwell & Hobbs, 1976; Larson, 1978; Prytula, Oster, & Davis, 1977).
However, there has yet to be a complete examination of the methodology
and results of the Albert study and of the process by which the study’s
details have been altered over the years. In the spirit of other investigations
of classic studies in psychology (e.g., Ellenberger, 1972; Parsons, 1974) it
is time to examine Albert’s conditioning in light of current theories of
learning. It is also time to examine how the Albert study has been portrayed over the years, in the hope of discovering how changes in psychological theory have affected what generations of psychologists have told
each other about Albert.
THE EXPERIMENT
As described by Watson and Rayner (1920), an experimental study
was undertaken to answer three questions: (1) Can an infant be conditioned to fear an animal that appears simultaneously with a loud, fear-arousing sound? (2) Would such fear transfer to other animals or to inanimate objects? (3) How long would such fears persist? In attempting to
answer these questions, Watson and Rayner selected an infant named Albert B., whom they described as “healthy,” and “stolid and unemotional”
(p. 1). At approximately 9 months of age, Albert was tested and was judged
to show no fear when successively observing a number of live animals (e.g.,
a rat, a rabbit, a dog, and a monkey), and various inanimate objects (e.g.,
cotton, human masks, a burning newspaper). He was, however, judged to
show fear whenever a long steel bar was unexpectedly struck with a claw
hammer just behind his back.
Two months after testing Albert’s apparently unconditioned reactions
to various stimuli, Watson and Rayner attempted to condition him to fear
a white rat. This was done by presenting a white rat to Albert, followed
by a loud clanging sound (of the hammer and steel bar) whenever Albert
touched the animal. After seven pairings of the rat and noise (in two
sessions, one week apart), Albert reacted with crying and avoidance when
the rat was presented without the loud noise.
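
To make the acquisition procedure concrete, the sketch below simulates this kind of trial-by-trial pairing with a simple delta-rule (Rescorla-Wagner-style) learning model. The model postdates Watson and Rayner's work and is offered only as an illustration; the learning rate, the asymptote, and the mapping of "associative strength" onto Albert's fear response are all assumptions, not quantities drawn from the 1920 report.

# Illustrative sketch only: a delta-rule (Rescorla-Wagner-style) model of
# acquisition across repeated CS (rat) - US (loud noise) pairings.
# ALPHA_BETA and LAMBDA_MAX are invented parameters, not values estimated
# from Watson and Rayner (1920).

ALPHA_BETA = 0.35  # combined salience/learning-rate parameter (assumed)
LAMBDA_MAX = 1.0   # asymptotic associative strength (assumed)

def acquisition(n_trials: int) -> list[float]:
    """Associative strength V after each pairing: V <- V + ab * (lambda - V)."""
    v, history = 0.0, []
    for _ in range(n_trials):
        v += ALPHA_BETA * (LAMBDA_MAX - v)
        history.append(v)
    return history

if __name__ == "__main__":
    # Seven pairings, matching the number of pairings reported for Albert.
    for trial, strength in enumerate(acquisition(7), start=1):
        print(f"trial {trial}: V = {strength:.2f}")

Under these assumed parameters the curve is negatively accelerated and nears its asymptote by the seventh pairing; nothing in the sketch should be read as a quantitative fit to Albert's behavior.
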
In order to test the generalization of Albert’s fear response, 5 days
later he was presented with a rat, a set of familiar wooden blocks, a rabbit,
a short-haired dog, a sealskin coat, a package of white cotton, the heads
of Watson and two assistants (inverted so that Albert could touch their
hair), and a bearded Santa Claus mask. Albert seemed to show a strong
fear response to the rat, the rabbit, the dog, and the sealskin coat; a “negative” response to the mask and Watson’s hair; and a mild response to the
cotton. Also, Albert played freely with the wooden blocks and the hair of
Watson’s assistants.
After an additional 5 days, Watson reconditioned Albert to the rat
(one trial, rat paired with noise) and also attempted to condition Albert
directly to fear the previously presented rabbit (one trial) and dog (one
trial). When the effects of this procedure were tested in a different, larger
room, it was found that Albert showed only a slight reaction to the rat,
the dog, and the rabbit. Consequently, Watson attempted to “freshen the
reaction to the rat” (p. 9) by presenting it with the loud noise. Soon after
this, the dog began to bark loudly at Albert, scaring him and the experimenters and further confounding the experiment.
To answer their third question concerning the permanence of conditioned responses over time, Watson and Rayner conducted a final series
of tests on Albert after 31 days of neither conditioning nor extinction
trials. In these tests, Albert showed fear when touching the Santa Claus
mask, the sealskin coat, the rat, the rabbit, and the dog. At the same time,
however, he initiated contact with the coat and the rabbit, showing “strife
between withdrawal and the tendency to manipulate” (Watson & Rayner,
1920, p. 10). Following these final tests, Albert’s mother removed him from
the hospital where the experiment had been conducted. (According to
their own account, Watson and Rayner knew a month in advance the day
that Albert would no longer be available to them.)
THE CONTEXT OF WATSON AND RAYNER’S STUDY
What was the relationship of the Albert experiment to the rest of
Watson’s work? On a personal level, this work was the final published
project of Watson’s academic career, although he supervised a subsequent,
related study of the deconditioning of young children’s fears (M. C. Jones,
1924a, 1924b). From a theoretical perspective, the Albert study provided
an empirical test of a theory of behavior and emotional development that
Watson had constructed over a number of years.
Although Watson had publicly declared himself a “behaviorist” in
early 1913, he apparently did not become interested in the conditioning
of motor and autonomic responses until late 1914, when he read a French
edition of Bekhterev’s Objective Psychology (see Hilgard & Marquis, 1940).
By 1915, Watson’s experience with conditioning research was limited to
this reading and his collaboration with his student Karl Lashley in a few
simple studies. Nevertheless, Watson’s APA Presidential Address of that
year made conditioned responses a key aspect of his outline of behaviorism
and seems to have been one of the first American references to Bekhterev’s
work (Hilgard & Marquis, 1940, p. 24; Koch, 1964, p. 9; Watson, 1916b).
Less than a year after his APA address, two articles by Watson (1916a,
1916c) were published in which he hypothesized that both normal defense
mechanisms and psychiatric disorders (e.g., phobias, tics, hysterical symptoms) could be understood on the basis of conditioning theory.
Six months later, the American Journal of Psychology featured a more
extensive article by Watson and J. J. B. Morgan (1917) that formulated a
theory of emotion, intended to serve both experimentalists and clinicians.
Its authors hypothesized that the fundamental (unlearned) human emotions were fear, rage, and love; these emotions were said to be first evoked
by simple physical manipulations of infants, such as exposing them to loud
sounds (fear) or restricting their movements (rage). Concurrently, they hypothesized that “the method of conditioned reflexes” could explain how
these basic three emotions become transformed and transferred to many
objects, eventually resulting in the wide range of adult emotions that is
evoked by everyday combinations of events, persons, and objects. In support of these theoretical ideas, Watson and Morgan began to test whether
infants’ fears could be experimentally conditioned, using laboratory analogues of thunder and lightning. In the description of this work and the
related theory, a strong appeal was made for its practical importance, stating
that it could lead to a standard experimental procedure for “bringing the
human emotions under experimental control” (p. 174).
By the early months of 1919, Watson appears not yet to have found
a reliable method for experimentally eliciting and extinguishing new emotional reactions in humans. However, by this time he had developed a
program of research with infants to verify the existence of his hypothesized
three fundamental emotions. Some early results of this work were described
in May 1919, as part of a lengthy treatise on both infant and adult emotions. Anticipating his work with Albert,[1] Watson (1919b) for the first
time applied his earlier principles of emotional conditioning to children’s
fears of animals. Based on a case of a child frightened by a dog that he
had observed, Watson hypothesized that although infants do not naturally
fear animals, if “one animal succeeds in arousing fear, any moving furry
animal thereafter may arouse it” (p. 182). Consistent with this hypothesis,
the results of Watson and Rayner’s experiments with Albert were reported
9 months later.

[1] In tracing the development of Watson’s ideas about conditioning, it would be helpful to know whether the experiments with Albert had already begun when Watson wrote his 1919 Psychological Review article. Unfortunately, there is no hard evidence of exactly when the Albert study was completed. Watson and Rayner’s original report was published in the February 1920 Journal of Experimental Psychology, suggesting that the research was completed in 1919. Also, M. C. Jones (1975, Note 2) remembers that Watson lectured about Albert as early as the spring of 1919 and showed a film of his work with infants at the Johns Hopkins University (Watson, 1919a). Individual frames of this film published later (“Behaviorist Babies,” 1928; “Can Science Determine Your Baby’s Career Before It Can Talk?,” 1922; Watson, 1927, 1928a) suggest that at some date this film contained footage of Albert’s conditioning. Since the work with Albert lasted for approximately 4 months, there seems to be a strong possibility that Watson’s 1919 prediction was not entirely based on theoretical speculation.
Although Watson’s departure from Johns Hopkins prematurely ended
his own research in 1920, he continued to write about his earlier findings,
including his work with Albert. In 1921, he and Rayner (then Rosalie
Rayner Watson) summarized the results of their interrupted infant research
program, concluding with a summary of their experience with Albert. Although this was a less complete account than their 1920 article, it was the
version that was always referenced in Watson’s later writings. These writings included dozens of articles in the popular press (e.g., Watson, 1928b,
1928c), the books Behaviorism (1924) and Psychological Care of Infant and
Child (1928a), and a series of articles in Pedagogical Seminary (Watson,
1925a, 1925b, 1925c).
Many of these articles retold the Albert story, often
with photographs and with added comments elaborating on the lessons of
this study.
INTRODUCTORY-LEVEL TEXTBOOK VERSIONS OF ALBERT
A selective survey of textbooks[2] used to introduce students to general,
developmental, and abnormal psychology revealed that few books fail to
refer to Watson and Rayner’s (1920) study in some manner. Some of these
accounts are completely accurate (e.g., Kennedy, 1975; Page, 1975; Whitehurst & Vasta, 1977). However, most textbook versions of Albert’s conditioning suffer from inaccuracies of various degrees. Relatively minor details that are misrepresented include Albert’s age (Calhoun, 1977; Johnson
& Medinnus, 1974), his name (Galanter, 1966), the spelling of Rosalie
Rayner’s name (e.g., Biehler, 1976; Helms & Turner, 1976; McCandless &
Trotter, 1977; Papalia & Olds, 1975), and whether Albert was initially
conditioned to fear a rat or a rabbit (CRM Books, 1971; Staats, 1968).
Of more significance are texts’ misrepresentations of the range of Albert’s postconditioning fears and of the postexperimental fate of Albert.
The list of spurious stimuli to which Albert’s fear response is claimed to
have generalized is rather extensive. It includes a fur pelt (CRM Books,
1971), a man’s beard (Helms & Turner, 1976), a cat, a pup, a fur muff
(Telford & Sawrey, 1968), a white furry glove (Whittaker, 1965), Albert’s
aunt, who supposedly wore fur (Bernhardt, 1953), either the fur coat or
the fur neckpiece of Albert’s mother (Hilgard, Atkinson, & Atkinson,
1975; Kisker, 1977; Weiner, 1977), and even a teddy bear (Boring, Langfeld, & Weld, 1948). In a number of texts, a happy ending has been added
to the story by the assertion that Watson removed (or “reconditioned”)
Albert’s fear, with this process sometimes described in detail (Engle & Snellgrove, 1969; Gardiner, 1970; Whittaker, 1965).

[2] After this survey of texts was completed, similar reviews by Cornwell and Hobbs (1976) and by Prytula et al. (1977) were discovered. Interested readers should consult these articles for lists of additional textbook errors.
What are the causes of these frequent errors by the authors of undergraduate textbooks? Prytula et al. (1977) catalogued similar mistakes
but offered little explanation of their source. Cornwell and Hobbs (1976)
suggested that such distortions, if not simply due to overreliance on secondary sources, can be generally seen as authors’ attempts to paint the
Albert study (and Watson) in a more favorable light and to make it believable to undergraduates. Certainly, many of the common errors are consistent with a brushed-up image of Watson and his work. For example, not
one text mentions that Watson knew when Albert would leave his control, a detail that might make Watson and Rayner’s failure to recondition
Albert seem callous to some modern readers.
However, there are other reasons for such errors besides textbooks’
tendencies to tell ethically pleasing stories that are consistent with students’
common sense. One major source of confusion about the Albert story is
Watson himself, who altered and deleted important aspects of the study in
his many descriptions of it. For example, in the Scientific Monthly description of the study (Watson & Watson, 1921), there is no mention of the
conditioning of Albert to the dog, the rabbit, and the rat that occurred at
11 months 20 days; thus Albert’s subsequent responses to these stimuli can
be mistaken for a strong generalization effect (for which there is little
evidence). A complementary and equally confusing omission occurs in Psychological Care of Infant and Child (Watson, 1928a). There, Watson begins
his description of the Albert study with Albert’s being conditioned to a
rabbit (apparently the session occurring at 11 months 20 days). As a result,
the reader is led to believe that Albert’s fear of a rat (a month later) was
the product of generalization rather than the initial conditioning trials.
Besides these omissions, Watson and Rayner (1920) also made frequent
editorial comments, such as the assertion that fears such as Albert’s were
“likely to persist indefinitely, unless an accidental method for removing
them is hit upon” (p. 12). Given such comments, it is understandable that
one recent text overestimates the duration of the Albert experiment by
300% (Goldenberg, 1977), and another states that Albert’s “phobia became
resistant to extinction” (Kleinmuntz, 1974, p. 130).
A second reason for textbook authors’ errors, it seems, is the desire
of many of us to make experimental evidence consistent with textbook
theories of how organisms should act. According to popular versions of
learning theory (as described by Herrnstein, 1977), organisms’ conditioning
should generalize along simple stimulus dimensions; many textbooks list
spurious fear-arousing stimuli (for Albert) that correspond to such dimensions. To illustrate the process of stimulus generalization, Albert is often
said to have feared every white, furry object, although he actually showed
fear mostly of nonwhite objects (the rabbit, the dog, the sealskin coat,
Watson’s hair), and did not even fear everything with hair (the observers).
But to fit a more simplified view of learning, either new stimuli appear in
some texts (e.g., a white rabbit, a white glove) or it is simply asserted that
Albert’s conditioning generalized to all white and furry (or hairy) stimuli
(see Biehler, 1976; Craig, 1976; Helms & Turner, 1976). Though it might
seem as if Albert’s fear did generalize to the category of all animate objects
with fur (e.g., the rabbit) or short hair (e.g., Watson’s head), this is impossible to show conclusively. The only experimental stimuli not fitting
this category were the blocks and the observers’ hair. Apparently the blocks
were a familiar toy (thus not a proper stimulus), and Albert’s familiarity
with the observers is not known (although we may guess that one might
have been his mother).
BEHAVIOR THERAPISTS’ VIEWS OF ALBERT
Unfortunately, misrepresentations of Watson and Rayner’s (1920)
work are not confined to introductory-level texts. For proponents of behavioral therapies, Albert’s conditioning has been a frequently cited reference, although its details have often become altered or misinterpreted.
Joseph Wolpe, for example, is well known for his conditioning-anxiety
model of phobias and his treatment of various neurotic disorders by what
was originally termed “reciprocal inhibition” (Wolpe, 1958). According to
Wolpe and Rachman (1960):
Phobias are regarded as conditioned anxiety (fear) reactions. Any
“neutral” stimulus, simple or complex, that happens to make an impact
on an individual at about the time that a fear reaction is evoked acquires the ability to evoke fear subsequently. (p. 145)
In support of this model Wolpe and Rachman cited the Albert study
to “indicate that it is quite possible for one experience to induce a phobia”
(p. 146). Also, Eysenck (1960) asserted that “Albert developed a phobia
for white rats and indeed for all furry animals” (p. 5). Similar interpretations of Watson and Rayner’s (1920) experiment were found in subsequent
writings by Wolpe and other behavior therapists (e.g., Rachman, 1964;
Sandler & Davidson, 1971; Ullman & Krasner, 1965; Wolpe, 1973).
Critical reading of Watson and Rayner’s (1920) report reveals little
evidence either that Albert developed a rat phobia or even that animals
consistently evoked his fear (or anxiety) during Watson and Rayner’s
(1920) experiment. For example, 10 days after the completion of the initial
(seven-trial) conditioning to a white rat, Albert received an additional trial
of conditioning to the same rat. Immediately following this, his reaction
to the rat was described as: “Fell over to the left side, got up on all fours
and started to crawl away. On this occasion there was no crying, but strange
to say, as he started away he began to gurgle and coo, even while leaning
far over to the left side to avoid the rat” (p. 7).
On the same day as this, Albert received a trial of conditioning to
the rabbit he had seen previously (using the clanging steel bar). When
shown the rabbit twice again, he whimpered but did not cry. Immediately
after this, his reactions were tested in a different (larger) room. When
shown the rabbit, Albert’s response was described as: “Fear reaction slight.
Turned to left and kept face away from the animal but the reaction was
never pronounced” (p. 9).
Finally, 31 days later and after having received an additional conditioning trial to the rat at the end of the preceding session, Albert’s reactions to the (same) rat were:
He allowed the rat to crawl towards him without withdrawing. He sat
very still and fixated intently. Rat then touched his hand. Albert withdrew it immediately, then leaned back as far as possible but did not
cry. When the rat was placed on his arm he withdrew his body and
began to fret, nodding his head. The rat was then allowed to crawl
against his chest. He first began to fret and then covered his eyes with
both hands. (p. 11)
Not only does Albert’s response seem lacking in the strength that we
associate with phobia (possibly due to Watson’s alternation of acquisition
and extinction trials) but on a qualitative basis it seems unlike the classically conditioned anxiety on which some behavior therapists base their
theoretical models of phobias.
Of course, it might be argued by proponents of a two-factor theory
of phobias that Albert’s reactions to the rat and the rabbit were successful
escape responses from the anxiety-arousing stimuli, thus explaining Albert’s
relative calm (no rapid breathing, crying, etc.). However, Albert did not
consistently avoid the animals to which he was conditioned. On his final
day of testing, for example, Albert initially did not avoid the rabbit to
which he had been conditioned; he then attempted to avoid it, but then
“after about a minute he reached out tentatively and slowly and touched
the rabbit’s ear with his right hand, finally manipulating it” (Watson &
Rayner, 1920, p. 11).[3]
[3] Another model that has been applied to the Albert study is that of operant or instrumental conditioning. For example, Larson (1978) and Reese and Lipsitt (1970) cited a paper by R. M. Church (Note 3) on this point (see also Kazdin, 1978). Such an interpretation is apparently based on Watson’s notes indicating that at least for the first two trials, the loud noise was contingent on Albert’s active response (i.e., touching the rat). Also, the one trial of conditioning to the rabbit occurred when Albert had begun “to reach out and manipulate its fur with forefingers” (Watson & Rayner, 1920, p. 8). The attractiveness of an (aversive) instrumental model of Albert’s conditioning is that it would not necessarily predict any emotional reaction by Albert and would help explain his reluctance to touch the experimental animals. Strong support for this model is lacking, however, with Watson and Rayner describing at least four conditioning trials on which the loud sound was not contingent on Albert’s instrumental response, and a number of trials the character of which is uncertain.
A more serious problem with clinicians’ citing of the Albert study is
the failure of Watson’s contemporaries to replicate his work. Although
H. E. Jones (1930) subsequently demonstrated persistent galvanic skin response (GSR) conditioning with an infant (using a mild electric current
as an unconditioned stimulus, and a light and various sounds as conditioned stimuli), attempts to replicate the Albert study using variations of
Watson’s own method were unsuccessful. Valentine (1930), for example,
used extensive naturalistic observation and failed to find conditioned fear
of infants to loud noises; he criticized both Watson’s methodology and his
simplistic theory of emotional development. Bregman (1934) was also unsuccessful in her attempts to condition even 1 of 15 infants to fear wooden
and cloth objects, using a disagreeable noise as an unconditioned stimulus
(see Thorndike, 1935). Finally, whatever our retrospective view of Albert’s
conditioned reactions, a conditioned-avoidance model of phobias (with
fear as a necessary component) is not consistent with more recent experimental and clinical literature (see Costello, 1970; Hineline, 1977; Marks,
1969, 1977).
ALBERT AND PREPAREDNESS THEORY
One of the reasons that Albert is so well known is that he is rediscovered every 5 or 10 years by a new group of psychologists. In the early
1960s, Wolpe and Eysenck were the curators and analysts of the Albert
myth. Ten years later, Wolpe and Eysenck were supplanted by M. E. P.
Seligman, who has seized control of the Albert story and uses it (in slightly
revised form) to attack the views of its former proponents. At the same
time, Seligman both challenges traditional theories of learning and proposes his own reformulation, known as “preparedness theory.”
Briefly stated, preparedness theory (Seligman, 1970, 1971; see also
Schwartz, 1974) posits that traditionally held laws of learning cannot be
uniformly applied to all stimuli interacting with all organisms. In a classical
conditioning paradigm, organisms may be physiologically or cognitively
“prepared” to form certain conditioned stimulus-unconditioned stimulus
associations and “contraprepared” to develop others. In the former case
(e.g., rats learning taste aversion to food causing illness) the association is
easily formed, but in the latter case (e.g., rats learning taste aversion to food paired with footshock) it is difficult if not impossible to form. Similarly, Seligman (1970) summarized evidence from instrumental-learning
paradigms to suggest that for a particular organism, certain behaviors differ
in their potential to be successfully conditioned (see Shettleworth, 1973).
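
To see what this claim amounts to in associative terms, the brief sketch below treats preparedness as nothing more than a difference in learning rate within the same delta-rule update used in the earlier sketch. Both rates are invented for illustration and carry no empirical weight; the stimulus labels in the comments are only placeholders for "prepared" and "contraprepared" classes.

# Illustrative sketch only: "preparedness" modeled as a difference in
# learning rate. Both rates below are invented assumptions, not values
# drawn from Seligman (1970, 1971) or any experiment.

def delta_rule(rate: float, n_trials: int, asymptote: float = 1.0) -> float:
    """Associative strength after n_trials of the update V <- V + rate * (asymptote - V)."""
    v = 0.0
    for _ in range(n_trials):
        v += rate * (asymptote - v)
    return v

if __name__ == "__main__":
    trials = 7
    prepared = delta_rule(rate=0.50, n_trials=trials)        # e.g., snakes (assumed rate)
    contraprepared = delta_rule(rate=0.02, n_trials=trials)  # e.g., a wooden duck (assumed rate)
    print(f"prepared stimulus after {trials} trials:       V = {prepared:.2f}")
    print(f"contraprepared stimulus after {trials} trials: V = {contraprepared:.2f}")

On such a reading, a prepared stimulus nears asymptote within a few pairings while a contraprepared one barely moves, which is the pattern Seligman's account requires; whether the Albert data actually show that pattern is the question taken up below.
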
Relevant to Albert, Seligman (1971) hypothesized that the strength
of human phobic reactions (i.e., their resistance to extinction) is due to
the high degree of preparedness of certain stimuli (e.g., snakes). This conditioning to phobic objects occurs very quickly, whereas conditioning to
other stimuli (assumed to be of low preparedness or contraprepared) results
in fear reactions that are less intense, take longer to establish, and extinguish more quickly. As Marks (1977) noted, there is some evidence that
objects differ in their ability to produce conditioned GSR in humans over
time (e.g., Öhman, Erixon, & Löfberg, 1975). It also makes sense that
evolution may have made it easier for humans to learn some responses
than others (see Herrnstein, 1977). However, much of Seligman’s (1971)
discussion of human phobias is based on an erroneous interpretation of
Watson and Rayner’s (1920) work.
As described in his article “Phobias and preparedness,” Seligman’s
version of Albert’s conditioning is generally consistent with the exaggerated claims for the study made by Watson (e.g., Watson, 1924). According
to preparedness theory, the existence of strong animal phobias in the human clinical literature is evidence that “furry things” (Seligman, 1971,
p. 315) are strongly prepared phobic stimuli for humans. If furry things are
highly prepared and Watson and Rayner (1920) used furry things in their
study, then Albert must have quickly developed a strong fear of animals
and other furry things. Consistent with this logic is Seligman’s (1971)
assertion that “Albert became afraid of rats, rabbits, and other furry objects” (p. 308, italics added) and that Watson “probably did not” become
an aversive stimulus to Albert. In fact, Albert was “completely negative”
to Watson’s hair (Watson & Rayner, 1920, p. 7), and of course, Albert’s
fear was only tested to a single rat, a single rabbit, and to no previously
neutral, nonfurry objects.
In addition to presenting this inaccurate picture of how Albert’s fear
initially generalized, Seligman’s account also misrepresents the ease with
which Albert was conditioned, the durability of his reactions, and the
details of an attempt to replicate the Albert study. According to Seligman
(1971), Albert’s “conditioning occurred in two trials” and this “prepared
fear conditioning [did] not extinguish readily” (p. 315). In fact, “seven
joint stimulations were given [to Albert] to bring about the complete reaction” (Watson & Rayner, 1920, p. 5), and there is little if any evidence
either that the reactions of Albert were resistant to a formal extinction
procedure (or to the passage of time) or that he was tested with valid
contraprepared stimuli. Further, in describing a similar study that actually
used a contraprepared stimulus (a wooden duck), Seligman erred in his
statement that the experimenter “did not get fear conditioning to a wooden
duck, even after many pairings with a startling noise” (1971, p. 315). In
fact, the experimenter himself admitted that his failure was due to the
inadequacy of his unconditioned stimulus, not to the inappropriateness of
a wooden duck as a phobic stimulus:
We did not succeed in establishing a conditional fear response to the
duck for the simple reason that the noise failed to evoke fear. Once
only in something over fifty trials did the child exhibit what might be
called a worried look. (English, 1929, p. 222)
One can understand how the Albert study could be selectively misperceived by Seligman, since the errors that he committed result in a historical account that provides more support for the predictions of his preparedness theory than does (subsequent) clinical observation (DeSilva,
Rachman, & Seligman, 1977; Rachman & Seligman, 1976). It seems ironic
that in making his case for the new theory of preparedness, Seligman first
had to strengthen the old Watsonian interpretation of the Albert study:
that it was a successful laboratory demonstration of fear conditioning, its
generalization, and resistance to extinction.
CONCLUSIONS
What can be deduced from reviewing the many versions of Watson
and Rayner’s study of Albert? One somewhat obvious conclusion is that
we should be extremely wary of secondhand (and more remote) accounts
of psychological research. As Cornwell and Hobbs (1976) suggested, this
may be most relevant to often-cited studies in psychology, since we may
be more likely to overestimate our knowledge of such bulwarks of textbook
knowledge.
What about the process by which secondary sources themselves come
to err in their description of classic studies? A simple explanation might
assume that more recent authors, like any recipients of secondhand information (e.g., gossip), are more likely to present an account of much-cited
research that has “drifted well away from the original” (Cornwell &
Hobbs, 1976, p. 9). For the Albert study at least, this relatively passive
model of communication is an oversimplified view. Not only was Watson
quick to actively revise his own description of his research (e.g., Watson,
1928a; Watson & Watson, 1921) but it took little time for textbook authors to alter the details of Albert’s conditioning. For example, within a
year after Watson’s original article, one text (Smith & Guthrie, 1921) had
already invented spurious stimuli to which Albert’s initial fear generalized;
such errors were also contained in early texts by H. L. Hollingworth (1928)
and J. W. Bridges (1930).
There has undoubtedly been some distortion due to the simple retelling of the Watson and Rayner study, but a more dynamic influence on
textbook accounts seems to have been the authors’ opinions of behaviorism
as a valid theoretical viewpoint. For example, the agreement of Harvey
Carr’s (1925) text with Watson’s overgeneralizations about Albert was consistent with Carr’s (1915) relatively favorable review of Watson’s early
work. Similarly, as behaviorism’s influence grew, even relative skeptics seem
to have been willing to devote more attention to the Albert study. For
example, the fourth edition of Robert S. Woodworth’s (1940) text, Psychology, mentioned that Albert’s “conditioned fear was ‘transferred’ from
the rat to similar objects” (p. 379), though the previous edition of the text
(Woodworth, 1934) did not mention this generalization and was more
critical of Watson’s theory of emotional development. Woodworth’s 1934
text also had Albert initially conditioned to a rabbit, while the 1940 one
correctly described the conditioned stimulus of a rat. This greater accuracy
in Woodworth’s later account is an indication of at least one author’s ability
to resist any general drift toward increasing misinformation.
Any attempted explanation of textbook errors concerning Albert
raises the question of the role of classic studies and the nature of historical
changes in psychology. As discussed by Samelson (1974) and Baumgardner
(1977), modern citations of classic studies can often be seen as attempts
by current theorists to build a false sense of continuity into the history of
psychology. In social psychology, for example, claiming Auguste Comte as
a founder of the field (see Allport, 1968) gives the impression that our
contemporary motives (especially the wish for a well-developed behavioral
science) have directed the field’s progress for almost a century (Samelson,
1974). To cite another classic “origin,” the Army’s psychological testing
program during World War I is taken by some clinical psychologists as an
early example of how the profession of psychology has always grown in
relation to its increased usefulness. However, it has recently been shown
that World War I intelligence testing was of little practical use at the time
(Samelson, 1977).
In reviewing these classic studies or origin myths in psychology, it
should be emphasiz...