PSY 7711 Seminole State College Single Case Experimental Designs Discussion

Description

In this unit, you learned about alternating treatment designs. You were shown the strengths and limitations of this design and were provided a graphing tutorial to better understand its use.


For this discussion, compare and contrast the hypothetical examples of alternating treatment designs provided on page 244 of your Single Case Experimental Designs text and on page 59 of your Behavior Modification text.

  • Is this difference of opinion significant?
  • Under what conditions would you use one over the other?

Unformatted Attachment Preview

Page 244 of your Single Case Experimental Designs text:

Given that the experimenter is interested in comparing two or more treatments rather than in simply showing a trend toward improvement over time, the experimenter does not merely plot the data by connecting data points for Weeks 1, 2, 3, and so on. Instead, the experimenter examines the data by connecting all the data points measuring the effects of Treatment A and then connects all the data points measuring the effects of Treatment B. If, over time, these two series of data points separate (e.g., Treatment B produces greater improvement than Treatment A), then one could say with some certainty that one treatment is more effective. Naturally, these results would then require replication on additional clients with the same problem. Such hypothetical data are plotted in Figure 8.1 for a client who was treated and assessed weekly.

FIGURE 8.1 Hypothetical example of an ATD comparing treatments A and B.

The experimenter should not proceed in a simple A-B-A-B-A-B-A-B fashion, but rather should randomize the order of introduction of the treatments to control for sequential confounding, or the possibility that introducing Treatment A first might bias the results in favor of Treatment A. Notice in the hypothetical data presented in Figure 8.1 that Treatments A and B are introduced in a relatively random fashion. Thus, a clinician treating a client using this approach might administer the treatments in an A-B-B-A-B-A-A-B fashion, as in the hypothetical data. For a client in an office setting, these treatment occasions might be twice a week, with the experiment taking a total of 4 weeks. For a child in a school setting, one might alternate treatments 4 times a day, and the experiment would be completed in a total of 2 days. Introducing and alternating treatments in a random fashion, as well as other procedural considerations, will be discussed more fully in section 8.2.

In summary, this design requires the comparison of two separate series of data points. For this reason, this experimental design has also been described as falling within a general strategy referred to as between-series, where one is comparing results between two separate series of data points. On the other hand, A-B-A withdrawal designs, described in chapters 5 and 6, look at data within the same series of data points, and therefore the strategy has been described as within-series (see Hayes, Barlow, & Nelson-Gray, 1999).

History and terminology

The ATD strategy has been used for many years; however, a confusing array of terminology has delayed widespread understanding and use of this design. In the first edition of this book, we termed this strategy a multiple schedule design. Others have termed the same design a multielement baseline design, a randomization design, and a simultaneous treatment design. Below we provide a brief historical review of the different terms used to describe the ATD strategy so that the reader can be clear on how and why these different terms have been used. Sidman (1960) initially used the term multi-element manipulation to describe this design. Following this early work, some researchers have continued to use the term multi-element design when referring to this strategy (e.g., Neef, McCord, & Ferreri, 2006). Others have used the term multiple schedule to refer to this design (e.g., Agras et al., 1969; Leitenberg, 1973).
These procedures and terminology were derived directly from basic behavioral research laboratories in which researchers attempted to apply operant conditioning procedures to the treatment of human behavioral problems. Thus, the term multiple schedule implies not only a distinct reinforcement schedule as one of the treatments, but also a distinct stimulus that allows subjects to discriminate when each of the two or more conditions will be in effect. However, in recent years it has become clear that signs or signals functioning as discriminative stimuli (SD) are either an inherent part of the treatment, and therefore require no further consideration, or are not needed. For example, alternating a pharmacological agent with a placebo in an ATD would be perfectly legitimate, but each drug would not require a discriminative stimulus. In fact, this would be undesirable; hence the usual double-blind experimental strategies in drug research (see chapter 6). For this reason, the more appropriate analogy within the basic behavioral laboratories would be a mixed schedule rather than a multiple schedule, since a mixed schedule does not have discriminative stimuli. The term schedule implies a distinct reinforcement schedule associated with each treatment. Although some studies, particularly those testing operant conditioning procedures, do indeed use multiple schedules of reinforcement (e.g., Hagopian, Bruzek, Bowman, & Jennett, 2007; Tiger, Hanley, & Heal, 2006), many and perhaps most specific treatments under investigation do not contain different schedules of reinforcement, and so the terms multiple schedule and mixed schedule are not really appropriate more generally.

Page 59 of your Behavior Modification text:

Alternating-Treatments Design

The alternating-treatments design (ATD), also called a multi-element design, differs from the research designs just reviewed in that baseline and treatment conditions (or two treatment conditions) are conducted in rapid succession and compared with each other. For example, treatment is implemented on one day, baseline the next day, treatment the next day, baseline the next day, and so on. In the A-B, A-B-A-B, or multiple-baseline designs, a treatment phase occurs after a baseline phase has been implemented for a period of time; that is, baseline and treatment occur sequentially. In these designs, a baseline or treatment phase is conducted until a number of data points are collected (usually at least three) and there is no trend in the data. A trend means the data are increasing or decreasing across a phase. In the ATD, two conditions (baseline and treatment, or two different treatments) occur during alternating days or sessions. Therefore, the two conditions can be compared within the same time period. This is valuable because any extraneous variables would have a similar effect on both conditions, and thus an extraneous variable could not be the cause of any differences between conditions.

Consider the following example of an ATD. A teacher wants to determine whether violent cartoons lead to aggressive behavior in preschool children. The teacher uses an ATD to demonstrate a functional relationship between violent cartoons and aggressive behavior. On one day, the preschoolers do not watch any cartoons (baseline) and the teacher records the students' aggressive behavior. The next day, the students watch a violent cartoon and the teacher again records their aggressive behavior.
The teacher continues to alternate a day with no cartoons and a day with cartoons. After a few weeks, the teacher can determine whether a functional relationship exists. If there is consistently more aggressive behavior on cartoon days and less aggressive behavior on no-cartoon days, the teacher has demonstrated a functional relationship between violent cartoons and aggressive behavior in the preschoolers. An example of a graph from this hypothetical ATD is shown in Figure 3-15.

FIGURE 3-15 This alternating-treatments design shows the frequency of aggressive behavior on days when children watched violent cartoons compared with days when they did not watch cartoons. The level of the aggressive behavior is greater on days with violent cartoons than on days with no cartoons.

In this graph, the number of aggressive behaviors occurring per day is graphed on days when the children watched violent cartoons (odd-numbered days) and on days when they did not (even-numbered days). Notice that the aggressive behavior occurs more frequently on days when the children watched cartoons. We would say there is separation in the data when the data are consistently higher in one condition than the other. Because the aggressive behavior is always greater on cartoon days (there is separation in the data), the researchers have demonstrated a functional relationship and conclude that the aggressive behavior occurred as a function of watching violent cartoons.
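To connect the two excerpts above to the graphing tutorial mentioned in this unit, here is a minimal sketch of how such an ATD graph can be produced. It assumes Python with matplotlib; the randomized A/B order, the weekly scores, and all variable names are invented for illustration only and are not taken from either text.

```python
# Minimal sketch (illustrative only): plot hypothetical ATD data as two
# between-series lines and check for separation between the conditions.
# All values below are invented; nothing here comes from either textbook.
import random
import matplotlib.pyplot as plt

random.seed(1)

# Randomize the order in which Treatments A and B are introduced
# (e.g., A-B-B-A-B-A-A-B) to control for sequential confounding.
order = ["A"] * 4 + ["B"] * 4
random.shuffle(order)

# Hypothetical weekly outcome scores for each treatment (higher = better).
scores = {"A": [3, 4, 4, 5], "B": [5, 7, 8, 9]}

# Rebuild the week-by-week sequence, then connect all A points together
# and all B points together (a between-series comparison).
series = {"A": ([], []), "B": ([], [])}
counts = {"A": 0, "B": 0}
for week, treatment in enumerate(order, start=1):
    value = scores[treatment][counts[treatment]]
    counts[treatment] += 1
    series[treatment][0].append(week)   # x: week of that session
    series[treatment][1].append(value)  # y: outcome for that session

plt.plot(*series["A"], "o-", label="Treatment A")
plt.plot(*series["B"], "s--", label="Treatment B")
plt.xlabel("Week")
plt.ylabel("Outcome measure")
plt.title("Hypothetical ATD: between-series comparison")
plt.legend()
plt.show()

# "Separation in the data": every B value exceeds every A value.
print("Separation:", min(series["B"][1]) > max(series["A"][1]))
```

The same pattern would apply to the cartoon/no-cartoon example from the Behavior Modification text: plot the cartoon days as one series and the no-cartoon days as another, and look for consistent separation between the two series.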

Explanation & Answer

View attached explanation and answer. Let me know if you have any questions.

The difference of opinion is significant, because each example makes sense in its own way. Of the two alternating treatment design examples, one is based on weekly assessment while the other is grounded in daily assessment. The hypothetical alternating treatment design example in the Behavior Modification text is the one grounded in daily assessment. The graph in this text does not actually represent the use of two treatments; rather, it shows the...
