Technical Writing in Your Discipline (Speech-Language Pathology, SLP)

Anonymous
Asked: Oct 3rd, 2018
Budget: $15

Question description

For this project, you will be completing preliminary research on technical writing in your discipline. Select 5 journals from the list provided below (or beyond it, if your discipline isn't represented on the list, or you want to find something else!) and find at least 5 total articles that discuss the role of technical communication and/or technical writing within your discipline. Then, write a paper on the patterns you found regarding the audience, context, and purpose of technical writing in your discipline, and anything new you learned about your discipline. Cite all research in the citation format used in your discipline, both in your own writing throughout the paper and in an appropriate reference section at the conclusion. Five articles related to AAC are attached; you can look for your own articles related to AAC or use the 5 articles attached. The guidelines for the paper are also attached.

AJSLP Tutorial

Brain–Computer Interfaces for Augmentative and Alternative Communication: A Tutorial

Jonathan S. Brumberg,a Kevin M. Pitt,b Alana Mantie-Kozlowski,c and Jeremy D. Burnisond

Purpose: Brain–computer interfaces (BCIs) have the potential to improve communication for people who require but are unable to use traditional augmentative and alternative communication (AAC) devices. As BCIs move toward clinical practice, speech-language pathologists (SLPs) will need to consider their appropriateness for AAC intervention.
Method: This tutorial provides a background on BCI approaches to provide AAC specialists foundational knowledge necessary for clinical application of BCI. Tutorial descriptions were generated based on a literature review of BCIs for restoring communication.
Results: The tutorial responses directly address 4 major areas of interest for SLPs who specialize in AAC: (a) the current state of BCI with emphasis on SLP scope of practice (including the subareas: the way in which individuals access AAC with BCI, the efficacy of BCI for AAC, and the effects of fatigue), (b) populations for whom BCI is best suited, (c) the future of BCI as an addition to AAC access strategies, and (d) limitations of BCI.
Conclusion: Current BCIs have been designed as access methods for AAC rather than a replacement; therefore, SLPs can use existing knowledge in AAC as a starting point for clinical application. Additional training is recommended to stay updated with rapid advances in BCI.

a Department of Speech-Language-Hearing: Sciences and Disorders, Neuroscience Graduate Program, The University of Kansas, Lawrence; b Department of Speech-Language-Hearing: Sciences and Disorders, The University of Kansas, Lawrence; c Communication Sciences and Disorders Department, Missouri State University, Springfield; d Neuroscience Graduate Program, The University of Kansas, Lawrence. Correspondence to Jonathan S. Brumberg: brumberg@ku.edu. Editor-in-Chief: Krista Wilkinson; Editor: Erinn Finke. Received December 31, 2016; revision received April 6, 2017; accepted August 14, 2017. https://doi.org/10.1044/2017_AJSLP-16-0244. Disclosure: The authors have declared that no competing interests existed at the time of publication. American Journal of Speech-Language Pathology • Vol. 27 • 1–12 • February 2018 • Copyright © 2018 American Speech-Language-Hearing Association.

Individuals with severe speech and physical impairments often rely on augmentative and alternative communication (AAC) and specialized access technologies to facilitate communication on the basis of the nature and severity of their speech, motor, and cognitive impairments. In some cases, people who use AAC are able to use specially modified computer peripherals (e.g., mouse, joystick, stylus, or button box) to access AAC devices, whereas in other, more severe cases, sophisticated methods are needed to detect the most subtle of movements (e.g., eye gaze tracking; Fager, Beukelman, Fried-Oken, Jakobs, & Baker, 2012). In the most serious cases of total paralysis with loss of speech (e.g., locked-in syndrome; Plum & Posner, 1972), even these advanced methods are not sufficient to provide access to language and literacy (Oken et al., 2014).

Access to communication is critical for maintaining social interactions and autonomy of decision-making in this population (Beukelman & Mirenda, 2013); therefore, individuals with paralysis and akinetic mutism have been identified as potential candidates for brain–computer interface (BCI) access to AAC (Fager et al., 2012). BCIs for communication take AAC and access technology to the next level and provide a method for selecting and constructing messages by detecting changes in brain activity for controlling communication software (Wolpaw, Birbaumer, McFarland, Pfurtscheller, & Vaughan, 2002). In particular, they are devices that provide a direct link between an individual and a computer device through brain activity alone, without requiring any overt movement or behavior. As an access technique, BCIs have the potential to reduce or eliminate some physical barriers to successful AAC intervention for individuals with severe speech and physical impairments. Similar to AAC and associated access techniques, current BCI technology can take a variety of forms on the basis of the neural signal targeted and the method used for individuals to interact with the communication interface. Each of these factors may impose different demands on the cognitive and motor abilities of individuals who use BCI (Brumberg & Guenther, 2010).

Although the field of BCI has grown over the past decade, many stakeholders, including speech-language pathologists (SLPs), other practitioners, individuals who use AAC and potentially BCI, and caretakers, are unfamiliar with the technology. SLPs are a particularly important stakeholder given their role as the primary service providers who assist clients with communicative challenges secondary to motor limitations through assessment and implementation of AAC interventions and strategies.
A lack of core knowledge on the potential use of BCI for clinical application may limit future intervention with BCI for AAC according to established best practices. This tutorial will offer some basic explanations regarding BCI, including the benefits and limitations of this access technique and the different varieties of BCI. It will also provide a description of individuals who may be best suited for using BCI to access AAC. An understanding of this information is especially important for SLPs specializing in AAC, who are most likely to interact with BCIs as they move from research labs into real-world situations (e.g., classrooms, home, work).

Tutorial Descriptions by Topic Area

Topic 1: How Do People Who Use BCI Interact With the Computer?

BCIs are designed to allow individuals to control computers and communication systems using brain activity alone and are separated according to whether signals are recorded noninvasively through the scalp or invasively through implantation of electrodes in or on the brain. Noninvasive BCIs, those that are based on brain recordings made through the intact skull without requiring a surgical procedure (e.g., electroencephalography or EEG, magnetoencephalography, functional magnetic resonance imaging, functional near-infrared spectroscopy), often use an indirect technique to map brain signals unrelated to communication onto controls for a communication interface (Brumberg, Burnison, & Guenther, 2016). Though there are many signal acquisition modalities for noninvasive recordings of brain activity, noninvasive BCIs typically use EEG, which is recorded through electrodes placed on the scalp according to a standard pattern (Oostenveld & Praamstra, 2001) and records voltage changes that result from the simultaneous activation of millions of neurons.
EEG can be analyzed for its spontaneous activity or in response to a stimulus (e.g., event-related potentials), and both have been examined for indirect access BCI applications. In contrast, another class of BCIs attempts to directly output speech from imagined/attempted productions (Blakely, Miller, Rao, Holmes, & Ojemann, 2008; Brumberg, Wright, Andreasen, Guenther, & Kennedy, 2011; Herff et al., 2015; Kellis et al., 2010; Leuthardt et al., 2011; Martin et al., 2014; Mugler et al., 2014; Pei, Barbour, Leuthardt, & Schalk, 2011; Tankus, Fried, & Shoham, 2012); however, these techniques typically rely on invasively recorded brain signals (via implanted microelectrodes or subdural electrodes) related to speech motor preparation and production. Though in their infancy, direct BCIs for communication have the potential to completely replace the human vocal tract for individuals with severe speech and physical impairments (Brumberg, Burnison, & Guenther, 2016; Chakrabarti, Sandberg, Brumberg, & Krusienski, 2015); however, the technology does not yet provide a method to "read thoughts." For the remainder of this tutorial, we focus on noninvasive, indirect methods for accessing AAC with BCIs, and we refer readers to other sources for descriptions of direct BCIs for speech (Brumberg, Burnison, & Guenther, 2016; Chakrabarti et al., 2015).

Indirect methods for BCI parallel other access methods for AAC devices, where nonspeech actions (e.g., button press, direct touch, eye gaze) are translated to a selection on a communication interface. The main difference between the two access methods is that BCIs rely on neurophysiological signals related to sensory stimulation, preparatory motor behaviors, and/or covert motor behaviors (e.g., imagined or attempted limb movements), rather than the overt motor behavior used for conventional access.
The way in which individuals control a BCI greatly depends on the neurological signal used by the device to make selections on the communication interface. For instance, in the case of an eye-tracking AAC device, one is required to gaze at a communication icon, and the system makes a selection on the basis of the screen coordinates of the eye gaze location. For a BCI, individuals may be required to (a) attend to visual stimuli to generate an appropriate visual–sensory neural response to select the intended communication icon (e.g., Donchin, Spencer, & Wijesinghe, 2000), (b) take part in an operant conditioning paradigm using biofeedback of EEG (e.g., Kübler et al., 1999), (c) listen to auditory stimuli to generate auditory–sensory neural responses related to the intended communication output (e.g., Halder et al., 2010), or (d) imagine movements of the limbs to alter the sensorimotor rhythm (SMR) to select communication items (e.g., Pfurtscheller & Neuper, 2001). At present, indirect BCIs are more mature as a technology, and many have already begun user trials (Holz, Botrel, Kaufmann, & Kübler, 2015; Sellers, Vaughan, & Wolpaw, 2010). Therefore, SLPs are most likely to be involved with indirect BCIs first as they move from the research lab to the real world. Indirect BCI techniques are very similar to current access technologies for high-tech AAC; for example, the output of the BCI system can act as an input method for conventional AAC devices. Below, we review indirect BCI techniques and highlight their possible future in AAC.

The P300-Based BCI

The visual P300 grid speller (Donchin et al., 2000) is the most well-known and most mature technology, with ongoing at-home user trials (Holz et al., 2015; Sellers et al., 2010). Visual P300 BCIs for communication use the P300 event-related potential, a neural response to novel, rare visual stimuli in the presence of many other visual stimuli, to select items on a communication interface. The traditional graphical layout for a visual P300 speller is a 6 × 6 grid that includes the 26 letters of the alphabet, space, backspace, and numbers (see Figure 1). Each row and column on the spelling grid is highlighted in a random order (each individual item may also be highlighted, rather than rows and columns), and a systematic variation in the EEG waveform is generated when one attends to a target item for selection, the "oddball stimulus," which occurs infrequently compared with the remaining items (Donchin et al., 2000). The event-related potential in response to the target item will contain a positive voltage fluctuation approximately 300 ms after the item is highlighted (Farwell & Donchin, 1988). The BCI decoding algorithm then selects items associated with detected occurrences of the P300 for message creation (Donchin et al., 2000). The P300 grid speller has been operated by individuals with amyotrophic lateral sclerosis (ALS; Nijboer, Sellers, et al., 2008; Sellers & Donchin, 2006) and has been examined as part of at-home trials by individuals with neuromotor impairments (Holz et al., 2015; Sellers & Donchin, 2006), making it a likely candidate for future BCI-based access for AAC. In addition to the cognitive requirements for operating the P300 speller, successful operation depends somewhat on the degree of oculomotor control (Brunner et al., 2010). Past findings have shown that the P300 amplitude can be reduced if individuals are unable to use an overt attention strategy (gazing directly at the target) and, instead, must use a covert strategy (attentional change without ocular shifting), which can degrade BCI performance (Brunner et al., 2010). An alternative P300 interface displays a single item at a time on the screen (typically at the center, as in Figure 1, second from left) to alleviate concerns for individuals with poor oculomotor control.
This interface, known as the rapid serial visual presentation speller, has been successfully controlled by a cohort of individuals across the continuum of locked-in syndrome severity (Oken et al., 2014). All BCIs that use spelling interfaces require sufficient levels of literacy, though many can be adapted to use icon- or symbol-based communication (e.g., Figure 2). Auditory stimuli can also be used to elicit P300 responses for interaction with BCI devices for individuals with poor visual capability (McCane et al., 2014), such as severe visual impairment, impaired oculomotor control, and cortical blindness. Auditory interfaces can also be used in poor viewing environments, such as outdoors or in the presence of excessive lighting glare. Like its visual counterpart, the auditory P300 is elicited via an oddball paradigm and has typically been limited to binary (yes/no) selection by attending to one of two different auditory tones presented monaurally to each ear (Halder et al., 2010), or linguistic stimuli (e.g., attending to a "yep" target among "yes" presentations in the right ear vs. "nope" and "no" in the left; Hill et al., 2014). The binary control achieved using the auditory P300 interface has the potential to be used to navigate a spelling grid similar to conventional auditory scanning techniques for accessing AAC systems, by attending to specific tones that correspond to rows and columns (Käthner et al., 2013; Kübler et al., 2009). There is evidence that auditory grid systems may require greater attention than their visual analogues (Klobassa et al., 2009; Kübler et al., 2009), which should be considered when matching clients to the most appropriate communication device.
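The grid-speller logic described above (flash each row and column, average the EEG epochs that follow each flash, and pick the row and column with the strongest positive deflection near 300 ms) can be sketched in a short simulation. This is an illustrative toy with synthetic signals, not a clinical decoder; the sampling rate, scoring window, noise level, and P300 amplitude are all assumed values chosen for demonstration.

```python
import math
import random

random.seed(0)
FS = 250                       # sampling rate in Hz (illustrative assumption)
EPOCH = int(0.8 * FS)          # 800-ms epoch recorded after each flash
WIN = range(int(0.25 * FS), int(0.40 * FS))   # 250-400 ms scoring window

def synth_epoch(is_target):
    """Synthetic single-flash EEG epoch (microvolts); a target flash adds a
    positive deflection peaking near 300 ms, mimicking the P300."""
    eeg = [random.gauss(0.0, 5.0) for _ in range(EPOCH)]
    if is_target:
        for i in range(EPOCH):
            t = i / FS
            eeg[i] += 8.0 * math.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return eeg

def score(epochs):
    """Average the epochs across repeated flashes, then take the mean
    amplitude inside the 250-400 ms window; larger means more P300-like."""
    avg = [sum(e[i] for e in epochs) / len(epochs) for i in range(EPOCH)]
    return sum(avg[i] for i in WIN) / len(WIN)

def decode_selection(n_rows, n_cols, target, n_reps=15):
    """Flash every row and column n_reps times; select the row and column
    with the largest averaged response, as the grid-speller decoder does."""
    row_scores = [score([synth_epoch(r == target[0]) for _ in range(n_reps)])
                  for r in range(n_rows)]
    col_scores = [score([synth_epoch(c == target[1]) for _ in range(n_reps)])
                  for c in range(n_cols)]
    return (row_scores.index(max(row_scores)),
            col_scores.index(max(col_scores)))

GRID = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ1234", "56789_"]
r, c = decode_selection(6, 6, target=(2, 3))   # user attends to "P"
print(GRID[r][c])                              # prints "P"
```

Averaging over repeated flashes is what makes the selection reliable: the P300 adds coherently across epochs while the background EEG averages toward zero, which is also why higher flash counts trade speed for accuracy.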
Steady State Evoked Potentials

BCIs can be controlled using attention-modulated steady state brain rhythms, as opposed to event-related potentials, in both visual (steady state visually evoked potential [SSVEP]) and auditory (auditory steady state response [ASSR]) domains. Both the SSVEP and ASSR are physiological responses to a driving input stimulus that are amplified when an individual focuses his or her attention on the stimulus (Regan, 1989). Strobe stimuli are commonly used for SSVEP, whereas amplitude-modulated tones are often used for ASSR (Regan, 1989). BCIs using SSVEP exploit the attention-modulated response to strobe stimuli by simultaneously presenting multiple communication items for selection, each flickering at a different frequency (Cheng, Gao, Gao, & Xu, 2002; Friman, Luth, Volosyak, & Graser, 2007; Müller-Putz, Scherer, Brauneis, & Pfurtscheller, 2005); other variants use a single flicker rate with a specific strobe pattern, which is beyond the scope of this tutorial. As a result, all item flicker rates will be observed in the EEG recordings, but the frequency of the attended stimulus will contain the largest amplitude (Lotte, Congedo, Lécuyer, Lamarche, & Arnaldi, 2007; Müller-Putz et al., 2005; Regan, 1989) and greatest temporal correlation to the strobe stimulus (Chen, Wang, Gao, Jung, & Gao, 2015; Lin, Zhang, Wu, & Gao, 2007). The stimulus with the greatest neurophysiological response will then be selected by the BCI to construct a message, typically via an alphanumeric keyboard (shown in Figure 1), though icons can be adapted for different uses and levels of literacy (e.g., Figure 2). Major advantages of this type of interface are the following: (a) high accuracy rates, often reported above 90% with very little training (e.g., Cheng et al., 2002; Friman et al., 2007), and (b) overlapping, centrally located stimuli that could be used for individuals with impaired oculomotor control (Allison et al., 2008). A major concern with this technique, however, is an increased risk for seizures (Volosyak, Valbuena, Lüth, Malechka, & Gräser, 2011).
BCIs that use the ASSR require one to shift his or her attention to a sound stream that contains a modulated stimulus (e.g., a right monaural 38-Hz amplitude-modulated, 1000-Hz carrier tone presented with a left monaural 42-Hz modulated, 2500-Hz carrier; Lopez, Pomares, Pelayo, Urquiza, & Perez, 2009). As with the SSVEP, the modulation frequency of the attended sound stream is observable in the recorded EEG signal and will be amplified relative to the other competing stream. Therefore, in this example, if the BCI detects the greatest EEG amplitude at 38 Hz, it will perform a binary action associated with the right-ear tone (e.g., yes or "select"), whereas detection of the greatest EEG amplitude at 42 Hz will generate a left-ear tone action (e.g., no or "advance").

Figure 1. From left to right, example visual displays for the following BCIs: P300 grid speller, RSVP P300, SSVEP, and motor-based (SMR with keyboard). For the P300 grid, each row and column are highlighted until a letter is selected. In the RSVP, each letter is displayed randomly, sequentially in the center of the screen. For the SSVEP, this example uses four flickering stimuli (at different frequencies) to represent the cardinal directions, which are used to select individual grid items. This can also be done with individual flicker frequencies for all 36 items with certain technical considerations. For the motor-based BCI, this is an example of a binary-selection virtual keyboard; imagined right hand movements select the right set of letters. RSVP = rapid serial visual presentation; SSVEP = steady state visually evoked potential; SMR = sensorimotor rhythm; BCI = brain–computer interface. Copyright © Tobii Dynavox. Reprinted with permission.
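Both the SSVEP and ASSR decoders described above reduce to the same computation: measure the EEG amplitude at each candidate stimulus frequency and select the item whose frequency dominates. The sketch below illustrates that idea with synthetic data; the sampling rate, flicker frequencies, trial duration, and signal amplitudes are assumptions chosen for demonstration, not values from any particular system.

```python
import math
import random

random.seed(1)
FS = 250                     # EEG sampling rate in Hz (illustrative)
DUR = 4.0                    # seconds of EEG used per selection
N = int(FS * DUR)
# Four items flickering at distinct rates (Hz), e.g., cardinal directions:
FLICKERS = {"up": 8.0, "right": 10.0, "down": 12.0, "left": 15.0}

def synth_eeg(attended_hz):
    """Synthetic EEG: background noise plus a strong oscillation at the
    attended flicker rate (attention-amplified response) and weaker traces
    of the unattended rates."""
    sig = []
    for i in range(N):
        t = i / FS
        v = random.gauss(0.0, 2.0)
        for hz in FLICKERS.values():
            amp = 1.5 if hz == attended_hz else 0.3
            v += amp * math.sin(2 * math.pi * hz * t)
        sig.append(v)
    return sig

def amplitude_at(sig, hz):
    """Single-frequency DFT magnitude (per sample) at the given frequency."""
    re = sum(v * math.cos(2 * math.pi * hz * i / FS) for i, v in enumerate(sig))
    im = sum(v * math.sin(2 * math.pi * hz * i / FS) for i, v in enumerate(sig))
    return math.hypot(re, im) / len(sig)

def decode(sig):
    """Pick the item whose flicker frequency has the largest EEG amplitude."""
    return max(FLICKERS, key=lambda item: amplitude_at(sig, FLICKERS[item]))

print(decode(synth_eeg(12.0)))    # prints "down"
```

Real systems refine this with canonical correlation or template matching (the "greatest temporal correlation" mentioned above), but the selection rule, attended frequency wins, is the same.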
Motor-Based BCIs

Another class of BCIs provides access to communication interfaces using changes in the SMR, a neurological signal related to motor production and motor imagery (Pfurtscheller & Neuper, 2001; Wolpaw et al., 2002), for individuals with and without neuromotor impairments (Neuper, Müller, Kübler, Birbaumer, & Pfurtscheller, 2003; Vaughan et al., 2006). The SMR is characterized by the μ (8–12 Hz) and β (18–25 Hz) band spontaneous EEG oscillations that are known to desynchronize, or reduce in amplitude, during covert and overt movement attempts (Pfurtscheller & Neuper, 2001; Wolpaw et al., 2002). Many motor-based BCIs use left and right limb movement imagery because the SMR desynchronization will occur on the contralateral side, and are most often used to control spelling interfaces (e.g., virtual keyboard: Scherer, Müller, Neuper, Graimann, & Pfurtscheller, 2004; DASHER: Wills & MacKay, 2006; hex-o-spell: Blankertz et al., 2006; see Figure 1, right, for an example), though they can be used as inputs to commercial AAC devices as well (Brumberg, Burnison, & Pitt, 2016). Two major varieties of motor-based BCIs have been developed for controlling computers: those that provide continuous cursor control (analogous to mouse/joystick and eye gaze) and others that use discrete selection (analogous to button presses). Example layouts of keyboard-based and symbol-based motor-BCI interfaces are shown in Figures 1 and 2.

Figure 2. From left to right, examples of how existing BCI paradigms can be applied to page sets from current AAC devices: P300 grid, SSVEP, motor based (with icon grid). For the P300 grid interface, a row or column is highlighted until a symbol is selected (here, it is yogurt). For the SSVEP, either directional (as shown here) or individual icons flicker at specified strobe rates to either move a cursor or directly select an item. For motor based, the example shown here uses attempted or imagined left hand movements to advance the cursor and right hand movements to choose the currently selected item. SSVEP = steady state visually evoked potential; SMR = sensorimotor rhythm; BCI = brain–computer interface; AAC = augmentative and alternative communication. Copyright © Tobii Dynavox. Reprinted with permission.

Cursor-style BCIs transform changes in the SMR continuously over time into computer control signals (Wolpaw & McFarland, 2004). One example of a continuous, SMR-based BCI uses imagined movements of the hands and feet to move a cursor to select progressively refined groups of letters organized at different locations around a computer screen (Miner, McFarland, & Wolpaw, 1998; Vaughan et al., 2006). Another continuous-style BCI is used to control the "hex-o-spell" interface, in which imagined movements of the right hand rotate an arrow to point at one of six groups of letters, and imagined foot movements extend the arrow to select the current letter group (Blankertz et al., 2006). Discrete-style motor BCIs perform this transformation using the event-related desynchronization (Pfurtscheller & Neuper, 2001), a change to the SMR in response to some external stimulus, like an automatically highlighted row or column via scanning interface. One example of a discrete-style motor BCI uses the event-related desynchronization to control a virtual keyboard consisting of a binary tree representation of letters, in which individuals choose between two blocks of letters, selected by (imagined) right or left hand movements, until a single letter or item remains (Scherer et al., 2004). Most motor-based BCIs require many weeks or months of training for successful operation and report accuracies greater than 75% for individuals without neuromotor impairments and, in one study, 69% accuracy for individuals with severe neuromotor impairments (Neuper et al., 2003).
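The binary-tree virtual keyboard described above can be sketched directly: each classified imagined movement (left or right hand) halves the current block of letters until a single letter remains. This is an illustrative sketch of the selection logic only, not the Graz system itself; the "L"/"R" inputs stand in for the output of an SMR classifier, which is the hard part in practice.

```python
def select_letter(letters, imagined_moves):
    """Each 'L'/'R' decision (an imagined left- or right-hand movement, as
    classified from the SMR) keeps the left or right half of the current
    block of letters; selection ends when one letter remains."""
    block = list(letters)
    for move in imagined_moves:
        mid = (len(block) + 1) // 2
        block = block[:mid] if move == "L" else block[mid:]
        if len(block) == 1:
            break
    return block[0]

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# 26 letters need at most 5 binary choices per letter (2**5 = 32 > 26):
print(select_letter(ALPHABET, ["L", "R", "L", "L", "L"]))   # prints "H"
```

This halving structure is also why motor-based selection rates in Table 2 are reported in letters per minute rather than selections per minute: one letter costs up to five classified imagery trials, so per-letter rates are several times slower than per-selection rates.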
Motor-based BCIs are inherently independent from interface feedback modality because they rely only on an individual's ability to imagine his or her limbs moving, though users are often given audio or visual feedback of BCI choices (e.g., Nijboer, Furdea, et al., 2008). A recent, continuous motor BCI has been used to produce vowel sounds with instantaneous auditory feedback by using limb motor imagery to control a two-dimensional formant frequency speech synthesizer (Brumberg, Burnison, & Pitt, 2016). Other recent discrete motor BCIs have been developed for row–column scanning interfaces (Brumberg, Burnison, & Pitt, 2016; Scherer et al., 2015).

Operant Conditioning BCIs

This interface operates by detecting a stimulus-independent change in brain activity, which is used to select options on a communication interface. The neural signals used for controlling the BCI are not directly related to motor function or sensation. Rather, it uses EEG biofeedback for operant conditioning to teach individuals to voluntarily change the amplitude and polarity of the slow cortical potential, a slow-wave (< 1 Hz) neurological rhythm that is related to movements of a one-dimensional cursor. In BCI applications, cursor vertical position is used to make binary selections for communication interface control (Birbaumer et al., 2000; Kübler et al., 1999).

BCI Summary

BCIs use a wide range of techniques for mapping brain activity to communication device control through a combination of signals related to sensory, motor, and/or cognitive processes (see Table 1 for a summary of BCI types). The choice of BCI protocol and feedback methods trade off with cognitive abilities needed for successful device operation (e.g., Geronimo, Simmons, & Schiff, 2016; Kleih & Kübler, 2015; Kübler et al., 2009). Many BCIs require individuals to follow complex, multistep procedures and require potentially high levels of attentional capacity that are often a function of the sensory or motor process used for BCI operation. For example, the P300 speller BCI (Donchin et al., 2000) requires that individuals have an ability to attend to visual stimuli and make decisions about them (e.g., recognize the intended visual stimulus among many other stimuli). BCIs that use SSVEPs depend on the neurological response to flickering visual stimuli (Cheng et al., 2002) that is modulated by attention rather than other cognitive tasks. These two systems both use visual stimuli to elicit neural activity for controlling a BCI but differ in their demands on cognitive and attention processing. In contrast, motor-based BCI systems (e.g., Pfurtscheller & Neuper, 2001; Wolpaw et al., 2002) require individuals to have sufficient motivation and volition, as well as an ability to learn how changing mental tasks can control a communication device.

Table 1. Summary of BCI varieties and their feedback modality (EEG signal type; sensory/motor modality; user requirements):
- Event-related potentials
  - Visual P300 (grid): visual oddball paradigm; requires selective attention around the screen.
  - Visual P300 (RSVP): visual oddball paradigm; requires selective attention to the center of the screen only (poor oculomotor control).
  - Auditory P300: auditory oddball paradigm; requires selective auditory attention; no vision requirement.
- Steady state evoked potentials
  - Steady state visually evoked potential: attention to frequency-tagged visual stimuli; may increase seizure risk.
  - Auditory steady state response: attention to frequency-modulated audio stimuli.
- Motor-based
  - Continuous sensorimotor rhythm: continuous, smooth control of interface (e.g., cursors) using motor imagery (first person).
  - Discrete event-related desynchronization: binary (or multichoice) selection of interface items (# choices = # of imagined movements); requires motor imagery ability.
  - Motor preparatory signals (e.g., contingent negative variation): binary selection of communication interface items using imagined movements.
- Operant conditioning
  - Slow cortical potentials: binary selection of communication interface items after biofeedback-based learning protocol.
Note. BCI = brain–computer interface; EEG = electroencephalography; RSVP = rapid serial visual presentation.

Topic 2: Who May Best Benefit From a BCI?

At present, BCIs are best suited for individuals with acquired neurological and neuromotor impairments leading to paralysis and loss of speech with minimal cognitive involvement (Wolpaw et al., 2002), for example, brainstem stroke and traumatic brain injury (Mussa-Ivaldi & Miller, 2003). Nearly all BCIs require some amount of cognitive effort or selective attention, though the amount of each depends greatly on the style and modality of the specific device. Individuals with other neuromotor disorders, such as cerebral palsy, muscular dystrophies, multiple sclerosis, Parkinson's disease, and brain tumors, may require AAC (Fried-Oken, Mooney, Peters, & Oken, 2013; Wolpaw et al., 2002) but are not yet commonly considered for BCI studies and interventions (cf. Neuper et al., 2003; Scherer et al., 2015), due to concomitant impairments in cognition, attention, and memory. In other instances, elevated muscle tone and uncontrolled movements (e.g., spastic dysarthria, dystonia) limit the utility of BCI due to the introduction of physical and electromyographic movement artifacts (i.e., muscle-based signals that are much stronger than EEG and can distort recordings of brain activity). BCI research is now beginning to consider important human factors involved in appropriate use of BCI for individuals (Fried-Oken et al., 2013) and for coping with difficulties in brain signal acquisition due to muscular (Scherer et al., 2015) and environmental sources of artifacts. Developing BCI protocols to help identify the BCI technique most appropriate for each individual must be considered as BCI development moves closer to integration with existing AAC techniques.

Sensory, Motor, and Cognitive Factors

Alignment of the sensory, motor, and cognitive requirements for using BCI to access AAC devices with individuals' unique profiles will help identify and narrow down the number of candidate BCI variants (e.g., feature matching; Beukelman & Mirenda, 2013; Light & McNaughton, 2013), which is important for improving user outcomes with the chosen device (Thistle & Wilkinson, 2015). Matching possible BCIs should also include overt and involuntary motor considerations, specifically the presence of spasticity or variable muscle tone/dystonia, which may produce electromyographic artifacts that interfere with proper BCI function (Goncharova, McFarland, Vaughan, & Wolpaw, 2003).
In addition, there may be a decline in brain signals used for BCI decoding as symptoms of progressive neuromotor diseases become more severe (Kübler, Holz, Sellers, & Vaughan, 2015; Silvoni et al., 2013) that may result in decreased BCI performance. The wide range in sensory, motor, and cognitive components of BCI designs points to a need for user-centered design frameworks (e.g., Lynn, Armstrong, & Martin, 2016) and feature matching/screening protocols (e.g., Fried-Oken et al., 2013; Kübler et al., 2015), like those used for current practices in AAC intervention (Light & McNaughton, 2013; Thistle & Wilkinson, 2015).

Topic 3: Are BCIs Faster Than Other Access Methods for AAC?

Current AAC devices yield a range of communication rates that depend on access modality (e.g., direct selection, scanning), level of literacy, and information represented by each communication item (e.g., single-meaning icons or images, letters, icons representing complex phrases; Hill & Romich, 2002; Roark, Fried-Oken, & Gibbons, 2015), as well as word prediction software (Trnka, McCaw, Yarrington, McCoy, & Pennington, 2008). Communication rates using AAC are often less than 15 words per minute (Beukelman & Mirenda, 2013; Foulds, 1980), and slower speeds (two to five words per minute; Patel, 2011) are observed for letter spelling due to the need for multiple selections for spelling words (Hill & Romich, 2002). Word prediction and language modeling can increase both speed and typing efficiency (Koester & Levine, 1996; Roark et al., 2015; Trnka et al., 2008), but the benefits may be limited due to additional cognitive demands (Koester & Levine, 1996). Scan rate in auto-advancing row–column scanning access methods also affects communication rate, and though faster scan rates should lead to faster communication rates, slower scan rates can reduce selection errors (Roark et al., 2015).
BCIs are similarly affected by scan rate (Sellers & Donchin, 2006); for example, a P300 speller can operate only as fast as each item is flashed. Increases in flash rate may also increase the cognitive demands of locating desired grid items while ignoring others, similar to effects observed with commercial AAC visual displays (Thistle & Wilkinson, 2013). Current BCIs for communication generally yield selection rates that are slower than existing AAC methods, even with the incorporation of language prediction models (Oken et al., 2014). Table 2 provides a summary of selection rates from recent applications of conventional access techniques and BCI to communication interfaces. Individuals both with and without neuromotor impairments using motor-based BCIs have achieved selection rates under 10 selections (letters, numbers, symbols) per minute (Blankertz et al., 2006; Neuper et al., 2003; Scherer et al., 2004), and those using P300 methods commonly operate below five selections per minute (Acqualagna & Blankertz, 2013; Donchin et al., 2000; Nijboer, Sellers, et al., 2008; Oken et al., 2014). A recent P300 study using a novel presentation technique obtained a significantly higher communication rate of 19.4 characters per minute, though the method has not been studied in detail with participants with neuromotor impairments (Townsend & Platsko, 2016). BCIs based on the SSVEP have emerged as a promising technique, often yielding both high accuracy (> 90%) and communication rates as high as 33 characters per minute (Chen et al., 2015).

American Journal of Speech-Language Pathology • Vol. 27 • 1–12 • February 2018

Table 2. Communication rates from recent BCI and conventional access to communication interfaces.

BCI method                 | Population | Selection rate                       | Source
Berlin BCI (motor imagery) | Healthy    | 2.3–7.6 letters/min                  | Blankertz et al. (2006)
Graz BCI (motor imagery)   | Healthy    | 2.0 letters/min                      | Scherer et al. (2004)
Graz BCI (motor imagery)   | Impaired   | 0.2–2.5 letters/min                  | Neuper et al. (2003)
P300 speller (visual)      | Healthy    | 4.3 letters/min                      | Donchin et al. (2000)
P300 speller (visual)      | Healthy    | 19.4 char/min (120.0 bits/min)       | Townsend and Platsko (2016)
P300 speller (visual)      | ALS        | 1.5–4.1 char/min (4.8–19.2 bits/min) | Nijboer, Sellers, et al. (2008)
P300 speller (visual)      | ALS        | 3–7.5 char/min                       | Mainsah et al. (2015)
RSVP P300                  | LIS        | 0.4–2.3 char/min                     | Oken et al. (2014)
RSVP P300                  | Healthy    | 1.2–2.5 letters/min                  | Acqualagna and Blankertz (2013), Oken et al. (2014)
SSVEP                      | Healthy    | 33.3 char/min                        | Chen et al. (2015)
SSVEP                      | Healthy    | 10.6 selections/min (27.2 bits/min)  | Friman et al. (2007)
AAC (row–column)           | Healthy    | 18–22 letters/min                    | Roark et al. (2015)
AAC (row–column)           | LIS        | 6.0 letters/min                      | Roark et al. (2015)
AAC (direct selection)     | Healthy    | 5.2 words/min                        | Trnka et al. (2008)

Note. BCI = brain–computer interface; ALS = amyotrophic lateral sclerosis; RSVP = rapid serial visual presentation; LIS = locked-in syndrome; SSVEP = steady state visually evoked potential; AAC = augmentative and alternative communication; char = character.

From these reports, BCI performance has started to approach levels associated with AAC devices using direct selection, and the differences in communication rates between scanning AAC devices and BCIs (shown in Table 2) are reduced when comparisons are made between individuals with neuromotor impairments rather than individuals without impairments (e.g., AAC: six characters per minute; Roark et al., 2015; BCI: one to eight characters per minute; Table 2).
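Several rates in Table 2 are given in bits/min, the information transfer rate (ITR) metric of Wolpaw et al. (2002), cited in this article's references. A brief Python illustration (our sketch; the 36-target, 90%-accuracy, 5-selections/min figures are hypothetical) shows how target-set size, selection accuracy, and selection rate combine:

```python
import math

def wolpaw_bits_per_selection(n_targets, accuracy):
    """Bits conveyed per selection under the Wolpaw et al. (2002) ITR model:
    N equally likely targets, selection accuracy P, and errors distributed
    uniformly over the N - 1 incorrect targets."""
    n, p = n_targets, accuracy
    if not 0.0 < p <= 1.0:
        raise ValueError("accuracy must be in (0, 1]")
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Information transfer rate in bits/min."""
    return wolpaw_bits_per_selection(n_targets, accuracy) * selections_per_min

# A hypothetical 36-item speller at 90% accuracy and 5 selections/min:
print(round(wolpaw_bits_per_selection(36, 0.90), 2))  # ~4.19 bits/selection
print(round(itr_bits_per_min(36, 0.90, 5.0), 1))      # ~20.9 bits/min
```

This is why bits/min and characters/min in Table 2 are not interchangeable: a system with more selectable targets, or higher accuracy, conveys more information per selection at the same selection rate.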
Differences in communication rate can also be reduced depending on the type of BCI method (e.g., 3–7.5 characters per minute; Mainsah et al., 2015). These results suggest that BCI has become another clinical option for AAC intervention that should be considered during the clinical decision-making process. BCIs have particular utility in the most severe cases; the communication rates described in the literature are sufficient to provide access to language and communication for those who currently have neither. Recent improvements in BCI designs have shown promising results (e.g., Chen et al., 2015; Townsend & Platsko, 2016), which may begin to push BCI communication efficacy past current benchmarks for AAC. Importantly, few BCIs have been evaluated over extended periods of time (Holz et al., 2015; Sellers et al., 2010); therefore, it is possible that BCI selection may improve over time with training.

Table 3. Take-home points collated from the interdisciplinary research team that highlight the major considerations for BCI as possible access methods for AAC.

- BCIs do not yet have the ability to translate thoughts or speech plans into fluent speech productions.
- Direct BCIs, usually involving surgery for implantation of recording electrodes, are currently being developed as speech neural prostheses.
- Noninvasive BCIs are most often designed as an indirect method for accessing AAC, whether custom developed or commercial.
- There are a variety of noninvasive BCIs that can support clients with a range of sensory, motor, and cognitive abilities, and selecting the most appropriate BCI technique requires individualized assessment and feature matching procedures.
- The potential population of individuals who may use BCIs is heterogeneous, though current work is focused on individuals with acquired neurological and neuromotor disorders (e.g., locked-in syndrome due to stroke, traumatic brain injury, and ALS); limited study has involved individuals with congenital disorders such as CP.
- BCIs are currently not as efficient as existing AAC access methods for individuals with some form of movement, though the technology is progressing. For these individuals, BCIs provide an opportunity to augment or complement existing approaches.
- For individuals with progressive neurodegenerative diseases, learning to use BCI before speech and motor function worsen beyond the aid of existing access technologies may help maintain continuity of communication.
- For those who are unable to use current access methods, BCIs may provide the only form of access to communication.
- Long-term BCI use is only just beginning; BCI performance may improve as the technology matures and as individuals who use BCI gain greater proficiency and familiarity with the device.

Note. BCI = brain–computer interface; AAC = augmentative and alternative communication; ALS = amyotrophic lateral sclerosis; CP = cerebral palsy.

Brumberg et al.: AAC-BCI Tutorial 7

Topic 4: Fatigue and Its Effects

BCIs, like conventional AAC access techniques, require various levels of attention, working memory, and cognitive load, all of which affect the amount of effort (and fatigue) needed to operate the device (Kaethner, Kübler, & Halder, 2015; Pasqualotto et al., 2015). There is evidence that scanning-type AAC devices are not overly tiring (Gibbons & Beneteau, 2010; Roark, Beckley, Gibbons, & Fried-Oken, 2013), but prolonged AAC use can have a cumulative effect and reduce communication effectiveness (Trnka et al., 2008). In these cases, language modeling and word prediction can reduce fatigue and maintain high communication performance using an AAC device (Trnka et al., 2008). Within BCI, reports of fatigue, effort, and cognitive load are mixed.
Individuals with ALS have reported that visual P300 BCIs required more effort and time compared with eye gaze access (Pasqualotto et al., 2015), whereas others reported that a visual P300 speller was easier to use, and not overly exhausting, compared with eye gaze because it does not require precise eye movements (Holz et al., 2015; Kaethner et al., 2015). Other findings from these studies indicate that the visual P300 speller incurred increased cognitive load and fatigue for some (Kaethner et al., 2015), whereas for others there was less strain compared with eye-tracking systems (Holz et al., 2015). Providing both conventional and BCI-based AAC access techniques to the same individual may permit an adaptive strategy that relies on particular modes of access depending on the individual's level of fatigue, allowing a person to change his or her method of AAC access to suit his or her fatigue level throughout the day.

Topic 5: BCI as an Addition to Conventional AAC Access Technology

At their current stage of development, BCIs are primarily a choice for individuals with absent, severely impaired, or highly unreliable speech and motor control. As BCIs advance as an access modality for AAC, it is important that the goal of intervention remains selecting the AAC method that is most appropriate rather than the most technologically advanced access method (Light & McNaughton, 2013). Each of the BCI devices discussed has unique sensory, motor, and cognitive requirements that may best match specific profiles of individuals who may require BCI, as well as distinct training requirements for device proficiency. Whether BCIs should replace any form of AAC must therefore be determined according to the needs, wants, and abilities of the individual. These factors play a crucial role in motivation, which has a direct impact on BCI effectiveness (Nijboer, Birbaumer, & Kübler, 2010).
Other assessment considerations include comorbid conditions, such as a history of seizures, which is a contraindication for some visual BCIs due to their rapidly flashing icons (Volosyak et al., 2011). Cognitive factors, such as differing levels of working memory (Sprague, McBee, & Sellers, 2015) and the ability to focus one's attention (Geronimo et al., 2016; Riccio et al., 2013), are also important considerations because they have been correlated with successful BCI operation. There are additional considerations for motor-based BCIs, including (a) the well-known observation that the SMR, which is necessary for device control, cannot be adequately estimated in approximately 15%–30% of all individuals, with or without impairment (Vidaurre & Blankertz, 2010), and (b) the possibility of performance decline or instability as a result of progressive neuromotor disorders, such as ALS (Silvoni et al., 2013). These concerns are currently being addressed using assessment techniques that predict motor-based BCI performance, including a questionnaire estimating kinesthetic motor imagery performance (i.e., first-person imagery, or imagining performing and experiencing the sensations associated with a movement; Vuckovic & Osuagwu, 2013), which is known to lead to better BCI performance than third-person motor imagery (e.g., watching yourself from across the room; Neuper, Scherer, Reiner, & Pfurtscheller, 2005). Overall, there is limited research available on the inter- and intraindividual considerations that may affect BCI performance (Kleih & Kübler, 2015); therefore, clinical assessment tools and guidelines must be developed to help determine the most appropriate method of accessing AAC (including both traditional and BCI-based technologies) for each individual.
These efforts have already begun (e.g., Fried-Oken et al., 2013; Kübler et al., 2015), and more work is needed to ensure that existing AAC practices are well incorporated into BCI-based assessment tools. In summary, BCI access techniques should not be seen as competing with or replacing existing AAC methods that have a history of success. Rather, the purpose of BCI-based communication is to provide a feature-matched alternative or complementary method of accessing AAC for individuals with the suitability, preference, and motivation for BCI, or for those who are unable to use current communicative methods.

Topic 6: Limitations of BCI and Future Directions

Future applications of noninvasive BCIs will continue to focus on increasing accuracy and communication rate, whether as standalone AAC options or as a means of accessing existing AAC devices. One major area of future work is improving the techniques for noninvasively recording the brain activity needed for BCI operation. Although a large majority of people who may potentially use BCI have reported that they are willing to wear an EEG cap (84%; Huggins, Wren, & Gruis, 2011), the application of EEG sensors and their stability over time remain obstacles to practical use. Most EEG-based BCI systems require the application of electrolytic gel to bridge the contact between the electrodes and the scalp for good signal acquisition. Unfortunately, this type of application has been reported to be inconvenient and cumbersome by individuals who currently use BCI and may also be difficult to set up and maintain by a trained facilitator (Blain-Moraes, Schaff, Gruis, Huggins, & Wren, 2012). Further, electrolytic gels dry out over time, gradually degrading EEG signal acquisition.
Recent advances in dry electrode technology may help overcome this limitation (Blain-Moraes et al., 2012) by allowing EEG to be recorded without electrolytic solutions, potentially making EEG sensors easier to apply and prolonging the stability of signal acquisition. To be used in all environments, EEG must also be portable and robust to external sources of noise and artifacts. EEG is highly susceptible to electrical artifacts from muscles, the environment, and other medical equipment (e.g., mechanical ventilation). Therefore, an assessment of the likely environments of use is needed, as are guidelines for minimizing the effect of these artifacts. Simultaneous efforts should be made toward improving the tolerance of EEG recording equipment to these outside sources of electrical noise (Kübler et al., 2015). The ultimate potential of BCI technology is the development of a system that can directly decode brain activity into communication (e.g., written text or spoken output) rather than indirectly operating a communication device. This type of neural decoding is primarily under investigation using invasive methods, including electrocorticography and intracortical microelectrodes, and has focused on decoding phonemes (Blakely et al., 2008; Brumberg et al., 2011; Herff et al., 2015; Mugler et al., 2014; Tankus et al., 2012), words (Kellis et al., 2010; Leuthardt et al., 2011; Pei et al., 2011), and time-frequency representations (Martin et al., 2014). Invasive methods have the advantage of increased signal quality and resistance to sources of external noise but require a surgical intervention to implant recording electrodes either in or on the brain (Chakrabarti et al., 2015). The goal of these decoding studies and other invasive electrophysiological investigations of speech processing is to develop a neural prosthesis for fluent-like speech production (Brumberg, Burnison, & Guenther, 2016).
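The article prescribes no particular signal-processing remedy for the electrical artifacts discussed above. As a minimal, hypothetical sketch of one common mitigation (assuming NumPy and SciPy are available; the sampling rate and signal parameters are invented for illustration), power-line interference can be attenuated with a narrow notch filter:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250.0  # sampling rate in Hz (a typical EEG value; an assumption here)
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic "EEG": a 10 Hz alpha-band oscillation plus 60 Hz line noise.
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.8 * np.sin(2 * np.pi * 60 * t)

# Narrow notch at the power-line frequency; Q controls the notch width.
b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)
filtered = filtfilt(b, a, noisy)  # zero-phase filtering preserves waveform timing

before = np.sqrt(np.mean((noisy - clean) ** 2))
residual = np.sqrt(np.mean((filtered - clean) ** 2))
print(f"RMS error vs. clean signal: before={before:.3f}, after={residual:.3f}")
```

Filtering of this kind addresses only narrow-band environmental noise; broadband muscle artifact (Goncharova et al., 2003) requires different strategies, which is one reason environment-of-use assessment remains necessary.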
Although invasive techniques come at a surgical cost, one study reported that 72% of individuals with ALS indicated they were willing to undergo outpatient surgery, and 41% were willing to have a surgical intervention with a short hospital stay, to access invasive BCI methods (Huggins et al., 2011). That said, very few invasive BCIs are available for clinical research or long-term at-home use (e.g., Vansteensel et al., 2016); therefore, noninvasive methods will likely be adopted first for use in AAC interventions.

Conclusions

This tutorial has focused on a few important considerations for the future of BCIs as AAC: (a) Despite broad speech-language pathology expertise in AAC, there are few clinical guidelines and recommendations for the use of BCI as an AAC access technique; (b) the most mature BCI technologies have been designed as methods to access communication interfaces rather than to directly access thoughts, utterances, and speech motor plans from the brain; and (c) BCI is an umbrella term for a variety of brain-to-computer techniques that require comprehensive assessment to match people who may potentially use BCI with the most appropriate device. The purpose of this tutorial was to bridge the gaps in knowledge between AAC and BCI practices, describe BCIs in the context of current AAC conventions, and motivate interdisciplinary collaborations to pursue rigorous clinical research adapting AAC feature matching protocols to include intervention with BCIs. A summary of take-home messages to help bridge the gap between knowledge of AAC and BCI was compiled by our interdisciplinary team and is presented in Table 3. Additional training and hands-on experience will improve acceptance of BCI approaches among the interventionists targeted by this tutorial, as well as among people who may use BCI in the future. Key to the clinical acceptance of BCI are improvements in communication rate and accuracy via BCI access methods (Kageyama et al., 2014).
However, many people who may use BCIs understand the current limitations yet recognize the potential benefits of BCI, reporting that the technology offers "freedom," "hope," "connection," and unlocking from their speech and motor impairments (Blain-Moraes et al., 2012). A significant component of future BCI research will focus on meeting the priorities of people who use BCIs. A recent study assessing the opinions and priorities of individuals with ALS regarding BCI design reported that they prioritized performance accuracy of at least 90% and a rate of at least 15 to 19 letters per minute (Huggins et al., 2011). From our review, most BCI technologies have not yet reached these specifications, though some recent efforts have made considerable progress (e.g., Chen et al., 2015; Townsend & Platsko, 2016). A renewed emphasis on user-centered design and development is helping to move this technology forward by matching the wants and needs of individuals who may use BCI with realistic expectations of BCI function. It is imperative to include clinicians, individuals who use AAC and BCI, and other stakeholders in the BCI design process to improve usability and performance and to help find the optimal translation from the laboratory to the real world.

Acknowledgments

This work was supported in part by the National Institutes of Health (National Institute on Deafness and Other Communication Disorders R03-DC011304), the University of Kansas New Faculty Research Fund, and the American Speech-Language-Hearing Foundation New Century Scholars Research Grant, all awarded to J. Brumberg.

References

Acqualagna, L., & Blankertz, B. (2013). Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP). Clinical Neurophysiology, 124(5), 901–908.

Allison, B. Z., McFarland, D. J., Schalk, G., Zheng, S. D., Jackson, M. M., & Wolpaw, J. R. (2008).
Towards an independent brain–computer interface using steady state visual evoked potentials. Clinical Neurophysiology, 119(2), 399–408.

Beukelman, D., & Mirenda, P. (2013). Augmentative and alternative communication: Supporting children and adults with complex communication needs (4th ed.). Baltimore, MD: Brookes.

Birbaumer, N., Kübler, A., Ghanayim, N., Hinterberger, T., Perelmouter, J., Kaiser, J., . . . Flor, H. (2000). The thought translation device (TTD) for completely paralyzed patients. IEEE Transactions on Rehabilitation Engineering, 8(2), 190–193.

Blain-Moraes, S., Schaff, R., Gruis, K. L., Huggins, J. E., & Wren, P. A. (2012). Barriers to and mediators of brain–computer interface user acceptance: Focus group findings. Ergonomics, 55(5), 516–525.

Blakely, T., Miller, K. J., Rao, R. P. N., Holmes, M. D., & Ojemann, J. G. (2008). Localization and classification of phonemes using high spatial resolution electrocorticography (ECoG) grids. In 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 4964–4967). https://doi.org/10.1109/IEMBS.2008.4650328

Blankertz, B., Dornhege, G., Krauledat, M., Müller, K.-R., Kunzmann, V., Losch, F., & Curio, G. (2006). The Berlin brain–computer interface: EEG-based communication without subject training. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2), 147–152.

Brumberg, J. S., Burnison, J. D., & Guenther, F. H. (2016). Brain-machine interfaces for speech restoration. In P. Van Lieshout, B. Maassen, & H. Terband (Eds.), Speech motor control in normal and disordered speech: Future developments in theory and methodology (pp. 275–304). Rockville, MD: ASHA Press.

Brumberg, J. S., Burnison, J. D., & Pitt, K. M. (2016). Using motor imagery to control brain–computer interfaces for communication. In Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience.
Cham, Switzerland: Springer International Publishing.

Brumberg, J. S., & Guenther, F. H. (2010). Development of speech prostheses: Current status and recent advances. Expert Review of Medical Devices, 7(5), 667–679.

Brumberg, J. S., Wright, E. J., Andreasen, D. S., Guenther, F. H., & Kennedy, P. R. (2011). Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex. Frontiers in Neuroscience, 5, 65.

Brunner, P., Joshi, S., Briskin, S., Wolpaw, J. R., Bischof, H., & Schalk, G. (2010). Does the "P300" speller depend on eye gaze? Journal of Neural Engineering, 7(5), 056013.

Chakrabarti, S., Sandberg, H. M., Brumberg, J. S., & Krusienski, D. J. (2015). Progress in speech decoding from the electrocorticogram. Biomedical Engineering Letters, 5(1), 10–21.

Chen, X., Wang, Y., Gao, S., Jung, T.-P., & Gao, X. (2015). Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain–computer interface. Journal of Neural Engineering, 12(4), 46008.

Cheng, M., Gao, X., Gao, S., & Xu, D. (2002). Design and implementation of a brain–computer interface with high transfer rates. IEEE Transactions on Biomedical Engineering, 49(10), 1181–1186.

Donchin, E., Spencer, K. M., & Wijesinghe, R. (2000). The mental prosthesis: Assessing the speed of a P300-based brain–computer interface. IEEE Transactions on Rehabilitation Engineering, 8(2), 174–179.

Fager, S., Beukelman, D., Fried-Oken, M., Jakobs, T., & Baker, J. (2012). Access interface strategies. Assistive Technology, 24(1), 25–33.

Farwell, L., & Donchin, E. (1988). Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6), 510–523.

Foulds, R. (1980). Communication rates of nonspeech expression as a function of manual tasks and linguistic constraints. In International Conference on Rehabilitation Engineering (pp. 83–87).
Fried-Oken, M., Mooney, A., Peters, B., & Oken, B. (2013). A clinical screening protocol for the RSVP keyboard brain–computer interface. Disability and Rehabilitation: Assistive Technology, 10(1), 11–18.

Friman, O., Luth, T., Volosyak, I., & Graser, A. (2007). Spelling with steady-state visual evoked potentials. In 2007 3rd International IEEE/EMBS Conference on Neural Engineering (pp. 354–357). Kohala Coast, HI: IEEE.

Geronimo, A., Simmons, Z., & Schiff, S. J. (2016). Performance predictors of brain–computer interfaces in patients with amyotrophic lateral sclerosis. Journal of Neural Engineering, 13(2), 026002.

Gibbons, C., & Beneteau, E. (2010). Functional performance using eye control and single switch scanning by people with ALS. Perspectives on Augmentative and Alternative Communication, 19(3), 64–69.

Goncharova, I. I., McFarland, D. J., Vaughan, T. M., & Wolpaw, J. R. (2003). EMG contamination of EEG: Spectral and topographical characteristics. Clinical Neurophysiology, 114(9), 1580–1593.

Halder, S., Rea, M., Andreoni, R., Nijboer, F., Hammer, E. M., Kleih, S. C., . . . Kübler, A. (2010). An auditory oddball brain–computer interface for binary choices. Clinical Neurophysiology, 121(4), 516–523.

Herff, C., Heger, D., de Pesters, A., Telaar, D., Brunner, P., Schalk, G., & Schultz, T. (2015). Brain-to-text: Decoding spoken phrases from phone representations in the brain. Frontiers in Neuroscience, 9, 217.

Hill, K., & Romich, B. (2002). A rate index for augmentative and alternative communication. International Journal of Speech Technology, 5(1), 57–64.

Hill, N. J., Ricci, E., Haider, S., McCane, L. M., Heckman, S., Wolpaw, J. R., & Vaughan, T. M. (2014). A practical, intuitive brain–computer interface for communicating "yes" or "no" by listening. Journal of Neural Engineering, 11(3), 035003.

Holz, E. M., Botrel, L., Kaufmann, T., & Kübler, A. (2015).
Long-term independent brain–computer interface home use improves quality of life of a patient in the locked-in state: A case study. Archives of Physical Medicine and Rehabilitation, 96(3), S16–S26.

Huggins, J. E., Wren, P. A., & Gruis, K. L. (2011). What would brain–computer interface users want? Opinions and priorities of potential users with amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis, 12(5), 318–324.

Kaethner, I., Kübler, A., & Halder, S. (2015). Comparison of eye tracking, electrooculography and an auditory brain–computer interface for binary communication: A case study with a participant in the locked-in state. Journal of Neuroengineering and Rehabilitation, 12(1), 76.

Kageyama, Y., Hirata, M., Yanagisawa, T., Shimokawa, T., Sawada, J., Morris, S., . . . Yoshimine, T. (2014). Severely affected ALS patients have broad and high expectations for brain-machine interfaces. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration, 15(7–8), 513–519.

Käthner, I., Ruf, C. A., Pasqualotto, E., Braun, C., Birbaumer, N., & Halder, S. (2013). A portable auditory P300 brain–computer interface with directional cues. Clinical Neurophysiology, 124(2), 327–338.

Kellis, S., Miller, K., Thomson, K., Brown, R., House, P., & Greger, B. (2010). Decoding spoken words using local field potentials recorded from the cortical surface. Journal of Neural Engineering, 7(5), 056007.

Kleih, S. C., & Kübler, A. (2015). Psychological factors influencing brain–computer interface (BCI) performance. 2015 IEEE International Conference on Systems, Man, and Cybernetics, 3192–3196.

Klobassa, D. S., Vaughan, T. M., Brunner, P., Schwartz, N. E., Wolpaw, J. R., Neuper, C., & Sellers, E. W. (2009). Toward a high-throughput auditory P300-based brain–computer interface. Clinical Neurophysiology, 120(7), 1252–1261.

Koester, H. H., & Levine, S. (1996). Effect of a word prediction feature on user performance.
Augmentative and Alternative Communication, 12(3), 155–168.

Kübler, A., Furdea, A., Halder, S., Hammer, E. M., Nijboer, F., & Kotchoubey, B. (2009). A brain–computer interface controlled auditory event-related potential (P300) spelling system for locked-in patients. Annals of the New York Academy of Sciences, 1157, 90–100.

Kübler, A., Holz, E. M., Sellers, E. W., & Vaughan, T. M. (2015). Toward independent home use of brain–computer interfaces: A decision algorithm for selection of potential end-users. Archives of Physical Medicine and Rehabilitation, 96(3), S27–S32.

Kübler, A., Kotchoubey, B., Hinterberger, T., Ghanayim, N., Perelmouter, J., Schauer, M., . . . Birbaumer, N. (1999). The thought translation device: A neurophysiological approach to communication in total motor paralysis. Experimental Brain Research, 124(2), 223–232.

Leuthardt, E. C., Gaona, C., Sharma, M., Szrama, N., Roland, J., Freudenberg, Z., . . . Schalk, G. (2011). Using the electrocorticographic speech network to control a brain–computer interface in humans. Journal of Neural Engineering, 8(3), 036004.

Light, J., & McNaughton, D. (2013). Putting people first: Re-thinking the role of technology in augmentative and alternative communication intervention. Augmentative and Alternative Communication, 29(4), 299–309.

Lin, Z., Zhang, C., Wu, W., & Gao, X. (2007). Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Transactions on Biomedical Engineering, 54(6), 1172–1176.

Lopez, M.-A., Pomares, H., Pelayo, F., Urquiza, J., & Perez, J. (2009). Evidences of cognitive effects over auditory steady-state responses by means of artificial neural networks and its use in brain–computer interfaces. Neurocomputing, 72(16–18), 3617–3623.

Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., & Arnaldi, B. (2007). A review of classification algorithms for EEG-based brain–computer interfaces. Journal of Neural Engineering, 4(2), R1–R13.

Lynn, J. M. D., Armstrong, E., & Martin, S.
(2016). User centered design and validation during the development of domestic brain computer interface applications for people with acquired brain injury and therapists: A multi-stakeholder approach. Journal of Assistive Technologies, 10(2), 67–78.

Mainsah, B. O., Collins, L. M., Colwell, K. A., Sellers, E. W., Ryan, D. B., Caves, K., & Throckmorton, C. S. (2015). Increasing BCI communication rates with dynamic stopping towards more practical use: An ALS study. Journal of Neural Engineering, 12(1), 016013.

Martin, S., Brunner, P., Holdgraf, C., Heinze, H.-J., Crone, N. E., Rieger, J., . . . Pasley, B. N. (2014). Decoding spectrotemporal features of overt and covert speech from the human cortex. Frontiers in Neuroengineering, 7, 14.

McCane, L. M., Sellers, E. W., McFarland, D. J., Mak, J. N., Carmack, C. S., Zeitlin, D., . . . Vaughan, T. M. (2014). Brain–computer interface (BCI) evaluation in people with amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration, 15(3–4), 207–215.

Miner, L. A., McFarland, D. J., & Wolpaw, J. R. (1998). Answering questions with an electroencephalogram-based brain–computer interface. Archives of Physical Medicine and Rehabilitation, 79(9), 1029–1033.

Mugler, E. M., Patton, J. L., Flint, R. D., Wright, Z. A., Schuele, S. U., Rosenow, J., . . . Slutzky, M. W. (2014). Direct classification of all American English phonemes using signals from functional speech motor cortex. Journal of Neural Engineering, 11(3), 035015.

Müller-Putz, G. R., Scherer, R., Brauneis, C., & Pfurtscheller, G. (2005). Steady-state visual evoked potential (SSVEP)-based communication: Impact of harmonic frequency components. Journal of Neural Engineering, 2(4), 123–130.

Mussa-Ivaldi, F. A., & Miller, L. E. (2003). Brain-machine interfaces: Computational demands and clinical needs meet basic neuroscience. Trends in Neurosciences, 26(6), 329–334.

Neuper, C., Müller, G. R., Kübler, A., Birbaumer, N., & Pfurtscheller, G. (2003).
Clinical application of an EEG-based brain–computer interface: A case study in a patient with severe motor impairment. Clinical Neurophysiology, 114(3), 399–409.

Neuper, C., Scherer, R., Reiner, M., & Pfurtscheller, G. (2005). Imagery of motor actions: Differential effects of kinesthetic and visual-motor mode of imagery in single-trial EEG. Cognitive Brain Research, 25(3), 668–677.

Nijboer, F., Birbaumer, N., & Kübler, A. (2010). The influence of psychological state and motivation on brain–computer interface performance in patients with amyotrophic lateral sclerosis—A longitudinal study. Frontiers in Neuroscience, 4, 1–13.

Nijboer, F., Furdea, A., Gunst, I., Mellinger, J., McFarland, D. J., Birbaumer, N., & Kübler, A. (2008). An auditory brain–computer interface (BCI). Journal of Neuroscience Methods, 167(1), 43–50.

Nijboer, F., Sellers, E. W., Mellinger, J., Jordan, M. A., Matuz, T., Furdea, A., . . . Kübler, A. (2008). A P300-based brain–computer interface for people with amyotrophic lateral sclerosis. Clinical Neurophysiology, 119(8), 1909–1916.

Oken, B. S., Orhan, U., Roark, B., Erdogmus, D., Fowler, A., Mooney, A., . . . Fried-Oken, M. B. (2014). Brain–computer interface with language model-electroencephalography fusion for locked-in syndrome. Neurorehabilitation and Neural Repair, 28(4), 387–394.

Oostenveld, R., & Praamstra, P. (2001). The five percent electrode system for high-resolution EEG and ERP measurements. Clinical Neurophysiology, 112(4), 713–719.

Pasqualotto, E., Matuz, T., Federici, S., Ruf, C. A., Bartl, M., Olivetti Belardinelli, M., . . . Halder, S. (2015). Usability and workload of access technology for people with severe motor impairment: A comparison of brain–computer interfacing and eye tracking. Neurorehabilitation and Neural Repair, 29(10), 950–957.

Patel, R. (2011). Message formulation, organization, and navigation schemes for icon-based communication aids.
In Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '11) (pp. 5364–5367). Boston, MA: IEEE.

Pei, X., Barbour, D. L., Leuthardt, E. C., & Schalk, G. (2011). Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. Journal of Neural Engineering, 8(4), 046028.

Pfurtscheller, G., & Neuper, C. (2001). Motor imagery and direct brain–computer communication. Proceedings of the IEEE, 89(7), 1123–1134.

Plum, F., & Posner, J. B. (1972). The diagnosis of stupor and coma. Contemporary Neurology Series, 10, 1–286.

Regan, D. (1989). Human brain electrophysiology: Evoked potentials and evoked magnetic fields in science and medicine. New York, NY: Elsevier.

Riccio, A., Simione, L., Schettini, F., Pizzimenti, A., Inghilleri, M., Belardinelli, M. O., . . . Cincotti, F. (2013). Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis. Frontiers in Human Neuroscience, 7, 732.

Roark, B., Beckley, R., Gibbons, C., & Fried-Oken, M. (2013). Huffman scanning: Using language models within fixed-grid keyboard emulation. Computer Speech and Language, 27(6), 1212–1234.

Roark, B., Fried-Oken, M., & Gibbons, C. (2015). Huffman and linear scanning methods with statistical language models. Augmentative and Alternative Communication, 31(1), 37–50.

Scherer, R., Billinger, M., Wagner, J., Schwarz, A., Hettich, D. T., Bolinger, E., . . . Müller-Putz, G. (2015). Thought-based row–column scanning communication board for individuals with cerebral palsy. Annals of Physical and Rehabilitation Medicine, 58(1), 14–22.

Scherer, R., Müller, G. R., Neuper, C., Graimann, B., & Pfurtscheller, G. (2004). An asynchronously controlled EEG-based virtual keyboard: Improvement of the spelling rate. IEEE Transactions on Biomedical Engineering, 51(6), 979–984.

Sellers, E. W., & Donchin, E. (2006).
A P300-based brain–computer interface: Initial tests by ALS patients. Clinical Neurophysiology, 117(3), 538–548. Sellers, E. W., Vaughan, T. M., & Wolpaw, J. R. (2010). A brain– computer interface for long-term independent home use. Amyotrophic Lateral Sclerosis, 11(5), 449–455. Silvoni, S., Cavinato, M., Volpato, C., Ruf, C. A., Birbaumer, N., & Piccione, F. (2013). Amyotrophic lateral sclerosis progression and stability of brain–computer interface communication. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration, 14(5–6), 390–396. Sprague, S. A., McBee, M. T., & Sellers, E. W. (2015). The effects of working memory on brain–computer interface performance. Clinical Neurophysiology, 127(2), 1331–1341. Tankus, A., Fried, I., & Shoham, S. (2012). Structured neuronal encoding and decoding of human speech features. Nature Communications, 3, 1015. Thistle, J. J., & Wilkinson, K. M. (2013). Working memory demands of aided augmentative and alternative communication for individuals with developmental disabilities. Augmentative and Alternative Communication, 29(3), 235–245. Thistle, J. J., & Wilkinson, K. M. (2015). Building evidence-based practice in AAC display design for young children: Current 12 practices and future directions. Augmentative and Alternative Communication, 31(2), 124–136. Townsend, G., & Platsko, V. (2016). Pushing the P300-based brain–computer interface beyond 100 bpm: Extending performance guided constraints into the temporal domain. Journal of Neural Engineering, 13(2), 026024. Trnka, K., McCaw, J., Yarrington, D., McCoy, K. F., & Pennington, C. (2008). Word prediction and communication rate in AAC, In Proceedings of the IASTED International Conference on Telehealth/Assistive Technologies (Telehealth/AT ’08) (pp. 19–24). Baltimore, MD: ACTA Press Anaheim, CA, USA. Vansteensel, M. J., Pels, E. G. M., Bleichner, M. G., Branco, M. P., Denison, T., Freudenburg, Z. V., . . . Ramsey, N. F. (2016). 
Fully implanted brain–computer interface in a locked-in patient with ALS. New England Journal of Medicine, 375(21), 2060–2066. Vaughan, T. M., McFarland, D. J., Schalk, G., Sarnacki, W. A., Krusienski, D. J., Sellers, E. W., & Wolpaw, J. R. (2006). The Wadsworth BCI Research and Development Program: At home with BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2), 229–233. Vidaurre, C., & Blankertz, B. (2010). Towards a cure for BCI illiteracy. Brain Topography, 23(2), 194–198. Volosyak, I., Valbuena, D., Lüth, T., Malechka, T., & Gräser, A. (2011). BCI demographics II: How many (and what kinds of ) people can use a high-frequency SSVEP BCI? IEEE Transactions on Neural Systems and Rehabilitation Engineering, 19(3), 232–239. Vuckovic, A., & Osuagwu, B. A. (2013). Using a motor imagery questionnaire to estimate the performance of a brain–computer interface based on object oriented motor imagery. Clinical Neurophysiology, 124(8), 1586–1595. Wills, S. A., & MacKay, D. J. C. (2006). DASHER—An efficient writing system for brain–computer interfaces? IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2), 244–246. Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002). Brain–computer interfaces for communication and control. Clinical Neurophysiology, 113(6), 767–791. Wolpaw, J. R., & McFarland, D. J. (2004). Control of a twodimensional movement signal by a noninvasive brain–computer interface in humans. Proceedings of the National Academy of Sciences of the United States of America, 101(51), 17849–17854. American Journal of Speech-Language Pathology • Vol. 27 • 1–12 • February 2018 Copyright of American Journal of Speech-Language Pathology is the property of American Speech-Language-Hearing Association and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. 
However, users may print, download, or email articles for individual use.
Augmentative and Alternative Communication, 2015; 31(2): 124–136
© 2015 International Society for Augmentative and Alternative Communication
ISSN 0743-4618 print/ISSN 1477-3848 online
DOI: 10.3109/07434618.2015.1035798

RESEARCH ARTICLE

Building Evidence-based Practice in AAC Display Design for Young Children: Current Practices and Future Directions

JENNIFER J. THISTLE & KRISTA M. WILKINSON
Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park, PA, USA

Abstract

Each time a practitioner creates or modifies an augmentative and alternative communication (AAC) display for a client, that practitioner must make a series of decisions about which vocabulary concepts to include, as well as physical and organizational features of the display. Yet, little is known about what factors influence the actual decisions and their outcomes. This research examined the design factors identified as priorities by speech-language pathologists (SLPs) when creating AAC displays for young children (age 10 years and under), and their rationale for the selection of these priorities. An online survey gathered ratings and comments from 112 SLPs with experience in AAC concerning the importance of a variety of factors related to designing an aided AAC display. Results indicated that some decisions were supported by existing research evidence, such as choosing vocabulary, collaborating with key stakeholders, and supporting partner modeling. Other decisions highlight areas for future research, including use of visual scene display layouts, symbol background color, and supports for motor planning.

Keywords: Clinical practices; Display design; Survey; Augmentative and alternative communication

Introduction

Individuals with complex communication needs often rely on augmentative and alternative communication (AAC) to participate in communication interactions. An AAC system encompasses a variety of methods to support communication, such as gestures, sign language, communication boards, and speech generating devices (Beukelman & Mirenda, 2013). Techniques that utilize tools outside of the body, such as a communication board with graphic symbols or a computer programmed with voice output, are called aided AAC. Substantial evidence suggests the use of AAC interventions increases language development with individuals with a variety of communication disabilities (e.g., Binger & Light, 2007; Drager et al., 2006; Romski & Sevcik, 1996).

Once a system is selected, AAC intervention requires more than taking the device out of the box and handing it over to the individual. One of the challenges facing practitioners such as speech-language pathologists (SLPs), special education teachers, and occupational therapists is creating an aided AAC system that maintains an appropriate balance between the benefits of the communication afforded by the system and the costs of learning how to use it (Beukelman, 1991). Achieving this balance requires determining the needs and abilities of an individual, and using these characteristics of the individual to drive the selection and design of the system (Light & McNaughton, 2013a). For example, depending on the individual's visual or motor access abilities, the size of the symbols may or may not be an important feature to manipulate (Kovach & Kenyon, 2003).

A 2006 survey examined SLPs' perceptions on what contributes to success and abandonment of AAC technology (Johnson, Inglebret, Jones, & Ray, 2006). SLPs reported that an appropriate match between the individual and the system is one factor that promotes greater success with the device. Intrinsic abilities such as motor, cognitive/linguistic, literacy, and sensory perceptual skills must be assessed and compared to external features of systems to determine the best match. With the great variety of aided AAC technologies available, matching external features to intrinsic abilities is no small task. Ideally, practitioners are combining their practical knowledge and experiences with available evidence to inform a trial-based, feature-matching approach. However, such an approach may increase the time it takes an individual to reach competence with a system. Rather, if there are design decisions that follow specific patterns, these could potentially reduce the number of trials needed to identify the best fit for the individual. The first step in identifying decision processes that are more likely to result in positive outcomes is to better understand the factors considered by practitioners in designing AAC displays.

There has been limited research specifically exploring the kinds of decisions SLPs and other practitioners make related to display design. One of the only studies to date that examined this topic was a preliminary and qualitative study using a hypothetical case study format. Specifically, McFadd and Wilkinson (2010) provided six clinicians who specialize in the field of AAC with a partial list of vocabulary to include on an aided AAC display. Clinicians selected additional vocabulary and created a low-tech communication board for a hypothetical student to use during snack time. Clinicians were asked to narrate their thought processes while making decisions regarding the selection and arrangement of vocabulary on the display.

Correspondence: Jennifer Thistle, Department of Communication Sciences and Disorders, HSS 112, University of Wisconsin-Eau Claire, Eau Claire, WI 54702, USA. Tel: +1 715 836 6015. E-mail: thistljj@uwec.edu (Received 28 March 2014; revised 23 March 2015; accepted 24 March 2015)
The clinicians in McFadd and Wilkinson (2010) applied research-based recommendations by incorporating vocabulary that supported a variety of communicative functions (Adamson, Romski, Deffebach, & Sevcik, 1992; Light & Drager, 2005). For example, five of the clinicians included verbs and social terms to support a variety of interactions. Another clinician included only object labels, but described her rationale that those labels would support social communication by allowing the child and peers to talk about the foods they were eating. Five of the six clinicians used Boardmaker Picture Communication Symbols (PCS; Mayer-Johnson, 1992) to represent the content, based on the instruction from the researchers; the sixth pulled similar types of images from the Internet because her school did not have access to the Boardmaker program.

Choices related to arrangement of the symbols were less consistent across clinicians. All clinicians organized the vocabulary in some fashion, and those who included different types of words (e.g., objects, verbs, social-regulatory terms) created subsets based on those types. However, the placement of the subsets varied. For instance, some clinicians placed social-regulatory symbols along the top row while others placed the same symbols along the left-hand column. Still another clinician used spacing within the page to separate types of words. Finally, some clinicians used background color to distinguish different symbol word classes while another placed the symbols on a colored page to support navigation across multiple pages.

One challenge when examining clinical practices lies in the heterogeneity of individuals who can benefit from AAC systems. Children with developmental disorders, for example, may have very different needs and abilities compared to adults with acquired disorders.
The current study focused on practitioners, specifically SLPs, working with young children (aged 10 years and under) in an attempt to constrain some of the variability seen in AAC decision-making. Even when limited to elementary school children, however, the caseloads of SLPs will influence the experiences upon which they draw when designing AAC displays. In the 2014 American Speech-Language-Hearing Association (ASHA) Schools Survey (ASHA, 2014), the 55% of SLPs who regularly provided AAC-related services reported serving an average of five students. This represents 10% of an elementary school SLP's average monthly caseload, and 20% of a residential/special day school SLP's average monthly caseload (ASHA, 2014). Furthermore, SLPs who work in residential/special day schools reported that 71% of their caseloads consisted of students with severe communication impairments. It is likely, then, that SLPs will have had different experiences designing AAC displays.

Professional preparation in AAC also may influence SLPs' comfort level with providing AAC services, thereby affecting the decisions they make when designing AAC displays. In a survey of 71 SLPs, 72% rated their competence in providing AAC services as fair to poor (Marvin, Montano, Fusco, & Gould, 2003). Similar results emerged in surveys conducted in Egypt (Wormnaes & Abdel Malek, 2004) and New Zealand (Sutherland, Gillon, & Yoder, 2005). Such low levels of competence may be precipitated by the education and training provided by SLP programs. In a survey of SLP training programs, 33% of respondents felt that the majority of their students were prepared to work with individuals who use AAC (Ratcliff, Koul, & Lloyd, 2008). Just under half (47%) of the respondents reported that only up to one quarter of their students receive clinical practice in AAC.
In a review of research conducted from 1985–2009, Costigan and Light (2010) examined surveys of pre-service AAC training programs in the US, and reported that the majority of SLPs received minimal to no pre-service training in AAC. Thus, it is possible that the variability noted between SLPs in McFadd and Wilkinson's (2010) study reflected the SLPs' educational backgrounds and experiences with individualizing displays for the wide variety of children who use AAC. If there are some practices that professionals have found to be more successful than others, it is important to identify the successful approaches in order to reduce the number of trials of different features that an individual who uses AAC must go through.

Research Questions

This research addressed the following question: What design factors do SLPs identify as priorities when they create aided AAC displays for young school-aged children, and what are their rationales for the selection of these factors as priorities? Through an online survey, participants answered questions related to the decisions they make regarding vocabulary selection, symbol types and arrangement, and manipulation of a variety of visual features (e.g., size, color) of aided AAC displays. The responses were analyzed not only to gain a broad view of general clinical practices but also to understand the factors that might influence the decision-making process.

Method

Survey Development

Survey questions were developed and refined through initial pilot testing and reviews by experts in survey design. Initial questions were developed to target decisions related to vocabulary selection, symbol types and arrangement, and manipulation of a variety of visual features of aided AAC displays. The initial pilot survey from which the final version was developed was completed by three SLPs with an average of 8 years (range: 7–10 years) of experience providing AAC services to children.
Feedback from the pilot participants ensured that the focus of the questions centered on the goals of the study. The university's Survey Research Center then reviewed the survey for structure and adherence to survey design principles. As a result, demographic questions were moved from the beginning to the end of the survey, based on the rationale that it may be perceived as less intrusive to answer personal demographic questions at the end of a survey (Groves et al., 2009).

The final version of the survey consisted of 42 questions intended to solicit information about two aspects of display design: the principles guiding aided AAC display design in general, and child-specific decisions driven by a given case study. Participants advanced through the survey sequentially (answering the general questions first, then the child-specific questions) and received an error message if they attempted to advance without completing a question. Therefore, as the survey progressed, no questions could be skipped. Because the survey could be abandoned prior to completion, a greater number of responses were provided to questions asked earlier in the survey than to those asked later. The current study reports on responses related to the first section because the goal was to outline general principles guiding aided AAC display design decisions. Appendix A (to be found online at http://informahealthcare.com/doi/abs/10.3109/07434618.2015.1035798) presents these survey questions. The results of the answers related to the specific case will be reported in a separate study.

Participants

Target participants were practicing SLPs who (a) maintained a current certificate of clinical competence from ASHA, (b) had at least 1 year of experience supporting individuals who use AAC, and (c) provided AAC services to school-age children aged 10 and under.
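As a minimal illustration (the function and parameter names below are ours, not part of the survey instrument), the three inclusion criteria amount to a simple conjunctive screen:

```python
# Illustrative screen for the three eligibility criteria listed above.
# Names are hypothetical; the study screened respondents via survey questions.
def eligible(holds_asha_ccc, years_supporting_aac, serves_children_10_and_under):
    """True only if all three inclusion criteria are met."""
    return (holds_asha_ccc                      # (a) current ASHA certificate
            and years_supporting_aac >= 1       # (b) at least 1 year of AAC experience
            and serves_children_10_and_under)   # (c) serves children aged 10 and under

print(eligible(True, 5, True))    # meets all criteria -> True
print(eligible(True, 0.5, True))  # under 1 year of experience -> False
```

Respondents failing any one criterion were excluded, which is exactly the short-circuiting `and` behavior sketched here.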
The online survey was available for 12 weeks, and participants were recruited through multiple contact points to allow adequate opportunity for responding and to increase the sample size (Dillman, Smyth, & Christian, 2009). Qualtrics online survey software hosted the web-based survey. Participants completed the survey at a computer of their choosing and were able to take the survey over multiple sessions if they chose to do so. The university's Office for Research Protections provided human subjects approval for this research project. An implied consent form was embedded as the first page of the online survey, and participants were advised that continuing the survey indicated consent. Participants had the option of downloading the implied consent form if desired.

Survey Distribution

Members of two listservs were contacted at three time points. The listservs were the ASHA Special Interest Group 12, Augmentative and Alternative Communication (SIG-12), and Quality Indicators for Assistive Technology (QIAT). A general recruitment notice describing the study and soliciting participation was posted to each listserv at the initial time point, 3 weeks later, and again 7 weeks from the initial posting. Throughout this data collection period, in-person recruitment also occurred during the ISAAC biennial convention. Finally, appeals to personal contacts and postings on social media websites provided additional advertising regarding availability of the survey.

Data Analysis

The survey consisted of a mix of open-ended and closed-ended questions. Descriptive methods of data analysis were utilized due to the exploratory nature of the questions and the goal of the survey to identify trends to inform future research directions. Descriptive data in the form of frequency tables were used to examine the closed-ended questions. The open-ended questions were coded for common themes using scrutiny techniques (Ryan & Bernard, 2003).
Like Ryan and Bernard, three research assistants and the first author initially identified themes and subthemes by reading each response, listing commonly repeated terms, and identifying similarities and differences in responses. Refinement of the themes and subthemes occurred during a cycle of consensus coding. The research team formally defined the codes in a codebook that contained each definition as well as examples and non-examples. A summary page of the codes is presented in Appendix B (to be found online at http://informahealthcare.com/doi/abs/10.3109/07434618.2015.1035798). Two of the primary codes were each further refined into five secondary codes, resulting in 16 possible codes. The primary codes were used to identify responses that (a) were unclear to the coder, (b) noted features the participant did not think were important, (c) related to the child's skills and abilities, (d) related to the communication demands, (e) related to the AAC device, and (f) related to key stakeholders (e.g., clinicians, teachers, communication partners). The secondary codes provided detail related to the child's abilities (e.g., vision abilities) and the communication demands (e.g., functional vocabulary for the setting).

The first author divided the responses into individual thought units consisting of the smallest meaningful piece of information contained in the response (Fraenkel, 2006). Typically, the thought units corresponded with a participant's sentence. However, when the participant included a variety of ideas in one sentence, the resulting thought units were individual words or phrases. Research assistants then assigned one code per thought unit. Inter-observer agreement was assessed on the final coding of the thought units. Two research assistants each independently coded thought units.
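The one-code-per-thought-unit scheme and the chance-corrected agreement statistic the authors report can be sketched in a few lines of Python. The code labels and the toy data are hypothetical; `cohens_kappa` is a standard formulation of Cohen's kappa, not the authors' own code:

```python
from collections import Counter

# The six primary codes from the codebook described above (the two sets of
# secondary codes, 16 codes in total, are omitted here for brevity).
PRIMARY_CODES = {"unclear", "not_important", "child_abilities",
                 "communication_demands", "aac_device", "key_stakeholders"}

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same thought units."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders assign one primary code per thought unit.
coder_1 = ["child_abilities", "key_stakeholders", "aac_device", "child_abilities"]
coder_2 = ["child_abilities", "key_stakeholders", "aac_device", "communication_demands"]
assert all(code in PRIMARY_CODES for code in coder_1 + coder_2)
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.67
```

A value of 1.0 indicates perfect agreement and 0 indicates chance-level agreement, which is why kappa (rather than raw percent agreement) is the conventional reliability check for categorical coding.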
After a period of training, the coders reached a minimum Kappa coefficient of .7 (Fleiss, Levin, & Paik, 2003). Coders then individually coded all questions, and reliability was subsequently recalculated on 25% of the responses. Agreement on each question was on average .76 (range: .71–.78). Kappa values of .4–.75 are considered good, and values of .75 or greater signify excellent agreement (Fleiss et al., 2003).

Results

Responses

In total, 192 individuals accessed the survey. Of those, 24 dropped out during the initial screening section, 17 were excluded because they did not meet the selection criteria, and two reported living outside the United States, in Canada and South Africa. Due to the small number of international participants, these responses were excluded from the final analysis. Of the 149 eligible participants, 112 completed the broad design questions but provided only some demographic data, 77 completed the entire survey (including all demographic data), and 37 did not complete the primary questions. The presentation of the results follows the sequence of the survey, although demographics and initial design decisions for the 77 participants who provided that information are described first. A discussion of the clinical implications and future directions follows the survey results.

Demographics

Of the 77 participants who provided complete demographic data, 60 (78%) were members of ASHA SIG-12, and nearly half (48%) reported living in the Northeast. Table I presents a summary of the demographic information, including distribution by geographical region, participant gender, and race/ethnicity. One of the screening questions asked participants' years of experience supporting children who use AAC. Thus, although only 77 participants completed the demographics section of the survey, all 112 participants provided years of experience.
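The participant flow reported in the Results is internally consistent, which a quick arithmetic check confirms (variable names are ours, introduced only for this check):

```python
# Arithmetic check of the participant-flow figures reported in the text.
accessed = 192
dropped_during_screening = 24
excluded_not_meeting_criteria = 17
excluded_international = 2

eligible = (accessed - dropped_during_screening
            - excluded_not_meeting_criteria - excluded_international)
print(eligible)  # 149, matching the reported number of eligible participants

completed_broad_design = 112
did_not_complete_primary = 37
# The eligible group splits cleanly into completers and non-completers.
assert completed_broad_design + did_not_complete_primary == eligible

completed_demographics = 77
broad_design_only = completed_broad_design - completed_demographics
print(broad_design_only)  # 35, the n for the broad-design-only group
```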
Table II presents the proportion of participants who completed the broad design questions but did not complete the demographic questions, and those who completed both sections, by their level of experience. The following results address similarities and differences observed in responses across the different levels of experience.

Table I. Percentage of Participants by Geographical Region, Gender, and Race/Ethnicity.

Characteristic            n      %
Geographic region
  Northeast              37     48.0
  Southeast              12     15.6
  North Central           7      9.1
  South Central           6      7.8
  West/Mountain          15     19.5
Gender
  Female                 75     97.4
  Male                    2      2.6
Race/ethnicity
  White/Caucasian        72     93.5
  African American        1      1.3
  Hispanic                3      3.9
  Asian                   1      1.3

Initial Design Decisions

One of the first decisions a clinician must make when creating a new display for young children is whether to modify the page set provided by the manufacturer. Of the 77 participants answering this question, 60 (78%) reported often or always making changes to the page set provided by the AAC manufacturer, and 8 (10%) reported rarely or never making changes to the page set provided by the manufacturer. An examination of the responses by level of experience did not reveal a distinctive pattern related to experience level. Across most experience levels, only 9% (5 out of 53) of the participants reported rarely or never making changes. However, 21% (3 out of 14) of the participants with 13–20 years of experience reported rarely or never making changes. This difference in responding by participants with this level of experience recurs throughout the survey and will be explored in the Discussion section.

Decisions Related to Vocabulary Selection

SLPs often play a key role in choosing the display content, including what concepts and communication functions the content supports.
Several themes emerged in terms of decisions made by SLPs with regard to the importance of child preferences, other stakeholders, the role of core vocabulary, and the range of word classes to include.

Table II. Number and Percentage of Participants Completing the Broad Design and the Demographic Questions Sections.

                        Completed demographic        Completed only broad
                        questions (n = 77)           design questions (n = 35)
Years of experience        n        %                   n        %
1–3                        8       10.4                 3        8.6
4–7                       23       29.9                 7       20.0
8–12                      13       16.9                 5       14.3
13–20                     14       18.2                11       31.4
21 or more                19       24.6                 9       25.7

Child Preferences

The child's preferences were noted to be extremely or fairly important in vocabulary selection by 87% (97 out of 112) of respondents. Figure 1 illustrates the level of importance participants placed upon the child's preferences in their vocabulary selection process, for each level of clinician experience. All of the participants with 1–3 years of experience felt the child's preferences were extremely or fairly important. On the other hand, 28% (7 of 25) of respondents with 13–20 years of experience indicated that the child's preferences were only somewhat or not very important. This accounted for half of the 13% (15 out of 112) of all participants who felt that the child's preferences were somewhat or not very important.

Additional Priorities

An open-ended question provided some insight into what additional priorities respondents felt were important when choosing vocabulary. Figure 2 illustrates the main categories of priorities identified by participants, by years of experience.

Key Stakeholders

As a whole, 46 of 112 (41%) participants mentioned collaborating with key stakeholders. Once again, however, there was a somewhat unusual pattern in the group with 13–20 years' experience, as only 20% (5 out of 25) mentioned key stakeholders, compared to 47% (41) of the remaining 87 participants. Some participants mentioned specific instances that would influence vocabulary selection.
One wrote, "I add vocabulary based on family preferences too – parents often like some of the politeness terms, for instance." Other participants were more general in their description and rationale of including key stakeholders: "I take into account what the family and classroom find important."

[Figure 2. Percentage of participants mentioning additional vocabulary selection considerations beyond a child's preferences, by years of experience.]

Role of Core Vocabulary

Clinicians with more than 13 years of experience reported choosing core vocabulary based on frequency of words more often (15%, 8 out of 53) than those with less than 13 years of experience (8%, 5 out of 59). Some respondents prioritized core vocabulary above the child's or key stakeholders' preferences. For instance, one participant stated, "I would ONLY consider the child's vocabulary preferences when I am including personal words that are part of the child's extended vocabulary set. These personal extended vocabulary words are second on my list after CORE vocabulary" (emphasis provided by participant). This quote also illustrates a challenge inherent in design decisions: many decisions influence other decisions, and trade-offs must be made. In this case, it seems the participant was making a choice between providing core vocabulary or personalized vocabulary.

Range of Word Classes

Most participants indicated that they frequently used a variety of word forms in support of language acquisition and use. Specifically, participants incorporated subjects (82%, 92 of 112), actions (97%, 109 of 112), objects (83%, 93 of 112), and emotion words (84%, 94 of 112) most or all of the time. Figure 3 shows the frequency with which participants incorporate each of these types of words, by their level of experience.
In all, 93% (26 out of 28) of participants with the most experience reported incorporating emotion words most or all of the time, whereas on average 81% (68 out of 84) of the clinicians with other levels of experience incorporated emotion words most or all of the time.

[Figure 1. Percentage of participants who rated the child's preference in vocabulary selection as either fairly/extremely important or not very/somewhat important, by years of experience.]

[Figure 3. Percent of participants indicating the frequency with which they include subjects, action words, descriptors, and emotion words in displays, by years of experience.]

Decisions Related to Symbol Type

Following identification of appropriate content, SLPs consider options regarding how best to represent that content. The type of symbol representation was rated as fairly or extremely important by 100% of participants with 1–3 years' experience (n = 11), but only 76% (77 out of 101) of all other participants. When asked what factors influence their choice of symbol type, 90% (10 out of 11) of the participants with 1–3 years of experience cited the child's cognitive abilities as an important consideration. Across all other experience levels, just under half (45%, 45 out of 101) reported that the child's cognitive level should be considered.

Decisions Related to Visual Features of the Display

The visual nature of an aided AAC display allows for manipulation of such features as symbol arrangement and display layout, symbol size, and use of color. Three themes that emerged from this survey concerned choices related to (a) the type of display layout, (b) the use of black and white versus colored symbols, and (c) the use of background color cuing.

Type of Display Layout
When asked to estimate the percentage of hybrid or visual scene displays (VSDs) participants design, 83% (64 out of 77) reported using these displays less than 25% of the time, suggesting grid-based displays were used most of the time. Only 3% (2 out of 77) of participants reported using VSD/hybrid displays more than 50% of the time.

Use of Symbol Internal Color

Of the 112 participants, 94 (84%) reported utilizing symbols that have internal color most or all of the time; only one participant indicated very rarely using color symbols, but did not provide a reason in the following open-ended question. Despite the widespread use of symbol color, 78 (69%) reported using black and white symbols some of the time. Participants reported that black and white symbols were used to highlight new symbols, when color would not contribute to the meaning (e.g., prepositional words), or when team members did not have access to color printers.

Use of Background Color

Color can also be featured in the background of symbols. All participants reported using symbol background color at least sometimes, and 49 (43%) reported that they used it most of the time, a trend that was consistent across all experience levels. The top two reasons provided regarding use of background color were (a) to support development of grammatical skills through color coding parts of speech, and (b) to draw attention to specific symbols. Using background color as a cue to word class category reflects a common clinical recommendation (Goossens', Crain, & Elder, 1999; Kent-Walsh & Binger, 2009); however, to date there has been no research that specifically examines whether the dimension of color aids in learning appropriate sentence structure.
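As a concrete, purely illustrative example of the color-cuing convention just described, display-authoring software might hold a fixed word-class-to-background map. The particular assignments below loosely echo a modified Fitzgerald-style key and are not drawn from the survey responses:

```python
# Illustrative word-class -> background-color map for grammatical color coding.
# The assignments loosely follow a modified Fitzgerald-style key; they are an
# example only, not a scheme reported by the survey participants.
WORD_CLASS_BACKGROUND = {
    "people": "yellow",
    "verbs": "green",
    "descriptors": "blue",
    "nouns": "orange",
    "social": "pink",
}

def background_for(word_class, draw_attention=False):
    """Choose a symbol background; draw_attention highlights a targeted symbol."""
    if draw_attention:
        return "red"  # hypothetical highlight color for new or target symbols
    return WORD_CLASS_BACKGROUND.get(word_class, "white")  # default background

print(background_for("verbs"))                       # green
print(background_for("verbs", draw_attention=True))  # red
```

Centralizing the map in one place keeps the cuing consistent across every page of a display, which is the property that makes color a usable grammatical cue in the first place.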
J. J. Thistle & K. M. Wilkinson

Additional Decisions

When given the opportunity to discuss any additional general factors not previously mentioned, 32% (36 out of 112) of participants supported the use of consistency in display design to support motor planning and automaticity. In this approach, the location of previously learned symbols on the display does not change as new symbols are added to the display. Finally, an additional feature participants consider was mentioned in response to several different, unrelated, open-ended questions: designing the display in a way that supports partner modeling.

Discussion

The goal of AAC intervention for a child is to provide support for participation and language development across all environments, facilitating early communication (Light & Drager, 2005; Romski & Sevcik, 1996), advancing linguistic growth and functional communication (Binger & Light, 2007; Drager et al., 2006; Johnston, McDonnell, Nelson, & Magnavito, 2003), and providing early literacy experiences (Koppenhaver & Erickson, 2003; Light & McNaughton, 2013b). Designing an appropriate AAC display is one part of AAC intervention that may contribute to this goal. There are many factors to consider when designing an AAC display, including but not limited to appeal of the display (Light, Drager, & Nemser, 2004; Light, Page, Curran, & Pitkin, 2007), ease of use (Drager, Light, Speltz, Fallon, & Jeffries, 2003; Drager et al., 2004; Fallon, Light, & Achenbach, 2003), and communicative functions supported by the display (Adamson et al., 1992; Romski & Sevcik, 1996). Furthermore, these considerations must be weighed against the needs and abilities of the child who will be using the display (Light & McNaughton, 2013a). Certainly, as a field, we are at times successful as we strive toward this goal; at other times, however, we do not succeed (Johnson et al., 2006; Snell et al., 2010).
In this study, SLPs reported considering a number of factors in the development of AAC displays, suggesting at least some awareness of the need for AAC systems to be responsive to the needs of the child and team. This is in line with the multifaceted nature of AAC intervention that supports long-term success (Johnson et al., 2006). The reported decisions regarding vocabulary selection suggest that many clinicians prioritize the child’s preference while also considering input from others. The Participation Model presented by Beukelman and Mirenda (2013) highlights the importance of considering the interests and abilities of the child during the assessment process and subsequent intervention. Along these lines, in a survey of SLPs’ perspectives on AAC success and abandonment, two of the top five factors associated with long-term success were the degree to which the device (a) matched the individual’s physical capabilities, and (b) was valued as a means of communication (Johnson et al., 2006). Research also illustrates the importance of involving a range of key stakeholders (e.g., the child, family, teachers, specialists, etc.) at all stages of intervention (Beukelman & Mirenda, 2013; Ogletree, 2012). Practicing a family-centered approach may reduce device abandonment through the provision of supports that result in a good fit between device and family members (Angelo, 2000; Jones, Angelo, & Kokoska, 1999). The collaboration of various stakeholders, including SLPs, teachers, and paraprofessionals, has been shown to increase communication initiation and engagement and decrease classroom assistance needed by students who use AAC (Hunt, Soto, Maier, Müller, & Goetz, 2002).

Decisions regarding vocabulary selection highlighted the inclusion of core vocabulary.
While there is a breadth of research describing typical vocabulary development and listing commonly used words by age (Ball, Marvin, Beukelman, Lasker, & Rupp, 1999; Banajee, Dicarlo, & Stricklin, 2003; Beukelman, Jones, & Rowan, 1989), the effect of providing core vocabulary on language development in children who use AAC has not yet been directly studied. Furthermore, the provision of core vocabulary to the exclusion of personalized vocabulary runs counter to both recommended practices for people who use AAC (Williams, Krezman, & McNaughton, 2008) and factors contributing to device success (Johnson et al., 2006; Murphy, Markova, Collins, & Moodie, 1996). Evidence suggests that providing children with a range of vocabulary increases the frequency of using AAC (Beukelman, McGinnis, & Morrow, 1991; Yorkston, Honsinger, Dowden, & Marriner, 1989). Additionally, research has illustrated that when provided with a range of symbols across word classes, children who use AAC learn and use those symbols for more complex and varied communicative functions beyond requesting (Adamson et al., 1992; Light & Drager, 2005). Thus, although the decision process for the provision of vocabulary relies heavily on clinical experience and stakeholder input, available evidence suggests that it is critical to include a variety of word classes and vocabulary specific to the child.

When making decisions about the type of symbol to use, nearly half (49%) of the clinicians said they consider the child’s cognitive abilities. They may have been relying in part on the literature base regarding iconicity to inform their decisions (e.g., Fuller & Lloyd, 1991; Mizuko & Reichle, 1989). Within this literature there is strong evidence supporting the hypothesis that more iconic symbols (i.e., symbols that have a high degree of resemblance to the referent) are more easily learned than those that are less iconic (for a complete review, see Schlosser & Sigafoos, 2002).
Clinicians also reported other considerations (e.g., planning for the future, device software, symbol availability) when choosing the symbol type. Such extrinsic issues (e.g., team member familiarity with specific symbol sets) are often a consideration when developing an AAC system; however, too great a focus on designing a display in response to others runs the risk of neglecting the child’s needs and abilities, which could ultimately undercut the goal of AAC.

Once the vocabulary and type of symbol are chosen, decisions are made about the visual appearance of the display, such as the layout type, use of color, and number or size of the symbols. There are numerous factors to consider, which is likely to result in heterogeneous decisions. However, the results of the survey highlighted three factors that illustrate some consistency across respondents: display layout, internal symbol color, and background color. Overall, grid-based designs were reportedly used more often than VSDs. Historically, the layout of symbols has been in a grid format, with individual concepts represented by isolated symbols arranged in rows and columns (Zangari, Lloyd, & Vicker, 1994). More recently, a visual scene display layout, in which concepts are embedded within the context in which they naturally occur, has been proposed (Light & Drager, 2007; Shane, 2006; Wilkinson & Light, 2011). Research with young children without disabilities suggests VSDs may be an appropriate layout for beginning communicators (Drager et al., 2003, 2004; Light & Drager, 2007; Olin, Reichle, Johnson, & Monn, 2010). Although the clinicians in the current study worked with young children, no further caseload information was collected. It is possible that low VSD usage rates were a reflection of either caseloads that do not include many beginning communicators or a lack of awareness of VSDs, which were first introduced in the early 2000s.
If the latter is the case, it highlights the need for pre-service training that is thorough and evidence based, and for in-service training that provides information on current research and newly emerging evidence.

A majority (84%) of SLPs reported using symbols with internal color most of the time, suggesting consistency across clinicians with respect to this factor. Research from the field of visual cognitive science suggests color plays a role in a variety of basic perceptual processes, including drawing attention to objects, distinguishing between similar objects, and contributing to object recognition (Gegenfurtner & Rieger, 2000; Wurm, Legge, Isenberg, & Luebker, 1993; Xu, 2002). More closely linked to AAC research, Wilkinson and colleagues have examined the effect of arranging symbols based on their internal color (Wilkinson, Carlin, & Jagaroo, 2006; Wilkinson, Carlin, & Thistle, 2008; Wilkinson, O’Neill, & McIlvane, 2014). Children with and without disabilities have consistently demonstrated benefits of grouping like-colored symbols, as measured by faster reaction times in locating a target symbol within an array. However, one study with children with autism compared identification and generalization in the learning of color symbols to the learning of grey-scale symbols (Hetzroni & Ne’eman, 2013). In an alternating treatment design study, all four children successfully learned and maintained recognition of the new vocabulary, regardless of the level of color included in the symbols. Further research is necessary, but the results reported by Hetzroni and Ne’eman provide evidence that, although color is pervasive and easily incorporated, symbols may not need color to be learned and used.

Another consistent response concerned the use of color in the background of symbols.
Although clinicians reported multiple reasons for using background color, to date, research related to the use of background color has been limited to its use as a cue for organizing vocabulary on the display. Thistle and Wilkinson (2009) compared the response time of children with typical development in locating symbols across several conditions, including variations of background color. The purpose of the background color was to add a secondary cue that symbols with a similar background color were within the same group (e.g., orange background behind green vegetables, pink background behind yellow fruits and vegetables). Wilkinson and Snell (2011) used background color as a cue to the meaning of the symbol by having PCS emotion symbols of similar category (e.g., happy, sad, angry) share background color. In both studies, the presence of background color detracted from the performance of participants under the age of 4 years. One hypothesis is that the background color attracted too much attention and distracted participants from the critical information: the symbol content. These results suggest that caution is warranted when deciding on the use of background color for a specific individual.

One third of the respondents independently mentioned consistency as an important principle to follow when designing a display. Specifically, they suggested that organizing the symbols on the display in a consistent manner may support motor planning. The effect of motor learning can be observed in daily activities such as typing on a keyboard or playing an instrument. With practice, the effort involved becomes more automatic. Motor learning theory posits that practice leads to changes in motor patterns that in turn result in changes in the cognitive resources required to complete the motor movement (Fitts & Posner, 1967; Rosenbaum, 2009).
However sound the theory, evidence specific to AAC has so far been limited to case studies and vendor-developed strategies targeted toward individuals with autism spectrum disorders (Cafiero & Delsack, 2007; Stuart & Ritthaler, 2008). Suggestions for future research are offered in an upcoming section.

Finally, although the survey did not specifically ask about partner modeling, some participants mentioned the importance of this technique. Research supports the use of partner language modeling as a method of supporting comprehension of input (Cafiero, 2001; Drager et al., 2006; Goossens’ et al., 1999; Romski & Sevcik, 1996). Romski and Sevcik (1996, 2003) also argued that by providing augmented input, partners not only provide a visual scaffold for the verbal input but also demonstrate both how to use the display and the acceptability of using aided AAC as a mode of communication. The survey responses suggest that participants recognize the potential value of partner modeling and seek to facilitate its practice.

Factors Contributing to Differences in Decision Making

The current survey suggests differences in clinical decision making that may reflect a variety of factors, such as years of experience, characteristics of caseloads, or availability of relevant pre-service and in-service training. Future research is necessary to determine whether these differences are replicated in future studies and, if so, what factors contribute to them. One possible explanation may be differences in training experiences. As the field evolves, so too does training and education, such that today’s students enrolled in SLP training programs, taught under the current ASHA scope of practice (ASHA, 2007), may learn different content than the SLP educated 15 or 20 years ago (Koul & Lloyd, 1994; Ratcliff et al., 2008).
For instance, the results of the current study revealed that the proportion of SLPs who rated the child’s preferences as highly important was much larger in the group with 1–3 years of experience compared to SLPs with 13–20 years of experience. In 2004, the ASHA scope of practice guidelines underwent a substantial revision that involved significant shifts in emphasis with respect to family/child-centered practices and the inclusion of AAC as a service area (ASHA, 2007). In the current study, if the majority of clinicians with fewer years of experience in AAC service provision received their training after 2004, a shift in pre-professional experiences might be reflected in their decision-making. Alternatively, regardless of training, differences in perspectives might emerge depending on number of years of experience with a variety of children. Consider, for example, the responses related to the importance of the type of symbol: Given that the range of available symbol sets has proliferated over just the last few years, training may account for the differences between those clinicians with the greatest and the least number of years of experience, but only if the less experienced clinicians were trained in the last few years and only if their training included an introduction to the range of technologies/symbols. These clinicians may rate symbol type as a high priority, based on the knowledge of a variety of symbol types. A plausible alternative explanation is that longer experience imbues clinicians with insights about the additional factors that may also influence decisions about symbol type. For instance, perhaps these clinicians feel that the specific representation is less important than other features of the display, such as the organization or flexibility of layouts.
Decisions are likely influenced by the outcomes of previous experiences, allowing the clinician to weigh factors that colleagues with less experience may not consider. However, it is also quite possible that previous experiences may unduly influence current decision-making, ultimately undermining the best course for the current case.

Future Research Directions

The results of this study highlight that a great deal remains unknown about how to support SLPs in contributing to the development of effective AAC displays. What are best practices? What education, tools, and training experiences support developing these best practices? How do experience and caseload influence decision-making? The following are suggested areas for future research that may help to answer these questions.

Identifying Best Practices.

This survey sought to determine current practices, as a first step toward identifying display design decisions that were and were not common across clinicians. Additionally, the decisions were examined within the context of available evidence. Decisions regarding the type of layout, use of background color, and support for the development of motor planning are three areas in which future research is needed.

Type of Layout.

The great majority of the SLPs in this study reported that they typically make use of a traditional grid display. Although research with young children without disabilities suggests VSDs may be an appropriate layout for beginning communicators (Drager et al., 2003, 2004; Light & Drager, 2007; Olin et al., 2010), this type of display was not considered to be a first choice by clinicians in this sample. Grid-based layouts have been available for far longer, thus a greater research base exists concerning interventions in which these layouts are used. However, the limited consideration of emerging evidence-based practices suggests additional research is needed that addresses both practices and outcomes related to implementing VSDs.
In relation to the practices, one line of research could identify the pitfalls and challenges that practitioners face when implementing VSDs; another could focus on determining the efficacy of this layout for individuals with various etiologies and communication goals. Such studies could seek to illustrate individual characteristics or communication situations that are best served by each type of display layout.

Use of Background Color.

Using background color to support sentence structure is a common clinical recommendation reported by SLPs, yet this practice has not been explored in the literature. Future research could ask what effect background color cues have on the proficiency of constructing a grammatical sentence. Using background color as a cue to word class category requires the child to understand and apply multiple concepts; that is, he or she must understand that words belong to different word class categories and that the color behind the symbol denotes that category. Research is needed to determine the effect of background color on the visual processing of the display and how this may vary given individual characteristics.

Supporting the Development of Motor Planning.

Future research is needed to examine the impact of supporting motor planning. The Language Acquisition through Motor Planning (LAMP; The Center for AAC and Autism, 2009) intervention combines principles of neurological and motor learning to teach language. The motor planning aspect stresses the importance of maintaining consistent locations of symbols. Rather than learning the meaning of the symbol representation, the child learns the locations that result in the desired communication. For instance, rather than learning the specific characteristics of the symbol representing HUG through repeated practice in visually searching for and locating the symbol, the child learns the specific motor movements required to access that symbol.
One potential challenge of this approach occurs if an individual is ever required to deconstruct the learned motor pattern (cf. Light & Lindsay, 1991). Once component skills are learned as one motion, it is effortful to break the grouped process back into its component parts (Fitts & Posner, 1967). Thus, if sentence construction is based solely on a motor plan, rather than knowledge of language structure, a transition to a new system would require starting over and learning new patterns. Empirical research is needed to determine how learning the motor patterns influences learning and using language in the act of communicating. Additionally, the use of LAMP intervention has focused on individuals with autism spectrum disorders. Research should seek to identify the benefits and challenges of this intervention across etiologies.

Tools to Support Decision-making.

Future research that identifies a comprehensive framework to support clinical decision-making would have clinical utility. Such an approach has been suggested in relation to completing an assessment (a) to aid in device selection (Dietz, Quach, Lund, & McKelvey, 2012), (b) to assess dysarthria and the potential for AAC for individuals with amyotrophic lateral sclerosis (Hanson, Yorkston, & Britton, 2011), and (c) to determine functional seating and positioning for children with cerebral palsy (Costigan & Light, 2011). A checklist or flowchart could support practitioners as they consider various features related to selection and representation of vocabulary, and arrangement and visual features of associated symbols. This tool may offer a double-check mechanism for clinicians who regularly support children who use AAC, perhaps limiting the danger of decision making that over-emphasizes previous success with another child rather than a clear focus on the needs of the current child.
Additionally, clinicians who have a more varied caseload and support only a small number of children who use AAC could benefit from a checklist guiding them through the components associated with designing a display. Through a comprehensive consideration of the critical features of a display in relation to the child’s needs and abilities, this tool may help clinicians demonstrate decisions that balance experience with empirical evidence. Creation of such a tool requires research that identifies the features that are important and the factors that may influence the relative importance of any one feature.

Education and Experiences of Clinicians.

Replication and expansion of the current survey is needed in order to fully characterize practices at different levels of experience and geographic location. A more in-depth exploration of training and clinical practice across years of experience may provide insight into factors that influence display decisions. With only two individuals outside of the United States providing responses, it was not possible to report on international practices. Further research is needed to identify practices and challenges that may differ as a result of geographical factors (e.g., policies, training, resources). Additionally, future research should seek to validate that reported practices are, in fact, those that practitioners follow. For example, a qualitative study using in-depth interviews that focus on specific children within a practitioner’s caseload could offer a description of actual display designs created, including practitioner rationales as well as outcomes related to both the child and the display designs.

Limitations of the Study

Perhaps the most critical limitation of this study is the small number of participants who completed the full survey.
A true response rate cannot be calculated due to the recruitment method of posting the notice on listservs; it can be reported that 192 individuals initially accessed the survey. The survey was very detailed, asking participants to respond to multiple open-ended questions. This may have contributed to attrition, resulting in only 77 participants who completed the entire survey. However, analysis showed that the responses of those who did not complete the full survey were quite similar to the responses of the 77 who did. It is possible that those who took the survey are not representative of all SLPs who work with children under 10 who use AAC, as they may have had a particular interest in the modifications that go into designing AAC displays. For instance, participants were primarily located through postings to listservs associated with AAC and other assistive technology. It is possible the SLPs who were members of those listservs were more interested in display design, or simply in discussing their practices. Additionally, although the inclusion criteria required a minimum of 1 year of experience providing services to children who use AAC, individual experiences could be vastly different. It is possible that one participant’s caseload included only a handful of children who use AAC, while another practitioner with the same years of experience may have worked exclusively with children who use AAC. Other unknowns include pre-service education, formal/informal in-service trainings, and the extent and type of teaming, all of which may contribute to the ways in which a practitioner designs an AAC display. On the one hand, such different experiences could contribute to the different responses provided by participants in the current study.
On the other hand, if this was the case, the similarities and differences seen across these participants particularly illuminate the challenges of display design. This small sample of participants recognized the need to individualize displays, but demonstrated a variety of approaches and rationales for the designs they create. Finally, in the section of the survey described here, participants were asked to describe their decisions when designing displays in general. They could have been thinking of one child or several children as they completed the survey. Responses may be based on some memory of related display design issues, but it is impossible to determine how accurate those memories were. Furthermore, participants might have felt that this task was asking them to justify their professional actions, introducing the possibility of social desirability bias (Nederhof, 1985; Podsakoff, MacKenzie, Lee, & Podsakoff, 2003), which occurs when participants provide responses that reflect their belief in the desirability of a particular response rather than their true feelings. It is difficult to know if a response truly reflects what a participant believes is the best practice. Survey design methods and statistical analyses can be utilized to reduce the influence of social desirability bias (Nederhof, 1985; Podsakoff et al., 2003). Although responses in the current survey may have reflected a social desirability bias, the self-administered and anonymous nature of the survey may have reduced the likelihood of such bias.

Conclusion

The survey results indicate that the majority of SLPs are modifying and individualizing aided AAC displays for children under 10 years of age.
Furthermore, clinical practices related to supporting a range of communicative functions, making vocabulary selection decisions, and collaborating with team members, including supporting partner modeling, appear to be commonly utilized by clinicians. Many of these practices have an existing research evidence base. However, other practices represent areas in which future research is needed, such as creating VSDs, utilizing symbol background color, and supporting motor planning. Research examining the effect of such design decisions may strengthen an evidence-based approach by adding empirical support to commonly observed clinical recommendations. Specifically, research should answer the question of what advantage the feature in question offers, and to whom. Addressing this question could help identify best practices for designing AAC displays. Once best practices are identified, a next step may be to determine how best to support clinicians’ application of those best practices (e.g., what training, tools, and supports are needed to increase the use of desired practices?).

References

Adamson, L. B., Romski, M. A., Deffebach, K., & Sevcik, R. A. (1992). Symbol vocabulary and the focus of conversations: Augmenting language development for youth with mental retardation. Journal of Speech and Hearing Research, 35, 1333–1343. doi:10.1044/jshr.3506.1333
American Speech-Language-Hearing Association (2007). Scope of practice in speech-language pathology. Retrieved from http://www.asha.org/policy/SP2007-00283/
American Speech-Language-Hearing Association (2014). 2014 Schools survey report: SLP caseload characteristics. Retrieved from www.asha.org/research/memberdata/schoolsurvey/
Angelo, D. (2000). Impact of augmentative and alternative communication devices on families. Augmentative and Alternative Communication, 16, 37–47. doi:10.1080/07434610012331278894
Ball, L., Marvin, C., Beukelman, D., Lasker, J., & Rupp, D. (1999). Generic talk use by preschool children.
Augmentative and Alternative Communication, 15, 145–155. doi:10.1080/07434619912331278685
Banajee, M., Dicarlo, C., & Stricklin, S. B. (2003). Core vocabulary determination for toddlers. Augmentative and Alternative Communication, 19, 67–73. doi:10.1080/0743461031000112034
Beukelman, D. (1991). Magic and cost of communicative competence. Augmentative and Alternative Communication, 7, 2–10. doi:10.1080/07434619112331275633
Beukelman, D., & Mirenda, P. (2013). Augmentative and alternative communication: Supporting children & adults with complex communication needs (4th ed.). Baltimore, MD: Paul H. Brookes Publishing Co.
Beukelman, D., Jones, R., & Rowan, M. (1989). Frequency of word usage by nondisabled peers in integrated preschool classrooms. Augmentative and Alternative Communication, 5, 243–248. doi:10.1080/07434618912331275296
Beukelman, D., McGinnis, J., & Morrow, D. (1991). Vocabulary selection in augmentative and alternative communication. Augmentative and Alternative Communication, 7, 171–185. doi:10.1080/07434619112331275883
Binger, C., & Light, J. (2007). The effect of aided AAC modeling on the expression of multi-symbol messages by preschoolers who use AAC. Augmentative and Alternative Communication, 23, 30–43. doi:10.1080/07434610600807470
Cafiero, J. M. (2001). The effect of an augmentative communication intervention on the communication, behavior, and academic program of an adolescent with autism. Focus on Autism and Other Developmental Disabilities, 16, 179–189. doi:10.1177/108835760101600306
Cafiero, J. M., & Delsack, B. S. (2007). AAC and autism: Compelling issues, promising practices and future directions. Perspectives on Augmentative and Alternative Communication, 16, 23–26. doi:10.1044/aac16.2.23
Costigan, F. A., & Light, J. (2010). A review of preservice training in augmentative and alternative communication for speech-language pathologists, special education teachers, and occupational therapists. Assistive Technology, 22, 200–212.
doi:10.1080/10400435.2010.492774 Costigan, F. A., & Light, J. (2011). Functional seating for schoolage children with cerebral palsy: An evidence-based tutorial. Language, Speech, and Hearing Services in Schools, 42, 223–236. doi:10.1044/0161-1461(2010/10-0001) Dietz, A., Quach, W., Lund, S. K., & McKelvey, M. (2012). AAC assessment and clinical-decision making: The impact of experience. Augmentative and Alternative Communication, 28, 148–159. doi:10.3109/07434618.2012.704521 Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons. Drager, K. D., Light, J. C., Carlson, R., D’Silva, K., Larsson, B., Pitkin, L., & Stopper, G. (2004). Learning of dynamic display AAC technologies by typically developing 3-year-olds: Effect of different layouts and menu approaches. Journal Notes 1. Qualtrics and all other Qualtrics product or service names are registered trademarks or trademarks of Qualtrics, Provo, UT, USA. http://www.qualtrics.com 2. The complete survey included five sections; the broad design questions were asked within Sections 1 and 2; the demographics questions were asked within Section 5. The case study specific questions asked in Sections 3 and 4 will be reported in another paper. Acknowledgements This research was conducted in partial fulfillment of the first author’s doctoral training. The authors would like to thank student researchers Lauren Cherry, Marni Gruber, Samantha McDonald, and Paige McManus for their assistance on this project. Funding The first author received funding from the U.S. Department of Education, doctoral training grant [#H325D110008]. Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper. Augmentative and Alternative Communication AAC Display Design Decisions of Speech, Language, and Hearing Research, 47, 1133–1148. 
doi:10.1044/1092-4388(2004/084)
Drager, K. D., Light, J. C., Speltz, J. C., Fallon, K. A., & Jeffries, L. Z. (2003). The performance of typically developing 2½-year-olds on dynamic display AAC technologies with different system layouts and language organizations. Journal of Speech, Language, and Hearing Research, 46, 298–312. doi:10.1044/1092-4388(2003/024)
Drager, K. D., Postal, V. J., Carrolus, L., Castellano, M., Gagliano, C., & Glynn, J. (2006). The effect of aided language modeling on symbol comprehension and production in 2 preschoolers with autism. American Journal of Speech-Language Pathology, 15, 112–125. doi:10.1044/1058-0360(2006/012)
Fallon, K. A., Light, J. C., & Achenbach, A. (2003). The semantic organization patterns of young children: Implications for augmentative and alternative communication. Augmentative and Alternative Communication, 19, 74–85. doi:10.1080/0743461031000112061
Fitts, P. M., & Posner, M. I. (1967). Human performance. Belmont, CA: Brooks/Cole Publishing Co.
Fleiss, J. L., Levin, B., & Paik, M. C. (2003). Statistical methods for rates and proportions. Hoboken, NJ: John Wiley & Sons, Inc.
Fraenkel, P. (2006). Engaging families as experts: Collaborative family program development. Family Process, 45, 237–257. doi:10.1111/j.1545-5300.2006.00093.x
Fuller, D., & Lloyd, L. (1991). Toward a common usage of iconicity terminology. Augmentative and Alternative Communication, 7, 215–220. doi:10.1080/07434619112331275913
Gegenfurtner, K. R., & Rieger, J. (2000). Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10, 805–808. doi:10.1016/S0960-9822(00)00563-7
Goossens', C., Crain, S. S., & Elder, P. S. (1999). Engineering the preschool environment for interactive symbolic communication: 18 months to 5 years developmentally (4th ed.). Birmingham, AL: Southeast Augmentative Communication.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009).
Survey methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Hanson, E., Yorkston, K. M., & Britton, D. (2011). Dysarthria in amyotrophic lateral sclerosis: A systematic review of speech characteristics, speech treatment and AAC options. Journal of Medical Speech-Language Pathology, 19, 12–30.
Hetzroni, O. E., & Ne'eman, A. (2013). Influence of colour on acquisition and generalisation of graphic symbols. Journal of Intellectual Disability Research, 57, 669–680. doi:10.1111/j.1365-2788.2012.01584.x
Hunt, P., Soto, G., Maier, J., Müller, E., & Goetz, L. (2002). Collaborative teaming to support students with augmentative and alternative communication needs in general education classrooms. Augmentative and Alternative Communication, 18, 20–35. doi:10.1080/aac.18.1.20.35
Johnson, J. M., Inglebret, E., Jones, C., & Ray, J. (2006). Perspectives of speech-language pathologists regarding success versus abandonment of AAC. Augmentative and Alternative Communication, 22, 85–99. doi:10.1080/07434610500483588
Johnston, S. S., McDonnell, A. P., Nelson, C., & Magnavito, A. (2003). Teaching functional communication skills using augmentative and alternative communication in inclusive settings. Journal of Early Intervention, 25, 263–280. doi:10.1177/105381510302500403
Jones, S. D., Angelo, D. H., & Kokoska, S. M. (1999). Stressors and family supports: Families with children using augmentative & alternative communication technology. Communication Disorders Quarterly, 20, 37–44. doi:10.1177/152574019902000205
Kent-Walsh, J., & Binger, C. (2009). Addressing the communication demands of the classroom for beginning communicators and early language users. In G. Soto & C. Zangari (Eds.), Practically speaking: Language, literacy, and academic development for students with AAC needs (pp. 143–172). Baltimore, MD: Paul H. Brookes Publishing Co.
Koppenhaver, D. A., & Erickson, K. A. (2003).
Natural emergent literacy supports for preschoolers with autism and severe communication impairments. Topics in Language Disorders, 23, 283–292. doi:10.1097/00011363-200310000-00004
Koul, R., & Lloyd, L. (1994). Survey of professional preparation in augmentative and alternative communication (AAC) in speech-language pathology and special education programs. American Journal of Speech-Language Pathology, 3, 13–22. doi:10.1044/1058-0360.0303.13
Kovach, T. M., & Kenyon, P. B. (2003). Visual issues and access to AAC. In J. Light, D. Beukelman, & J. Reichle (Eds.), Communicative competence for individuals who use AAC: From research to effective practice (pp. 277–319). Baltimore, MD: Paul H. Brookes Publishing Co.
Light, J., & Drager, K. (2005, November). Maximizing language development with young children who require AAC. Presented at the American Speech-Language-Hearing Association annual convention, San Diego, CA.
Light, J., & Drager, K. (2007). AAC technologies for young children with complex communication needs: State of the science and future research directions. Augmentative and Alternative Communication, 23, 204–216. doi:10.1080/07434610701553635
Light, J., Drager, K. D., & Nemser, J. G. (2004). Enhancing the appeal of AAC technologies for young children: Lessons from the toy manufacturers. Augmentative and Alternative Communication, 20, 137–149. doi:10.1080/07434610410001699735
Light, J., & Lindsay, P. (1991). Cognitive science and augmentative and alternative communication. Augmentative and Alternative Communication, 7, 186–203. doi:10.1080/07434619112331275893
Light, J., & McNaughton, D. (2013a). Putting people first: Re-thinking the role of technology in augmentative and alternative communication intervention. Augmentative and Alternative Communication, 29, 299–309. doi:10.3109/07434618.2013.848935
Light, J., & McNaughton, D. (2013b).
Literacy intervention for individuals with complex communication needs. In D. Beukelman & P. Mirenda (Eds.), Augmentative and alternative communication: Supporting children and adults with complex communication needs (4th ed., pp. 309–351). Baltimore, MD: Paul H. Brookes Publishing Co.
Light, J., Page, R., Curran, J., & Pitkin, L. (2007). Children's ideas for the design of AAC assistive technologies for young children with complex communication needs. Augmentative and Alternative Communication, 23, 274–287. doi:10.1080/07434610701390475
Marvin, L. A., Montano, J. J., Fusco, L. M., & Gould, E. P. (2003). Speech-language pathologists' perceptions of their training and experience in using alternative and augmentative communication. Contemporary Issues in Communication Science and Disorders, 30, 76–83.
Mayer-Johnson, R. (1992). The picture communication symbols. Solana Beach, CA: Mayer-Johnson.
McFadd, E., & Wilkinson, K. (2010). Qualitative analysis of decision making by speech-language pathologists in the design of aided visual displays. Augmentative and Alternative Communication, 26, 136–147. doi:10.3109/07434618.2010.481089
Mizuko, M., & Reichle, J. (1989). Transparency and recall of symbols among intellectually handicapped adults. Journal of Speech and Hearing Disorders, 54, 627–633. doi:10.1044/jshd.5404.627
Murphy, J., Markova, I., Collins, S., & Moodie, E. (1996). AAC systems: Obstacles to effective use. International Journal of Language & Communication Disorders, 31, 31–44. doi:10.3109/13682829609033150
Nederhof, A. J. (1985). Methods of coping with social desirability bias: A review. European Journal of Social Psychology, 15, 263–280. doi:10.1002/ejsp.2420150303
Ogletree, B. T. (2012). Stakeholders as partners: Making AAC work better. Perspectives on Augmentative and Alternative Communication, 21, 151–158. doi:10.1044/aac21.4.151
Olin, A. R., Reichle, J., Johnson, L., & Monn, E. (2010).
Examining dynamic visual scene displays: Implications for arranging and teaching symbol selection. American Journal of Speech-Language Pathology, 19, 284–297. doi:10.1044/1058-0360(2010/09-0001)
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879. doi:10.1037/0021-9010.88.5.879
Ratcliff, A., Koul, R., & Lloyd, L. L. (2008). Preparation in augmentative and alternative communication: An update for speech-language pathology training. American Journal of Speech-Language Pathology, 17, 48–59. doi:10.1044/1058-0360(2008/005)
Romski, M. A., & Sevcik, R. A. (1996). Breaking the speech barrier: Language development through augmented means. Baltimore, MD: Paul H. Brookes Publishing Co.
Romski, M. A., & Sevcik, R. A. (2003). Augmented input. In J. C. Light, D. R. Beukelman, & J. Reichle (Eds.), Communicative competence for individuals who use AAC: From research to effective practice (pp. 147–162). Baltimore, MD: Paul H. Brookes Publishing Co.
Rosenbaum, D. A. (2009). Psychological foundations. In D. Rosenbaum (Ed.), Human motor control (2nd ed., pp. 93–134). Amsterdam; Boston, MA: Elsevier Inc.
Ryan, G. W., & Bernard, H. R. (2003). Techniques to identify themes. Field Methods, 15, 85–109. doi:10.1177/1525822X02239569
Schlosser, R. W., & Sigafoos, J. (2002). Selecting graphic symbols for an initial request lexicon: Integrative review. Augmentative and Alternative Communication, 18, 102–123. doi:10.1080/07434610212331281201
Shane, H. C. (2006). Using visual scene displays to improve communication and communication instruction in persons with autism spectrum disorders. Perspectives on Augmentative and Alternative Communication, 15, 8–13. doi:10.1044/aac15.1.8
Snell, M. E., Brady, N., McLean, L., Ogletree, B. T., Siegel, E., Sylvester, L., … & Sevcik, R. (2010).
Twenty years of communication intervention research with individuals who have severe intellectual and developmental disabilities. American Journal on Intellectual and Developmental Disabilities, 115, 364–380. doi:10.1352/1944-7558-115-5.364
Stuart, S., & Ritthaler, C. (2008). Case studies of intermediate steps between AAC evaluations and implementation. Perspectives on Augmentative and Alternative Communication, 17, 150–155. doi:10.1044/aac17.4.150
Sutherland, D. E., Gillon, G. G., & Yoder, D. E. (2005). AAC use and service provision: A survey of New Zealand speech-language therapists. Augmentative and Alternative Communication, 21, 295–307. doi:10.1080/07434610500103483
The Center for AAC and Autism. (2009). What is LAMP? Retrieved from http://www.aacandautism.com/lamp
Thistle, J. J., & Wilkinson, K. (2009). The effects of color cues on typically developing preschoolers' speed of locating a target line drawing: Implications for augmentative and alternative communication display design. American Journal of Speech-Language Pathology, 18, 231–240. doi:10.1044/1058-0360(2009/08-0029)
Wilkinson, K. M., Carlin, M., & Jagaroo, V. (2006). Preschoolers' speed of locating a target symbol under different color conditions. Augmentative and Alternative Communication, 22, 123–133. doi:10.1080/07434610500483620
Wilkinson, K. M., Carlin, M., & Thistle, J. (2008). The role of color cues in facilitating accurate and rapid location of aided symbols by children with and without Down syndrome. American Journal of Speech-Language Pathology, 17, 179–193. doi:10.1044/1058-0360(2008/018)
Wilkinson, K. M., & Light, J. (2011). Preliminary investigation of visual attention to human figures in photographs: Potential considerations for the design of aided AAC visual scene displays. Journal of Speech, Language, and Hearing Research, 54, 1644–1657. doi:10.1044/1092-4388(2011/10-0098)
Wilkinson, K. M., & Snell, J. (2011).
Facilitating children's ability to distinguish symbols for emotions: The effects of background color cues and spatial arrangement of symbols on accuracy and speed of search. American Journal of Speech-Language Pathology, 20, 288–301. doi:10.1044/1058-0360(2011/10-0065)
Wilkinson, K. M., O'Neill, T., & McIlvane, W. J. (2013). Eye-tracking measures reveal how changes in the design of aided AAC displays influence the efficiency of locating symbols by school-aged children without disabilities. Journal of Speech, Language, and Hearing Research, 57, 455–466. doi:10.1044/2013_JSLHR-L-12-0159
Williams, M. B., Krezman, C., & McNaughton, D. (2008). "Reach for the stars": Five principles for the next 25 years of AAC. Augmentative and Alternative Communication, 24, 194–206. doi:10.1080/08990220802387851
Wormnæs, S., & Abdel Malek, Y. (2004). Egyptian speech therapists want more knowledge about augmentative and alternative communication. Augmentative and Alternative Communication, 20, 30–41. doi:10.1080/07434610310001629571
Wurm, L. H., Legge, G. E., Isenberg, L. M., & Luebker, A. (1993). Color improves object recognition in normal and low vision. Journal of Experimental Psychology: Human Perception and Performance, 19, 899–911. doi:10.1037/0096-1523.19.4.899
Xu, Y. (2002). Limitations of object-based feature encoding in visual short-term memory. Journal of Experimental Psychology: Human Perception and Performance, 28, 458–468. doi:10.1037/0096-1523.28.2.458
Yorkston, K., Honsinger, M., Dowden, P., & Marriner, N. (1989). Vocabulary selection: A case report. Augmentative and Alternative Communication, 5, 101–108. doi:10.1080/07434618912331275076
Zangari, C., Lloyd, L., & Vicker, B. (1994). Augmentative and alternative communication: An historic perspective. Augmentative and Alternative Communication, 10, 27–59.
doi:10.1080/07434619412331276740

Supplementary material available online
Supplementary Appendix A and B to be found online at http://informahealthcare.com/doi/abs/10.3109/07434618.2015.1035798

Copyright of AAC: Augmentative & Alternative Communication is the property of Taylor & Francis Ltd and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.
Integrating AAC Into the Classroom: Low-Tech Strategies
by Debora Downey, Peggy Daugherty, Sharon Hell, and Deanna Daugherty

According to ASHA, approximately 2 million people in the United States have difficulty or are unable to communicate using oral language (see www.asha.org/public/speech/disorders/Augmentative-and-Alternative.htm). For a huge number of these individuals, an augmentative/alternative communication (AAC) system may be a tool to either supplement or replace their limited oral communication attempts. Although many equate AAC with high-end technology and high expense, for some potential users the most ideal AAC systems are often low-tech solutions with a minimal price tag. The key to implementing low-tech options in the classroom is identifying appropriate low-tech strategies and pairing them with motivating classroom activities that are rich with communication prospects.

The IEP team must also separate academic goals from communication goals. This is necessary because there may be times when these two items overlap, and the team must identify, for each instance, when the academic goal takes priority over the communication goal or vice versa. No system is a cure for the inability to communicate. Clearly, there will be instances when using just one low-tech or high-tech solution will not be viable in the classroom setting. Therefore, the challenge for the IEP team is to identify multiple strategies and pair them with the right classroom activity to allow for ease of integration and opportunities for communication and/or learning.

The speech-language pathologist can assist with technology assessment, planning for implementation, and integrating communication systems in school settings. Since several low-tech AAC strategies may be used, it is important for the speech-language pathologist to be able to identify and educate the IEP team regarding the various low-tech strategy options (see the sidebar).
Although students may have a main speech-generating device, they may use one or more of the following low-tech solutions depending on time constraints, setting, and level of fatigue due to neuromuscular status.

Literature Activity
• A recorded voice can be used to read the story.
• An aided language-stimulation approach can be used for set-up activities (i.e., book choice). For Brown Bear, choices can be a brown bear, blue horse, goldfish, and eyes for "I see a…"

Post-Literature Activity
• A picture communication board can be used for the student to comment on a story or to make requests. For example, the student's communication options might be "That's scary," "That's funny," or "Read it again."
• Yes/no questions or live voice scan can be used to assess the student's comprehension of the book.

Social Studies
• The student can activate a sequential message device to call on peers, or identify a state and have a peer name the corresponding capital.
• The student can use a switch-activated spinner to select a picture symbol of a state and activate a single message device to request the name of the state.
• Picture symbols can be sequenced to represent events of a trip.
• A multiple-location overlay can be used on a voice output device to direct peers to move from location to location on a map.

Math
• The student can use a switch-activated spinner to select numerals to create math calculation problems for their classmates to compute.
• A multiple-location overlay can be used to communicate about the school day to the home setting.
• Live voice scan can be used to have the student select whom they want to sit by on the bus.

It is important to note how we program the devices. Messages can be dull or depict the child in an unnatural way, as a well-mannered small adult. We often tend to program the devices to reflect our own vocabulary…
Author Note
a Department of Speech-Language-Hearing: Sciences and Disorders, Neuroscience Graduate Program, The University of Kansas, Lawrence; b Department of Speech-Language-Hearing: Sciences and Disorders, The University of Kansas, Lawrence; c Communication Sciences and Disorders Department, Missouri State University, Springfield; d Neuroscience Graduate Program, The University of Kansas, Lawrence. Correspondence to Jonathan S. Brumberg: brumberg@ku.edu. Editor-in-Chief: Krista Wilkinson. Editor: Erinn Finke. Received December 31, 2016; revision received April 6, 2017; accepted August 14, 2017. https://doi.org/10.1044/2017_AJSLP-16-0244. Disclosure: The authors have declared that no competing interests existed at the time of publication. American Journal of Speech-Language Pathology, Vol. 27, 1–12, February 2018. Copyright © 2018 American Speech-Language-Hearing Association.

Individuals with severe speech and physical impairments often rely on augmentative and alternative communication (AAC) and specialized access technologies to facilitate communication on the basis of the nature and severity of their speech, motor, and cognitive impairments. In some cases, people who use AAC are able to use specially modified computer peripherals (e.g., mouse, joystick, stylus, or button box) to access AAC devices, whereas in other, more severe cases, sophisticated methods are needed to detect the most subtle of movements (e.g., eye gaze tracking; Fager, Beukelman, Fried-Oken, Jakobs, & Baker, 2012).

Access to communication is critical for maintaining social interactions and autonomy of decision-making in this population (Beukelman & Mirenda, 2013); therefore, individuals with paralysis and akinetic mutism have been identified as potential candidates for brain–computer interface (BCI) access to AAC (Fager et al., 2012). BCIs for communication take AAC and access technology to the next level and provide a method for selecting and constructing messages by detecting changes in brain activity for controlling communication software (Wolpaw, Birbaumer, McFarland, Pfurtscheller, & Vaughan, 2002). In particular, they are devices that provide a direct link between an individual and a computer device through brain activity alone, without requiring any overt movement or behavior. As an access technique, BCIs have the potential to reduce or eliminate some physical barriers to successful AAC intervention for individuals with severe speech and physical impairments. Similar to AAC and associated access techniques, current BCI technology can take a variety of forms on the basis of the neural signal targeted and the method used for individuals to interact with the communication interface. Each of these factors may impose different demands on the cognitive and motor abilities of individuals who use BCI (Brumberg & Guenther, 2010).

Although the field of BCI has grown over the past decade, many stakeholders, including speech-language pathologists (SLPs), other practitioners, individuals who use AAC and potentially BCI, and caretakers, are unfamiliar with the technology. SLPs are a particularly important stakeholder group given their role as the primary service providers who assist clients with communicative challenges secondary to motor limitations through assessment and implementation of AAC interventions and strategies.
A lack of core knowledge on the potential use of BCI for clinical application may limit future intervention with BCI for AAC according to established best practices. This tutorial will offer some basic explanations regarding BCI, including the benefits and limitations of this access technique and the different varieties of BCI. It will also provide a description of individuals who may be best suited to using BCI to access AAC. An understanding of this information is especially important for SLPs specializing in AAC, who are most likely to interact with BCIs as the devices move from research labs into real-world situations (e.g., classrooms, home, work).

Tutorial Descriptions by Topic Area

Topic 1: How Do People Who Use BCI Interact With the Computer?

BCIs are designed to allow individuals to control computers and communication systems using brain activity alone and are separated according to whether signals are recorded noninvasively from/through the scalp or invasively through implantation of electrodes in or on the brain. Noninvasive BCIs, those that are based on brain recordings made through the intact skull without requiring a surgical procedure (e.g., electroencephalography or EEG, magnetoencephalography, functional magnetic resonance imaging, functional near-infrared spectroscopy), often use an indirect technique to map brain signals unrelated to communication onto controls for a communication interface (Brumberg, Burnison, & Guenther, 2016). Though there are many signal acquisition modalities for noninvasive recordings of brain activity, noninvasive BCIs typically use EEG, which is recorded through electrodes placed on the scalp according to a standard pattern (Oostenveld & Praamstra, 2001) and captures voltage changes that result from the simultaneous activation of millions of neurons.
EEG can be analyzed for its spontaneous activity or in response to a stimulus (e.g., event-related potentials), and both have been examined for indirect access BCI applications. In contrast, another class of BCIs attempts to directly output speech from imagined/attempted productions (Blakely, Miller, Rao, Holmes, & Ojemann, 2008; Brumberg, Wright, Andreasen, Guenther, & Kennedy, 2011; Herff et al., 2015; Kellis et al., 2010; Leuthardt et al., 2011; Martin et al., 2014; Mugler et al., 2014; Pei, Barbour, Leuthardt, & Schalk, 2011; Tankus, Fried, & Shoham, 2012); however, these techniques typically rely on invasively recorded brain signals (via implanted microelectrodes or subdural electrodes) related to speech motor preparation and production. Though in their infancy, direct BCIs for communication have the potential to completely replace the human vocal tract for individuals with severe speech and physical impairments (Brumberg, Burnison, & Guenther, 2016; Chakrabarti, Sandberg, Brumberg, & Krusienski, 2015); however, the technology does not yet provide a method to "read thoughts." For the remainder of this tutorial, we focus on noninvasive, indirect methods for accessing AAC with BCIs, and we refer readers to other sources for descriptions of direct BCIs for speech (Brumberg, Burnison, & Guenther, 2016; Chakrabarti et al., 2015).

Indirect methods for BCI parallel other access methods for AAC devices, where nonspeech actions (e.g., button press, direct touch, eye gaze) are translated to a selection on a communication interface. The main difference between the two access methods is that BCIs rely on neurophysiological signals related to sensory stimulation, preparatory motor behaviors, and/or covert motor behaviors (e.g., imagined or attempted limb movements), rather than the overt motor behavior used for conventional access.
The way in which individuals control a BCI greatly depends on the neurological signal used by the device to make selections on the communication interface. For instance, in the case of an eye-tracking AAC device, one is required to gaze at a communication icon, and the system makes a selection on the basis of the screen coordinates of the eye gaze location. For a BCI, individuals may be required to (a) attend to visual stimuli to generate an appropriate visual–sensory neural response to select the intended communication icon (e.g., Donchin, Spencer, & Wijesinghe, 2000), (b) take part in an operant conditioning paradigm using biofeedback of EEG (e.g., Kübler et al., 1999), (c) listen to auditory stimuli to generate auditory–sensory neural responses related to the intended communication output (e.g., Halder et al., 2010), or (d) imagine movements of the limbs to alter the sensorimotor rhythm (SMR) to select communication items (e.g., Pfurtscheller & Neuper, 2001).

At present, indirect BCIs are more mature as a technology, and many have already begun user trials (Holz, Botrel, Kaufmann, & Kübler, 2015; Sellers, Vaughan, & Wolpaw, 2010). Therefore, SLPs are most likely to be involved with indirect BCIs first as they move from the research lab to the real world. Indirect BCI techniques are very similar to current access technologies for high-tech AAC; for example, the output of the BCI system can act as an input method for conventional AAC devices. Below, we review indirect BCI techniques and highlight their possible future in AAC.

The P300-Based BCI

The visual P300 grid speller (Donchin et al., 2000) is the most well-known and most mature technology, with ongoing at-home user trials (Holz et al., 2015; Sellers et al., 2010). Visual P300 BCIs for communication use the P300 event-related potential, a neural response to novel, rare visual stimuli in the presence of many other visual stimuli, to select items on a communication interface. The traditional graphical layout for a visual P300 speller is a 6 × 6 grid that includes the 26 letters of the alphabet, space, backspace, and numbers (see Figure 1). Each row and column on the spelling grid is highlighted in a random order (each individual item may also be highlighted, rather than rows and columns), and a systematic variation in the EEG waveform is generated when one attends to a target item for selection, the "oddball stimulus," which occurs infrequently compared with the remaining items (Donchin et al., 2000). The event-related potential in response to the target item will contain a positive voltage fluctuation approximately 300 ms after the item is highlighted (Farwell & Donchin, 1988). The BCI decoding algorithm then selects items associated with detected occurrences of the P300 for message creation (Donchin et al., 2000). The P300 grid speller has been operated by individuals with amyotrophic lateral sclerosis (ALS; Nijboer, Sellers, et al., 2008; Sellers & Donchin, 2006) and has been examined as part of at-home trials by individuals with neuromotor impairments (Holz et al., 2015; Sellers & Donchin, 2006), making it a likely candidate for future BCI-based access for AAC.

In addition to the cognitive requirements for operating the P300 speller, successful operation depends somewhat on the degree of oculomotor control (Brunner et al., 2010). Past findings have shown that the P300 amplitude can be reduced if individuals are unable to use an overt attention strategy (gazing directly at the target) and, instead, must use a covert strategy (attentional change without ocular shifting), which can degrade BCI performance (Brunner et al., 2010). An alternative P300 interface displays a single item at a time on the screen (typically at the center, as in Figure 1, second from left) to alleviate concerns for individuals with poor oculomotor control.
This interface, known as the rapid serial visual presentation (RSVP) speller, has been successfully controlled by a cohort of individuals across the continuum of locked-in syndrome severity (Oken et al., 2014). All BCIs that use spelling interfaces require sufficient levels of literacy, though many can be adapted to use icon- or symbol-based communication (e.g., Figure 2).

Auditory stimuli can also be used to elicit P300 responses for interaction with BCI devices for individuals with poor visual capability (McCane et al., 2014), such as severe visual impairment, impaired oculomotor control, and cortical blindness. Auditory interfaces can also be used in poor viewing environments, such as outdoors or in the presence of excessive lighting glare. Like its visual counterpart, the auditory P300 is elicited via an oddball paradigm and has typically been limited to binary (yes/no) selection by attending to one of two different auditory tones presented monaurally to each ear (Halder et al., 2010), or to linguistic stimuli (e.g., attending to a "yep" target among "yes" presentations in the right ear vs. "nope" and "no" in the left; Hill et al., 2014). The binary control achieved using the auditory P300 interface has the potential to be used to navigate a spelling grid, similar to conventional auditory scanning techniques for accessing AAC systems, by attending to specific tones that correspond to rows and columns (Käthner et al., 2013; Kübler et al., 2009). There is evidence that auditory grid systems may require greater attention than their visual analogues (Klobassa et al., 2009; Kübler et al., 2009), which should be considered when matching clients to the most appropriate communication device.
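The row/column selection logic of the P300 grid speller described above can be sketched in a few lines of code. The following is a toy simulation, not taken from the tutorial: the signal model (Gaussian noise plus a positive deflection about 300 ms after a target flash), the sampling rate, the window size, and all amplitudes are illustrative assumptions, and real systems average many flash repetitions and use trained classifiers rather than a simple amplitude score.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 6 x 6 spelling grid (letters, digits, and "_" for space).
GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                 list("STUVWX"), list("YZ1234"), list("56789_")])

EPOCH = 175        # samples per post-flash epoch (~700 ms at an assumed 250 Hz)
P300_SAMPLE = 75   # sample index ~300 ms after flash onset

def simulate_epoch(is_target):
    """One (averaged) EEG epoch after a row/column flash: noise,
    plus a positive deflection near 300 ms when the flash covered
    the attended item (the 'oddball' response)."""
    eeg = rng.normal(0.0, 1.0, EPOCH)
    if is_target:
        t = np.arange(EPOCH)
        eeg += 5.0 * np.exp(-0.5 * ((t - P300_SAMPLE) / 10.0) ** 2)
    return eeg

def p300_score(epoch):
    """Mean amplitude in a window around 300 ms post-stimulus."""
    return epoch[P300_SAMPLE - 10:P300_SAMPLE + 10].mean()

def select_letter(target_row, target_col):
    """Flash each row and each column once; the row and column with the
    largest P300 score intersect at the selected item."""
    row_scores = [p300_score(simulate_epoch(r == target_row)) for r in range(6)]
    col_scores = [p300_score(simulate_epoch(c == target_col)) for c in range(6)]
    return GRID[int(np.argmax(row_scores)), int(np.argmax(col_scores))]

print(select_letter(0, 3))  # with this toy signal model, selects "D"
```

The key point the sketch illustrates is that the speller never decodes language directly; it only detects *when* the attended stimulus was flashed and intersects the winning row and column.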
Steady State Evoked Potentials

BCIs can be controlled using attention-modulated steady state brain rhythms, as opposed to event-related potentials, in both visual (steady state visually evoked potential [SSVEP]) and auditory (auditory steady state response [ASSR]) domains. Both the SSVEP and ASSR are physiological responses to a driving input stimulus that are amplified when an individual focuses his or her attention on the stimulus (Regan, 1989). Strobe stimuli are commonly used for SSVEP, whereas amplitude-modulated tones are often used for ASSR (Regan, 1989). BCIs using SSVEP exploit the attention-modulated response to strobe stimuli by simultaneously presenting multiple communication items for selection, each flickering at a different frequency (Cheng, Gao, Gao, & Xu, 2002; Friman, Luth, Volosyak, & Graser, 2007; Müller-Putz, Scherer, Brauneis, & Pfurtscheller, 2005).2 As a result, all item flicker rates will be observed in the EEG recordings, but the frequency of the attended stimulus will contain the largest amplitude (Lotte, Congedo, Lécuyer, Lamarche, & Arnaldi, 2007; Müller-Putz et al., 2005; Regan, 1989) and greatest temporal correlation to the strobe stimulus (Chen, Wang, Gao, Jung, & Gao, 2015; Lin, Zhang, Wu, & Gao, 2007). The stimulus with the greatest neurophysiological response will then be selected by the BCI to construct a message, typically via an alphanumeric keyboard (shown in Figure 1), though icons can be adapted for different uses and levels of literacy (e.g., Figure 2). Major advantages of this type of interface are (a) high accuracy rates, often reported above 90% with very little training (e.g., Cheng et al., 2002; Friman et al., 2007), and (b) the possibility of using overlapping, centrally located stimuli for individuals with impaired oculomotor control (Allison et al., 2008). A major concern with this technique, however, is an increased risk for seizures (Volosyak, Valbuena, Lüth, Malechka, & Gräser, 2011).
BCIs that use the ASSR require one to shift his or her attention to a sound stream that contains a modulated stimulus (e.g., a right monaural 38-Hz amplitude-modulated, 1000-Hz carrier tone presented with a left monaural 42-Hz modulated, 2500-Hz carrier; Lopez, Pomares, Pelayo, Urquiza, & Perez, 2009). As with the SSVEP, the modulation frequency of the attended sound stream is observable in the recorded EEG signal and will be amplified relative to the other competing stream. Therefore, in this example, if the BCI detects the greatest EEG amplitude at 38 Hz, it will perform a binary action associated with the right-ear tone (e.g., yes or "select"), whereas detection of the greatest EEG amplitude at 42 Hz will generate a left-ear tone action (e.g., no or "advance").

2 There are other variants that use a single flicker rate with a specific strobe pattern that is beyond the scope of this tutorial.

Figure 1. From left to right, example visual displays for the following BCIs: P300 grid speller, RSVP P300, SSVEP, and motor-based (SMR with keyboard). For the P300 grid, each row and column is highlighted until a letter is selected. In the RSVP, each letter is displayed randomly, sequentially in the center of the screen. For the SSVEP, this example uses four flickering stimuli (at different frequencies) to represent the cardinal directions, which are used to select individual grid items. This can also be done with individual flicker frequencies for all 36 items with certain technical considerations. For the motor-based BCI, this is an example of a binary-selection virtual keyboard; imagined right hand movements select the right set of letters. RSVP = rapid serial visual presentation; SSVEP = steady state visually evoked potential; SMR = sensorimotor rhythm; BCI = brain–computer interface. Copyright © Tobii Dynavox. Reprinted with permission.
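The frequency-tagged selection principle shared by SSVEP and ASSR BCIs, in which every tag frequency appears in the EEG but the attended one is amplified, can be sketched as a simple spectral-peak picker. The tag frequencies, recording length, and simulated signal below are illustrative assumptions only:

```python
import numpy as np

# Minimal sketch of frequency-tagged (SSVEP/ASSR-style) item selection:
# score each candidate tag frequency by its power in the EEG spectrum and
# select the item whose frequency is strongest. All parameters are assumed.
FS = 256                                                  # sampling rate (Hz)
FREQS = {"up": 8.0, "right": 10.0, "down": 12.0, "left": 15.0}  # tag rates

def select_item(eeg):
    """Return the item whose tag frequency carries the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    def power_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]     # nearest FFT bin
    return max(FREQS, key=lambda item: power_at(FREQS[item]))

# Toy demonstration: the attended stimulus flickers at 12 Hz ("down").
rng = np.random.default_rng(1)
t = np.arange(4 * FS) / FS                                # 4 s of "EEG"
eeg = rng.normal(0, 1, t.size)
for f in FREQS.values():         # all flicker rates appear in the recording...
    eeg += 0.3 * np.sin(2 * np.pi * f * t)
eeg += 1.5 * np.sin(2 * np.pi * 12.0 * t)  # ...but the attended one is amplified
print(select_item(eeg))          # prints down
```

Practical SSVEP decoders often use canonical correlation with reference sinusoids rather than a single-bin power comparison, but the underlying attention-amplified-frequency idea is the same.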
Motor-Based BCIs

Another class of BCIs provides access to communication interfaces using changes in the SMR, a neurological signal related to motor production and motor imagery (Pfurtscheller & Neuper, 2001; Wolpaw et al., 2002), for individuals with and without neuromotor impairments (Neuper, Müller, Kübler, Birbaumer, & Pfurtscheller, 2003; Vaughan et al., 2006). The SMR is characterized by the μ (8–12 Hz) and β (18–25 Hz) band spontaneous EEG oscillations that are known to desynchronize, or reduce in amplitude, during covert and overt movement attempts (Pfurtscheller & Neuper, 2001; Wolpaw et al., 2002). Many motor-based BCIs use left and right limb movement imagery because the SMR desynchronization will occur on the contralateral side, and they are most often used to control spelling interfaces (e.g., virtual keyboard: Scherer, Müller, Neuper, Graimann, & Pfurtscheller, 2004; DASHER: Wills & MacKay, 2006; hex-o-spell: Blankertz et al., 2006; see Figure 1, right, for an example), though they can be used as inputs to commercial AAC devices as well (Brumberg, Burnison, & Pitt, 2016). Two major varieties of motor-based BCIs have been developed for controlling computers: those that provide continuous cursor control (analogous to mouse/joystick and eye gaze) and others that use discrete selection (analogous to button presses). Example layouts of keyboard-based and symbol-based motor-BCI interfaces are shown in Figures 1 and 2.

Cursor-style BCIs transform changes in the SMR continuously over time into computer control signals (Wolpaw & McFarland, 2004). One example of a continuous, SMR-based BCI uses imagined movements of the hands and feet to move a cursor to select progressively refined groups of letters organized at different locations around a computer screen (Miner, McFarland, & Wolpaw, 1998; Vaughan et al., 2006). Another continuous-style BCI is used to control the "hex-o-spell" interface in which imagined movements of the right hand rotate an arrow to point at one of six groups of letters, and imagined foot movements extend the arrow to select the current letter group (Blankertz et al., 2006). Discrete-style motor BCIs perform this transformation using the event-related desynchronization (Pfurtscheller & Neuper, 2001), a change to the SMR in response to some external stimulus, like an automatically highlighted row or column via a scanning interface. One example of a discrete-style motor BCI uses the event-related desynchronization to control a virtual keyboard consisting of a binary tree representation of letters, in which individuals choose between two blocks of letters, selected by (imagined) right or left hand movements, until a single letter or item remains (Scherer et al., 2004). Most motor-based BCIs require many weeks or months of training for successful operation and report accuracies greater than 75% for individuals without neuromotor impairments and, in one study, 69% accuracy for individuals with severe neuromotor impairments (Neuper et al., 2003).

Figure 2. From left to right, examples of how existing BCI paradigms can be applied to page sets from current AAC devices: P300 grid, SSVEP, motor based (with icon grid). For the P300 grid interface, a row or column is highlighted until a symbol is selected (here, it is yogurt). For the SSVEP, either directional (as shown here) or individual icons flicker at specified strobe rates to either move a cursor or directly select an item. For motor based, the example shown here uses attempted or imagined left hand movements to advance the cursor and right hand movements to choose the currently selected item. SSVEP = steady state visually evoked potential; SMR = sensorimotor rhythm; BCI = brain–computer interface; AAC = augmentative and alternative communication. Copyright © Tobii Dynavox. Reprinted with permission.
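A minimal sketch of the discrete, event-related desynchronization idea described above: compare μ-band (8–12 Hz) power over each hemisphere against a rest baseline, and choose the action whose contralateral cortex shows the larger power drop. The channel layout, amplitudes, and simulated rhythms are assumptions for illustration, not parameters from any cited system:

```python
import numpy as np

# Sketch of a discrete motor BCI decision: imagined RIGHT hand movement
# desynchronizes (reduces) the mu rhythm over the LEFT motor cortex, and
# vice versa. All signals and sizes below are simulated assumptions.
FS = 256  # sampling rate (Hz), assumed

def mu_power(x):
    """Band power in the 8-12 Hz mu band via FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / FS)
    return spec[(f >= 8) & (f <= 12)].sum()

def classify(left_ch, right_ch, left_rest, right_rest):
    """Binary choice: whichever hemisphere's mu power dropped more vs. rest."""
    left_erd = mu_power(left_ch) / mu_power(left_rest)    # < 1 means desync
    right_erd = mu_power(right_ch) / mu_power(right_rest)
    return "right_hand" if left_erd < right_erd else "left_hand"

# Toy demonstration: a 10-Hz mu rhythm attenuated over the left hemisphere
# during imagined right hand movement.
rng = np.random.default_rng(2)
t = np.arange(2 * FS) / FS

def mu(amp):
    return amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

left_rest, right_rest = mu(2.0), mu(2.0)   # baseline recordings
left_ch, right_ch = mu(0.5), mu(2.0)       # left-hemisphere desynchronization
print(classify(left_ch, right_ch, left_rest, right_rest))  # prints right_hand
```

In a Scherer-style virtual keyboard, the two outcomes would map onto the two letter blocks of the binary tree; the classifier itself is usually a trained discriminant rather than this raw power ratio.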
Motor-based BCIs are inherently independent from interface feedback modality because they rely only on an individual's ability to imagine his or her limbs moving, though users are often given audio or visual feedback of BCI choices (e.g., Nijboer, Furdea, et al., 2008). A recent, continuous motor BCI has been used to produce vowel sounds with instantaneous auditory feedback by using limb motor imagery to control a two-dimensional formant frequency speech synthesizer (Brumberg, Burnison, & Pitt, 2016). Other recent discrete motor BCIs have been developed for row–column scanning interfaces (Brumberg, Burnison, & Pitt, 2016; Scherer et al., 2015).

Operant Conditioning BCIs

This interface operates by detecting a stimulus-independent change in brain activity, which is used to select options on a communication interface. The neural signals used for controlling the BCI are not directly related to motor function or sensation. Rather, the BCI uses EEG biofeedback for operant conditioning to teach individuals to voluntarily change the amplitude and polarity of the slow cortical potential, a slow-wave (< 1 Hz) neurological rhythm, which is mapped to movements of a one-dimensional cursor. In BCI applications, the cursor's vertical position is used to make binary selections for communication interface control (Birbaumer et al., 2000; Kübler et al., 1999).

BCI Summary

BCIs use a wide range of techniques for mapping brain activity to communication device control through a combination of signals related to sensory, motor, and/or cognitive processes (see Table 1 for a summary of BCI types). The choice of BCI protocol and feedback methods trades off with the cognitive abilities needed for successful device operation (e.g., Geronimo, Simmons, & Schiff, 2016; Kleih & Kübler, 2015; Kübler et al., 2009). Many BCIs require individuals to follow complex, multistep procedures and demand potentially high levels of attentional capacity that are often a function of the sensory or motor process used for BCI operation. For example, the P300 speller BCI (Donchin et al., 2000) requires that individuals have an ability to attend to visual stimuli and make decisions about them (e.g., recognize the intended visual stimulus among many other stimuli). BCIs that use SSVEPs depend on the neurological response to flickering visual stimuli (Cheng et al., 2002), which is modulated by attention rather than by other cognitive tasks. These two systems both use visual stimuli to elicit neural activity for controlling a BCI but differ in their demands on cognitive and attentional processing. In contrast, motor-based BCI systems (e.g., Pfurtscheller & Neuper, 2001; Wolpaw et al., 2002) require individuals to have sufficient motivation and volition, as well as an ability to learn how changing mental tasks can control a communication device. Nearly all BCIs require some amount of cognitive effort or selective attention, though the amount of each depends greatly on the style and modality of the specific device.

Table 1. Summary of BCI varieties and their feedback modality.

EEG signal type | Sensory/motor modality | User requirements
Event-related potentials | Visual P300 (grid) | Visual oddball paradigm; requires selective attention around the screen
Event-related potentials | Visual P300 (RSVP) | Visual oddball paradigm; requires selective attention to the center of the screen only (suited to poor oculomotor control)
Event-related potentials | Auditory P300 | Auditory oddball paradigm; requires selective auditory attention; no vision requirement
Steady state evoked potentials | Steady state visually evoked potential | Attention to frequency-tagged visual stimuli; may increase seizure risk
Steady state evoked potentials | Auditory steady state response | Attention to frequency-modulated audio stimuli
Motor-based | Continuous sensorimotor rhythm | Continuous, smooth control of interface (e.g., cursors) using motor imagery (first person)
Motor-based | Discrete event-related desynchronization | Binary (or multichoice) selection of interface items (# choices = # of imagined movements); requires motor imagery ability
Motor-based | Motor preparatory signals (e.g., contingent negative variation) | Binary selection of communication interface items using imagined movements
Operant conditioning | Slow cortical potentials | Binary selection of communication interface items after biofeedback-based learning protocol

Note. BCI = brain–computer interface; EEG = electroencephalography; RSVP = rapid serial visual presentation.

Topic 2: Who May Best Benefit From a BCI?

At present, BCIs are best suited for individuals with acquired neurological and neuromotor impairments leading to paralysis and loss of speech with minimal cognitive involvement (Wolpaw et al., 2002), for example, brainstem stroke and traumatic brain injury (Mussa-Ivaldi & Miller, 2003). Individuals with other neuromotor disorders, such as cerebral palsy, muscular dystrophies, multiple sclerosis, Parkinson's disease, and brain tumors, may require AAC (Fried-Oken, Mooney, Peters, & Oken, 2013; Wolpaw et al., 2002) but are not yet commonly considered for BCI studies and interventions (cf. Neuper et al., 2003; Scherer et al., 2015), due to concomitant impairments in cognition, attention, and memory. In other instances, elevated muscle tone and uncontrolled movements (e.g., spastic dysarthria, dystonia) limit the utility of BCI due to the introduction of physical and electromyographic movement artifacts (i.e., muscle-based signals that are much stronger than EEG and can distort recordings of brain activity). BCI research is now beginning to consider important human factors involved in appropriate use of BCI for individuals (Fried-Oken et al., 2013) and for coping with difficulties in brain signal acquisition due to muscular (Scherer et al., 2015) and environmental sources of artifacts. Developing protocols to help identify the BCI technique most appropriate for each individual must be considered as BCI development moves closer to integration with existing AAC techniques.

Sensory, Motor, and Cognitive Factors

Aligning the sensory, motor, and cognitive requirements for using BCI to access AAC devices with an individual's unique profile will help identify and narrow down the number of candidate BCI variants (e.g., feature matching; Beukelman & Mirenda, 2013; Light & McNaughton, 2013), which is important for improving user outcomes with the chosen device (Thistle & Wilkinson, 2015). Matching possible BCIs should also include overt and involuntary motor considerations, specifically the presence of spasticity or variable muscle tone/dystonia, which may produce electromyographic artifacts that interfere with proper BCI function (Goncharova, McFarland, Vaughan, & Wolpaw, 2003).
In addition, there may be a decline in brain signals used for BCI decoding as symptoms of progressive neuromotor diseases become more severe (Kübler, Holz, Sellers, & Vaughan, 2015; Silvoni et al., 2013), which may result in decreased BCI performance. The wide range in sensory, motor, and cognitive components of BCI designs points to a need for user-centered design frameworks (e.g., Lynn, Armstrong, & Martin, 2016) and feature matching/screening protocols (e.g., Fried-Oken et al., 2013; Kübler et al., 2015), like those used in current practices in AAC intervention (Light & McNaughton, 2013; Thistle & Wilkinson, 2015).

Topic 3: Are BCIs Faster Than Other Access Methods for AAC?

Current AAC devices yield a range of communication rates that depend on access modality (e.g., direct selection, scanning), level of literacy, and information represented by each communication item (e.g., single-meaning icons or images, letters, icons representing complex phrases; Hill & Romich, 2002; Roark, Fried-Oken, & Gibbons, 2015), as well as word prediction software (Trnka, McCaw, Yarrington, McCoy, & Pennington, 2008). Communication rates using AAC are often less than 15 words per minute (Beukelman & Mirenda, 2013; Foulds, 1980), and slower speeds (two to five words per minute; Patel, 2011) are observed for letter spelling due to the need for multiple selections for spelling words (Hill & Romich, 2002). Word prediction and language modeling can increase both speed and typing efficiency (Koester & Levine, 1996; Roark et al., 2015; Trnka et al., 2008), but the benefits may be limited due to additional cognitive demands (Koester & Levine, 1996). Scan rate in auto-advancing row–column scanning access methods also affects communication rate, and though faster scan rates should lead to faster communication rates, slower scan rates can reduce selection errors (Roark et al., 2015).
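The scan-rate trade-off can be made concrete with a back-of-the-envelope model (an illustration, not a model taken from the cited studies): in auto-advancing row–column scanning, reaching row r and column c takes roughly r + c scan steps, so halving the scan period roughly doubles the error-free selection rate:

```python
# Back-of-the-envelope sketch (assumptions, not from the cited studies):
# in auto-advancing row-column scanning, selecting the item at row r and
# column c takes about (r + c) scan steps, so average selection time grows
# with grid size and scan period.

def mean_selection_time(n, scan_period):
    """Average seconds per selection on an n x n grid, ignoring errors."""
    steps = [r + c for r in range(1, n + 1) for c in range(1, n + 1)]
    return scan_period * sum(steps) / len(steps)

for period in (0.5, 1.0, 2.0):            # faster scanning speeds selection,
    t = mean_selection_time(6, period)    # but (per Roark et al.) raises errors
    print(f"{period:.1f} s/step: {t:.1f} s/selection, "
          f"{60 / t:.1f} selections/min")
```

On a 6 × 6 grid the average target sits seven scan steps away, so a 1-s scan period yields roughly 8.6 selections per minute before errors and re-selections are counted, which is broadly consistent with the letters-per-minute ranges reported for scanning access.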
BCIs are similarly affected by scan rate (Sellers & Donchin, 2006); for example, a P300 speller can only operate as fast as each item is flashed. Increases in flash rate may also increase cognitive demands for locating desired grid items while ignoring others, similar to effects observed using commercial AAC visual displays (Thistle & Wilkinson, 2013). Current BCIs for communication generally yield selection rates that are slower than existing AAC methods, even with incorporation of language prediction models (Oken et al., 2014). Table 2 provides a summary of selection rates from recent applications of conventional access techniques and BCI to communication interfaces. Both individuals with and without neuromotor impairments using motor-based BCIs have achieved selection rates under 10 selections (letters, numbers, symbols) per minute (Blankertz et al., 2006; Neuper et al., 2003; Scherer et al., 2004), and those using P300 methods commonly operate below five selections per minute (Acqualagna & Blankertz, 2013; Donchin et al., 2000; Nijboer, Sellers, et al., 2008; Oken et al., 2014). A recent P300 study using a novel presentation technique has obtained significantly higher communication rates of 19.4 characters per minute, though the method has not been studied in detail with participants with neuromotor impairments (Townsend & Platsko, 2016). BCIs on the basis of the SSVEP have emerged as a promising technique, often yielding both high accuracy (> 90%) and communication rates as high as 33 characters per minute (Chen et al., 2015). From these reports, BCI performance has started to approach levels associated with AAC devices using direct selection, and the differences in communication rates for scanning AAC devices and BCIs (shown in Table 2) are reduced when making comparisons between individuals with neuromotor impairments rather than individuals without impairments (e.g., AAC: six characters per minute; Roark et al., 2015; BCI: one to eight characters per minute; Table 2).

Table 2. Communication rates from recent BCI and conventional access to communication interfaces.

Method | Population | Selection rate | Source
Berlin BCI (motor imagery) | Healthy | 2.3–7.6 letters/min | Blankertz et al. (2006)
Graz BCI (motor imagery) | Healthy | 2.0 letters/min | Scherer et al. (2004)
Graz BCI (motor imagery) | Impaired | 0.2–2.5 letters/min | Neuper et al. (2003)
P300 speller (visual) | Healthy | 4.3 letters/min | Donchin et al. (2000)
P300 speller (visual) | Healthy | 19.4 char/min (120.0 bits/min) | Townsend and Platsko (2016)
P300 speller (visual) | ALS | 1.5–4.1 char/min (4.8–19.2 bits/min) | Nijboer, Sellers, et al. (2008)
P300 speller (visual) | ALS | 3–7.5 char/min | Mainsah et al. (2015)
RSVP P300 | LIS | 0.4–2.3 char/min | Oken et al. (2014)
RSVP P300 | Healthy | 1.2–2.5 letters/min | Acqualagna and Blankertz (2013), Oken et al. (2014)
SSVEP | Healthy | 33.3 char/min | Chen et al. (2015)
SSVEP | Healthy | 10.6 selections/min (27.2 bits/min) | Friman et al. (2007)
AAC (row–column) | Healthy | 18–22 letters/min | Roark et al. (2015)
AAC (row–column) | LIS | 6.0 letters/min | Roark et al. (2015)
AAC (direct selection) | Healthy | 5.2 words/min | Trnka et al. (2008)

Note. BCI = brain–computer interface; ALS = amyotrophic lateral sclerosis; RSVP = rapid serial visual presentation; LIS = locked-in syndrome; SSVEP = steady state visually evoked potential; AAC = augmentative and alternative communication; char = character.
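Several of the rates in Table 2 are quoted in bits/min. The standard way to obtain such figures in the broader BCI literature (the Wolpaw information transfer rate, which this tutorial cites as data but does not define) combines the number of selectable items, the selection accuracy, and the selection rate:

```python
import math

# Wolpaw information transfer rate: each selection from n items at accuracy p
# carries B bits, and bits/min = B x selections/min. A standard BCI metric
# from the wider literature, shown here to explain the Table 2 units.

def bits_per_selection(n, p):
    """Bits conveyed by one selection among n items at accuracy p."""
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def bits_per_minute(n, p, selections_per_min):
    return bits_per_selection(n, p) * selections_per_min

# Example: a 36-item speller at 90% accuracy and 5 selections/min.
print(round(bits_per_minute(36, 0.9, 5.0), 1))  # prints 20.9
```

This is why a fast but error-prone interface can report a lower bit rate than a slower, more accurate one: accuracy enters the formula logarithmically and penalizes errors heavily.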
Differences in communication rate can also be reduced based on the type of BCI method (e.g., 3–7.5 characters per minute; Mainsah et al., 2015). These results suggest that BCI has become another clinical option for AAC intervention that should be considered during the clinical decision-making process. BCIs have particular utility when considered for the most severe cases; the communication rates described in the literature are sufficient to provide access to language and communication for those who are currently without both. Recent improvements in BCI designs have shown promising results (e.g., Chen et al., 2015; Townsend & Platsko, 2016), which may start to push BCI communication efficacy past current benchmarks for AAC. Importantly, few BCIs have been evaluated over extended periods of time (Holz et al., 2015; Sellers et al., 2010); therefore, it is possible that BCI selection may improve over time with training.

Topic 4: Fatigue and Its Effects

BCIs, like conventional AAC access techniques, require various levels of attention, working memory, and cognitive load that all affect the amount of effort (and fatigue) needed to operate the device (Kaethner, Kübler, & Halder, 2015; Pasqualotto et al., 2015). There is evidence that scanning-type AAC devices are not overly tiring (Gibbons & Beneteau, 2010; Roark, Beckley, Gibbons, & Fried-Oken, 2013), but prolonged AAC use can have a cumulative effect and reduce communication effectiveness (Trnka et al., 2008). In these cases, language modeling and word prediction can reduce fatigue and maintain high communication performance using an AAC device (Trnka et al., 2008). Within BCI, reports of fatigue, effort, and cognitive load are mixed.

Table 3. Take-home points collated from the interdisciplinary research team that highlight the major considerations for BCI as possible access methods for AAC.

- BCIs do not yet have the ability to translate thoughts or speech plans into fluent speech productions. Direct BCIs, usually involving surgery for implantation of recording electrodes, are currently being developed as speech neural prostheses.
- Noninvasive BCIs are most often designed as an indirect method for accessing AAC, whether custom developed or commercial.
- There are a variety of noninvasive BCIs that can support clients with a range of sensory, motor, and cognitive abilities, and selecting the most appropriate BCI technique requires individualized assessment and feature matching procedures.
- The potential population of individuals who may use BCIs is heterogeneous, though current work is focused on individuals with acquired neurological and neuromotor disorders (e.g., locked-in syndrome due to stroke, traumatic brain injury, and ALS); limited study has involved individuals with congenital disorders such as CP.
- BCIs are currently not as efficient as existing AAC access methods for individuals with some form of movement, though the technology is progressing. For these individuals, BCIs provide an opportunity to augment or complement existing approaches.
- For individuals with progressive neurodegenerative diseases, learning to use BCI before speech and motor function worsen beyond the aid of existing access technologies may help maintain continuity of communication.
- For those who are unable to use current access methods, BCIs may provide the only form of access to communication.
- Long-term BCI use is only just beginning; BCI performance may improve as the technology matures and as individuals who use BCI gain greater proficiency and familiarity with the device.

Note. BCI = brain–computer interface; AAC = augmentative and alternative communication; ALS = amyotrophic lateral sclerosis; CP = cerebral palsy.
Individuals with ALS have reported that visual P300 BCIs required more effort and time compared with eye gaze access (Pasqualotto et al., 2015), whereas others reported that a visual P300 speller was easier to use and not overly exhausting compared with eye gaze because it does not require precise eye movements (Holz et al., 2015; Kaethner et al., 2015). Other findings from these studies indicate that the visual P300 speller incurred increased cognitive load and fatigue for some (Kaethner et al., 2015), whereas for others, there was less strain compared with eye-tracking systems (Holz et al., 2015). Applying several conventional and BCI-based AAC access techniques with the same individual may permit an adaptive strategy that relies on certain modes of access based on each individual's level of fatigue. This would allow one to change his or her method of AAC access to suit his or her fatigue level throughout the day.

Topic 5: BCI as an Addition to Conventional AAC Access Technology

At their current stage of development, BCIs are primarily a choice for individuals with absent, severely impaired, or highly unreliable speech and motor control. As BCIs advance as an access modality for AAC, it is important that the goal of intervention remain selecting the AAC method that is most appropriate rather than the most technologically advanced access method (Light & McNaughton, 2013). Each of the BCI devices discussed has unique sensory, motor, and cognitive requirements that may best match specific profiles of individuals who may require BCI, as well as the training required for device proficiency. The question of BCIs replacing any form of AAC must therefore be determined according to the needs, wants, and abilities of the individual. These factors play a crucial role in motivation, which has a direct impact on BCI effectiveness (Nijboer, Birbaumer, & Kübler, 2010).
Other assessment considerations include comorbid conditions, such as a history of seizures, which is a contraindication for some visual BCIs due to the rapidly flashing icons (Volosyak et al., 2011). Cognitive factors, such as differing levels of working memory (Sprague, McBee, & Sellers, 2015) and an ability to focus one's attention (Geronimo et al., 2016; Riccio et al., 2013), are also important considerations because they have been correlated with successful BCI operation. There are additional considerations for motor-based BCIs, including (a) a well-known observation that the SMR, which is necessary for device control, cannot be adequately estimated in approximately 15%–30% of all individuals with or without impairment (Vidaurre & Blankertz, 2010) and (b) the possibility of performance decline or instability as a result of progressive neuromotor disorders, such as ALS (Silvoni et al., 2013). These concerns are currently being addressed using assessment techniques to predict motor-based BCI performance, including a questionnaire to estimate kinesthetic motor imagery performance (i.e., first person imagery, or imagining performing and experiencing the sensations associated with the movement; Vuckovic & Osuagwu, 2013), which is known to lead to better BCI performance compared with third person motor imagery (e.g., watching yourself from across the room; Neuper, Scherer, Reiner, & Pfurtscheller, 2005). Overall, there is limited research available on the inter- and intraindividual considerations for BCI intervention that may affect BCI performance (Kleih & Kübler, 2015); therefore, clinical assessment tools and guidelines must be developed to help determine the most appropriate method of accessing AAC (including both traditional and BCI-based technologies) for each individual.
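As a toy illustration of such feature matching (the profile fields and requirement sets below are hypothetical simplifications for a thought experiment, not a clinical instrument), one can screen BCI varieties against an individual's sensory, motor, and cognitive profile, echoing the user requirements summarized in Table 1:

```python
# Toy feature-matching sketch: exclude BCI varieties whose prerequisites
# conflict with an individual's profile. Names and requirement sets are
# hypothetical simplifications of Table 1, not clinical criteria.

REQUIREMENTS = {
    "P300 grid":     {"vision", "oculomotor_control", "literacy"},
    "RSVP P300":     {"vision", "literacy"},
    "Auditory P300": {"hearing"},
    "SSVEP":         {"vision", "no_seizure_history"},
    "ASSR":          {"hearing"},
    "Motor imagery": {"motor_imagery_ability"},
}

def candidate_bcis(abilities):
    """Return BCI varieties whose every requirement the user's profile meets."""
    return sorted(name for name, req in REQUIREMENTS.items()
                  if req <= abilities)  # set containment: req is a subset

# Example: intact hearing, literacy, and motor imagery, but impaired
# oculomotor control and a seizure history (so no "oculomotor_control"
# or "no_seizure_history" in the profile).
profile = {"vision", "hearing", "literacy", "motor_imagery_ability"}
print(candidate_bcis(profile))
# prints ['ASSR', 'Auditory P300', 'Motor imagery', 'RSVP P300']
```

A real assessment would of course weigh graded abilities, fatigue, motivation, and environment rather than binary features; the point is only that screening by requirement profiles is mechanizable and transparent.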
These efforts have already begun (e.g., Fried-Oken et al., 2013; Kübler et al., 2015), and more work is needed to ensure that existing AAC practices are well incorporated with BCI-based assessment tools. In summary, BCI access techniques should not be seen as competing with or replacing existing AAC methods that have a history of success. Rather, the purpose of BCI-based communication is to provide a feature-matched alternate or complementary method for accessing AAC for individuals with suitability, preference, and motivation for BCI or for those who are unable to utilize current communicative methods.

Topic 6: Limitations of BCI and Future Directions

Future applications of noninvasive BCIs will continue to focus on increasing accuracy and communication rate for use either as standalone AAC options or to access existing AAC devices. One major area of future work is to improve the techniques for noninvasively recording brain activity needed for BCI operation. Though a large majority of people who may potentially use BCI have reported that they are willing to wear an EEG cap (84%; Huggins, Wren, & Gruis, 2011), the application of EEG sensors and their stability over time are still obstacles that need to be overcome for practical use. Most EEG-based BCI systems require the application of electrolytic gel to bridge the contact between electrodes and the scalp for good signal acquisition. Unfortunately, this type of application has been reported to be inconvenient and cumbersome by individuals who currently use BCI and may also be difficult to set up and maintain by a trained facilitator (Blain-Moraes, Schaff, Gruis, Huggins, & Wren, 2012). Further, electrolytic gels dry out over time, gradually degrading EEG signal acquisition.
Recent advances in dry electrode technology may help overcome this limitation (Blain-Moraes et al., 2012) by allowing for recording of EEG without electrolytic solutions and may lead to easier application of EEG sensors and prolonged stability of EEG signal acquisition. In order to be used in all environments, EEG must be portable and robust to external sources of noise and artifacts. EEG is highly susceptible to electrical artifacts from the muscles, environment, and other medical equipment (e.g., mechanical ventilation). Therefore, an assessment is needed for likely environments of use, as are guidelines for minimizing the effect of these artifacts. Simultaneous efforts should be made toward improving the tolerance of EEG recording equipment to these outside sources of electrical noise (Kübler et al., 2015). The ultimate potential of BCI technology is the development of a system that can directly decode brain activity into communication (e.g., written text or spoken output), rather than indirectly operate a communication device. This type of neural decoding is primarily under investigation using invasive methods, namely electrocorticography and intracortical microelectrodes, and has focused on decoding phonemes (Blakely et al., 2008; Brumberg et al., 2011; Herff et al., 2015; Mugler et al., 2014; Tankus et al., 2012), words (Kellis et al., 2010; Leuthardt et al., 2011; Pei et al., 2011), and time-frequency representations (Martin et al., 2014). Invasive methods have the advantage of increased signal quality and resistance to sources of external noise but require a surgical intervention to implant recording electrodes either in or on the brain (Chakrabarti et al., 2015). The goal of these decoding studies and other invasive electrophysiological investigations of speech processing is to develop a neural prosthesis for fluent-like speech production (Brumberg, Burnison, & Guenther, 2016).
Although invasive techniques come at a surgical cost, one study reported that 72% of individuals with ALS indicated they were willing to undergo outpatient surgery, and 41% were willing to have a surgical intervention with a short hospital stay, to access invasive BCI methods (Huggins et al., 2011). That said, very few invasive BCIs are available for clinical research or long-term at-home use (e.g., Vansteensel et al., 2016); therefore, noninvasive methods will likely be first adopted for use in AAC interventions.

Conclusions

This tutorial has focused on a few important considerations for the future of BCIs as AAC: (a) Despite broad speech-language pathology expertise in AAC, there are few clinical guidelines and recommendations for the use of BCI as an AAC access technique; (b) the most mature BCI technologies have been designed as methods to access communication interfaces rather than directly accessing thoughts, utterances, and speech motor plans from the brain; and (c) BCI is an umbrella term for a variety of brain-to-computer techniques that require comprehensive assessment for matching people who may potentially use BCI with the most appropriate device. The purpose of this tutorial was to bridge the gaps in knowledge between AAC and BCI practices, describe BCIs in the context of current AAC conventions, and motivate interdisciplinary collaborations to pursue rigorous clinical research to adapt AAC feature matching protocols to include intervention with BCIs. A summary of take-home messages to help bridge the gap between knowledge of AAC and BCI was compiled from our interdisciplinary team and is summarized in Table 3. Additional training and hands-on experience will improve acceptance of BCI approaches for interventionists targeted by this tutorial, as well as people who may use BCI in the future. Key to the clinical acceptance of BCI are necessary improvements in communication rate and accuracy via BCI access methods (Kageyama et al., 2014).
However, many people who may use BCIs understand the current limitations yet recognize the potential positive benefits of BCI, reporting that the technology offers "freedom," "hope," "connection," and unlocking from their speech and motor impairments (Blain-Moraes et al., 2012). A significant component of future BCI research will focus on meeting the priorities of people who use BCIs. A recent study assessed the opinions and priorities of individuals with ALS in regard to BCI design and reported that individuals with ALS prioritized performance accuracy of at least 90% and a rate of at least 15 to 19 letters per minute (Huggins et al., 2011). From our review, most BCI technologies have not yet reached these specifications, though some recent efforts have made considerable progress (e.g., Chen et al., 2015; Townsend & Platsko, 2016). A renewed emphasis on user-centered design and development is helping to move this technology forward by best matching the wants and needs of individuals who may use BCI with realistic expectations of BCI function. It is imperative to include clinicians, individuals who use AAC and BCI, and other stakeholders in the BCI design process to improve usability and performance and to help find the optimal translation from the laboratory to the real world.

Acknowledgments

This work was supported in part by the National Institutes of Health (National Institute on Deafness and Other Communication Disorders R03-DC011304), the University of Kansas New Faculty Research Fund, and the American Speech-Language-Hearing Foundation New Century Scholars Research Grant, all awarded to J. Brumberg.

References

Acqualagna, L., & Blankertz, B. (2013). Gaze-independent BCI spelling using rapid serial visual presentation (RSVP). Clinical Neurophysiology, 124(5), 901–908.
Allison, B. Z., McFarland, D. J., Schalk, G., Zheng, S. D., Jackson, M. M., & Wolpaw, J. R. (2008).
Towards an independent brain–computer interface using steady state visual evoked potentials. Clinical Neurophysiology, 119(2), 399–408.
Beukelman, D., & Mirenda, P. (2013). Augmentative and alternative communication: Supporting children and adults with complex communication needs (4th ed.). Baltimore, MD: Brookes.
Birbaumer, N., Kübler, A., Ghanayim, N., Hinterberger, T., Perelmouter, J., Kaiser, J., . . . Flor, H. (2000). The thought translation device (TTD) for completely paralyzed patients. IEEE Transactions on Rehabilitation Engineering, 8(2), 190–193.
Brumberg et al.: AAC-BCI Tutorial
Blain-Moraes, S., Schaff, R., Gruis, K. L., Huggins, J. E., & Wren, P. A. (2012). Barriers to and mediators of brain–computer interface user acceptance: Focus group findings. Ergonomics, 55(5), 516–525.
Blakely, T., Miller, K. J., Rao, R. P. N., Holmes, M. D., & Ojemann, J. G. (2008). Localization and classification of phonemes using high spatial resolution electrocorticography (ECoG) grids. In 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 4964–4967). https://doi.org/10.1109/IEMBS.2008.4650328
Blankertz, B., Dornhege, G., Krauledat, M., Müller, K.-R., Kunzmann, V., Losch, F., & Curio, G. (2006). The Berlin brain–computer interface: EEG-based communication without subject training. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2), 147–152.
Brumberg, J. S., Burnison, J. D., & Guenther, F. H. (2016). Brain-machine interfaces for speech restoration. In P. Van Lieshout, B. Maassen, & H. Terband (Eds.), Speech motor control in normal and disordered speech: Future developments in theory and methodology (pp. 275–304). Rockville, MD: ASHA Press.
Brumberg, J. S., Burnison, J. D., & Pitt, K. M. (2016). Using motor imagery to control brain–computer interfaces for communication. In Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience.
Cham, Switzerland: Springer International Publishing.
Brumberg, J. S., & Guenther, F. H. (2010). Development of speech prostheses: Current status and recent advances. Expert Review of Medical Devices, 7(5), 667–679.
Brumberg, J. S., Wright, E. J., Andreasen, D. S., Guenther, F. H., & Kennedy, P. R. (2011). Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex. Frontiers in Neuroscience, 5, 65.
Brunner, P., Joshi, S., Briskin, S., Wolpaw, J. R., Bischof, H., & Schalk, G. (2010). Does the "P300" speller depend on eye gaze? Journal of Neural Engineering, 7(5), 056013.
Chakrabarti, S., Sandberg, H. M., Brumberg, J. S., & Krusienski, D. J. (2015). Progress in speech decoding from the electrocorticogram. Biomedical Engineering Letters, 5(1), 10–21.
Chen, X., Wang, Y., Gao, S., Jung, T.-P., & Gao, X. (2015). Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain–computer interface. Journal of Neural Engineering, 12(4), 46008.
Cheng, M., Gao, X., Gao, S., & Xu, D. (2002). Design and implementation of a brain–computer interface with high transfer rates. IEEE Transactions on Biomedical Engineering, 49(10), 1181–1186.
Donchin, E., Spencer, K. M., & Wijesinghe, R. (2000). The mental prosthesis: Assessing the speed of a P300-based brain–computer interface. IEEE Transactions on Rehabilitation Engineering, 8(2), 174–179.
Fager, S., Beukelman, D., Fried-Oken, M., Jakobs, T., & Baker, J. (2012). Access interface strategies. Assistive Technology, 24(1), 25–33.
Farwell, L., & Donchin, E. (1988). Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6), 510–523.
Foulds, R. (1980). Communication rates of nonspeech expression as a function of manual tasks and linguistic constraints. In International Conference on Rehabilitation Engineering (pp. 83–87).
Fried-Oken, M., Mooney, A., Peters, B., & Oken, B. (2013). A clinical screening protocol for the RSVP keyboard brain–computer interface. Disability and Rehabilitation: Assistive Technology, 10(1), 11–18.
Friman, O., Lüth, T., Volosyak, I., & Gräser, A. (2007). Spelling with steady-state visual evoked potentials. In 2007 3rd International IEEE/EMBS Conference on Neural Engineering (pp. 354–357). Kohala Coast, HI: IEEE.
Geronimo, A., Simmons, Z., & Schiff, S. J. (2016). Performance predictors of brain–computer interfaces in patients with amyotrophic lateral sclerosis. Journal of Neural Engineering, 13(2), 026002.
Gibbons, C., & Beneteau, E. (2010). Functional performance using eye control and single switch scanning by people with ALS. Perspectives on Augmentative and Alternative Communication, 19(3), 64–69.
Goncharova, I. I., McFarland, D. J., Vaughan, T. M., & Wolpaw, J. R. (2003). EMG contamination of EEG: Spectral and topographical characteristics. Clinical Neurophysiology, 114(9), 1580–1593.
Halder, S., Rea, M., Andreoni, R., Nijboer, F., Hammer, E. M., Kleih, S. C., . . . Kübler, A. (2010). An auditory oddball brain–computer interface for binary choices. Clinical Neurophysiology, 121(4), 516–523.
Herff, C., Heger, D., de Pesters, A., Telaar, D., Brunner, P., Schalk, G., & Schultz, T. (2015). Brain-to-text: Decoding spoken phrases from phone representations in the brain. Frontiers in Neuroscience, 9, 217.
Hill, K., & Romich, B. (2002). A rate index for augmentative and alternative communication. International Journal of Speech Technology, 5(1), 57–64.
Hill, N. J., Ricci, E., Haider, S., McCane, L. M., Heckman, S., Wolpaw, J. R., & Vaughan, T. M. (2014). A practical, intuitive brain–computer interface for communicating "yes" or "no" by listening. Journal of Neural Engineering, 11(3), 035003.
Holz, E. M., Botrel, L., Kaufmann, T., & Kübler, A. (2015).
Long-term independent brain–computer interface home use improves quality of life of a patient in the locked-in state: A case study. Archives of Physical Medicine and Rehabilitation, 96(3), S16–S26.
Huggins, J. E., Wren, P. A., & Gruis, K. L. (2011). What would brain–computer interface users want? Opinions and priorities of potential users with amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis, 12(5), 318–324.
Käthner, I., Kübler, A., & Halder, S. (2015). Comparison of eye tracking, electrooculography and an auditory brain–computer interface for binary communication: A case study with a participant in the locked-in state. Journal of NeuroEngineering and Rehabilitation, 12(1), 76.
Kageyama, Y., Hirata, M., Yanagisawa, T., Shimokawa, T., Sawada, J., Morris, S., . . . Yoshimine, T. (2014). Severely affected ALS patients have broad and high expectations for brain-machine interfaces. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration, 15(7–8), 513–519.
Käthner, I., Ruf, C. A., Pasqualotto, E., Braun, C., Birbaumer, N., & Halder, S. (2013). A portable auditory P300 brain–computer interface with directional cues. Clinical Neurophysiology, 124(2), 327–338.
Kellis, S., Miller, K., Thomson, K., Brown, R., House, P., & Greger, B. (2010). Decoding spoken words using local field potentials recorded from the cortical surface. Journal of Neural Engineering, 7(5), 056007.
Kleih, S. C., & Kübler, A. (2015). Psychological factors influencing brain–computer interface (BCI) performance. 2015 IEEE International Conference on Systems, Man, and Cybernetics, 3192–3196.
Klobassa, D. S., Vaughan, T. M., Brunner, P., Schwartz, N. E., Wolpaw, J. R., Neuper, C., & Sellers, E. W. (2009). Toward a high-throughput auditory P300-based brain–computer interface. Clinical Neurophysiology, 120(7), 1252–1261.
Koester, H. H., & Levine, S. (1996). Effect of a word prediction feature on user performance.
Augmentative and Alternative Communication, 12(3), 155–168.
Kübler, A., Furdea, A., Halder, S., Hammer, E. M., Nijboer, F., & Kotchoubey, B. (2009). A brain–computer interface controlled auditory event-related potential (P300) spelling system for locked-in patients. Annals of the New York Academy of Sciences, 1157, 90–100.
Kübler, A., Holz, E. M., Sellers, E. W., & Vaughan, T. M. (2015). Toward independent home use of brain–computer interfaces: A decision algorithm for selection of potential end-users. Archives of Physical Medicine and Rehabilitation, 96(3), S27–S32.
Kübler, A., Kotchoubey, B., Hinterberger, T., Ghanayim, N., Perelmouter, J., Schauer, M., . . . Birbaumer, N. (1999). The thought translation device: A neurophysiological approach to communication in total motor paralysis. Experimental Brain Research, 124(2), 223–232.
Leuthardt, E. C., Gaona, C., Sharma, M., Szrama, N., Roland, J., Freudenberg, Z., . . . Schalk, G. (2011). Using the electrocorticographic speech network to control a brain–computer interface in humans. Journal of Neural Engineering, 8(3), 036004.
Light, J., & McNaughton, D. (2013). Putting people first: Re-thinking the role of technology in augmentative and alternative communication intervention. Augmentative and Alternative Communication, 29(4), 299–309.
Lin, Z., Zhang, C., Wu, W., & Gao, X. (2007). Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Transactions on Biomedical Engineering, 54(6), 1172–1176.
Lopez, M.-A., Pomares, H., Pelayo, F., Urquiza, J., & Perez, J. (2009). Evidences of cognitive effects over auditory steady-state responses by means of artificial neural networks and its use in brain–computer interfaces. Neurocomputing, 72(16–18), 3617–3623.
Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., & Arnaldi, B. (2007). A review of classification algorithms for EEG-based brain–computer interfaces. Journal of Neural Engineering, 4(2), R1–R13.
Lynn, J. M. D., Armstrong, E., & Martin, S.
(2016). User centered design and validation during the development of domestic brain computer interface applications for people with acquired brain injury and therapists: A multi-stakeholder approach. Journal of Assistive Technologies, 10(2), 67–78.
Mainsah, B. O., Collins, L. M., Colwell, K. A., Sellers, E. W., Ryan, D. B., Caves, K., & Throckmorton, C. S. (2015). Increasing BCI communication rates with dynamic stopping towards more practical use: An ALS study. Journal of Neural Engineering, 12(1), 016013.
Martin, S., Brunner, P., Holdgraf, C., Heinze, H.-J., Crone, N. E., Rieger, J., . . . Pasley, B. N. (2014). Decoding spectrotemporal features of overt and covert speech from the human cortex. Frontiers in Neuroengineering, 7, 14.
McCane, L. M., Sellers, E. W., McFarland, D. J., Mak, J. N., Carmack, C. S., Zeitlin, D., . . . Vaughan, T. M. (2014). Brain–computer interface (BCI) evaluation in people with amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration, 15(3–4), 207–215.
Miner, L. A., McFarland, D. J., & Wolpaw, J. R. (1998). Answering questions with an electroencephalogram-based brain–computer interface. Archives of Physical Medicine and Rehabilitation, 79(9), 1029–1033.
Mugler, E. M., Patton, J. L., Flint, R. D., Wright, Z. A., Schuele, S. U., Rosenow, J., . . . Slutzky, M. W. (2014). Direct classification of all American English phonemes using signals from functional speech motor cortex. Journal of Neural Engineering, 11(3), 035015.
Müller-Putz, G. R., Scherer, R., Brauneis, C., & Pfurtscheller, G. (2005). Steady-state visual evoked potential (SSVEP)-based communication: Impact of harmonic frequency components. Journal of Neural Engineering, 2(4), 123–130.
Mussa-Ivaldi, F. A., & Miller, L. E. (2003). Brain-machine interfaces: Computational demands and clinical needs meet basic neuroscience. Trends in Neurosciences, 26(6), 329–334.
Neuper, C., Müller, G. R., Kübler, A., Birbaumer, N., & Pfurtscheller, G. (2003).
Clinical application of an EEG-based brain–computer interface: A case study in a patient with severe motor impairment. Clinical Neurophysiology, 114(3), 399–409.
Neuper, C., Scherer, R., Reiner, M., & Pfurtscheller, G. (2005). Imagery of motor actions: Differential effects of kinesthetic and visual-motor mode of imagery in single-trial EEG. Cognitive Brain Research, 25(3), 668–677.
Nijboer, F., Birbaumer, N., & Kübler, A. (2010). The influence of psychological state and motivation on brain–computer interface performance in patients with amyotrophic lateral sclerosis – A longitudinal study. Frontiers in Neuroscience, 4, 1–13.
Nijboer, F., Furdea, A., Gunst, I., Mellinger, J., McFarland, D. J., Birbaumer, N., & Kübler, A. (2008). An auditory brain–computer interface (BCI). Journal of Neuroscience Methods, 167(1), 43–50.
Nijboer, F., Sellers, E. W., Mellinger, J., Jordan, M. A., Matuz, T., Furdea, A., . . . Kübler, A. (2008). A P300-based brain–computer interface for people with amyotrophic lateral sclerosis. Clinical Neurophysiology, 119(8), 1909–1916.
Oken, B. S., Orhan, U., Roark, B., Erdogmus, D., Fowler, A., Mooney, A., . . . Fried-Oken, M. B. (2014). Brain–computer interface with language model-electroencephalography fusion for locked-in syndrome. Neurorehabilitation and Neural Repair, 28(4), 387–394.
Oostenveld, R., & Praamstra, P. (2001). The five percent electrode system for high-resolution EEG and ERP measurements. Clinical Neurophysiology, 112(4), 713–719.
Pasqualotto, E., Matuz, T., Federici, S., Ruf, C. A., Bartl, M., Olivetti Belardinelli, M., . . . Halder, S. (2015). Usability and workload of access technology for people with severe motor impairment: A comparison of brain–computer interfacing and eye tracking. Neurorehabilitation and Neural Repair, 29(10), 950–957.
Patel, R. (2011). Message formulation, organization, and navigation schemes for icon-based communication aids.
In Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '11) (pp. 5364–5367). Boston, MA: IEEE.
Pei, X., Barbour, D. L., Leuthardt, E. C., & Schalk, G. (2011). Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. Journal of Neural Engineering, 8(4), 046028.
Pfurtscheller, G., & Neuper, C. (2001). Motor imagery and direct brain–computer communication. Proceedings of the IEEE, 89(7), 1123–1134.
Plum, F., & Posner, J. B. (1972). The diagnosis of stupor and coma. Contemporary Neurology Series, 10, 1–286.
Regan, D. (1989). Human brain electrophysiology: Evoked potentials and evoked magnetic fields in science and medicine. New York, NY: Elsevier.
Riccio, A., Simione, L., Schettini, F., Pizzimenti, A., Inghilleri, M., Belardinelli, M. O., . . . Cincotti, F. (2013). Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis. Frontiers in Human Neuroscience, 7, 732.
Roark, B., Beckley, R., Gibbons, C., & Fried-Oken, M. (2013). Huffman scanning: Using language models within fixed-grid keyboard emulation. Computer Speech and Language, 27(6), 1212–1234.
Roark, B., Fried-Oken, M., & Gibbons, C. (2015). Huffman and linear scanning methods with statistical language models. Augmentative and Alternative Communication, 31(1), 37–50.
Scherer, R., Billinger, M., Wagner, J., Schwarz, A., Hettich, D. T., Bolinger, E., . . . Müller-Putz, G. (2015). Thought-based row–column scanning communication board for individuals with cerebral palsy. Annals of Physical and Rehabilitation Medicine, 58(1), 14–22.
Scherer, R., Müller, G. R., Neuper, C., Graimann, B., & Pfurtscheller, G. (2004). An asynchronously controlled EEG-based virtual keyboard: Improvement of the spelling rate. IEEE Transactions on Biomedical Engineering, 51(6), 979–984.
Sellers, E. W., & Donchin, E. (2006).
A P300-based brain–computer interface: Initial tests by ALS patients. Clinical Neurophysiology, 117(3), 538–548.
Sellers, E. W., Vaughan, T. M., & Wolpaw, J. R. (2010). A brain–computer interface for long-term independent home use. Amyotrophic Lateral Sclerosis, 11(5), 449–455.
Silvoni, S., Cavinato, M., Volpato, C., Ruf, C. A., Birbaumer, N., & Piccione, F. (2013). Amyotrophic lateral sclerosis progression and stability of brain–computer interface communication. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration, 14(5–6), 390–396.
Sprague, S. A., McBee, M. T., & Sellers, E. W. (2015). The effects of working memory on brain–computer interface performance. Clinical Neurophysiology, 127(2), 1331–1341.
Tankus, A., Fried, I., & Shoham, S. (2012). Structured neuronal encoding and decoding of human speech features. Nature Communications, 3, 1015.
Thistle, J. J., & Wilkinson, K. M. (2013). Working memory demands of aided augmentative and alternative communication for individuals with developmental disabilities. Augmentative and Alternative Communication, 29(3), 235–245.
Thistle, J. J., & Wilkinson, K. M. (2015). Building evidence-based practice in AAC display design for young children: Current practices and future directions. Augmentative and Alternative Communication, 31(2), 124–136.
Townsend, G., & Platsko, V. (2016). Pushing the P300-based brain–computer interface beyond 100 bpm: Extending performance guided constraints into the temporal domain. Journal of Neural Engineering, 13(2), 026024.
Trnka, K., McCaw, J., Yarrington, D., McCoy, K. F., & Pennington, C. (2008). Word prediction and communication rate in AAC. In Proceedings of the IASTED International Conference on Telehealth/Assistive Technologies (Telehealth/AT '08) (pp. 19–24). Baltimore, MD: ACTA Press.
Vansteensel, M. J., Pels, E. G. M., Bleichner, M. G., Branco, M. P., Denison, T., Freudenburg, Z. V., . . . Ramsey, N. F. (2016).
Fully implanted brain–computer interface in a locked-in patient with ALS. New England Journal of Medicine, 375(21), 2060–2066.
Vaughan, T. M., McFarland, D. J., Schalk, G., Sarnacki, W. A., Krusienski, D. J., Sellers, E. W., & Wolpaw, J. R. (2006). The Wadsworth BCI Research and Development Program: At home with BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2), 229–233.
Vidaurre, C., & Blankertz, B. (2010). Towards a cure for BCI illiteracy. Brain Topography, 23(2), 194–198.
Volosyak, I., Valbuena, D., Lüth, T., Malechka, T., & Gräser, A. (2011). BCI demographics II: How many (and what kinds of) people can use a high-frequency SSVEP BCI? IEEE Transactions on Neural Systems and Rehabilitation Engineering, 19(3), 232–239.
Vuckovic, A., & Osuagwu, B. A. (2013). Using a motor imagery questionnaire to estimate the performance of a brain–computer interface based on object oriented motor imagery. Clinical Neurophysiology, 124(8), 1586–1595.
Wills, S. A., & MacKay, D. J. C. (2006). DASHER – An efficient writing system for brain–computer interfaces? IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2), 244–246.
Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002). Brain–computer interfaces for communication and control. Clinical Neurophysiology, 113(6), 767–791.
Wolpaw, J. R., & McFarland, D. J. (2004). Control of a two-dimensional movement signal by a noninvasive brain–computer interface in humans. Proceedings of the National Academy of Sciences of the United States of America, 101(51), 17849–17854.
International Journal of Speech-Language Pathology, 2012; 14(2): 165–173

Use of simulated patients for a student learning experience on managing difficult patient behaviour in speech-language pathology contexts

TIM BRESSMANN & ALICE ERIKS-BROPHY
Department of Speech-Language Pathology, University of Toronto, Toronto, Canada

Abstract

A student learning experience about managing difficult patients in speech-language pathology is described. In 2006, 40 students participated in a daylong learning experience. The first part of the experience consisted of presentations and discussions of different scenarios of interpersonal difficulty. The theoretical introduction was followed by an active learning experience with simulated patients. A similar experience without the simulated patients was conducted for 45 students in 2010. Both years of students rated the experience with an overall grade and gave qualitative feedback. There was no significant difference between the overall grades given by the students in 2006 and 2010. The qualitative feedback indicated that the students valued the experience and that they felt it added to their learning and professional development. The students in 2006 also provided detailed feedback on the simulation activities. Students endorsed the experience and recommended that the learning experience be repeated for future students. However, the students in 2006 also commented that they had felt inadequately prepared for interacting with the simulated patients. A learning experience with simulated patients can add to students' learning. The inclusion of simulated patients can provide a different, but not automatically better, learning experience.

Keywords: Difficult patients, simulated patients, speech-language pathology, communication disorders, action learning, experiential learning.
Introduction

In the course of the 20th century, the theory and practice of education in the health sciences have evolved, changing from an apprenticeship style to a clearly focused and competency-driven educational activity (Nestel & Kneebone, 2010). With this change, health sciences educators have also striven to improve their students' communication and interaction skills. The simulated patient is a powerful tool that allows teachers in the health sciences to help their students develop their interactional skills (Barrows, 1968). A simulated patient (also called a standardized patient) is an individual who is trained to act like a patient and simulate the symptoms of a medical condition.

Simulated patient interactions lend themselves naturally to the pedagogical concept of experiential learning, also often called action learning (Kolb, 1976, 1984). Simulated patients are used in medical education to help students hone their diagnostic and interviewing skills in realistic and lifelike scenarios (Kneebone & Nestel, 2005). Simulated patients may serve to build student confidence for the handling of difficult interactions or clinical decision-making processes. The knowledge that the patient is a paid medical actor may also comfort the student in the case of a negative experience that leads to developmental feedback. In this case, the student can rest assured that no real patient was affected by his or her insufficient clinical skills or interaction style (Kneebone & Nestel, 2005).

Simulated patients are convenient and reliable for the educator. Scenarios can be standardized and may be acted out repeatedly. The simulation situation allows students to train for clinical situations without imposing on a real patient's time and without the same need for supervision (Nestel & Kneebone, 2010).
Simulated patients have been used successfully to train students in general medicine (Monahan, Grover, Kavey, Greenwald, Jacobsen, & Weinberger, 1988), psychiatry (Rimondini, Del Piccolo, Goss, Mazzi, Paccaloni, & Zimmermann, 2006), and nursing (Festa, Baliko, Mangiafico, & Jarosinski, 2000). The simulated patient can be used to teach general skills such as conducting patient interviews (McKenna, Innes, French, Streitberg, & Gilmour, 2011) as well as specific skills, such as communicating with a person with aphasia (Ramsay, Keith, & Ker, 2008).

Correspondence: Tim Bressmann, PhD, Associate Professor, Department of Speech-Language Pathology, University of Toronto, 160-500 University Avenue, Toronto, ON, M5G 1V7, Canada. Tel: +1-416-978-7088. Fax: +1-416-978-1596. Email: tim.bressmann@utoronto.ca
ISSN 1754-9507 print/ISSN 1754-9515 online © 2012 The Speech Pathology Association of Australia Limited. Published by Informa UK, Ltd. DOI: 10.3109/17549507.2011.638727

Simulated patients are currently not used as commonly in speech-language pathology programs as in other health sciences disciplines. There may be a variety of reasons for this. Costs may be one factor, because simulated patients typically require payment. Training medical actors to convincingly simulate a communication or swallowing disorder is difficult and may not always lead to an acceptable result, owing to the complex and dynamic nature of these disorders. For example, it would be difficult to train (and maintain) a sufficiently large and varied set of simulated patients who could portray different types of aphasia well enough to participate in student-administered standardized neuropsychological tests.

In the first study published on this topic in the field of communication disorders, Edwards, Franke, and McGuiness (1995) described a group work exercise with a patient simulating Broca's aphasia for Australian students of speech-language pathology.
The authors concluded that the experience was positive and helpful for student learning but did not base these conclusions on a systematic assessment of student feedback or learning outcomes. The authors also noted that a high level of commitment and planning would be required to use simulated patients on a regular basis. Syder (1996) described how a roster of simulated clients was developed to teach generic clinical skills to two groups of British speech and language pathology students. The author concluded that teaching with simulated patients can be an important adjunct to traditional individual clinical placements.

In the most ambitious and systematic study in this area to date, Zraick, Allen, and Johnson (2003) used a simulated patient to teach American students in speech-language pathology interpersonal and communication skills for interacting with aphasic patients. Using objective structured clinical examinations (OSCEs; Harden, Stevenson, Downie, & Wilson, 1975), the authors were not able to demonstrate significant differences in OSCE scores between students who had only attended lectures and students who had also practiced with a simulated patient.

Wilson, Hill, Hughes, Sher, and Laplante-Levesque (2010) used a questionnaire study to evaluate student audiologists' assessments of a learning activity that involved simulated patients. Students indicated that they appreciated the opportunity to work with the simulated patients and that they felt the experience had helped their clinical learning. Hill, Davidson, and Theodoros (2010) conducted a review of the use of standardized patients in clinical education with a particular focus on speech-language pathology. The authors concluded that standardized patients could prove to be an important adjunct to clinical education in speech-language pathology, but that more research is needed.
Based on the literature available, simulated patient experiences may have a place in the teaching of general counselling and interaction skills in speech-language pathology. To our knowledge, there are no published accounts of the use of simulated patients in Canadian speech-language pathology programs. One counselling skill that lends itself to a simulation activity is the interaction with difficult patients and difficult relatives who present a challenge to the speech-language pathologist in undertaking assessment or intervention.

We decided to devise a 1-day experience to give our students an opportunity to learn about an academic model of interpersonal difficulty and different pertinent management strategies. The academic framework of the experience was based on different sections of the book Difficult patients by Duxbury (2000). Duxbury's book is specifically written for nurses and aims to offer useful and easily applicable strategies for managing challenging situations. Duxbury draws on a simplified model of interpersonal difficulty that assumes five basic dimensions, based on Miller (1990). This inventory of difficult behaviour defines the following types:

1) Withdrawal is the refusal to interact.
2) Passivity is characterized by failure to take action.
3) Manipulative behaviour employs devious and potentially dishonest means to reach the patient's aims.
4) Confrontational and aggressive behaviour expresses anger.
5) Violent behaviour intends to do damage to the self, others, or inanimate objects.

Duxbury classifies withdrawal and passivity as protective behaviours, while manipulation and confrontation/aggression are defensive in nature. Violence is socially inappropriate and is therefore in a category of its own. Duxbury's model of behaviour is obviously simplified, and the prototypical categories may not represent the full range and depth of difficult patient behaviours.
However, the simplicity of the model lends it appeal and made it ideal for use in a 1-day learning experience. Duxbury combines her five-dimensional model of interpersonal difficulty with the Six Category Intervention Analysis (SICA) according to Heron (1990). In this model, the practitioner identifies the patient's behaviour and chooses an appropriate intervention strategy to defuse the behaviour. Heron (1990) defines the SICA strategies as follows:

1) Prescriptive interventions direct patient behaviour, often through direct instructions.
2) Informative interventions impart knowledge or meaning.
3) Confrontational interventions aim to make the patient aware of inappropriate aspects of his or her behaviour.
4) Cathartic interventions enable the patient to relieve painful emotions.
5) Catalytic interventions serve to facilitate self-knowledge and self-care.
6) Supportive interventions affirm the patient's self-worth.

While the first three strategies are examples of authoritative interventions (the clinician guides the patient), the latter three are facilitative in nature (the clinician helps the patient effect changes). Duxbury's (2000) attempt to fit Heron's (1990) six SICA strategies onto Miller's (1990) five dimensions of interpersonal difficulty results in a model that appears conceptually elegant and clear, even though the dimensions of difficulty and the SICA strategies were created independently.

This paper reports on the practical organization of a student learning experience about difficult patient behaviour. The project gave us an opportunity to gain experience with the use of simulated patients in an academic program for speech-language pathology. We were interested in the feasibility of such a learning experience and in students' impressions of it. The focus of our evaluation was on the student experience rather than on testable learning outcomes. The same learning experience was conducted with two years of students.
The goal of this research was to evaluate whether the inclusion of simulated patients changed the student experience and whether students would endorse this type of learning experience.

Method

Participants

Two versions of this learning experience were conducted in two different academic years. In the academic year 2006, 40 students were enrolled in the professional Master of Health Sciences program in the Department of Speech-Language Pathology at the University of Toronto. Thirty-nine students were female and one student was male. In 2010, 45 students took part in the experience. Forty-two students were female and three students were male. This gender distribution is typical for a professional speech-language pathology program (Boyd & Hewlett, 2001). The learning experience took place halfway through the summer term of the first year of the professional program. During this term, the students took concurrent courses on fluency disorders, aural rehabilitation, and voice disorders. The simulated patients in the learning experience were five university-trained professional medical actors (three female, two male). All actors had considerable experience with medical acting as simulated patients in different medical disciplines at the University of Toronto and its affiliated teaching hospitals. The actors were paid for their preparation time and their time portraying difficult patients for the students.

Structure of the experience and data collection

In both 2006 and 2010, the students were divided into six groups. The groups were assigned readings based on the book by Duxbury (2000). Before the day of the learning experience, all students read the introductory chapters that described the dimensions of interpersonal difficulty based on Miller (1990). The groups were also given specific readings about the different intervention strategies.
One group was assigned to each of the prescriptive, confrontational, cathartic, and catalytic interventions; one group was tasked to learn about the informative and supportive interventions; and one group read the chapter on dealing with aggression and violence in a clinical setting. The readings were evenly balanced in length and content between the groups. In 2006, the students were given 4 days ahead of the learning experience day to read the assigned texts, meet in their group, and prepare a presentation and a short summary essay on their assigned intervention strategies. The presentations on the morning of the day of the learning experience were 25 minutes in duration and included role-plays to illustrate the incorrect and correct use of the intervention strategies as described by Duxbury (2000). The presentation slides and notes from each group were submitted a day ahead of the experience and collated into a document that was photocopied and handed out to all students, so that every student was provided with the summaries of the different strategies. In the afternoon of the learning experience day, the students in 2006 interacted with the simulated patients. Because there were only five patients for 40 students, the class was split into two groups of 20 for this part of the experience. While the first group of 20 interacted with the five simulated patients, the second group of 20 was on a break. After an hour (and a short break for the simulated patients), the two groups switched and the second group had an hour with the simulated patients. In order to allow for a smooth transition between the different simulated patients, the groups of 20 were further divided into five smaller jigsaw groups of four students each. These jigsaw groups were made up of different members of the six presentation groups, so that each team member was the group ‘expert’ for a particular SICA strategy (but not all SICA strategies were represented in each group).
The members of the small jigsaw groups individually encountered different simulated patients who each portrayed one of the different types of interpersonal difficulty according to the model by Miller (1990). In preparation for the learning experience, the authors oriented the simulated patients to the scenarios and the required difficult behaviours. The simulated patients decided between themselves who was best suited for the different scenarios. They also discussed how the scenarios would be portrayed. During the orientation session, the authors were available for questions from the simulated patients. The students were not given any advance information about this part of the experience other than the date and time. Prior to interacting with the simulated patients, the students were provided with the clinician’s task for the scenario but not with any information about how the situation would unfold. In the withdrawal scenario, the student’s job was to practice chant talk with a patient with vocal nodules. The passive behaviour scenario required the student to counsel the parent of an AAC user who had not used the device often enough. In the manipulative behaviour scenario, the student had to inform a hearing parent about a child’s ineligibility for a cochlear implant. In the confrontational behaviour scenario, the student counselled the user of a broken voice amplifier who demanded a replacement device. Finally, the aggression and violence scenario confronted the student with a frustrated parent shouting and throwing foam blocks around the room. The scenarios are provided in the Appendix, available online at http://informahealthcare.com/doi/abs/10.3109/17549507.2011.638727. The students engaged in individual 5-minute conversations with the simulated patients while their jigsaw group was watching from either the back of the room or through a one-way mirror.
The groups were rotated to the next actor after every encounter in order to minimize repetitiveness of the scenarios. Every student had the opportunity to interact with at least two simulated patients in different scenarios. The instruction to the students was to interact with the simulated patients and to use the SICA strategies to the best of their abilities. There was no expectation that the students would be able to completely resolve any of the scenarios. The encounters with the simulated patients were followed by a final discussion with the whole class, the simulated patients, and the instructors. The students in 2010 were given the same readings a few days ahead of the learning experience. They met in their groups on the learning experience day and spent the morning preparing the group presentations and role-plays. The afternoon of the day was spent with the student groups presenting to the assembled class and enacting role-plays that illustrated the different management techniques. This was followed by a final discussion between the class and the instructors.

Data collection

In both 2006 and 2010, the students filled in a generic evaluation form after the learning experience. The students were asked to grade the learning experience with a percentage grade. For this task, the students used the University of Toronto’s graduate grading scale, where A (very good) grades range from 80–100%, B (good) grades range from 70–79%, and failed grades are 69% or lower. The form also required the students to comment on (1) what they liked about the learning experience, (2) what they did not like, and (3) how the learning experience could have been improved. In 2006, the students also filled in an additional feedback form related to the encounters with the simulated patients.
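The grading scale described above is a simple banded mapping from percentages to letter grades. As a minimal sketch in Python (the band boundaries are taken from the text; the function name and the "Fail" label are ours, since the text only says failed grades are 69% or lower):

```python
def letter_grade(pct):
    """Map a percentage grade to the University of Toronto graduate
    grading scale as described in the text: A (very good) 80-100%,
    B (good) 70-79%, and 69% or lower is a fail."""
    if pct >= 80:
        return "A"
    if pct >= 70:
        return "B"
    return "Fail"

# The mean grades reported in the Results (83.68% and 84.48%)
# both fall in the A band under this mapping.
```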
The questions in this form required the students to rate whether they had used the intervention strategies suggested by Heron (1990) and Duxbury (2000), and whether the effectiveness and subjective comfort of using the strategies improved during the experience. The students also rated whether they had used the strategies as described by Duxbury (2000) and ranked the perceived usefulness of the different strategies according to the SICA model (Heron, 1990). The students evaluated the realism and usefulness of the simulated patients and rated whether a similar experience should be offered to future classes. All ratings were made on 5-point equal-appearing interval scales with semantic descriptors ranging from strong agreement to strong disagreement (with not sure as the neutral point). The students were asked to elaborate on each of their ratings with a free comment in a textbox, in order to obtain additional qualitative student feedback. The quantitative feedback was summarized in statistical spreadsheet software, and the mean values and standard deviations for the responses were calculated and reported. In order to probe for statistically significant differences in the responses of the 2 years of students, we calculated an independent-samples t-test. The level of significance was set at p = .05. The qualitative feedback was summarized according to the most frequently recurring topics, and common themes were identified.

Results

Overall evaluation of the learning experience: Quantitative results

The students graded the learning experience based on the University of Toronto graduate grading scheme. In the year 2006, the mean percentage grade was 83.68% (SD = 5.73, range = 67–95%). In the year 2010, the mean percentage grade was 84.48% (SD = 5.87, range = 71–100%). Both average grades corresponded to an A-.
An independent-samples t-test indicated that there were no significant differences between the student grades of the learning experience in the 2 years (t = −0.906, df = 80, p = .367).

Overall evaluation of the learning experience: Qualitative results

In response to the question about “good things about the learning experience”, the 2006 students commented most frequently that the simulated patient activity had added interest and realism to the learning experience.

I think it was a fantastic opportunity for all of us to be in a realistic clinical situation with a “client” and realistic problems. It was stressful but an amazing experience. (Student #37)

Many of the students commented that they had found the group presentations in the morning enjoyable and informative.

I felt that the role-plays were an effective way of demonstrating the strategies and that the simulations were a good opportunity to try them out. (Student #15)

In terms of things disliked about the learning experience, a number of students in 2006 commented on a lack of feedback from the simulated patients and the instructors about their performance in the simulation activity.

A debriefing after each encounter would have helped me understand my strengths and areas for development. (Student #35)

A number of students commented that more preparation for the client scenarios would have been helpful.

I would have preferred a little more info/warning before the simulations (although not too much because that would have made me nervous). (Student #2)

In fact, a small number of students indicated that they had initially been confused about whether the clients were real or simulated patients.

I think that everything was great. Just let us know at the beginning that they’re actors! (Student #20)

Various students commented that the relatively quick pace of the simulated patient activity added stress and made it difficult to absorb information.
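The grade comparison above rests on a pooled-variance independent-samples t-test. As an illustration of the computation only (the sample values below are hypothetical, not the study's raw grades, which are not published), the statistic can be sketched in Python with the standard library:

```python
from statistics import mean, variance

def independent_t(sample_a, sample_b):
    """Student's independent-samples t statistic with pooled variance.

    Assumes roughly equal variances in the two groups; returns the
    t value and the degrees of freedom (n_a + n_b - 2).
    """
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    # statistics.variance is the sample variance (n - 1 denominator).
    pooled = ((na - 1) * variance(sample_a)
              + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    t = (ma - mb) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

# Hypothetical percentage grades for two small cohorts:
t, df = independent_t([70, 80, 90], [75, 85, 95])
```

Note that with complete evaluation forms from all 40 and 45 students, this formula would give df = 83; the df = 80 reported above presumably reflects missing forms, consistent with Table I's note that only valid responses were analysed.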
In terms of suggestions for improvement of the learning experience, the most frequent suggestion was to provide time for more feedback from the actors or the instructors. Some students also felt that more time should have been allowed to discuss and consolidate the intervention strategies according to the SICA model (Heron, 1990). A few students suggested a more thorough orientation to the clinical scenarios. The students in 2010 participated in a different learning experience that did not include a simulated patient activity. In terms of positive feedback, a large number of students commented positively on the group work, presentations, and role-plays. The topics discussed and the group presentations were perceived as interesting and relevant to clinical practice. With regards to aspects that the students in the year 2010 did not like about the learning experience, the most frequent comments were that the group work in the morning and the presentations in the afternoon made for a long and exhausting day. On the other hand, a number of students argued that there should have been more time allocated for group work. A few students also commented that they would have liked to see more materials from their current disorder area courses incorporated into the learning experience. In keeping with the feedback on elements of the experience which were not liked by students, the most frequent suggestion for changing the learning experience related to shortening the schedule, although a number of students advocated for more time for group work at the same time. Individual students also suggested the incorporation of more specific disorder information or case studies.

Evaluation of the simulated patient experience: Quantitative results

An overview of the questions and the mean values for the scores can be found in Table I.
The results indicated that the students were neutral or agreed that they had used the SICA communication strategies effectively (question 1) and that the effectiveness had improved over time (question 2). Students tended to feel comfortable using the strategies (question 3) and indicated that this subjective comfort increased moderately over time (question 4). However, the majority of students felt that they had not used any of the strategies according to the descriptions by Duxbury (2000) or the morning presentations (question 5). Most students felt that the simulated patient experiences were realistic (question 7), and that the experience would be beneficial for their next clinical placement (question 8). Finally, the majority of students agreed that a similar learning experience with simulated patients should be organized for the next years of students (question 9). In question 6, the students were asked to rank the six intervention strategies from the SICA model according to their preference. From the rankings, means and standard deviations were calculated to establish the students’ preferences. Overall, the students ranked the strategies in the following order:
1) Supportive interventions (mean ranking = 2.50, SD = 1.42),
2) Informative interventions (mean ranking = 2.78, SD = 1.59),
3) Cathartic interventions (mean ranking = 3.50, SD = 1.81),
4) Catalytic interventions (mean ranking = 3.69, SD = 1.67),
5) Prescriptive interventions (mean ranking = 3.89, SD = 1.55), and
6) Confrontational interventions (mean ranking = 4.11, SD = 1.89).

Table I. Results for the questionnaire on the simulated patient experience for the students in the year 2006. Only valid percentages are reported (i.e., missing data were excluded from the analysis).

Question | Strongly disagree | Disagree | Not sure | Agree | Strongly agree | Mean | SD
1. Did you use your intervention strategies effectively? | 0.0% | 7.5% | 40.0% | 52.5% | 0.0% | 3.45 | 0.64
2. Did you improve your effectiveness of using your intervention strategies over time? | 0.0% | 2.6% | 48.7% | 41.0% | 7.7% | 3.53 | 0.68
3. Did you feel comfortable using your intervention strategies? | 0.0% | 7.5% | 25.0% | 67.5% | 0.0% | 3.60 | 0.63
4. Did your level of comfort using the intervention strategies improve over time? | 0.0% | 5.1% | 35.9% | 43.6% | 15.4% | 3.69 | 0.80
5. Do you feel that you used the intervention strategies by the book? | 25.6% | 61.5% | 7.7% | 2.6% | 0.0% | 1.84 | 0.67
7. Did you find the experience with the actors realistic? | 0.0% | 5.0% | 7.5% | 27.5% | 60.0% | 4.42 | 0.84
8. Do you think that this experience has prepared you better for your next clinical placement? | 0.0% | 5.1% | 10.3% | 66.7% | 17.9% | 3.92 | 0.87
9. Should we organize a similar learning experience for future students? | 2.5% | 0.0% | 5.0% | 45.0% | 47.5% | 4.35 | 0.80

Evaluation of the simulated patient experience: Qualitative results

In response to question 1, many students indicated that they tried to use the SICA strategies, sometimes with mixed results. Other students reported that they did not have time to consciously employ the SICA interventions in the situation.

I found this experience to be very interesting. It was neat to be exposed to what could happen in the clinical setting and how I would react. I found at times I used some of the strategies effectively but just handling the situation was challenging enough. I became so overwhelmed that I simply tried my best to accomplish the task. (Student #12)

With regards to improving the effectiveness of the intervention (question 2), most students reported that they felt more secure over time and that they also learned from observing their peers interact with the simulated patients. The brevity of the learning experience was seen as a drawback.

Time was quite limited although it helped to observe classmates’ use/non-use of strategies and attempt to improve my own use of them. (Student #23)

Students commented that they felt reasonably comfortable with the strategies (question 3), although many students commented that they fell back on intuitive strategies.

Kind of, but I think I used my own personality traits more than learned strategies. When you are in a pressure situation, you tend to forget what you just learned. (Student #18)

Most students reported that they developed more comfort with the counselling strategies over time (question 4). A number of students indicated that more practice would have been helpful to consolidate the intervention strategies.

My comfort in my role increased—or, at least, I was now mentally and emotionally prepared to take on the second client. ... Not being familiar with the client, I can’t say I had a specific strategy prepared to try out. (Student #38)

The majority of the students reported that they had not used the SICA intervention strategies by the book (question 5). However, many students commented that this reflected clinical realities.

Not at all. I feel that you cannot counsel someone with by the book strategies. You have to modify the strategies for the individual client. (Student #10)

When asked to comment on the realism of the experience (question 7), most students answered that they found the experience realistic. Some students indicated that they had felt stressed and emotionally involved when interacting with the simulated patients.

The actors were very intense, especially the man who threw the foam blocks. They were very quick with their responses. (Student #17)

The majority of students felt that the experience had better prepared them for their next clinical placement (question 8).

Even though it was a simulated activity, it definitely provided concrete examples how to use the strategies. It is helpful to have actors being the client, not just a class partner. (Student #3)

Finally, a large number of students stressed in response to question 9 that the experience had been informative as well as fun.
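As a cross-check on Table I, each reported mean can be recomputed as a weighted average of the 1–5 scale points over the response percentages. A minimal Python sketch (the function name is ours; the data are taken from questions 1 and 9 of Table I):

```python
def likert_mean(percentages):
    """Weighted mean of a 5-point scale (1 = strongly disagree ...
    5 = strongly agree), given the percentage of respondents who
    chose each scale point."""
    assert len(percentages) == 5, "expects one percentage per scale point"
    return sum(score * pct / 100
               for score, pct in zip(range(1, 6), percentages))

# Question 1 of Table I: 0.0 / 7.5 / 40.0 / 52.5 / 0.0
q1 = likert_mean([0.0, 7.5, 40.0, 52.5, 0.0])  # round(q1, 2) → 3.45
# Question 9 of Table I: 2.5 / 0.0 / 5.0 / 45.0 / 47.5
q9 = likert_mean([2.5, 0.0, 5.0, 45.0, 47.5])  # round(q9, 2) → 4.35
```

Both values match the means reported in Table I (3.45 and 4.35), which supports the table as reconstructed from the extracted text.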
While it was suggested by some students that future years of students should receive more preparation for the encounters with the simulated patients, other students commented that they appreciated being thrown into the situation.

It was a valuable experience—should not be changed. The surprise element made it more intense. (Student #9)

Discussion

The goal of this study was to describe and evaluate a student learning experience about managing difficult patients in speech-language pathology contexts. Two different versions of the learning experience were conducted in 2006 and 2010. In 2006, students participated in an active learning experience with simulated patients. A similar experience was conducted in 2010 without the simulated patients. The student evaluations and feedback for the learning experience are described. Overall, the experience was successful, and the students in both years agreed that it was worthwhile and enlightening. The student feedback for both years was largely positive, and the qualitative student feedback confirmed the usefulness of the learning experience. The quantitative evaluations for the experience as a whole did not differ between the two years. The inclusion of simulated patients in the experience did not result in better student evaluations. The qualitative analysis of the student comments indicated that the students were focused on different aspects of the experience in the two years. In 2006, the most frequent positive comment was that the simulated patients had added interest to the day. While the 2006 students also commented positively on the group work and presentations, this theme was not emphasized as much. In contrast, the 2010 students mainly commented on the group work and the role-plays in their positive feedback.
In terms of dislikes and suggestions for improvements, the 2006 students focused their comments mainly on shortcomings of the simulated patient experience, such as more preparation and more detailed feedback from the simulated patients and the instructors. In contrast, the 2010 students reflected mainly on the duration of the experience. It should be noted that the overall length of time was equivalent for the two versions of the learning experience. The 2006 students had the opportunity to reflect in more detail on the simulated patient experience. The results demonstrated that the students appreciated the opportunity to try the SICA strategies with the simulated patients. On the other hand, many students indicated that the experience had come upon them before they had had time to consolidate the different intervention styles and make them their own. As a result, the majority of students stated that they had reacted in the moment without consciously employing SICA strategies. The students’ ranking of SICA strategies revealed that most students preferred supportive and informative interventions, which are probably styles of intervention that come naturally to many people. Prescriptive and confrontational interventions were ranked the lowest, indicating that the 2006 students preferred facilitative over authoritative approaches. Most students felt that the experience served as a good preparation for a placement and stated that the experience should be repeated for future years. Some modifications to the procedures of the learning experience were suggested, such as a longer preparation before the task with the simulated patients. The comparison of the different learning experiences in 2006 and 2010 is limited because the evaluation form used only a single quantitative metric (school grade) and the rest of the feedback was qualitative and open ended.
The students commented on substantially different learning experiences in the 2 years, and their developmental feedback focused on very different concepts. The overall grade of the learning experience provided a quantitative measure for the students’ overall impressions of the day. One might expect that a better student experience may be reflected in higher overall ratings. However, based on the quantitative overall ratings of the learning experience, it cannot be concluded that the students valued the overall experience more because of the inclusion of simulated patients. The enthusiasm that the 2006 students indicated for a repetition of the experience for future students (question 9) was not reflected in the overall grades they gave. By the same token, the student assessments of the 2010 experience were comparable even though it did not include simulated patients. We conclude from this finding that one should not use simulated patients with the goal of improving student evaluations but rather with the ambition to improve student learning. We stated in our introduction that simulated patients are not commonly used in speech-language pathology, particularly when compared with other medical disciplines. Syder (1996) used paid amateur actors in individual and group therapy sessions for student placements. The actors were trained to portray different medical conditions and were seen by student clinicians in a role-play activity that allowed the participants to call time-outs and even do-overs for the interaction. The students were also given extensive feedback, including a video review of their interaction with the simulated patient. In contrast, our activity was comparatively brief and the students were not given individual feedback or suggestions for improvement based on their performances. There were a number of reasons why the learning activity in the present study was less well developed. 
First, we wanted to remove any fear of being monitored or graded on the part of the students. The goal was to create a stress-free experiential activity that would allow the students to consolidate some of the information from the learning activity and help them reflect on their reactions to different challenging scenarios involving interpersonal difficulty. Time was another important factor. The simulated patient part of the learning experience had to be relatively short to ensure that the learning experience did not become unreasonably long for the students. Finally, there was no budget to fund the simulated patients. The learning experience was financed with minimal resources, which limited the available time with the actors. With regards to the instructors’ goals for the learning experience, a more extensive simulated patient experience with more time and feedback would not have been desirable. Bligh and Bleakley (2006) stress that a simulation may turn into a simulacrum: a simulation that is too well-developed may become self-referential and float free from reality. In the same vein, Brenner (2009) cautions against an unreflective use of simulated patients, because a student who can handle simulated patient scenarios well may develop an unrealistic feeling of confidence. It was not the goal of our learning experience to develop simulated patients whose behavioural difficulties could be solved in a few minutes with strict adherence to the SICA model strategies, because such a simulation would not have been grounded in clinical reality. Since the experience was deliberately limited, it followed naturally that many students experienced a degree of failure when interacting with the simulated difficult patients. The comparison of the student evaluations and comments in 2006 and 2010 confirms Syder’s (1996) observation that activities with simulated clients are not better per se, but simply different.
Based on a literature review, Lane and Rollnick (2007) argued that there is too little systematic research to demonstrate whether patient simulations have better learning outcomes than role-plays. In their own research, Lane, Hood, and Rollnick (2008) found that both approaches were equally useful in the teaching of motivational interviewing skills. A learning experience with simulated clients will not automatically lead to better student evaluations, nor should this be the goal of such an activity. Simulated patients do have their use and their potential in speech-language pathology, but it is important to understand the limitations and the pitfalls of simulated clinical scenarios.

Conclusion

Simulated patients offer interesting new possibilities for the education of students in speech-language pathology. Compared to other medical disciplines, teaching with simulated patients appears to be seldom used in our discipline. After the learning experience described in this paper, students felt that the simulated patients had added interest and a perceived realism to the activities. We did not have an opportunity to systematically assess student learning within the parameters of the present research. A systematic evaluation of similar learning activities should be undertaken in the future to evaluate how simulated patients can be put to regular use in speech-language pathology.

Acknowledgements

The authors would like to acknowledge the medical actors Julia Gray, Steven James, Sarah Machin Gale, Melina Nacos, and Mark Prince, who jointly comprise the Ruckus Ensemble of Toronto. More information about the ensemble may be found at http://www.ruckusensemble.com/. The authors also acknowledge the partial funding provided by the Department of Speech-Language Pathology at the University of Toronto.
Jennifer Allegro, Joanne Deluzio, Brenda Lewson, Luc De Nil, Loralee McLean, Penny Parnes, and Yana Yunusova were involved with the conceptual and practical organization of different learning experiences in the years 2005–2010. Christina Khaouli helped with the data analysis.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

References

Barrows, H. S. (1968). Simulated patients in medical teaching. Canadian Medical Association Journal, 98, 674–676.
Bligh, J. J., & Bleakley, A. A. (2006). Distributing menus to hungry learners: Can learning by simulation become simulation of learning? Medical Teacher, 28, 606–613.
Boyd, S., & Hewlett, N. (2001). The gender imbalance among speech and language therapists and students. International Journal of Language and Communication Disorders, 36, 167–172.
Brenner, A. M. (2009). Uses and limitations of simulated patients in psychiatric education. Academic Psychiatry, 33, 112–129.
Duxbury, J. (2000). Difficult patients. Boston, MA: Butterworth-Heinemann.
Edwards, H., Franke, M., & McGuiness, B. (1995). Using simulated patients to teach clinical reasoning. In J. Higgs, & M. Jones (Eds.), Clinical reasoning in the health professions (pp. 269–278). Oxford: Butterworth-Heinemann.
Festa, L. M., Baliko, B., Mangiafico, T., & Jarosinski, J. (2000). Maximizing learning outcomes by videotaping nursing students’ interactions with a standardized patient. Journal of Psychosocial Nursing and Mental Health Services, 38, 37–44.
Harden, R. M., Stevenson, M., Downie, W. W., & Wilson, G. M. (1975). Assessment of clinical competence using objective structured examination. British Medical Journal, 1, 447–451.
Heron, J. (1990). Helping the client: A creative, practical guide. London: Sage.
Hill, A. E., Davidson, B. J., & Theodoros, D. G. (2010). A review of standardized patients in clinical education: Implications for speech-language pathology programs. International Journal of Speech-Language Pathology, 12, 259–270.
Kneebone, R., & Nestel, D. (2005). Learning clinical skills: The place of simulation and feedback. The Clinical Teacher, 2, 86–90.
Kolb, D. A. (1976). The learning style inventory. Boston, MA: McBer.
Kolb, D. A. (1984). Experiential learning. Englewood Cliffs, NJ: Prentice Hall.
Lane, C., & Rollnick, S. (2007). The use of simulated patients and role-play in communication skills training: A review of the literature to August 2005. Patient Education and Counselling, 67, 13–20.
Lane, C., Hood, K., & Rollnick, S. (2008). Teaching motivational interviewing: Using role play is as effective as using simulated patients. Medical Education, 42, 637–644.
McKenna, L., Innes, K., French, J., Streitberg, S., & Gilmour, C. (2011). Is history taking a dying skill? An exploration using a simulated learning environment. Nurse Education in Practice, 11, 234–238.
Miller, R. (1990). Managing difficult patients. London: Faber & Faber.
Monahan, D. J., Grover, P. L., Kavey, R. E., Greenwald, J. L., Jacobsen, E. C., & Weinberger, H. L. (1988). Evaluation of a communication skills course for second-year medical students. Journal of Medical Education, 63, 372–378.
Nestel, D., & Kneebone, R. (2010). Perspective: Authentic patient perspectives in simulations for procedural and surgical skills. Academic Medicine, 85, 889–893.
Ramsay, J., Keith, G., & Ker, J. S. (2008). Use of simulated patients for a communication skills exercise. Nursing Standard, 22, 39–44.
Rimondini, M., Del Piccolo, L., Goss, C., Mazzi, M., Paccaloni, M., & Zimmermann, C. (2006). Communication skills in psychiatry residents: How do they handle patient concerns? An application of sequence analysis to interviews with simulated patients. Psychotherapy and Psychosomatics, 75, 161–169.
Syder, D. (1996). The use of simulated clients to develop the clinical skills of speech and language therapy students. European Journal of Disorders of Communication, 31, 181–192.
Wilson, W. J., Hill, A., Hughes, J., Sher, A., & Laplante-Levesque, A. (2010). Student audiologists’ impressions of a simulation training program. The Australian and New Zealand Journal of Audiology, 32, 19–30.
Zraick, R. I., Allen, R. M., & Johnson, S. B. (2003). The use of standardized patients to teach and test interpersonal and communication skills with students in speech-language pathology. Advances in Health Sciences Education, 8, 237–248.

Supplementary material: Appendix available online at http://informahealthcare.com/doi/abs/10.3109/17549507.2011.638727.
